"uuid","repository link","title","author","contributor","publication year","abstract","subject topic","language","publication type","publisher","isbn","issn","patent","patent status","bibliographic note","access restriction","embargo date","faculty","department","research group","programme","project","coordinates"
"uuid:d46d5f09-9a3b-4887-a4b5-17690920cb1d","http://resolver.tudelft.nl/uuid:d46d5f09-9a3b-4887-a4b5-17690920cb1d","Non-Assembly Additive Manufacturing of Medical Devices","Lussenburg, K.M. (TU Delft Medical Instruments & Bio-Inspired Technology)","Breedveld, P. (promotor); Sakes, A. (copromotor); Delft University of Technology (degree granting institution)","2024","Additive manufacturing, or 3D printing, offers a new paradigm for the way functional products are designed, manufactured, and assembled. Its additive nature provides the ability to create complex-shaped parts, without an increase in production time or costs, which would be difficult to produce with conventional manufacturing. In addition, integration of different functions and materials allows for the production of completely functional assemblies or mechanisms that can be produced in a single production step, known as non-assembly additive manufacturing. These mechanisms are functional immediately after 3D printing, without requiring additional assembly steps. Non-assembly mechanisms have some advantages over traditional assembly-based mechanisms, as they reduce the processing time and costs, and allow for an increase in complexity....","non-assembly design; Additive manufacturing (AM); 3D printing; Design for manufacture and assembly; medical device; surgical instruments; miniaturization","en","doctoral thesis","","978-94-6496-092-1","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:b8954e95-15d9-430d-b026-71f4cf99ef23","http://resolver.tudelft.nl/uuid:b8954e95-15d9-430d-b026-71f4cf99ef23","On ice mechanics in ice-induced vibrations","Owen, C.C. (TU Delft Offshore Engineering)","Metrikine, A. (promotor); Hendrikse, H. (copromotor); Delft University of Technology (degree granting institution)","2024","The imminence of anthropogenic climate change has motivated a global energy transition towards sustainable power generation. Offshore wind—an important contributor to the energy transition—is expanding, not only in turbine size and number of installations, but also into regions with harsher environmental conditions. One of those conditions in places such as the Baltic Sea is drift ice. Offshore wind turbine support structures, with vertical sides at the waterline, must be designed to survive dynamic ice-structure interaction when ice fails in crushing against the structure. For a safe and efficient design of the support structure, dynamic ice-structure interaction resulting in ice-induced vibrations must be considered. Therefore, both an understanding of the problem and accurate modeling for the prediction of the development of ice-induced vibrations are required.
Significant progress has been made in recent years on the topic of ice-induced vibrations, and a numerical model for prediction of ice-induced vibrations has been developed based on the principles of velocity-dependent deformation and failure behavior of ice, and contact area variation between ice and structure during interaction. However, uncertainty remains regarding physical mechanisms within the ice which govern ice-induced vibrations. The ice mechanics involved in the development of ice-induced vibrations is therefore the main topic of this thesis.
The main objective was to investigate and identify the ice mechanics involved in the development of ice-induced vibrations, especially in the regime of frequency lock-in as historically defined. It was hypothesized that dynamic recrystallization played a relevant role in the ice mechanics involved in ice-induced vibrations. To test the hypothesis, ice mechanics experiments were performed at the ice laboratory specifically developed at Delft University of Technology for this purpose.
To identify grain-scale mechanisms in ice, such as dynamic recrystallization, a method was devised to elucidate ice thin section textures and (quarter) fabrics by means of crossed-polarized transmitted light and interference coloration of ice. An attempt was made to apply the method to the laboratory experiments which applied compressive loading to the edge of a thin freshwater columnar-grained ice plate, laterally confined by glass plates. Crossed-polarized transmitted light was shone through the glass plates to observe the grain structure of the ice during cyclic compression with a haversine velocity waveform. The loading and confinement scenario was intended to reproduce a vertical section of the ice edge during frequency lock-in vibrations. The experimental design demonstrated that the grain-scale mechanics of dynamic recrystallization did not obviously contribute to the peak load-velocity relation associated with frequency lock-in vibrations. As expected, fracture initiated on the grain scale was responsible for load drops. But, more interestingly, stress relaxation during periods of low relative velocity between ice and structure occurred rapidly. Following the stress relaxation, when velocity increased, the peak load was higher than previous brittle peak loads. The results indicated that the mechanisms involved in the stress relaxation were occurring on a scale smaller than the grain size. A loading path dependency was also observed with respect to the peak load-velocity relation.
Ice penetration experiments at the Aalto Ice and Wave Tank in ethanol-doped cold model ice were performed with a rigid structure, controlled oscillation, and a single-degree-of-freedom structure, and comparison of results showed that the peak global ice loads depended on the amount of time spent at low relative velocities where an ice strengthening effect developed. This has implications for the so-called velocity effect and compliance effect in the design of structures subject to dynamic ice-structure interaction.
Overall, the load signals from the ice mechanics experiments on freshwater ice resembled the load signals obtained from the controlled-oscillation experiments from the model-scale ice tank tests. The qualitatively similar velocity and resulting load patterns give confidence in the idea that the mechanisms involved in both types of experiments were similar, even for different ice types and loading scenarios.
These similar results demonstrate a link in the ice mechanics across different ice types and loading scenarios, which may be explained with further research on path-dependent constitutive ice behavior, and with scrutiny regarding ice dislocation and grain boundary mechanics. Suggestions for future research are proposed, including the testing of strain rate-varying uniaxial compression of ice and ice penetration experiments with haversine velocity waveforms.","dynamic ice-structure interaction; ice-induced vibrations; frequency lock-in; c-axis; interference coloration; ice microstructure; ice fabric; ice texture; image processing; birefringence; grain boundary; controlled oscillation; ice failure length; anelasticity; ice crushing; model tests; compliance effect; velocity effect","en","doctoral thesis","","978-94-6366-819-4","","","","","","2024-04-08","","","Offshore Engineering","","",""
"uuid:dc575434-78fd-475d-81ed-f2a6a6d46845","http://resolver.tudelft.nl/uuid:dc575434-78fd-475d-81ed-f2a6a6d46845","Integrated Electrical Steady-State Power Flow Simulations on Transmission and Distribution Networks","Kootte, M.E. (TU Delft Mathematical Physics)","Vuik, Cornelis (promotor); van Gijzen, M.B. (promotor); Delft University of Technology (degree granting institution)","2024","Integrated electrical power flow simulations are concerned with solving the steady-state load flow problem on integrated transmission and distribution electricity networks. We have developed a framework to run these simulations efficiently, whilst keeping in mind the differences between these network types and accommodating the practical considerations of system operators. We need such a framework to analyse the interaction that these systems might have as a result of the energy transition.
To develop a framework to run integrated power flow simulations, we have worked in two stages. Firstly, we have studied how we can model an integrated network. We have found two ways of modelling an integrated network: using a homogeneous configuration in which both networks are modelled using three phases and using a hybrid network configuration in which both networks keep their original configuration but in which the coupling substation takes care of the phase dimension mismatch between the two sides. In addition, we have found two ways of solving an integrated system: either by coupling them into one system and solving that as a whole (we call this the unified approach) or by keeping two separate systems and iterating between these networks (we call this the Manager‐Fellow Splitting (MFS) method).
We have concluded that the unified methods are generally faster than MFS methods and that a hybrid network configuration leads to faster results, making the unified approach with a hybrid network configuration the most efficient.
In the second stage, we have focused on the efficiency of these simulations. During every Newton‐Raphson iteration in power flow simulations, a linear system is solved. We have therefore studied several Krylov subspace and preconditioning techniques that can solve this linear system efficiently. We have applied Krylov and preconditioning combinations to integrated network simulations to check the performance of the simulations on large test cases. During this stage, we applied them to networks of up to 800,000 buses, as we were interested in the efficient scaling of the methods that were originally the object of study.
In the second stage, we saw that the MFS methods were performing better than unified methods. Furthermore, preconditioned Krylov subspace methods had a similar performance to direct methods. It is difficult to judge why this happened. A reason could be that the library in which we performed these simulations, PETSc, is optimised for parallel computations in which multiple smaller blocks are solved at the same time, whilst we were doing only sequential computations.
Finally, we have striven to incorporate operational convenience for Transmission and Distribution System Operators (TSOs and DSOs) during the development of this integration framework, by considering their computational and privacy concerns. The way this framework is built can take away some of their concerns.
To summarise, we have created an open‐source framework to run efficient steady-state power flow simulations on integrated transmission and distribution networks. This framework is tested on simplified test cases but shows potential for large system simulations. Moreover, it takes into account the considerations of system operators and can be utilised in other applications besides integrated analysis.","Power Flow; Numerical analysis; Newton-Krylov methods; Iterative methods","en","doctoral thesis","","978-94-6483-990-6","","","","","","","","","Mathematical Physics","","",""
"uuid:6b1e6a9c-e014-4092-bbd6-a35cce0503a1","http://resolver.tudelft.nl/uuid:6b1e6a9c-e014-4092-bbd6-a35cce0503a1","Structure Guided Directed Evolution of Enzymes","Hüppi, S.N. (TU Delft BT/Biocatalysis)","Hollmann, F. (promotor); Delft University of Technology (degree granting institution)","2024","Our ability to tailor enzymatic properties is a critical factor for biocatalyst application in the industrial sector. Although many wild-type enzymes have been found capable of promiscuously catalysing desired anthropogenic reactions, their activity and selectivity for non-natural transformations is often poor. Consequently, it is crucial to optimise enzymes such that they can be effectively integrated into industrial processes. Notable added advantages in this context are that enzymes are considered 'green' catalysts - enhancing the perceived value of products in today's environmentally-conscious society – and that biocatalysts can carry out intricate chemistries with exceptional regio- and stereoselectivity, complementing traditional organic synthesis.","Biocatalysis; Enzymes","en","doctoral thesis","","","","","","","","2024-04-24","","","BT/Biocatalysis","","",""
"uuid:2ee2a492-6588-46db-aa6a-7056fd37fd24","http://resolver.tudelft.nl/uuid:2ee2a492-6588-46db-aa6a-7056fd37fd24","Gate-tunable kinetic inductances for superconducting circuits","Splitthoff, L.J. (TU Delft QRD/Andersen Lab)","Kouwenhoven, Leo P. (promotor); Andersen, C.K. (copromotor); Delft University of Technology (degree granting institution)","2024","Superconducting circuits in cryogenic environments form an excellent material platform for the realization and study of quantum systems.
In this thesis, we continue the exploration of novel types of circuit elements which expand the circuit quantum electrodynamics toolbox to enable exotic, and potentially better, circuit implementations. To this end, we combine the study of condensed matter systems and circuit quantum electrodynamics in what is called hybrid cQED experiments to arrive at the implementation of gate-tunable kinetic inductances for superconducting circuits. This discovery shed new light on the physics of gate-tunable kinetic inductances and enabled the observation of emergent phenomena in gate-tunable metamaterials, in particular the phase transition in a bosonic Su-Schrieffer-Heeger chain. Moreover, as gate-tunable kinetic inductances became available, we realized tunable resonators and parametric amplifiers for enhanced control and readout of superconducting circuits.","Gate-tunable superconducting circuits; Resonator-based parametric amplifiers; Topological metamaterials; Proximitized nanowires","en","doctoral thesis","","978-94-6384-540-3","","","","","","2024-04-15","","","QRD/Andersen Lab","","",""
"uuid:33283954-fd1d-40c9-a6bf-7bd020350bbe","http://resolver.tudelft.nl/uuid:33283954-fd1d-40c9-a6bf-7bd020350bbe","Context-specific value inference via hybrid intelligence","Liscio, E. (TU Delft Interactive Intelligence)","Jonker, C.M. (promotor); Murukannaiah, P.K. (copromotor); Delft University of Technology (degree granting institution)","2024","Human values are the abstract motivations that drive our opinions and actions. AI agents ought to align their behavior with our value preferences (the relative importance we ascribe to different values) to co-exist with us in our society. However, value preferences differ across individuals and are dependent on context. To reflect diversity in society and to align with contextual value preferences, AI agents must be able to discern the value preferences of the relevant individuals by interacting with them. We refer to this as the value inference challenge, which is the focus of this thesis. Value inference entails several challenges and the related work on value inference is scattered across different AI subfields. We present a comprehensive overview of the value inference challenge by breaking it down into three distinct steps and showing the interconnections among these steps.","Values; Natural Language Processing; Morality; Ethics; Explainable AI; Active Learning; Hybrid Intelligence","en","doctoral thesis","","978-94-6366-840-8","","","","","","","","","Interactive Intelligence","","",""
"uuid:697b44bb-0bf5-4e9a-878c-626cdb831bf3","http://resolver.tudelft.nl/uuid:697b44bb-0bf5-4e9a-878c-626cdb831bf3","Green Health: Examining the role of green space characteristics and their proximity in green space health pathways","Cardinali, M. (TU Delft Heritage & Architecture)","Pottgiesser, U. (promotor); van Timmeren, A. (promotor); Beenackers, Mariëlle A. (copromotor); Delft University of Technology (degree granting institution)","2024","This doctoral thesis critically examines green space characteristics and their proximity to residents in their ability to help reduce the global disease burden of non-communicable diseases. By dissecting three pivotal pathways of theorized green space health effects through increased physical activity, increased social cohesion, and reduced air pollution, the thesis aims to provide new insights into which green space characteristics drive these relationships and in which distance they occur. To achieve these aims, this thesis develops reporting guidelines for the research field, a QGIS script for automatization of green space indicator development and uses two complementary sources for data collection. It builds on the self-reported data on physical activity, social cohesion, air pollution, health and mental health from the URBiNAT project and its case studies in the four European satellite neighbourhoods Nantes-Nord (France), Porto-Campanhã (Portugal), Sofia-Nadezhda (Bulgaria), and Høje-Taastrup (Denmark) and complements it with a rigorous spatial analysis. This enabled a rigorous sensitivity analysis based on up to 135 structural equation models per pathway. The results of this doctoral research revealed distinct green space characteristics and proximities that drive each pathway, including thresholds where these associations disappear or even change direction. 
It concludes that interconnected, multi-use green corridors are more beneficial than isolated patches for all analysed health pathways, challenging current municipal green space strategies to shift focus from mere ratios to green mobility infrastructures. Although rooted primarily in European contexts and of a cross-sectional nature, the doctoral research provides new evidence for urban planning and public health. It emphasizes the practical implications of how to design green spaces to address health concerns. The results not only resonate with the WHO's Urban Health Research Agenda but also provide tangible recommendations for a healthier human habitat.","green space; greenness; health; well-being; mediation","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-849-1","","","","","","","","","Heritage & Architecture","","",""
"uuid:1e9082db-02d5-44d6-8c3e-c08932162d65","http://resolver.tudelft.nl/uuid:1e9082db-02d5-44d6-8c3e-c08932162d65","Resource management in wireless networks","Raftopoulou, M. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Litjens, R. (copromotor); Delft University of Technology (degree granting institution)","2024","Following the trend of previous years, the number of devices, and hence the traffic in cellular networks is increasing. Moreover, new applications with stringent requirements are envisioned. Examples of such applications include collaborative learning and coverage extension with drones. To accommodate the traffic with its respective Quality of Service (QoS) requirements and to support new challenging applications in the Radio Access Network (RAN), we need to develop new algorithms and tools for efficient resource management. In this dissertation, resource management in the RAN is considered in three distinct areas.
In Chapter 2 we provide an introduction to the key concepts, which establish the technological context of the following chapters. The first part of this dissertation focuses on serving traffic with diverse requirements in the context of 5G networks. In 5G, RAN slicing has been introduced to support services with diverse QoS requirements in the same network infrastructure. Moreover, RAN slicing allows the Mobile Network Operators (MNOs) to configure customer-specific slices. In Chapter 3, we assess RAN slicing in terms of the traffic handling capacity for an Industry 4.0-inspired scenario. For the assessment, we compare a network with isolated slices and a non-sliced network. Extensive simulations show that the non-sliced network can serve more traffic than the sliced network while satisfying the same class-specific QoS requirements. Considering that RAN slicing will be adopted by the MNOs, this result highlights that additional radio resource management mechanisms are needed when RAN slicing is configured. To that end, in Chapter 4 we evaluate RAN slicing in combination with allowing slices to use idle resources of other slices, in a realistic smart city environment. The results show that idle resource sharing significantly improves the traffic performance. However, it is not until RAN slicing is further combined with other technology features, i.e., flexible numerology and mini-slots, that it provides better traffic performance than non-sliced networks.
The second part of this dissertation focuses on the application of collaborative learning, and more specifically on Federated Learning (FL) in resource-constrained wireless networks. In Chapter 5, we characterise agents by their importance in the learning process and the resource efficiency of their wireless channel. Then, we provide a general agent selection framework to indicate which agents should participate in the learning process. Extensive simulations in various scenarios verify the potential of the proposed framework. Additionally, it is revealed that in scenarios where agents have small data sets or the latency requirement is stringent, it is more beneficial to perform pure learning-based agent selection. In Chapter 6 we extend the previously proposed framework to perform joint agent selection and resource allocation. We describe the problem in resource-constrained vehicular wireless networks with Multi-User Multiple Input Multiple Output (MU-MIMO) capable base stations. To approximate the optimal solution of the problem, we propose the Vehicle-Beam-Iterative (VBI) algorithm. Then, we evaluate the VBI algorithm in scenarios related to vehicular communications. The results show that in scenarios where the vehicles have the same data set sizes, the application-specific accuracy targets are achieved faster than in scenarios where the data set sizes are different. Additionally, it is shown that MU-MIMO improves the convergence time of the global FL model.
In the third part of this dissertation, the deployment of a drone swarm is addressed. In Chapter 7 we study the link density in Random Geometric Graphs (RGGs). Specifically, we very accurately approximate the link density in any two- and three-dimensional rectangular space with the Fréchet distribution. Then, we express the minimum number of nodes needed to ensure network connectivity in terms of the link density. Finally, we model a drone swarm with an RGG and we estimate the required size of the swarm such that communication among all drones can be ensured.
The conclusions of this dissertation and the directions for future work are presented in Chapter 8.
Advanced convex economic model predictive control (CEMPC) methods have garnered attention lately in the wind turbine control community. Such techniques possess several advantages apart from those inherent in being subsets of the model predictive control (MPC) family. First, they can account for multiple economic objectives for wind turbines, such as power production optimization, fatigue load reduction, and excessive actuation limitation, in a straightforward and unified way. This also means that the trade-off calibration between the economic objectives (by weight tuning) can be done with ease. Additionally, the convexity of the underlying optimization control problem (OCP) guarantees that a globally optimal solution can be found with high numerical effectiveness, which may lead to real-time feasibility. This thesis, in particular, is focused on the development of a unified CEMPC framework, combining the potentials of two emerging CEMPCs in the wind turbine area, namely the power-and-energy CEMPC and the quasi-linear parameter-varying model predictive control (qLPV-MPC), for addressing multiple wind turbine structural loads.
The former achieves its convexity by exchanging nominal wind turbine variables, such as blade pitch, generator torque, and rotational speed, with alternative variables in terms of aerodynamic and generator powers and rotor kinetic energy. This results in the OCP containing linear dynamics, convex constraints, and concave objectives to be maximized. Although this framework originally focused on fulfilling power gradient requirements from a grid code, a fatigue load mitigation consideration for fore-aft tower motion was later introduced in the literature. Unfortunately, little attention was paid to the mitigation of the more weakly-damped side-side tower loading, as well as blade fatigue loads.
Such a knowledge gap is filled in this thesis; in particular, both key components' fatigue loads are mitigated by exploiting the individual blade pitching capabilities of the power-and-energy CEMPC framework. Since, in this framework, blade pitch actuation is achieved mainly by manipulating aerodynamic power inside the CEMPC, a redefinition of the latter is necessary to enable such a feature. To be precise, multiple aerodynamic powers, each representing that of a single blade, were employed as decision variables of the CEMPC instead of a single quantity. Further mapping of the aerodynamic powers into side-side blade forces, as well as augmentation of side-side tower dynamics into the CEMPC's internal model, enables counteractive control actions for reducing side-side tower load. Mapping the powers into blade and rotor moments enables alleviation of the blade loads.
On the other hand, the utilization of qLPV-MPC for deploying a passive wind turbine tower resonance prevention by dynamically optimal frequency skipping has been gaining attention in the literature. For enabling active load cancelation in this framework, however, a periodic load estimation is needed. In this thesis, such an estimation scheme is developed, employing a Kalman filtering method. Aligned with the qLPV-MPC implementation for the aforementioned passive method, the internal model of the filter is rendered in a demodulated fashion by applying a model demodulation transformation (MDT) to an extended wind turbine side-side tower dynamics. Measurement signal demodulation (MSD) is utilized for capturing the slow-varying components of wind turbine tower measurements to be fed to the Kalman filter. The filter is thus capable of not only estimating the demodulated periodic load signals but also those of the unknown and unmeasured tower states with good agreement with the ground truth.
The next challenge addressed in this thesis is the provision of an active control method specifically aimed at tackling the side-side periodic loading of the tower. A family of repetitive control methods, namely modulation-demodulation control (MDC), is adopted in this thesis to handle the cancellation of the periodic loading. In principle, MDC consists of output signal demodulation, projecting the frequency component of interest (namely the rotor frequency) in the signal into low-frequency quadrature and in-phase representations. On these axes, diagonal single-input, single-output (SISO) controllers can be designed, resulting in control signals, which, by a modulation process, are translated into a single control signal, being an additive generator torque signal, oscillating at the frequency of the disturbance and thereby canceling it. A phase offset, with its optimal value determined by the plant's phase at the disturbance frequency, is needed and included in the modulation. This results in the full decoupling of the control channels, as well as the correction of an occurring gain sign flip due to the varying excitation frequency, which could have deteriorated the controller's performance and induced instabilities. The MDC extends a conventional tower damper controller specifically aimed at mitigating the tower loading at its natural frequency. As a result, both the tower load components at the natural frequency and the rotor frequency are mitigated simultaneously.
This thesis has, thus, highlighted the significant role various coordinate transformations play in advancing state-of-the-art wind turbine control, be it a transformation of signals into a different set of variables in power and energy terms or into different time scales. The former has enabled the formulation of power-and-energy CEMPC for side-side tower load and blade loads mitigation, extending this framework's fatigue load mitigation capabilities. The latter transformation, demonstrated by the MDT, paves the way for estimating unknown and unmeasurable periodic load and tower states in a demodulated manner, essential in activating the periodic load cancelation feature of the novel qLPV-MPC method. The MDC method has successfully enabled active side-side periodic tower load cancelation by leveraging a modulation-demodulation scheme, another way of transforming coordinates into different time scales where convenient yet effective control system design can be made. This thesis has, therefore, provided elements required for constructing a unified CEMPC framework, where the benefits of the said coordinate transformations may be further harnessed.","","en","doctoral thesis","","978-94-6366-842-2","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:158d75d1-e0f9-4547-bab3-3389e5c0e1f6","http://resolver.tudelft.nl/uuid:158d75d1-e0f9-4547-bab3-3389e5c0e1f6","Input design and data-driven approaches based on convex optimization for fault diagnosis in linear systems","Noom, J. (TU Delft Team Michel Verhaegen)","Verhaegen, M.H.G. (promotor); Soloviev, O.A. (copromotor); Smith, C.S. (copromotor); Delft University of Technology (degree granting institution)","2024","The complexity of automated systems has grown considerably during the past decades. This convolutes the observation of possible faults in these systems. If not being revealed timely, such faults can lead to catastrophic failures. As a result, there is a continuous interest in sophisticated fault diagnosis techniques. Since it is generally desired to diagnose faults in the earliest possible stages, computational challenges are imposed on the algorithms. Whereas the field of fault diagnosis comprises of a large variety of techniques in various categories, these computational challenges appear to emerge wide-ranging.
At the same time, convex optimization has developed into a valuable tool to solve a large variety of mathematical problems with computational efficiency. This computational efficiency is achieved by exploiting favorable structures of the problem. Depending on the specific problem, these structures vary in how difficult they are to recognize or arrange. Moreover, some problems lead to a convex optimization problem naturally, while other problems first need some kind of relaxation or sequential process in order to employ convex optimization.
This thesis explores how convex optimization can be utilized in order to solve fault diagnosis problems with computational efficiency. The state-of-the-art is studied for multiple computationally challenging categories of fault diagnosis: online input design approaches, diagnosis of many concurrent faults, and data-driven approaches. First, online input design approaches facilitate fault diagnosis by computing discriminating input sequences during system operation. Since the input is calculated in real time, those approaches allow only limited computational effort, whereas adequate input determination typically appears to be nontrivial. In this contribution it is shown that an established upper bound on the error probability for linear candidate models with Gaussian noise is concave in the most challenging discrimination conditions. This finding allows the use of sequential convex programs for online determination of a discriminating input with low computational effort.
The second contribution in this thesis regards the cantilever dynamics in high-speed atomic force microscopy. Due to the oscillatory behavior above the scrutinized sample, the cantilever typically has intermittent physical contact with the sample. This leads to a large number of (dynamically dependent) impulsive faults. Instead of performing an intractable explicit examination of all (combinations of) hypotheses, this contribution applies sparse estimation as a convex optimization method in order to diagnose these concurrent faults. In a simulation study, the resulting effect on the sample height reconstruction is discernible both qualitatively and quantitatively with respect to the conventional approach to sample height reconstruction in atomic force microscopy.
The third contribution introduces a novel problem formulation for model-free data-driven fault diagnosis. Instead of using separate time periods for system identification and fault diagnosis, as in typical data-driven approaches, model-free data-driven fault diagnosis aims at simultaneous system identification and fault diagnosis from a single data set. Although this is originally a non-convex bilinear problem, the proposed solution reformulates it as a convex optimization problem using a so-called lifting technique. Furthermore, online evaluation of this optimization problem is facilitated by a newly developed recursive implementation. The proposed methodology is tested both on simulation data and real-life flight test data.
By demonstrating the potential of convex optimization on a deliberate selection of fault diagnosis problems, this thesis serves as a source of inspiration for solving a wider variety of fault diagnosis problems efficiently. Furthermore, various elements related to convex optimization and its recursive implementation presented in this thesis have additional relevance to the general field of control science beyond fault diagnosis. Future applications of the presented methodology can arise, for instance, in data-driven control in the presence of disturbances, or in the recursive blind deconvolution of real-time image sequences.","Fault diagnosis; Convex optimization; Kalman filtering; System identification; Linear systems","en","doctoral thesis","","978-94-6384-567-0","","","","","","","","","Team Michel Verhaegen","","",""
"uuid:ab99217e-5ae7-4322-b4c9-311547a3feb9","http://resolver.tudelft.nl/uuid:ab99217e-5ae7-4322-b4c9-311547a3feb9","Product lifetime extension through design: Encouraging consumers to repair electronic products in a circular economy","van den Berge, R.B.R. (TU Delft Responsible Marketing and Consumer Behavior)","Mugge, R. (promotor); Magnier, L.B.M. (copromotor); Delft University of Technology (degree granting institution)","2024","Our production and consumption patterns of electronic products exceed the limits of what one planet can handle. Prolonging product lifetimes decreases the value losses caused by the destruction of existing products and lowers the amount of e-waste. Repair is an impactful strategy to tackle the issues associated with the production and consumption of electronic products. However, most discarded products are never repaired during their lifetime. The literature has proposed several design-for-repair strategies, predominantly from a technical (engineering) perspective. However, a technically repairable design may not automatically result in repair behavior. Consumers and their behavior play a key role in prolonging the lifetimes of the products we use daily.
The objective of this thesis is to explore the role of design in stimulating consumers to extend product lifetimes via repair. A consumer perspective is adopted to investigate why consumers decide to prematurely replace products and which barriers they face towards repair. Design and marketing strategies to stimulate repair (e.g., support in failure diagnosis, modularity, and lifetime labels) are identified from the literature. The effectiveness, boundaries, and required conditions of these strategies are tested in several empirical studies. These studies showed that high perceived repair self-efficacy, explicit cues guiding the repair act, and specific information about a product’s reliability and upgradeability can increase consumers’ repair intentions.
By adopting a consumer-centric approach, this thesis offers contributions to design research on product lifetime extension and repair. However, creating a repairing society is not solely a consumer’s responsibility. One should realize that product lifetime extension requires a shift in current industry practice and business organization, as well as the design of appropriate policies. Therefore, a systemic approach and cooperation between all involved stakeholders are required. Designers, researchers and policymakers can use our insights to stimulate much-needed consumer repair practices of (electronic) products within a circular economy.
The first part of this dissertation focuses on clustering the nodes of a network or community detection. Here, the nodes of a network are partitioned into several clusters and the objective is to precisely determine the cluster memberships based on only the network topology. Many clustering methods assume that the true number of clusters is known a priori. In Chapter 2, we investigate how exactly to find this number of clusters for a given graph. We discuss several modularity maximization and spectral clustering methods, and we outline how they can be used to find the number of clusters. We compare the performance of several different algorithms by evaluating these methods on benchmark graph models where the ground truth clusters are known.
In the second part, we explore network representations in the hyperbolic space. In Chapter 3, we extend the 2-dimensional random hyperbolic graph model to a hyperbolic space of arbitrary dimensionality. Our rescaling of the model parameters and variables casts the random hyperbolic graph model of any dimension into a unified mathematical framework, such that the degree distribution is invariant to the dimensionality of the space. We analyze the different connectivity regimes of the model and their limiting cases. In Chapter 4, we describe how hyperbolic graphs are built on a connection principle based on similarity, and we identify a class of real-world networks in which the links are driven by principles of complementarity rather than similarity. We propose a framework for embedding complementarity-driven networks into hyperbolic space and we describe the ensuing complementarity random hyperbolic graph model. In Chapter 5, we further investigate the topological properties of the complementarity random hyperbolic graph.
The third and final part of the dissertation centers on semantic networks, which describe semantic relations between words or concepts. In Chapter 6, we systematically analyze the topological properties of a large, multilingual dataset of semantic networks. Our investigation covers both universal and language-specific structural properties of these networks. We examine the roles that the connection principles of similarity and complementarity play in their link formation, and we discuss how a deeper understanding of these organizing principles benefits applications in natural language processing.","Complex networks; Complementarity; Similarity; Hyperbolic geometry; Network clustering","en","doctoral thesis","","978-94-6366-845-3","","","","","","","","","Network Architectures and Services","","",""
"uuid:d9cae0f9-23ca-45ab-83a1-0965014db6b7","http://resolver.tudelft.nl/uuid:d9cae0f9-23ca-45ab-83a1-0965014db6b7","High-Performance Multilevel Class-D Audio Amplifiers","Zhang, H. (TU Delft Electronic Instrumentation)","Makinwa, K.A.A. (promotor); Fan, Q. (copromotor); Delft University of Technology (degree granting institution)","2024","This thesis describes the analysis, design, prototype implementation, and measurement results of high-performance Class-D amplifiers (CDAs) for audio applications.","","en","doctoral thesis","","","","","","","","","","","Electronic Instrumentation","","",""
"uuid:7cd49965-5106-4a28-8609-cbcb7aeec0d5","http://resolver.tudelft.nl/uuid:7cd49965-5106-4a28-8609-cbcb7aeec0d5","Rural futures for young adults: Rural development and regeneration in the Netherlands","Koreman, M.C.J. (TU Delft Urban Development Management)","Korthals Altes, W.K. (promotor); Spaans, M. (copromotor); Delft University of Technology (degree granting institution)","2024","Young adults are essential in the future of rural municipalities. They can revive places in decline and create new opportunities. But what future dreams, plans and opportunities do they have? Embark on a journey through the Dutch countryside to uncover the future dreams of young adults. Delve into the dreams, plans, and obstacles shaping the future of rural municipalities. Through the vibrant tapestry of cultural festivals, witness the revitalization of once-declining communities. Additionally, examine the innovative re-use of vacant farm buildings, offering promising opportunities for regeneration, economic growth and entrepreneurship.
However, amidst these prospects lies a challenge: the political landscape, where urban interests often overshadow rural needs. By shedding light on these dynamics and navigating their complexities, this research aims to empower rural communities. It suggests how to pave the way towards better policies for rural municipalities in the Netherlands, where young adults can build their rural futures.","Young adults; Rural municipalities; Personal futures; Community-led; Rural development; Spatial justice; The Netherlands","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-841-5","","","","","","2024-04-12","","","Urban Development Management","","",""
"uuid:78be5850-2df9-40fe-973d-e537d0d172c0","http://resolver.tudelft.nl/uuid:78be5850-2df9-40fe-973d-e537d0d172c0","Machine Learning-Induced Epistemic Injustice in Medicine and Healthcare","Pozzi, G. (TU Delft Ethics & Philosophy of Technology)","van den Hoven, M.J. (promotor); Duran, J.M. (copromotor); Delft University of Technology (degree granting institution)","2024","The advancement of AI-based technologies, such as machine learning (ML) systems, for implementation in healthcare is progressing rapidly. Since these systems are used to support healthcare professionals in crucial medical practices, their role in medical decision-making needs to be epistemologically and ethically assessed. However, a central issue at the intersection of the ethics and epistemology of ML has been largely neglected. This pertains to the careful scrutiny of how ML systems can degrade individuals’ epistemic standing as receivers and conveyors of knowledge and, thereby, perpetrate epistemic injustice. Since ML systems are powerful epistemic entities that are not easily contestable, and their decision-making rationale is often inaccessible, it is crucial to consider their role in creating imbalances in patients’ disfavor and the ways to mitigate such imbalances. This is especially important when it comes to interactions between patients and physicians, in which questions of credibility, trust, and understanding are central. Against this background, the overarching purpose of this dissertation is to fill this research gap by providing a framework to identify and, on occasion, mitigate epistemic injustices that are ML-induced, i.e., that emerge specifically due to the role that ML systems play in patient-physician interactions.","ethics of AI; epistemology of AI; machine learning-induced epistemic injustice; trustworthy AI; medical machine learning; automated hermeneutical appropriation","en","doctoral thesis","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:110bc70f-0e08-431d-bd41-00293f04ecee","http://resolver.tudelft.nl/uuid:110bc70f-0e08-431d-bd41-00293f04ecee","If it ain't broke, don't fix it: Optimizing the predictive aircraft maintenance schedule with Remaining Useful Life prognostics","de Pater, I.I. (TU Delft Air Transport & Operations)","Mulder, Max (promotor); Mitici, M.A. (copromotor); Delft University of Technology (degree granting institution)","2024","Predictive aircraft maintenance is a maintenance strategy that aims to reduce the number of failures, the number of inspections, the number of maintenance tasks and the aircraft maintenance costs. Aircraft are equipped with health monitoring systems, where sensors continuously measure the condition of the aircraft components. In predictive maintenance, these sensor measurements are used to estimate the time left until the failure of these components, called the Remaining Useful Life (RUL). These RUL prognostics are subsequently used to optimize the aircraft maintenance schedule. There are several challenges that complicate the implementation of predictive aircraft maintenance in practice. In this thesis, the three main challenges are addressed.","Predictive maintenance; Remaining Useful Life prognostics; Aircraft maintenance","en","doctoral thesis","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:2f2dfc76-5e29-4a04-84c1-3d337e3bf645","http://resolver.tudelft.nl/uuid:2f2dfc76-5e29-4a04-84c1-3d337e3bf645","Engineering Synthetic Cells through Module Integration and Evolution","Restrepo Sierra, A.M. (TU Delft BN/Gijsje Koenderink Lab; TU Delft BN/Christophe Danelon Lab)","Danelon, C.J.A. (promotor); Koenderink, G.H. (promotor); Delft University of Technology (degree granting institution)","2024","Life, the most complex and admirable machine that one could think of, has evolved over billions of years to display a beautiful variety of mechanisms that keep cells adapting, self-maintaining, reproducing, and evolving. If we think about it, what is this magic? What are the mechanisms behind life’s origins and wonderful coordination? Attracted by these intricacies, different scientific disciplines have long studied life at all its scales to grasp its fundamental principles. In particular, the field of synthetic biology has set the goal of discerning life to the point that a minimal synthetic cell can be fully recreated in a controlled laboratory set-up. Synthetic cells, modular enough to be crafted by scientists, could not only reveal fundamental insights into how life works, but could also help unlock great biotechnological applications that lie beyond the reach of our current technologies and understanding of life. In this thesis, we delve into how in vitro evolution, module integration, and high-throughput characterization are valuable steps to consider for accelerating the bottom-up assembly of artificial cells.","synthetic biology; synthetic cell; liposomes; cell-free gene expression; module integration; DNA replication; phospholipid biosynthesis; in vitro evolution","en","doctoral thesis","","978-94-6384-563-2","","","","","","","","","BN/Gijsje Koenderink Lab","","",""
"uuid:239cbb59-90d6-49b9-9dc5-ea6addb3d6e1","http://resolver.tudelft.nl/uuid:239cbb59-90d6-49b9-9dc5-ea6addb3d6e1","Homeostasis in intestinal organoids at the single cell level","Kok, R.N.U. (TU Delft BN/Sander Tans Lab)","Tans, S.J. (promotor); Ten Wolde, Pieter Rein (copromotor); Delft University of Technology (degree granting institution)","2024","How does the small intestine maintain itself? What mechanism does it use to achieve homeostasis, and how optimal is this mechanism? We approached these questions mainly using cell tracking in organoids, for which we developed new cell trackers. We found that the intestinal crypt uses a surprisingly simple and effective strategy. Finally, we showed that cell segmentation and tracking is possible in organoids without fluorescent markers.","organoids; small intestine; homeostasis; cell tracking","en","doctoral thesis","","","","","","","","","","","BN/Sander Tans Lab","","",""
"uuid:ca8b3b1e-41db-4481-a5ab-f4f6d446ed2d","http://resolver.tudelft.nl/uuid:ca8b3b1e-41db-4481-a5ab-f4f6d446ed2d","Sustainability of bio-based plastics in a circular economy","Ritzen, L. (TU Delft Design for Sustainability)","Balkenende, R. (promotor); Bakker, C.A. (promotor); Sprecher, B. (copromotor); Delft University of Technology (degree granting institution)","2024","Plastics have become indispensable in modern life due to their versatility and affordability. However, their widespread use has resulted in far-reaching environmental damage, including the accumulation of plastic waste, fossil fuel depletion, and significant greenhouse gas emissions. Bio-based plastics have been proposed as a sustainable, circular solution to the environmental issues associated with plastics. However, bio-based plastics are not inherently sustainable or circular. These aspects are influenced by how a plastic is produced and how it is recovered at end-of-life, implying that careful attention needs to be paid to material development and product design. This thesis explores the sustainability and circularity of bio-based plastics by looking at: how they are perceived by value chain actors, potential recovery pathways in a circular economy, and environmental impact.
Although bio-based plastics have the potential to be sustainable, the emissions associated with producing them depend heavily on the biomass sourcing. At the same time, bio-based plastics are not de facto biodegradable and thus efficient recovery at end-of-life needs to be guaranteed. Circular product design with bio-based plastics requires careful consideration of biomass sourcing and recovery. Although much information regarding these aspects is still missing, the research presented in this dissertation provides some guidelines for circular product design with bio-based plastics. In order to reduce environmental impacts, bio-based plastics should be produced with agricultural by-products or with biomass types with a high conversion efficiency. Biomass for bio-based plastics should be cultivated with minimal use of land, water, chemicals and fossil fuels. Environmental impacts can be reduced further by using renewable energy in the production process. Product designers should also consider what recovery pathway they want to target at end-of-life of a product. The plastic composition and product architecture need to reflect the targeted recovery pathway.","","en","doctoral thesis","","978-94-6384-555-7","","","","","","2024-04-08","","","Design for Sustainability","","",""
"uuid:f3ed96a3-c436-4027-a3fc-5c22a9ee905d","http://resolver.tudelft.nl/uuid:f3ed96a3-c436-4027-a3fc-5c22a9ee905d","Systems for Digital Self-Sovereignty","Stokkink, Q.A. (TU Delft Data-Intensive Systems)","Epema, D.H.J. (promotor); Pouwelse, J.A. (promotor); Delft University of Technology (degree granting institution)","2024","The digital world is evolving toward representing - and serving the interconnection of - natural persons. Instead of depending on the infrastructure of Big Tech companies and governments, users can cooperate and use their hardware to form public infrastructure. Instead of existing by virtue of a reference in some institution's database, users can interact based on a digital representation of their own choosing. It is no longer sufficient to depend on users to act out of system-imposed altruism. A new digital world is emerging that aims to provide systems that respect the rights of users to control their own digital representation. The complete control over one's own representation and all the data that belongs to it is what we know as Self-Sovereignty.
Solutions for digital Self-Sovereignty are widely sought after, though their solution space remains woefully underexplored. Numerous global entities, e.g., the European Union, have stated their support for Self-Sovereign systems. However, many old problems of peer-to-peer systems that have gone ignored for decades resurge with the need for Self-Sovereignty. For example, interconnections in peer-to-peer networks are vulnerable to attacks using fake identities, and attackers can manipulate peers by depriving them of data. As most deployed peer-to-peer solutions offer attackers very little incentive for disruption, we have seen very few attacks. However, cryptocurrencies have shown that these attacks do surface when there is sizable monetary gain for attackers. In order to secure our future digital society, we must define and study these systems for Self-Sovereignty.
In this thesis we take the first steps toward defining the systems that can power a Self-Sovereign ""Web3"" ecosystem. In particular, we explore systems that apply Self-Sovereignty for identity, for public infrastructure, and for the execution of shared code. We describe four prototype mechanisms to form a guide for future work and to derive their general properties. Each mechanism is evaluated as realistically as possible. Thereby, this thesis mostly fulfills an exploratory role to guide the further evolution of our digital world.","anonymity; Web3; Sybil; smart contract; Self-Sovereign; reputation; replication; pseudonymity; privacy; peer-to-peer; network latency; network; local-first; identity management; green; gossip; decentralization; blockchain","en","doctoral thesis","","978-94-6366-839-2","","","","","","","","","Data-Intensive Systems","","",""
"uuid:22d798db-61a0-47c8-a686-3602e6ee62cb","http://resolver.tudelft.nl/uuid:22d798db-61a0-47c8-a686-3602e6ee62cb","Brackish Waters: Integrating Justice in Climate Adaptation and Long-Term Water Management","Brackel, A.K.C. (TU Delft Ethics & Philosophy of Technology)","Doorn, N. (promotor); Pesch, U. (copromotor); Delft University of Technology (degree granting institution)","2024","Brackish waters can be found in transition zones between freshwater rivers and saltier seas. These dynamic coastal landscapes harbor multiple functions such as housing, agriculture, nature, and industry. Because of climate change, existing borders between fresh and saline water - and between land and water - are becoming contested. Extreme rainfall, typhoons, heat waves, and droughts occur more frequently and are expected to intensify. Shifting water levels and chloride concentrations affect which livelihoods and land use practices can be sustained in the future.
Land use transformations may be needed to adapt to climate hazards such as flooding, drought, and sea-level rise. Climate risks can be reduced when people or infrastructures are moved out of areas exposed to climate hazards. Examples of these so-called exposure reduction measures are zoning, managed retreat, buy-outs, the elevation of the water table in agricultural land or projects such as the Dutch Room for the River program. However, land use changes are often contested by the people currently living and working on those lands.
This dissertation aims to contribute to the debate about just transitions in climate adaptation and land use transitions in the Netherlands and beyond. Anticipating climate risk also means anticipating conflicts about what to protect and what to let go. Not everyone will agree about the necessity of these adaptation measures, nor about what ‘just’ climate adaptation actually means at the local level. This research therefore describes the prevalence of competing justice claims in multiple adaptation controversies. At the same time, this dissertation further develops a capabilities-based approach to climate adaptation ethics.","climate adaptation; justice; water management; conflict; involuntary land use change","en","doctoral thesis","","978-94-6384-547-2","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:5d444c43-0e3a-4912-838c-5a9c20ffee97","http://resolver.tudelft.nl/uuid:5d444c43-0e3a-4912-838c-5a9c20ffee97","Comprehensive Human Oversight over Autonomous Weapon Systems","Verdiesen, E.P. (TU Delft Information and Communication Technology)","Dignum, M.V. (promotor); Santoni De Sio, F. (promotor); Delft University of Technology (degree granting institution)","2024","","","en","doctoral thesis","","978-94-6496-075-4","","","","","","2024-04-04","","","Information and Communication Technology","","",""
"uuid:a45acef5-5ef9-4797-be5e-08498566ec8a","http://resolver.tudelft.nl/uuid:a45acef5-5ef9-4797-be5e-08498566ec8a","Wind turbine blade damage detection using aerodynamic noise","Zhang, Y. (TU Delft Wind Energy)","Watson, S.J. (promotor); Avallone, F. (promotor); Delft University of Technology (degree granting institution)","2024","Wind energy is one of the most important renewable energy sources, effectively addressing climate change issues and promoting sustainable development on a global scale. Blade failures may cause long shut-down times and may present a safety hazard. Continuous and real-time monitoring of the blade conditions is helpful for finding blade damage at an early stage and for predicting its development. Non-contact damage detection methods have the advantage of easy and flexible installation and deployment, especially for current in-service wind turbines. This thesis aims to investigate and develop a new non-contact method for wind turbine blade damage detection based on measurements of aerodynamic noise. The principle of the proposed method relies on the fact that damage to the blade may modify the boundary layer over the blade surface and the flow field around the blade, and, as a consequence, alter the noise generated aerodynamically. This noise propagates to the far field and can be measured by microphones, which could provide a remote way to detect blade damage. In this thesis, the detection of two types of damage, trailing edge crack and leading edge erosion, is experimentally investigated in the wind tunnel.
The results show that the proposed aeroacoustics-based approach can effectively detect the damage mentioned above under some circumstances, which might be a promising solution complementing traditional damage detection methods in wind farms in the future.","wind turbine blade damage; aerodynamic noise; trailing edge crack; leading edge erosion; damage detection","en","doctoral thesis","","978-94-6384-556-4","","","","","","2024-04-03","","","Wind Energy","","",""
"uuid:270173e7-6ce6-4a71-ac42-79eab09cce5f","http://resolver.tudelft.nl/uuid:270173e7-6ce6-4a71-ac42-79eab09cce5f","Stress Evolution in Early-Age Cementitious Materials Considering Autogenous Deformation and Creep: New experimental and modelling techniques","Liang, M. (TU Delft Materials and Environment)","Schlangen, E. (promotor); Šavija, B. (promotor); Delft University of Technology (degree granting institution)","2024","Since the introduction of cementitious materials, shrinkage-induced early-age cracking (EAC) has emerged as a significant issue that negatively influences the function, durability, and aesthetics of concrete structures like dams, tunnels, and underground garages. This thesis aims to develop new experimental and modelling techniques that help resolve this long-lasting issue, with a particular emphasis on the EAC induced by autogenous deformation (AD). Unlike thermal and drying deformation, which are induced by heat and moisture transport, respectively, AD is an intrinsic behavior caused by self-desiccation during the hydration of cementitious materials. The AD-induced EAC risk is especially high when it comes to modern (or future) cementitious materials, such as high-performance concrete, ultra-high-performance concrete, and alkali-activated slag concrete.","Early-age cracking; autogenous deformation; creep/relaxation; Temperature-Stress-Testing-Machine; finite element model; machine learning","en","doctoral thesis","","978-94-6366-843-9","","","","","","","","","Materials and Environment","","",""
"uuid:4a1519ba-3542-4d8f-ab91-2342e8f5bb1a","http://resolver.tudelft.nl/uuid:4a1519ba-3542-4d8f-ab91-2342e8f5bb1a","Understanding Adversary Behavior via XAI: Leveraging Sequence Clustering To Extract Threat Intelligence","Nadeem, A. (TU Delft Algorithmics)","Lagendijk, R.L. (promotor); Verwer, S.E. (promotor); Delft University of Technology (degree granting institution)","2024","Understanding the behavior of cyber adversaries provides threat intelligence to security practitioners, and improves the cyber readiness of an organization. With the rapidly evolving threat landscape, data-driven solutions are becoming essential for automatically extracting behavioral patterns from data that are otherwise too time-consuming to discover manually. This dissertation advocates the use of machine learning (ML) to obtain insights into adversary behavior for creating AI-assisted practitioners. However, developing adversary behavior models is challenging since cyber data is often unlabeled, noisy, infrequent, and contains intricate patterns that evolve over time. We demonstrate that sequential features are effective at addressing these challenges. Yet, they have limited interpretability and algorithmic support.
This dissertation starts by defining the notion of explainability as it is currently used within cybersecurity by systematizing available literature in Chapter 2. We find that the literature frequently relies on black-box models that use off-the-shelf explanation methods without considering the explanation stakeholders. In contrast, literature on sequence learning models that are interpretable by design is severely limited.
We address these challenges by developing special algorithms that learn sequential patterns from infrequent events, and evolving data in an unsupervised setting. We utilize these algorithms to create interpretable tool-chains for understanding the behavior of various types of adversaries. We show that it is possible to learn interpretable models (even for complex sequential data in an unsupervised setting) that provide more insights than just prediction probabilities, while achieving competitive performance. In doing so, we encourage the security community to look beyond accuracy scores, and focus on extracting actionable insights from ML models. We make our tool-chains open-source.
The first part of this thesis models the strategies employed by human threat actors. Chapters 3 and 4 develop a novel paradigm of attack graphs (AGs) that are learned directly from intrusion alerts for capturing attacker strategies. The attacker strategies are learned using our S-PDFA model, which is interpretable, fast, and effective. We learn alert-driven AGs from 3 open-source datasets, and show their ability to compress over 1.4 million alerts into 401 AGs in under 5 minutes. The AGs provide actionable intelligence regarding strategic differences and fingerprintable paths. They also reduce analyst alert fatigue by triaging critical attacks.
The second part of this thesis models the capabilities exhibited by automated threat actors (malware). Chapters 5 and 6 develop an explainable sequence clustering tool-chain to automatically characterize the network behavior of malware samples. We use this tool-chain to create behavioral profiles of 1196 real-world malware samples for explaining their capabilities. We also develop a streaming sequence clustering algorithm for real-time behavior profiling, which is evaluated on 5 datasets and against 4 clustering algorithms. By automatically creating behavioral profiles of bot-infected hosts in real-time, we distinguish benign and malicious hosts with 100% accuracy.","Cybersecurity; Explainable machine learning; Behavior modeling","en","doctoral thesis","","978-94-6366-828-6","","","","","","","","","Algorithmics","","",""
"uuid:3f2cad24-d7f5-4b19-9630-9f40207275ec","http://resolver.tudelft.nl/uuid:3f2cad24-d7f5-4b19-9630-9f40207275ec","Research on Urban Heritage Values based on the UNESCO Historic Urban Landscape (HUL) Approach: The case study of Suzhou","Huang, H. (TU Delft History, Form & Aesthetics)","van Thoor, M.T.A. (promotor); Hein, C.M. (promotor); Delft University of Technology (degree granting institution)","2024","A historic city is a dynamic complex consisting of many interrelated and interacting elements. It is therefore unreasonable to assess urban heritage using a single value category; the evaluation of urban heritage values requires a theoretical framework that represents the relationships between these different elements. Moreover, there is still a lack of systematic study of urban heritage values in Chinese academic circles. Therefore, it is necessary to construct a value system of urban heritage by adopting a scientific method.
This study aims to build up an integrated value system to facilitate the identification of urban heritage values, so that the complexity of urban heritage values is revealed through the connections of different elements. The research work includes theoretical construction and a case study. First, HUL is interpreted as a method of spatial-temporal scale by discussing the philosophical framework of HUL. Based on this finding, the gap between HUL at the operational level and heritage value theories is filled. Second, as a case study, the analysis of the ancient city of Suzhou verifies the value system of urban heritage in a practical sense. It also proves that the constructed value system is reasonable and achievable for urban conservation in the Chinese context.","historic urban landscape; spatial-temporal scale; dynamic and structural value system","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-836-1","","","","","","","","","History, Form & Aesthetics","","",""
"uuid:a24dac77-8bfd-4850-826b-c99fa5a7ace2","http://resolver.tudelft.nl/uuid:a24dac77-8bfd-4850-826b-c99fa5a7ace2","Dynamic laser speckle imaging for velocimetry in blood flow: A numerical study","van As, K. (TU Delft ChemE/Transport Phenomena)","Kenjeres, S. (promotor); Kleijn, C.R. (promotor); Bhattacharya, N. (promotor); Delft University of Technology (degree granting institution)","2024","Cardiovascular diseases are one of the leading causes of death worldwide, for example by causing strokes. Timely diagnosis of such diseases is pivotal for a patient’s chance of survival. Furthermore, in the present world in which medical expenses are going through the roof, we can save greatly on costs if certain diseases are detected in an earlier stage. To that end, our research is focused on improving medical measurement techniques, to give doctors a greater arsenal to combat these diseases.
Ideally, a measurement technique is cheap and accurate, all while causing minimal discomfort to the patient. Light-based techniques have previously proven to have great potential to fulfil that role. For example, the tiny device that you can put on your finger, and similarly the sensor in a sports watch, can measure your heart rate using light. For our research we have developed a computer model, so that we can use the power of modern computing. Our model predicts how light is reflected by red blood cells flowing through an artery. The computer can then rapidly simulate many scenarios, producing a lot of data about what the reflected light looks like in each scenario. From that data, we can infer what a certain pattern in the reflected light says about the underlying system: the flowing red blood cells.
As a first step, we have used our model to figure out how we can determine the heart rate from the reflected light. You could argue that that’s nothing special, as your sports watch can already do precisely that, but it’s an important step nonetheless, since our technique is different from what your sports watch does. Namely, the data our technique provides is more complex, but as a consequence it also contains much more information, and thereby yields greater potential once we are able to extract that information from the data.
Therefore, our second step was to determine the exact velocity of the red blood cells from the reflected light, which is quite a magical thing when you think about it: even though we cannot ‘see’ the red blood cells directly, we can still ‘see’ how fast they are moving. Although we succeeded in determining the velocity, in reality a doctor will likely need to do some tweaking to account for patient-specific factors, such as skin tone.
Finally, we studied the disease atherosclerosis, in which accumulating cholesterol causes arteries to become more narrow, which ultimately could lead to a stroke. The narrowing of an artery alters the flow behavior of the red blood cells, which we were able to pick up by studying changing patterns in the reflected light in our simulations. By extension, it should be possible to use reflected light to detect atherosclerosis, rapidly and cheaply flagging patients who are at risk.
We have shown the potential of reflected light techniques for medical diagnosis purposes. Although further research is still required to put these techniques into practice for doctors to use, we have laid the groundwork to enable these techniques in the not-too-distant future.
One factor that can impair the performance of a deep net is a distribution shift between the training data and the test data. Depending on the availability of data or labels, coping strategies for distribution shifts include domain adaptation, domain generalization, transfer learning and multi-domain learning. We first show how domain adaptation can help to mitigate the gap between historic and modern photos for visual place recognition. We show that this can be realized by focusing the network on the buildings rather than the background with an attention module. In addition, we introduce a domain adaptation loss to align the source domain and the target domain. We then move to domain generalization and show that learning domain-invariant representations cannot lead to good performance for domain generalization. We suggest relaxing the constraint of learning domain-invariant representations by learning representations that guarantee a domain-invariant posterior, even though the resulting representations are not necessarily domain invariant. We coin this type of representation a hypothesis-invariant representation. Finally, we study multi-domain learning and transfer learning with the application of deep learning to classify Parkinson’s disease. We show that a temporal attention mechanism is key for transferring useful information from large non-medical public video datasets to Parkinson videos. Weights are learned for the various tasks involved in this Parkinson dataset to decide a final score for each patient.
A deep net is also sensitive to malicious attacks, e.g., adversarial classification attacks or explanation attacks. Adversarial classification attacks manipulate the classification result while explanation attacks change the explanation heatmap but do not alter the original classification results. We notice that the robustness to an adversarial classification attack is linked to the shape of the softmax function and can be improved by using a polynomial softRmax, which is based on a Cauchy class conditional distribution. This also shows that the performance of deep learning is sensitive to the choice of class conditional distribution. Regarding the explanation attacks, we design several ways to attack the GradCAM explanation heatmap to become a predetermined target explanation which does not explain the classification result.
We further explore the influence of human trainers in hyperparameter tuning during the learning of deep nets. A user study is designed to explore the correlation between the performance of a network and the human trainer’s experience of deep learning. Experience of deep learning is found to be correlated with the performance of the deep net.","","en","doctoral thesis","","","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:a324902b-a157-4244-9cf3-1ca627ef641b","http://resolver.tudelft.nl/uuid:a324902b-a157-4244-9cf3-1ca627ef641b","Cleanroom in an SEM","Jeevanandam, G. (TU Delft ImPhys/Hagen group)","Hagen, C.W. (promotor); Kruit, P. (promotor); Delft University of Technology (degree granting institution)","2024","The work described in the doctoral thesis aims to enhance the functionality of a scanning electron microscope (SEM) by integrating miniaturized versions of cleanroom tools used in microfabrication. The thesis presents the integration of substrate heating, in-situ thermal atomic layer deposition (ALD), sputtering, and thermal evaporation within the SEM. These techniques are expected to enable fast and efficient fabrication of proof-of-concept devices with minimal resources. The thesis also discusses the challenges, limitations, and future work of the integrated cleanroom processes in SEM.","","en","doctoral thesis","","978-94-6384-541-0","","","","","","2024-03-22","","","ImPhys/Hagen group","","",""
"uuid:19906f66-bca0-46e6-9af8-584a8ee0a73c","http://resolver.tudelft.nl/uuid:19906f66-bca0-46e6-9af8-584a8ee0a73c","Bacterial chromosome organization by ParB proteins","Tišma, M. (TU Delft BN/Cees Dekker Lab)","Dekker, C. (promotor); Dogterom, A.M. (promotor); Delft University of Technology (degree granting institution)","2024","This thesis explores the mechanisms that underlie chromosome organization in bacteria. Bacteria are considered amongst the simplest living organisms on our planet. They lack the cellular organization found in other domains of life (Archaea or Eukaryotes) and often have simpler life cycles. Over the past decade, we have gained increasing knowledge showing that bacteria allocate substantial resources to precisely organizing their genome within the cell, and to segregating the two genomes to the daughter cells after DNA replication.
In this thesis, I investigated DNA organization and segregation systems in a model system bacterium Bacillus subtilis. I approached this feat both from the in vivo aspect – imaging in a live bacterium, and from the in vitro aspect – observing isolated proteins and DNA molecules. This holistic approach allowed me to gain deep insight into the proteins and mechanisms needed for DNA organization and segregation....","Single-molecule Biophysics; Single-molecule fluorescence; In vitro Assays; Magnetic Tweezers; ParABS; Chromosome Segregation; ParB Protein; Supercoiling; DNA Dyes; ParB-ParB Recruitment; DNA Condensation; Live Cell Imaging","en","doctoral thesis","","","","","","","","","","","BN/Cees Dekker Lab","","",""
"uuid:4a6b3544-c7ce-4456-af3e-000c64d531d7","http://resolver.tudelft.nl/uuid:4a6b3544-c7ce-4456-af3e-000c64d531d7","Variability of the raindrop size distribution: model and estimation uncertainties across different scales","Gatidis, C. (TU Delft Atmospheric Remote Sensing)","Russchenberg, H.W.J. (promotor); Schleiss, M.A. (copromotor); Delft University of Technology (degree granting institution)","2024","Precipitation is a profoundly important meteorological process and a crucial component of the water cycle. Thus, the continuous and reliable monitoring of precipitation at global scale is fundamental for scientific sectors such as numerical weather prediction and hydrology. However, accurately estimating precipitation, its type and intensity at planetary scale remains a notoriously challenging task. While point measurements from rain gauges provide generally accurate direct rain observations, their lack of spatial coverage is a significant limitation. Therefore, global-scale precipitation monitoring heavily relies on remote sensing sensors, such as weather radars (ground-based or spaceborne). Radars are capable of indirectly measuring rainfall over extended domains but with a higher level of uncertainty. For accurate rainfall estimates from radar, the complex microphysical properties of rain must be known or inferred. The drop size distribution (DSD) plays a crucial role by offering valuable insights into the microphysical properties of precipitation and linking radar observations to physical quantities such as rainfall intensity. However, similar to rainfall, DSD exhibits significant variability in space and time. The objective of this PhD thesis is to better understand the small-scale variability of rainfall, contributing to the improvement of quantitative precipitation estimation. 
In this study, various critical yet often overlooked aspects of the DSD, such as DSD measurements, modeling and retrievals across different scales, are investigated.","Drop size distribution (DSD); Rainfall microphysics; DSD retrievals; Rainfall variability; Disdrometer; DSD model; μ-Λ relationship; Scale","en","doctoral thesis","","978-94-6384-545-8","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:a3931544-84fe-4fbd-a227-72ad21a3c402","http://resolver.tudelft.nl/uuid:a3931544-84fe-4fbd-a227-72ad21a3c402","Numerical Analysis of Low-Prandtl Jets in Turbulent Forced Convection Regimes","Cascioli, E. (TU Delft ChemE/Transport Phenomena)","Kenjeres, S. (promotor); Kleijn, C.R. (promotor); Delft University of Technology (degree granting institution)","2024","The current need to ensure an effective and prompt transition of the energy sector towards zero-carbon has renewed the interest for nuclear technology. Small Modular Reactors (SMRs) seem particularly interesting for their reduced capital cost, operational flexibility and enhanced safety and security. Different SMR concepts are being developed around the world and the liquid metal-cooled technology is one of the most convincing design options. Liquid Metal Fast Reactor (LMFR) technology was identified as one of the possible Generation IV reactor options too....","Jets; Low-Prandtl Fluids; Forced Convection; Turbulent Heat Transfer","en","doctoral thesis","","978-94-6366-829-3","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:5330a6bb-a7b3-4345-ab2d-e82ad3a0e527","http://resolver.tudelft.nl/uuid:5330a6bb-a7b3-4345-ab2d-e82ad3a0e527","MOFs in Motion: Piezoelectricity and Rotational Dynamics of linkers in Metal-Organic Frameworks","Mula, S. (TU Delft ChemE/Catalysis Engineering)","van der Veen, M.A. (promotor); Grozema, F.C. (promotor); Delft University of Technology (degree granting institution)","2024","Metal–organic frameworks (MOFs) are a class of hybrid materials with metal-based inorganic nodes connected by organic linkers via strong coordination bonds. These building blocks can be arranged in 3-D crystalline lattices to synthesize structures with varying pore sizes and a variety of structures (tuneability). These hybrid materials possess exceptional porosity and large surface areas, making them suitable for applications in gas separation and storage, catalysis, and biomedical fields. MOFs also exhibit remarkable flexibility, which is determined by the topology of the framework and the degrees of freedom between bond angles in the organic linkers or coordination bonds between the organic linkers and inorganic nodes. One of the major categories of flexibility in MOFs is the rotational dynamics of organic linkers. The structural dynamics can have a pronounced influence on gas adsorption, diffusion and optical properties. Chapter 4 and Chapter 5 study the rotational dynamics of terephthalate linkers in functionalized MIL-53 MOFs by varying the steric interactions between the linkers using computational methods like ab initio molecular dynamics and classical molecular dynamics. Using the remarkable porosity, structural flexibility, and tuneability features of MOFs as central handles, in this thesis, we aim to study the (a) Piezoelectric properties in MOFs for their application as energy harvesters and (b) Rotational dynamics of linkers in MOFs. It is well-known that MOFs possess a high degree of flexibility and permanent porosity. 
High porosity of MOFs leads to low dielectric constants. This, together with higher flexibility of MOFs, makes them promising candidates for piezoelectric energy harvesting. Although all non-centrosymmetric MOFs are piezoelectric, their piezoresponse has hardly been studied thoroughly. Chapter 2 and Chapter 3 of this thesis will investigate the structure-property relationships of piezoelectric properties in MOFs through computational methods, and provide design guidelines that can contribute to the development of high-performing piezoelectrics.","Metal organic frameworks; structure–property relationships; piezoelectricity; rotational dynamics","en","doctoral thesis","","","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:276f2e9f-677b-4e46-8ab5-7de40c8e454c","http://resolver.tudelft.nl/uuid:276f2e9f-677b-4e46-8ab5-7de40c8e454c","Simulating Dynamics of Institutions","Ale Ebrahim Dehkordi, Molood (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Ghorbani, Amineh (promotor); Delft University of Technology (degree granting institution)","2024","In society, institutions are the foundation that governs human behaviour through rules, norms, and regulations. The actions and interactions of individuals are shaped by these institutions, forming a cyclic system with numerous parameters and factors. Altering any of these factors triggers the entire system to transition into a new state that comprises new emergent institutions. This process can take anywhere from days to thousands of years.
Employing agent-based models and simulation techniques enables the study of the emergence and transformation of institutions in a shorter timeframe, with reasonable cost, and under diverse parameters and conditions.
The purpose of this dissertation is to enhance institutional theories by generating new insights, testing hypotheses, and offering support to researchers, historians, policymakers, and social scientists who are studying institutional dynamics. The outcomes of this research may assist in the identification of successful institutions and the comprehension of the factors that contribute to their success....","institutions; institutional modelling; institutional evolution; values; value change; wealth inequality; cooperation; common-pool resources; machine learning; agent-based modelling; modelling purpose","en","doctoral thesis","","978-94-6366-834-7","","","","","","","","","Energie and Industrie","","",""
"uuid:e5b2f672-9dd2-498f-828f-f587b509e298","http://resolver.tudelft.nl/uuid:e5b2f672-9dd2-498f-828f-f587b509e298","High-Temperature Oxidation of Steels Investigating the Kinetics of High-Temperature Oxidation of Steels Through Experimental, Numerical, and Data-Driven Approaches","Aghaeian, S. (TU Delft Team Amarante Bottger)","Bottger, A.J. (promotor); Mol, J.M.C. (promotor); Delft University of Technology (degree granting institution)","2024","High-temperature (HT) oxidation plays a significant role in various stages of the steelmaking process, including hot rolling. When exposed to high temperatures and oxygen partial pressure, the steel composition near the surface can be altered as alloying elements deplete. Additionally, the characteristics of the oxide scale, such as thickness and phase composition, vary depending on the oxidation conditions. Due to the experimental challenges of studying such rapid processes under extreme conditions, predictive models are necessary to estimate the substrate surface and oxide scale composition as well as the general oxidation rate of the alloy....","High-Temperature Oxidation; Diffusion-Based Models; Oxidation Kinetics; Machine Learning Models; Oxidation of Steels","en","doctoral thesis","","978-94-6483-837-4","","","","","","","","","Team Amarante Bottger","","",""
"uuid:7660e4df-420c-4283-ab05-fd5ea6ec5a1b","http://resolver.tudelft.nl/uuid:7660e4df-420c-4283-ab05-fd5ea6ec5a1b","Development of experimental and analytical/modelling methods for the investigation of biomass pyrolysis and gasification in a novel indirect fluidized bed reactor","Tsekos, C. (TU Delft Large Scale Energy Storage)","de Jong, W. (promotor); Padding, J.T. (promotor); Delft University of Technology (degree granting institution)","2024","The modern world is faced with a multitude of environmental and socio-economic issues, stemming from the way that energy is used and converted. Climate change due to anthropogenic activities (use of fossil fuel resources and its associated CO2 equivalent emissions), has greatly affected humankind and nature in general, by leading to extreme weather phenomena and reducing the quality of life especially of people belonging to vulnerable communities. The unsustainable practices of the energy sector and the increased energy and materials needs of the public, have brought the situation to a point where immediate action is required to mitigate the effects of climate change. Furthermore, as became apparent by studying the impact of the COVID-19 pandemic and the Russo-Ukrainian war, the global energy market is heavily exposed to price and availability shocks that have a significant negative impact on the quality of life of people globally also in the very short term. Overall, the transition of the energy sector to a green and renewable alternative is essential and bioenergy constitutes a crucial piece of the puzzle of such a sustainable future....","","en","doctoral thesis","","978-94-6496-063-1","","","","","","","","","Large Scale Energy Storage","","",""
"uuid:158da0dc-e2a8-4c59-8af8-5b50b2b96c94","http://resolver.tudelft.nl/uuid:158da0dc-e2a8-4c59-8af8-5b50b2b96c94","Interplay of Structural and Light-induced Carrier Dynamics in Metal Halide Perovskites","Zhao, J. (TU Delft ChemE/Opto-electronic Materials)","Savenije, T.J. (promotor); Houtepen, A.J. (copromotor); Delft University of Technology (degree granting institution)","2024","As one of the fastest-growing renewable energy technologies, photovoltaics play an increasingly important role in the global energy transition. Over the past decade, metal halide perovskite solar cells (PSCs) have emerged as the most promising candidates for next-generation solar cells, with a certified power conversion efficiency of 26.1% for single-junction cells. Despite these significant advances in performance, understanding the fundamental optoelectronic properties of various compositions is crucial to improving the efficiency and stability of single-junction and multi-junction solar cells, including perovskite/silicon and all-perovskite tandem solar cells. In this thesis, we have investigated the generation, recombination, and extraction of photo-generated carriers in various metal halide perovskites (MHPs) in combination with selective transport layers (TLs), mainly using the time-resolved microwave conductivity (TRMC) technique. Moreover, structural properties were revealed using various techniques including XRD, XPS, and SEM. In addition, different deposition methods of perovskite thin films are studied with the aim of providing insights into the relationship between structure and optoelectronic properties....","","en","doctoral thesis","","","","","","","","2024-03-19","","","ChemE/Opto-electronic Materials","","",""
"uuid:353d390a-9b79-44f1-9847-136a6b880e12","http://resolver.tudelft.nl/uuid:353d390a-9b79-44f1-9847-136a6b880e12","Power to the airborne wind energy performance model: Estimating long-term energy production with an emphasis on pumping flexible-kite systems","Schelbergen, M. (TU Delft Wind Energy)","Watson, S.J. (promotor); Schmehl, R. (promotor); Delft University of Technology (degree granting institution)","2024","The potential of utility-scale airborne wind energy (AWE) systems to contribute significantly to the energy transition hinges on their large-scale deployment, which depends on the cost-competitiveness and complementarity with conventional wind turbines. Central to the assessment of these metrics is understanding long-term energy production, which is influenced by the variability of wind profiles. This thesis investigates the significance of wind profile variability on annual energy production estimation for AWE systems. The study establishes the climatology of vertical wind profiles and expands flight operation models of AWE systems. By synthesising these aspects, a new energy production estimation framework is developed to incorporate variations in the wind profile shape. This framework is utilised to assess the impact of different wind profile shapes on the energy production estimation. The research underlines the need to move away from conventional wind energy calculation methods and offers a more suitable alternative for AWE systems. The framework offers a valuable tool for increasing the understanding of the viability of large-scale deployment of AWE systems.","Airborne wind energy; flexible-kite system; performance modelling and optimisation; energy production estimation; test flight data analysis; power production characterisation; vertical wind profile characterisation","en","doctoral thesis","","978-94-6384-549-6","","","","","","","","","Wind Energy","","",""
"uuid:2b7eca2c-3dbd-48f2-8514-147cab28d1a4","http://resolver.tudelft.nl/uuid:2b7eca2c-3dbd-48f2-8514-147cab28d1a4","Multiscale Extended Finite Element Method for the Simulation of Fractured Geological Formations","Xu, F. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Hajibeygi, H. (promotor); Delft University of Technology (degree granting institution)","2024","In the prevailing context of the 21st century, characterized by a predominant reliance on oil and gas, or in the promising future where green energy shapes a human society committed to net-zero emissions, the role of underground fractured formations in energy production and storage remains pivotal and irreplaceable. Geological faults typically act as non-permeable sealing boundaries for reservoirs used in storage, including those for hydrogen and carbon dioxide. In contrast, artificial fractures can serve as highly permeable conduits for fluid flow into wellbores, particularly in applications such as enhanced geothermal reservoirs. In the past decade, the hazardous consequences of failing to predict the geomechanical behavior of fractured formations have led to a pronounced focus on developing simulation strategies that are both accurate and efficient for fractured formations.
It is challenging to understand formations riddled with fractures. From a computational perspective, complex fracture networks typically demand a much finer unstructured grid. However, using such an unstructured grid is impractical for real-world applications due to its high computational load. Conversely, coarser grids paired with strategies such as homogenization can result in the loss of crucial details. The heterogeneous properties of geological formations, which span large length scales, require simulation strategies to be scalable in order to be relevant.
This thesis proposes a novel approach named the multiscale extended finite element method (MS-XFEM) to tackle these challenges. The challenges related to discretization are resolved by applying the extended finite element method (XFEM), which allows for the use of structured grids. This simplified mesh, however, leads to an augmented matrix size due to the extra degrees of freedom (DOFs) introduced by enrichments. A multiscale approach is therefore combined with XFEM. The computation is performed on the larger yet sparser coarse grid, and the coarse-scale solutions are then interpolated back to the fine-scale mesh. The novelty of this work is to incorporate the fractures into the basis functions only, so that the coarse-scale system is constructed based on a standard finite element method. More importantly, this construction of basis functions is fully algebraic and can be updated locally and adaptively for the simulation of propagating fractures.
This method has been implemented and tested to prove its efficiency and accuracy. All test results confirm the quality of the solutions computed with MS-XFEM when compared to fine-scale XFEM solutions. The basis functions are constructed successfully with the algebraic method. These tests reveal the potential of MS-XFEM for simulating real-world subsurface fractured formations.","multiscale extended finite element method; geological fractured formation; fractures propagation; deformation of fractured formation","en","doctoral thesis","","978-94-6366-827-9","","","","","","","","","Applied Mechanics","","",""
"uuid:bbd11d77-f64a-4498-a0b8-16975f6e1e77","http://resolver.tudelft.nl/uuid:bbd11d77-f64a-4498-a0b8-16975f6e1e77","Computational design of patient-specific orthopedic implants: from micro-architected materials to shape-matching geometry","Garner, E. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Wu, J. (promotor); Delft University of Technology (degree granting institution)","2024","Background: Despite over a century’s worth of technical improvements, the long-term survivability associated with orthopedic implants continues to fall short. In contrast to earlier designs, implant failure is no longer caused by structural failure of the implant itself. Rather, it results from the implant’s long-term detrimental effects on the surrounding bone tissue. Over time, changes in mechanical loading conditions induce a reduction in bone density, increasing the risk of fracture, and destabilizing the bone-implant interface. The mechanisms which drive peri-prosthetic bone loss are complicated and interrelated. Add to this the unique morphological variations among patients, and an optimal one-size-fits-all solution seems unlikely....
To alleviate the data movement bottleneck, contemporary research revisits a concept historically known as Computation-In-Memory (CIM) or, alternatively, Processing-In-Memory (PIM). At its core, CIM emphasizes positioning computational capabilities close to, or within, the memory units storing the data. This placement might be within memory chips, in memory controllers, amid caches, or embedded in the logic layers of 3D-stacked memories. As a computational model, architectures leveraging CIM (referred to as CIM architectures) stand to tackle the issue of data movement overhead inherent in the von-Neumann architecture by diminishing or outright eradicating the data movement between computational locales and data storage areas. Moreover, from a technological perspective, emerging memory technologies, including memristive devices and circuits, show potential to replace traditional memory systems, addressing some of the challenges posed by CMOS-based designs.
Irrespective of the specific CIM architecture deployed to optimize performance or energy efficiency in modern applications, there are substantial practical challenges to address and ponder upon first. Both system designers and developers face these hurdles and design decisions, which must be surmounted for CIM’s widespread acceptance across various computational areas and application domains.
In this dissertation, our focus is twofold: (1) We delve into the acceleration and streamlined execution of various steps in two pivotal application realms: genomics and ML; and (2) We explore several emerging memory technologies alongside circuit and architectural strategies that show promise in enhancing CIM designs, specifically tailored for modern applications.
Therefore, in this thesis, we identify and propose strategies and designs to ameliorate the constrained performance of key kernels in genomics and ML. Recognizing that applications within these realms consist of diverse functions or kernels, it is imperative for a designer to possess a thorough understanding of them. Each function/kernel can be characterized by distinct data and control flows, calling for varied features to be enabled in either a von-Neumann or a CIM architecture. To enhance the efficacy of each function/kernel, we first profile them individually and then within a larger context of their corresponding pipeline, followed by discerning the best avenues for their memory mapping in a CIM architecture. We then undertake a concurrent assessment of essential adjunct components alongside the memory array, commonly referred to as the peripheries. For a designer, proficiency in the applications executable on a CIM system leveraging emerging memory technologies is indispensable. Grasping the fundamental characteristics of CIM and having an overarching view of its scope becomes vital prior to its integration. We aim to aggregate critical application features, improvement opportunities, and design decisions and refine them to their core essence. Through this, we aspire to shed light on present design options and identify kernels demanding heightened attention. Such insights can be instrumental in revealing prospective directions, encompassing supported kernels along with their respective merits and trade-offs.
We exploit emerging technologies and architect state-of-the-art CIM designs that optimally serve the targeted kernels, keeping a holistic improvement perspective at the forefront. Delving into emerging (memory) technologies, such as memristive devices like PCM and STT-MRAM, is crucial. These devices provide a suite of advantages, including non-volatility, compactness, and a natural aptitude for conducting logical operations (for instance, the logical AND). Additionally, other emerging technologies, such as integrated photonics, have the potential to enhance the CIM paradigm further with their capacity for high-frequency and low-latency functions. Our ambition is to integrate multiple such technologies, harnessing their distinct attributes, to craft a CIM design that surpasses the SotA counterparts across key benchmarks, be it in execution speed or energy.
This thesis demonstrates that when CIM is fused with emerging (memory) technologies, there is a marked enhancement in the performance of several Genomics pipelines and Machine Learning applications. It is our aspiration and conviction that the evaluations, methodologies, and findings detailed in this dissertation will empower the broader community to comprehend and address contemporary and upcoming challenges that revolve around enhancing the performance and energy efficiency of modern applications through the integration of (re)emerging computing paradigms and technologies. Additionally, our work provides insights for adapting these technologies to novel applications, ensuring they deliver optimal benefits.","Computation-In-Memory; Processing-in-Memory; Bioinformatics; Computer Architecture; Hardware/Software Co-Design; Memristor","en","doctoral thesis","","978-94-6384-534-2","","","","","","","","","Computer Engineering","","",""
"uuid:03dae8b4-4b3e-49af-8783-807882c62338","http://resolver.tudelft.nl/uuid:03dae8b4-4b3e-49af-8783-807882c62338","Analysis and Synthesis of Shell Flexures","van de Sande, W.W.P.J. (TU Delft Mechatronic Systems Design)","Herder, J.L. (promotor); Delft University of Technology (degree granting institution)","2024","Compliant shell mechanisms are defined as spatially curved thin-walled structures able to transfer or transform motion, force or energy through elastic deflection. They are a sub-category of compliant mechanisms, which also gain their motion from elastic deformation. As such they store energy during motion, in addition to providing desired kinematics. One major benefit of this attribute is that several functions of a mechanism or a machine can be integrated into a single monolithic part; this is often called function integration.
Certain force-deflection behaviour can be purposefully designed by tailoring the energy storage over the range of motion. This is useful for passive exoskeletons where shell mechanisms are used to compensate the user's body weight and thereby decrease the fatigue accumulated during work. Other applications can be medical devices which often need specific kinetics while operating in a small environment. Shell mechanisms or shell flexures provide different kinetic behaviour than their flat counterparts: the wire flexure and leaf spring flexure. These properties of shell flexures can be leveraged to create more compact force generators.
Shell mechanism research is a relatively new field, with articles introducing novel designs with a specific behaviour in mind, such as constant force or moment generators. The state of the art presents what shell mechanisms are capable of. However, the state of the art provides little guidance in how to analyse and design shell mechanisms in general. The objective of this thesis is to propose tools for the analysis and design of compliant shell mechanisms or flexures and to develop understanding of this class of mechanisms. This thesis is divided into three parts.
Part I presents the eigenscrew decomposition as a tool to understand and design the kinetics of all compliant (shell) mechanisms. Part II discusses the properties of a buckled tape spring and a method to synthesise a wide array of force-deflection behaviour. In Part III, a novel category of shell mechanisms is introduced. A curved surface is patterned with a lattice, which is able to deform in the membrane of the shell. This is opposed to other shell mechanisms that work primarily through the bending of the membrane.","","en","doctoral thesis","","978-94-6366-837-8","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:0825bf0c-33d6-4ec9-b2e1-ce3870596537","http://resolver.tudelft.nl/uuid:0825bf0c-33d6-4ec9-b2e1-ce3870596537","Navigating a Petroleumscape: Shaping transnational oil modernity at the crossroads of global flows and local territories","Sarkhosh, R. (TU Delft History, Form & Aesthetics)","Hein, C.M. (promotor); Karimi, P. (copromotor); Delft University of Technology (degree granting institution)","2024","The thesis explores Ahwaz's transformation throughout the 20th century, emphasizing the profound impact of the oil industry on the city's architectural and urban development. From its emergence in obscurity to its prominence before the Iran-Iraq war, Ahwaz's journey mirrors the far-reaching influence of oil on urban modernity. The narrative weaves a complex tapestry of global dynamics and local distinctiveness, detailing how oil shaped not only the city's landscape but also its identity. The research underscores the intricate interplay between global forces and local characteristics, showcasing Ahwaz as a resilient blend of history, culture, and urban development. It highlights the city's defiance of simplistic categorizations, celebrating its diverse industries and the convergence of global influences. The study concludes by positioning Ahwaz as a living testament to the enduring legacy of the past, an ever-evolving urban canvas that reflects the complex interplay of architecture, culture, and power, perpetuated by the transformative force of petroleum.","petroleumscape; global flows; local territories; modern architecture & urban planning; cross-cultural exchanges; Ahwaz; Iran","en","doctoral thesis","","978-94-6366-832-3","","","","","","2024-03-11","","","History, Form & Aesthetics","","",""
"uuid:ca535619-5459-4345-adf4-8c768c6934f6","http://resolver.tudelft.nl/uuid:ca535619-5459-4345-adf4-8c768c6934f6","Navigating complexity: agent-based simulations for climate-resilient economies","Taberna, A. (TU Delft Policy Analysis)","Filatova, T. (promotor); Nikolic, I. (promotor); Delft University of Technology (degree granting institution)","2024","Amid the Anthropocene, the escalating threat of flooding, driven by extreme rainfall and sea-level rise, challenges societies worldwide. In the last two decades, floods have impacted billions and inflicted colossal economic losses. Concurrently, the global trend towards urbanization predicts that by 2050, about 70% of the global population will inhabit urban areas. This demographic trend, heavily influenced by agglomeration forces, further underscores the vulnerability of these urban centers, many of which are precariously situated in flood-prone areas. Given the confluence of escalating climate risks and the surge in populations settling in vulnerable zones, a pressing question emerges: How will rapidly urbanizing coastal societies adapt to intensifying flood risks in the face of escalating climate-induced shocks and changing regional economic landscapes?
To address this multifaceted issue, this dissertation delves into the complex nexus between climate shocks, regional economic dynamics, and societal responses. Central to this exploration is the creation of innovative simulation tools tailored to incorporate the autonomous adaptation strategies of various actors within a regional economic framework. This thesis stands at the forefront of a new wave of computational models that encompass risk and embed resilience into complex adaptive systems.
I commence by examining the current advancements and gaps in employing Agent-Based Models to unravel the dynamics of flood risk and adaptation assessments. In this exploration, I underscore the pivotal role of human actions in shaping risks and resilience within flood-prone urban settings.
Building on this foundation, I introduce the Climate-Economy Regional Agent-Based (CRAB) model. The CRAB model employs an evolutionary perspective to provide a comprehensive view of the balances struck between the driving forces of economic agglomeration and the counteracting pressures of climate hazards. It focuses on the decision-making of heterogeneous agents, representing households and firms, as they navigate the choice of relocation between safer inland regions and hazard-exposed coastal zones.
Venturing further, I enhance the CRAB model to embody autonomous household adaptation behaviors, drawing from empirical data. Here, I challenge the traditional reliance on rational agents in sustainability models, unveiling a notable adaptation deficit when juxtaposed against boundedly-rational choices gleaned from real-world surveys. This nuanced exploration uncovers how varied adaptive capacities can potentially accentuate inequality and impede resilience.
Subsequently, I include in the CRAB model a layered risk strategy that encompasses an array of climate change adaptation measures. This refined model, enriched by extensive behavioral and flood data, bridges existing gaps in the current understanding of feedback loops and cascading effects triggered by flood shocks within a socio-economic system of boundedly-rational agents.
In conclusion, this dissertation pioneers a unique trajectory in understanding societal responses to the specter of flooding, offering invaluable insights and frameworks for devising future climate-resilient strategies.","Agent-based models; resilience; flood risk; agglomeration forces; survey; climate change adaptation; distributional impacts; path dependency","en","doctoral thesis","","978-90-361-0736-5","","","","","","","","","Policy Analysis","","",""
"uuid:facb282f-74a1-446d-bbc6-5ed294602ed2","http://resolver.tudelft.nl/uuid:facb282f-74a1-446d-bbc6-5ed294602ed2","Microstructural phenomena in pearlitic railway steels","Mattos Ferreira, V. (TU Delft Team Maria Santofimia Navarro)","Sietsma, J. (promotor); Petrov, R.H. (promotor); Delft University of Technology (degree granting institution)","2024","The railway industry constantly seeks advancements in train speed, axle load capacity, reliability, and rail longevity. Rails undergo complex and severe loading during operation due to wheel/rail contact, resulting in two main damage mechanisms: rolling contact fatigue (RCF) and wear. Furthermore, frictional heating during wheel/rail contact causes local temperature rise, leading to microstructural processes on the rail surface, known as white etching layer (WEL) and brown etching layer (BEL). This project aims to gain insight into the microstructural changes in rail steels, with a primary focus on understanding the origins of detrimental surface layers like WEL and BEL. By achieving this understanding, the lifespan of the rails can be extended and the maintenance frequency can be reduced, which has significant effects on the sustainability of the railway network as well as overall life cycle costs. Additionally, the project explores the microstructural characteristics of recently developed steel grades with enhanced resistance to rolling contact fatigue....","","en","doctoral thesis","","978-94-6483-777-3","","","","","","","","","Team Maria Santofimia Navarro","","",""
"uuid:812de44e-36fb-4e5d-acf7-973f38d965de","http://resolver.tudelft.nl/uuid:812de44e-36fb-4e5d-acf7-973f38d965de","Design for urban vertical-axis wind turbines: balancing performance and noise","Brandetti, L. (TU Delft Wind Energy)","van Wingerden, J.W. (promotor); Watson, S.J. (promotor); Mulders, S.P. (copromotor); Delft University of Technology (degree granting institution)","2024","In urban areas, vertical-axis wind turbines (VAWTs) show promise due to their omnidirectional design, addressing challenges faced by traditional horizontal-axis turbines (HAWTs). Despite significant progress in urban VAWTs, extensive multidisciplinary research is needed to optimise their efficiency and use in such environments.
This dissertation addresses this gap in four aspects. First, a low-fidelity noise model based on state-of-the-art literature is developed, enabling fast and acceptably accurate predictions of the primary noise sources of an urban VAWT during preliminary design stages. Then, a wind speed estimator and tip-speed ratio (WSE-TSR) tracking controller is designed to maximise the power production of an urban VAWT in turbulent wind conditions. The design of this WSE-TSR tracking controller turned out to be an ill-posed problem, impacting the turbine and controller performance in the presence of model uncertainty. This is followed by an approach that combines frequency-domain analysis and multi-objective optimisation, demonstrating its effectiveness in assessing and calibrating torque control strategies, thereby contradicting earlier assumptions and establishing new perspectives on performance optimisation for real-world wind turbines. Based on these collective findings, a decision-making framework is derived, capable of striking a balance between VAWT performance and noise acceptance, allowing psychoacoustic annoyance to be considered as a metric for the first time.
In summary, this thesis contributes significantly to advancing the understanding of the complex dynamics of VAWTs, specifically focusing on human acoustic perception nearby, laying the groundwork for the successful integration of VAWTs into urban landscapes.","vertical-axis wind turbines; aerodynamics; aeroacoustics; control; optimisation; noise","en","doctoral thesis","","978-94-6496-046-4","","","","","","","","","Wind Energy","","",""
"uuid:2c4f8ca0-f44a-485a-99d6-d7952c902fa2","http://resolver.tudelft.nl/uuid:2c4f8ca0-f44a-485a-99d6-d7952c902fa2","Tackling the weathering with low ranks: Handling the complex near surface of land seismic data with low-rank-based methods","Alfaraj, Ali (TU Delft ImPhys/Medical Imaging; TU Delft ImPhys/Verschuur group)","Verschuur, D.J. (promotor); Herrmann, F.J. (promotor); Delft University of Technology (degree granting institution)","2024","Imaging and inversion with seismic data recorded with sources and receivers at the surface are powerful tools to infer knowledge about the subsurface. However, creating an image with seismic data is unfortunately not as easy as taking a picture with a smartphone. The estimated subsurface models in many situations are far from ideal due to the low-quality nature of the data. One of the reasons can be weathering of the near-surface geology that generates unconsolidated material characterized by slow velocity with a rapidly varying, heterogeneous and season-dependent nature. Acquiring seismic data on such a near-surface leads to complex wave propagation, posing challenges to imaging and inversion. In this dissertation, we tackle the weathering effects during seismic data processing, imaging and inversion with low-rank-based methods.
One approach to tackle the weathering effects on seismic data is removing them during seismic data processing. To do so for 2D data, we propose a model-independent low-rank-based near-surface estimation and correction in the midpoint-offset-frequency domain. In this domain, ideal data exhibit low rank structures, which get destroyed due to the influence of the weathering layers. Accordingly, the method makes use of the redundant nature of seismic data that allows for accurate approximation by low-rank matrices. To estimate the time shifts that compensate for the weathering effects, we cross-correlate a data set influenced by the near-surface weathering layers with its low-rank approximated version. Since we estimate time shifts (commonly referred to as statics) and no longer the directly low-rank approximated data, we avoid losses of the amplitude information. To improve the estimated statics and to alleviate the need for accurate rank selection for low-rank approximation, we implement the method in an iterative and multi-scale fashion. Since the low-rank approximation deteriorates at high frequencies, we utilize its better performance at low frequencies and exploit the common statics amongst different frequency bands. Using synthetic and field data, we demonstrate the performance of the proposed method, which requires no knowledge of the subsurface model, demands minimal data pre-processing, and provides accurate solutions with high computational efficiency compared to existing techniques.
When seismic data acquired on a complex near-surface are additionally subsampled for economic reasons, such as monitoring of sequestrated carbon dioxide and hydrogen, the problem is further exacerbated. Both the weathering layers and randomized subsampling render coherent energy incoherent. Therefore, they both contribute to destruction of the low-rank structure commonly associated with statics-free densely-sampled data. Frugal data acquisition in complex near-surface regimes makes separation of the distinct sampling and weathering effects on the rank structure difficult, which as a result leads to poor reconstruction. To overcome that, we propose to reconstruct the data with joint rank-reduction-based near-surface correction and interpolation. The method simultaneously accounts for the weathering and subsampling effects to provide accurate reconstruction. Since low-rank approximation is used for near-surface correction, we also utilize it in rank-minimization interpolation as a cost-free initial solution to the optimization problem. As both near-surface correction and interpolation operate in the midpoint-offset domain, we avoid the cost of transformations back and forth between the source-receiver and midpoint-offset domains. Consequently, the proposed reconstruction, which shows its potential on synthetic and field data, additionally increases the computational efficiency.
While the aforementioned near-surface correction deals with 2D data, the Earth is a 3D object that requires acquisition of 5D data for proper subsurface model estimation. For 5D data, the limitations and challenges of conventional near-surface correction methods are magnified. To avoid them, we propose a 5D model-independent low-rank-based near-surface correction. To compute the singular value decomposition of 5D data volumes with 1 temporal and 4 spatial dimensions, which is necessary for low-rank approximation, we need to perform matricization of the 5D data, i.e. organization of the 5D data into matrices. At the same time, it is essential that the chosen organization domain reveals the underlying low-rank structure. Therefore, we first analyze different matricization domains that can be used to organize the 5D data. Similar to the 2D case, we show that --- in the potential domain --- the near-surface weathering layers render coherent energy incoherent, which results in slowly decaying singular values compared to the statics-free data that are of low-rank nature. The proposed method, which we show on synthetic and field data, enjoys the same benefits as the method proposed for 2D data, in addition to being able to capture the 3D nature of the Earth.
Due to the complex nature of the near-surface and due to its impact on the subsurface model, the near-surface model gets treated separately from the subsurface model. However, the ultimate goal is not to remove the near-surface effects with data processing, but to accurately estimate near- and sub-surface models simultaneously. To do so, we use the inherent scale separation of joint migration inversion (JMI), which estimates a low-wavenumber velocity and high-wavenumber reflectivity. Since rapid variations in surface elevation and near-surface model result in high wavenumber effects, they end up affecting the reflectivity model. At the same time, the estimated reflectivity influences velocity estimation. Consequently, JMI provides erroneous subsurface models in the presence of complex weathering layers. To mitigate that, we use multi-scale low-rank updates in the reflectivity domain. The proposed method reduces the near-surface effects at the initial iterations, but it allows more details of the near-surface model to enter the solution at later iterations. In the end, we estimate accurate near- and sub-surface models simultaneously without the need to bypass the weathering layers.","Near surface; Low-rank; Weathering; Interpolation; Imaging; Statics; Land seismic data; Velocity estimation; Inversion","en","doctoral thesis","","978-94-93330-65-8","","","","","","2025-03-06","","","ImPhys/Medical Imaging","","",""
"uuid:e508869c-25b6-41b1-b8f0-1b51a4180717","http://resolver.tudelft.nl/uuid:e508869c-25b6-41b1-b8f0-1b51a4180717","Wet Biomass Treatment: Energy and Sanitation System Concepts","Recalde Moreno Del Rocio, M.D.R. (TU Delft Energy Technology)","Aravind, P.V. (promotor); Boersma, B.J. (promotor); Delft University of Technology (degree granting institution)","2024","The global market for the wastewater treatment industry has been compelled to devise new concepts due to several factors. Currently, approximately 80% of untreated wastewater is discharged worldwide. The conventional technology for municipal solid waste treatment, which has been in use for over a century, is only partially effective in terms of effluent quality and energy balance. The highly flammable CH4 gas emitted from municipal solid waste landfills impacts the chemical composition of the atmosphere, potentially affecting the Earth. Thus, new concepts for wastewater treatment should address multiple social challenges related to energy and sanitation technologies. A well-implemented wastewater treatment system not only contributes to mankind's wellbeing, which results in more effective population planning, but also ensures the optimal utilization of environmental resources...","Solid oxide cell; exergy destruction; exergy efficiency; wet biomass; energy efficiency","en","doctoral thesis","","978-94-6473-422-5","","","","","","","","","Energy Technology","","",""
"uuid:bf291028-ef7e-4a6c-9a22-9bae9fde591c","http://resolver.tudelft.nl/uuid:bf291028-ef7e-4a6c-9a22-9bae9fde591c","Rethinking Privacy in the Age of Social Robots","Coggins, T.N. (TU Delft Ethics & Philosophy of Technology)","van de Poel, I.R. (promotor); Kudina, O. (copromotor); Delft University of Technology (degree granting institution)","2024","In the introduction of this thesis, I contend that robot ethics, as a research field, generally treats privacy as the appropriate distribution of information, and therefore overlooks privacy concerns raised by robots beyond this conceptualization’s purview. I illustrate this contention by evaluating a hypothetical case involving a household companionship robot via contemporary robot ethics literature focusing on privacy. I argue that this corpus cannot identify a variety of privacy concerns raised by such robots because it relies on a narrow interpretation of privacy that can only recognize privacy harms of an informational nature. I posit that privacy represents considerably more than implied by the interpretations offered by robot ethicists. Most crucially, it signifies our need to withdraw sporadically from social engagements. Considering that robots - like the one described in the case mentioned - simulate what it is like to interact with other humans, I argue that such machines will produce privacy concerns when they successfully create the impression that another person is present during moments when their users wish to be left alone. I highlight that some researchers from robot ethics have discussed issues of this kind but rarely frame them as privacy concerns, thus leaving a significant literature gap I attempt to fill via my research. 
I conclude the introduction by presenting a close reading of relevant privacy scholarship to evidence the claims made above and lay the theoretical foundation for the dissertation....","privacy; social robots; human-robot-interactions; housework; norms; performativity; robot ethics","en","doctoral thesis","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:2879077b-f96f-4380-800a-d796611ba26a","http://resolver.tudelft.nl/uuid:2879077b-f96f-4380-800a-d796611ba26a","Bridging Worlds: Augmented Reality for Pedestrian-Automated Vehicle Interactions","Tabone, W. (TU Delft Human-Robot Interaction)","de Winter, J.C.F. (promotor); Happee, R. (promotor); Delft University of Technology (degree granting institution)","2024","This thesis explores how automated vehicles will interact with pedestrians in the urban environment through augmented reality technology. Nine distinct AR interfaces were designed, developed, and evaluated to assess how different design elements (symbols, text, colour) and distinct mappings of the AR (on the road, on the vehicle, or head-locked) would affect comprehension, and ultimately whether the pedestrian would trust and be convinced to cross in front of an automated vehicle displaying a safe message. Using increasing levels of ecological validity, from an online questionnaire to a CAVE simulator and an AR HMD experiment, the evaluation also explored how different AR anchoring (and mapping) positions affect pedestrians' crossing initiation times and the intuitiveness of the message. The thesis also explores the use of diminished reality (removal of information) to assist pedestrians in occluded scenarios, as well as the utilisation of Large Language Models in evaluating qualitative data in experiments. The outcomes of the thesis are a set of guidelines based on empirical evidence on how to design effective AR interfaces which promote safe and transparent interactions between pedestrians and automated vehicles.","automated vehicles; Augmented Reality; pedestrians; interactions; virtual reality; simulators; Eye-tracking; Large Language Models; Interface design","en","doctoral thesis","","978-94-6496-045-7","","","","","","","","","Human-Robot Interaction","","",""
"uuid:8bd55ce1-adc7-486d-9bfa-7aa7b929bbe3","http://resolver.tudelft.nl/uuid:8bd55ce1-adc7-486d-9bfa-7aa7b929bbe3","Sand-Mud Morphodynamics","Colina Alonso, A. (TU Delft Coastal Engineering)","Wang, Zhengbing (promotor); van Maren, D.S. (promotor); Delft University of Technology (degree granting institution)","2024","The world’s coasts and deltas offer a multitude of valuable ecosystem services, providing safety against flooding and economic benefits. Many of these systems are, however, under pressure from climate change and increasing human activities. Protecting these systems and preserving their multiple functions requires a thorough understanding of their morphodynamic behaviour. The sediment bed in many coastal systems worldwide is composed of two sediment types: sand and mud. While most previous research focused on the individual sediment dynamics of sand and mud, still little is known about how combined sand-mud morphodynamics differs from the sum of individual sediment fractions. In order to assess the impacts of anthropogenic interventions and climate change, we thus need to better understand sand-mud morphodynamics.
This research aims to improve the understanding of large-scale morphodynamics in sand-mud tidal systems. This is done by investigating processes related to long-term deposition, sediment supply, sand-mud interaction, and segregation of sand and mud. We focus on generic idealized cases, as well as on case studies in the Wadden Sea — an example of a heavily-impacted system whose existence is threatened by sea level rise (SLR). Unique long-term data sets of its hydrodynamics, bathymetry and sediment composition are available, making this an excellent area to study the morphological responses to human interventions in detail, and to improve our understanding of sand-mud morphodynamics.
Analysis of the morphological evolution after a closure in the Western Dutch Wadden Sea (Chapter 2) illustrates the importance of distinguishing between the response of sandy and muddy sediments when analyzing the morphodynamic impact of an intervention. Our findings reveal that sand and mud respond on different temporal and spatial scales. Moreover, the results show that the contribution of mud to the total infilling was much larger than the average mud content in the top layer of the bed, because mud preferentially deposits in areas with high net sedimentation rates. This demonstrates that the contribution of sediment types to morphological change is not necessarily reflected by the spatial bed composition.
Until now, the availability of mud to the Wadden Sea was poorly known, even though this availability is crucial for predicting the response to future climate change. Therefore, a first system-wide mud budget of the Wadden Sea has been developed (Chapter 3), revealing a nearly closed balance between the sources and the sinks. This observation implies that disturbing the mud balance at one location will impact downdrift areas. Anthropogenic sediment extraction provides the second largest sink, even surpassing salt marsh deposition. Field data suggest that a mud deficit already exists in some areas of the Wadden Sea, which will only become more pronounced with increased SLR rates. Mud is thus a finite resource similar to sand, and should be treated as such in sediment management strategies. Furthermore, local interventions may have consequences in downdrift areas, stressing the need for a cross-border perspective.
The influence of small-scale sand-mud interaction on large-scale modeled morphodynamic development has been studied by implementing two abiotic interactions (erosion interaction and roughness interaction) in a process-based model (Chapter 4). Model output was converted into metrics that describe the macro-scale configuration of the modeled systems, allowing a quantitative comparison of scenarios. The results demonstrate that sand-mud interaction can significantly impact tidal basin evolution, especially having a large influence on the intertidal flat shape, size and composition.
Lastly, we have seen that the mud content of the sediment bed in tidal systems is often bimodally distributed, indicating a preferential sand-mud segregation (Chapter 5). Bimodality represents the existence of two stable equilibrium conditions, which result from sediment deposition processes (and not erosion processes), and can be expected for a large range of suspended sediment concentrations in sand-mud systems. In order to correctly reproduce this bimodal character in process-based models, and therefore correctly model the bed sediment composition, one must account for erosion interaction in the model set-up — despite the role of deposition as a driving mechanism.
In conclusion, this dissertation illustrates the importance of a sand-mud perspective in morphodynamic studies, considering the contribution of both sediment types to the morphodynamic development as well as their interactions. We have seen that advancing our understanding of sand-mud morphodynamics requires combined data-based and modeling approaches, adopting a system-wide perspective, and considering the interactions between the various spatial and temporal scales. Morphological metrics, such as the ones that have been presented, are essential for the evaluation and comparison of model results and coastal morphology worldwide. Enabling successful and sustainable management of coasts and deltas will require further increasing our understanding of sand-mud morphodynamics through additional measurements and modeling studies. Developing a system understanding should be at the heart of all of these studies.","sand-mud; morphodynamics; tidal basins; Wadden Sea; numerical modeling","en","doctoral thesis","","978-94-6366-816-3","","","","","","","","","Coastal Engineering","","",""
"uuid:b265aa7e-fb37-42b9-9104-c00f5d3e0453","http://resolver.tudelft.nl/uuid:b265aa7e-fb37-42b9-9104-c00f5d3e0453","Smart Grid standards policy in context: A discursive-institutionalist analysis of government intervention in the European Union and the United States","Muto, M.S. (TU Delft Organisation & Governance)","Herder, P.M. (promotor); de Bruijn, J.A. (promotor); Delft University of Technology (degree granting institution)","2024","Starting around 2005 and for several years, the creation of a “Smart Grid” became a key element in the quest of policymakers to operationalize the goal of “sustainable development”. In official discourse, the Smart Grid promised improved energy security and a way to support the realization of ambitious targets on reduced carbon emissions and increased use of renewable resources. Additionally, the Smart Grid was presented with the lure of “green innovation” and jobs.
The imperative of realizing these vision(s) of the Smart Grid put unprecedented focus on the world of ICT standardization. Without an agreed set of interoperability standards, promising pilot projects would not scale in a meaningful way, and the European Union (EU) and the United States (US) federal government departed from established practice within this policy domain and intervened to encourage, coordinate and accelerate standardization activities.
This thesis explores how such a policy of intervention was constructed in EU and US official policy texts. It does this by building a conceptual framework with elements from discourse theory and neo-institutionalism that aims to understand the factors of policy change in a highly technical area in the absence of crisis or repeated policy failure. How is the need to develop an agreed set of ICT interoperability standards understood as a policy problem, and how is intervention in the standardization process legitimated? What does the policy response to the challenge of Smart Grid standardization say regarding current understandings about the proper role of government and the potential for industry self-organization in policy areas relating to new technologies?
In pursuing the above questions, this thesis contributes to our understanding of a field that is under-developed yet of growing importance. As our societies are increasingly attempting to solve important challenges through the large-scale application of ICTs (Smart Transport, Smart Homes, Smart Cities), we need a better understanding of policy alternatives that go beyond the typical dichotomy of legislation versus self-regulation.","","en","doctoral thesis","","","","","","","","","","","Organisation & Governance","","",""
"uuid:e76f59d3-6c81-417e-90bd-b5270a2c55ad","http://resolver.tudelft.nl/uuid:e76f59d3-6c81-417e-90bd-b5270a2c55ad","Interface-resolved simulations of dense particulate flows: Studies on sedimentation and slurry pipe flow","Shajahan, M.T. (TU Delft Multi Phase Systems)","Breugem, W.P. (promotor); Poelma, C. (promotor); Delft University of Technology (degree granting institution)","2024","Dense suspension flows, both in the natural environment and industrial settings, are complex phenomena with significant implications. From rivers shaping landscapes to industrial processes involving slurry transport, these flows hold a prominent position in numerous sectors. This thesis delves into a specific facet of these intricate flows: slurry transport within horizontal pipes. Slurry, a mixture of solid particles and a viscous fluid, presents a challenging arena due to its dynamic nature, encompassing multiple flowregimes and diverse phenomena that govern its behavior. This research seeks to unravel the complexities of slurry transport, presenting a comprehensive analysis using interface-resolved Direct Numerical Simulation (DNS). In the context of slurry
transport (also referred to as sediment transport), a horizontal pipe is a conduit where particles suspended in a viscous fluid are transported. The dynamics of this transport are governed by several dimensionless numbers, each highlighting distinct aspects of
the flow. Prominently, in this work we explore the role of the Reynolds number (Re) which encapsulates the balance between inertial and viscous forces, the Galileo number (Ga) which characterizes the competition between inertial and viscous effects in particle settling under gravity, and concentration of particles which has an influence on particle-particle and particle-fluid interactions. Key flow dynamics that determine the behaviour of the flow include turbulent mixing, gravitational settling of particles, and shear-induced particle migration due to particle-stress gradients. Practical applications of slurry transport are numerous, spanning industries such as mining, agriculture, and chemical processing. Slurry transport is of particular relevance to the dredging industry in the Netherlands to maintain its inland waterways and for land reclamation projects. However, pipeline operators grapple with issues ranging from pressure drop and the prevention of bed formation to the control of excessive pipe abrasion, silting risks, and production instability. These challenges stem from the intricate interplay of particle behavior, fluid dynamics, and pipeline geometry....","sediment transport; slurry flow; transport regimes; flow transition; secondary flow; turbulence modulation; multiparticle interactions; dense suspensions; sedimentation; path instabilities; wake-trapping; drafting-kissing-tumbling; kinematic waves; direct numerical simulation; immersed boundary method; soft-sphere collision model; high-performance computing","en","doctoral thesis","","978-94-6496-054-9","","","","","","","","","Multi Phase Systems","","",""
"uuid:667642da-a182-4b7c-bc10-864e4fc16674","http://resolver.tudelft.nl/uuid:667642da-a182-4b7c-bc10-864e4fc16674","Integration Technologies for Smart Catheters","Li, J. (TU Delft Electronic Components, Technology and Materials)","Dekker, R. (promotor); French, P.J. (promotor); Delft University of Technology (degree granting institution)","2024","Around 10% of the population will have to go through a catheterization procedure for the treatment of a cardiovascular disease at a certain stage of their lives. During such a procedure, smart catheters will be the ""eyes and ears"" of the surgeons, significantly improving the diagnosis and treatment. However, there have been very limited improvements and innovations in smart catheters over the past decade, as most smart catheters are manufactured with technical point solutions, and therefore cannot sustain themselves with enough production volume for continuous innovation. Consequently, Flexto- Rigid (F2R) was developed as an interconnect platformfor heterogeneous integration of electronic components in submillimeter formfactors. F2R is an open technology platformthat can serve many smart catheter applications from a variety of manufactures. It consists of multiple small and thin silicon islands connected by thin flexible interconnects, which allows devices and components to be mounted with standard assembly techniques or directly fabricated onto the F2R platform. This thesis presents innovations in F2R-based applications, integration, and process optimization for smart catheters. The first part of the thesis is an example of applying F2R for making a miniaturized device, a submillimeter optical data link module (ODLM). With smart catheters migrating from analog to digital instruments, an optical interposer is needed to realize highspeed optical data transmission. The biggest challenge is the form factor of the optical interposer, as it needs to fit into a catheter tip that is inserted inside human veins. 
This challenge falls exactly in the scope of F2R. The ODLM was fabricated, assembled, and integrated into an ICE catheter demo system. The second part of the thesis presents high-density embedded trench capacitor integration in the F2R platform. Compared to assembling discrete capacitors on F2R, embedded capacitors in the F2R substrate save space in the catheter tip and bring the decoupling capacitors directly underneath the ASICs, resulting in better performance. The work involved the trench capacitor process development, especially the high-aspect ratio (HAR) DRIE trench etching process. More importantly, the trench capacitor process was optimized to be compatible with the standard F2R process. The last part of the thesis presents the work on improving the fabrication process of the F2R platform. The largest bottleneck and most critical step of F2R is the ""buried trench"" process, which creates separated thin silicon islands. The buried trenches consist of thin oxide membranes, which are very sensitive to thin-film stress and other mechanical forces, resulting in reduced production yield. Cavity-BOX SOI eliminates the ""buried trench"" process by introducing a patterned buried oxide layer. The patterned buried oxide mask allows an intact wafer surface during the process until the final DRIE process, which separates the wafer in one go using this oxide mask. The production yield can be significantly improved using the cavity-BOX SOI for the F2R process. A deep brain stimulation (DBS) probe test structure was fabricated with the cavity-BOX SOI based F2R process to demonstrate the technology concept.
A method to align the patterns on the wafer to the patterned buried oxide mask was developed.","Cardiovascular diseases; smart catheters; intravascular ultrasound (IVUS) catheter; Flex-to-Rigid (F2R); optical transmitters; optical interconnections; trench capacitors; HAR (High Aspect Ratio) DRIE; SOI substrate; cavity-SOI; cavity-BOX; buried hard-etch mask; miniaturization; deep brain stimulation (DBS); foldable devices; MEMS; Microfabrication; Microassembly","en","doctoral thesis","","978-94-6366-821-7","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:5d837786-9598-460b-b1cb-54dfe7008095","http://resolver.tudelft.nl/uuid:5d837786-9598-460b-b1cb-54dfe7008095","Microstructure development in a case-carburized bearing steel","Abraham Mathews, J. (TU Delft Team Maria Santofimia Navarro)","Sietsma, J. (promotor); Santofimia, Maria Jesus (promotor); Petrov, R.H. (promotor); Delft University of Technology (degree granting institution)","2024","WIND turbines play a crucial role in the global transition towards a sustainable energy future. Maximizing energy production and ensuring a reliable operation is essential to harnessing the full potential of wind energy. Among the critical components, the main shaft bearings have for several years been a focal point due to their significant downtime. In this context, a tribochemical treatment called case-carburization has gained notable attention for enhancing the microstructure of these bearings, to improve their reliability. Case-carburization is a surface treatment technique capable of modifying steel to exhibit a combination of properties such as high fatigue strength, toughness, and wear resistance, that are essential for these bearings as they operate in high-load-bearing environments. In a multi-stage heat treatment process involving case-carburization as the initial stage, the microstructure development at each stage is affected by the final microstructure of the preceding stage. Therefore, a comprehensive understanding of the microstructure at every stage is crucial for assessing its impact on the final microstructure and its properties. This Ph.D. research investigates the microstructure evolution throughout a four-stage heat treatment: carburization, sub-critical isothermal treatment, hardening, and tempering. The second stage is where the sole difference lies with regard to the heat treatment parameters, and is performed along two different routes, also in industrial practise, called the ""bainitic route"" and ""pearlitic route"". 
One of the primary goals of this research is to understand the microstructure development during the different stages of the two heat treatment routes and to provide an understanding of the microstructural features that can potentially affect the properties/performance of bearings. Additionally, this research also aims to identify the specific stage at which these features form and to provide insight into their formation mechanisms to explore strategies to rectify or mitigate the formation of detrimental features in the microstructure....","","en","doctoral thesis","","978-94-6384-542-7","","","","","","","","","Team Maria Santofimia Navarro","","",""
"uuid:0d57b4ce-42c2-423e-a4e0-b62b6a842a54","http://resolver.tudelft.nl/uuid:0d57b4ce-42c2-423e-a4e0-b62b6a842a54","Revitalizing CMUTs","Kawasaki, S. (TU Delft Electronic Components, Technology and Materials)","Dekker, R. (promotor); Giagka, Vasiliki (copromotor); Delft University of Technology (degree granting institution)","2024","CMUTs (Capacitive Micromachined Ultrasonic Transducers) are causing a technological revolution. Research over the last decade showed that CMUTs can sufficiently replace traditional ultrasound technology based on the bulk PZT, along with other benefits such as lower assembly cost, broader bandwidth and monolithic integration capability with ASICs. Furthermore, devices can be fabricated from with non-toxic materials and eliminate the environmental impact that is associated to PZT. As a result, in recent years we are seeing low-cost consumer level ultrasound imaging technology becoming available for point of care diagnostics devices from startup companies. However, surprisingly, CMUT technology adoption is still lagging behind what we would expect. Thus, in this thesis three novel CMUT applications are investigated to show-case the untapped potentials of CMUTs which should lead to further traction for the CMUT field. By reading this work it is my wish that the reader could understand the hugely prosperous future of CMUTs.","Ultrasound; MEMS; CMUT; ultrasound power transfer; pre-charged CMUTs; microfluidic particle separation; ultrasound neurostimulation","en","doctoral thesis","","978-94-6473-390-7","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:654f32ea-d3df-4804-8d67-eb2dd89d20e5","http://resolver.tudelft.nl/uuid:654f32ea-d3df-4804-8d67-eb2dd89d20e5","Securing Power Side Channels by Design","Aljuffri, A.A.M. (TU Delft Computer Engineering)","Hamdioui, S. (promotor); Taouil, M. (copromotor); Delft University of Technology (degree granting institution)","2024","The security of electronic devices holds the greatest importance in the modern digital era, with one of the emerging challenges being the widespread occurrence of hardware attacks. The aforementioned attacks present a substantial risk to hardware devices, and it is of utmost importance to comprehend the potential detrimental effects they may cause. Side-channel attacks are a class of hardware attacks that exploit information unintentionally leaked by a device during its operation. These leaks manifest in various forms, including power consumption, time variations, and thermal dissipation. The fundamental danger posed by side-channel attacks is their ability to infer sensitive information from these unintended emissions. To address the heightened risks associated with side-channel attacks, this thesis focuses on three main research topics.","Side Channel Analysis; Power Attacks; Countermeasures; Leakage Assessment Framework","en","doctoral thesis","","978-94-6384-544-1","","","","","","","","","Computer Engineering","","",""
"uuid:764408e4-72c1-4cf9-8bff-1ce20b8944b2","http://resolver.tudelft.nl/uuid:764408e4-72c1-4cf9-8bff-1ce20b8944b2","Interactive Intelligence: Multimodal AI for Real-Time Interaction Loop towards Attentive E-Reading","Lee, Y. (TU Delft Web Information Systems)","Specht, M.M. (promotor); Migut, M.A. (copromotor); Delft University of Technology (degree granting institution)","2024","E-learning has shifted the traditional learning paradigms in higher education, offering more flexible, ubiquitous, and personalized learning experiences. The previous years COVID-19 pandemic required a re-calibration of education to accommodate virtual learning environments from the traditional classroom-based education. Widespread learning platforms and digital devices have accelerated the adoption of e-learning , and now, it plays a central role in formal and informal education.","Machine learning; Deep learning; Computer vision; Multimodal reasoning; Learning Analytics; Human Attention; E-reading; Real-time feedback loop; Human-Robot Interaction (HRI); Human-Computer Interaction (HCI)","en","doctoral thesis","","978-94-6366-824-8","","","","","","","","","Web Information Systems","","",""
"uuid:3a1703bb-38e1-4449-9551-92367c3d416b","http://resolver.tudelft.nl/uuid:3a1703bb-38e1-4449-9551-92367c3d416b","Anaerobic digestion of excess sludge by cascade digesters","Guo, H. (TU Delft Sanitary Engineering)","de Kreuk, M.K. (promotor); van Lier, J.B. (promotor); Delft University of Technology (degree granting institution)","2024","The management and disposal of excess sludge is one of the main challenges for wastewater treatment facilities across the world. Anaerobic digestion (AD) is a widely accepted treatment method for stabilizing excess sludge due to its robustness, ability to reduce pathogens, and capacity to convert the biochemical energy enclosed in organic compounds into biogas. However, the efficiency of converting excess sludge organics into biogas using conventional continuous stirred tank reactors (CSTR) is relatively low, primarily due to the slow hydrolysis rate. Various enhancement technologies, including thermal, chemical, and enzymatic methods, have been developed to accelerate the hydrolysis rate. Among these, enzymatically enhanced hydrolysis has attracted attention for its advantages, such as the absence of toxic byproduct formation and the ability to operate under moderate conditions. However, the scaling-up of these methods to industrial scale presents ongoing challenges. The research in this dissertation explored the feasibility of an innovative cascade anaerobic digestion (CAD) technology, consisting of differently sized CSTR digesters in series. The overall objective of the CAD technology is to achieve enzymatically enhanced hydrolysis of excess sludge in the first reactor stages.","","en","doctoral thesis","","978-94-93353-61-9","","","","","","","","","Sanitary Engineering","","",""
"uuid:cc4494e6-df20-42c3-8e43-68dfeafc78b4","http://resolver.tudelft.nl/uuid:cc4494e6-df20-42c3-8e43-68dfeafc78b4","Adaptive Reuse of Urban Heritage in Contested Urban Contexts","Yarza Perez, A.J. (TU Delft Heritage & Architecture)","van der Hoeven, F.D. (promotor); Rocco, Roberto (promotor); Delft University of Technology (degree granting institution)","2024","The world is facing global challenges that are dramatically changing the social and physical environments, resulting in cultural confrontation. Rapid urban growth, and gentrification increase urban pressure while jeopardizing social cohesion, multicultural values and local economies. Moreover, environmental factors associated with climate change challenge the way cities respond
and adapt, as their assets have to be re-designed to meet the needs of current and future generations.
One response to these challenges is adaptive reuse: transforming the function of an underused structure into a new use. This process turns the cities’ elements in decline into development catalysers. The adaptation to these changes is often a source of conflict, as urban policies lack citizen engagement in the redefinition of public space, resulting in more disagreement. This is particularly acute when addressing contested communities, as their continuous evolution directly influences the adaptation of cultural heritage.
Considering these aspects, this research addresses the question: ‘How can socio-spatial conflicts that result from contested identities be mitigated through the adaptive reuse of urban heritage?’
The relations between Adaptive Reuse, Urban Heritage and Contested Identities are studied, resulting in the research’s objective: to develop an integrative methodology to evaluate urban heritage adaptive reuse alternatives in contested urban contexts, using the case of Acre (Israel).
This final outcome is proposed as a tool for decision-makers and urban planners that provides information-based results to be applied in urban design practice, aiming to translate the theory into practice, and to bridge the gap between global goals and local issues.","Adaptive reuse; urban heritage; conflict; urban resilience; Acre; Israel","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-825-5","","","","","","","","","Heritage & Architecture","","",""
"uuid:16494021-9bd2-4089-8808-5ad9dffadc5d","http://resolver.tudelft.nl/uuid:16494021-9bd2-4089-8808-5ad9dffadc5d","The impact of third generation sequencing on haplotype assembly","Shirali Hossein Zade, R. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Abeel, T.E.P.M.F. (promotor); Delft University of Technology (degree granting institution)","2024","The genome encompasses an organism’s full DNA, organized into chromosomes within the cell nucleus. Humans have 46 paired chromosomes, and within these pairs, genetic information is grouped as haplotypes—genetic packages passed from one generation to the next, ensuring genetic diversity. While DNA sequencing produces short fragments or reads, assembling these back into a complete genome can be complex. The presence of multiple, similar haplotypes in some organisms amplifies this complexity, emphasizing the need for specialized techniques to accurately capture these subtle genetic variations.
In this thesis, we dive into the de novo and haplotype assembly challenges. We aim
to tackle haplotype assembly challenges and find better ways to accurately assemble the genetic puzzle pieces. Along the way, we introduce a new tool for haplotype assembly designed to make the process more interpretable.","Haplotype assembly; Genome assembly; Third generation sequencing; Genome repeats","en","doctoral thesis","","978-94-6384-539-7","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:17b83ad6-3179-4a10-8194-90c4e6c768b2","http://resolver.tudelft.nl/uuid:17b83ad6-3179-4a10-8194-90c4e6c768b2","Printed spark ablation nanoparticle films for microelectronic applications","van Ginkel, H.J. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Vollebregt, S. (copromotor); Delft University of Technology (degree granting institution)","2024","This thesis is about the application and characterisation of spark ablation generated nanoparticles in microelectronics. It opens in chapter 1 with a general motivation of the need for advanced materials and for nanotechnology in particular. It then describes what nanoparticles are and why they are promising materials. Some of their advantages are a high chemical reactivity, a high specific surface area, and the display of quantum effects at that scale. The chapter ends by presenting the research questions and giving the thesis structure.
Chapter 2 provides a technological background for the rest of the thesis. It describes several applications of nanoparticles within microelectronics not researched during this PhD project: as die-attach materials, chemical sensors, or catalysts. It continues with the description and discussion of several competing nanoparticle synthesis methods and goes in-depth on the theory of spark ablation generation. It describes the effects of various parameters that govern the mass generation rate, particle size, and composition. This theory is important for interpreting the results in the other chapters. Impaction deposition is then described in this chapter since it is the method used to print all samples in this thesis. It explains how this method prints dots or lines of nanoparticles, that they have a Gaussian cross-section profile, and how specific deposition parameters affect the deposit. Lastly, the chapter gives a detailed description of the spark ablation synthesis and deposition equipment with which all the experiments in this thesis are performed. The generator, components, gasses, pressures, and materials are all described with diagrams and specifications. Typical synthesis and deposition settings for the generation of Au nanoparticle deposits are given (1 kV, 5 mA, 1.5 L min⁻¹ Ar or N2 and a 1 mm nozzle distance).
The first chapter with results, chapter 3, presents a method to measure the mass deposition rate of the nanoparticle printer. Measuring the mass of microgram scale deposits is challenging due to the high sensitivity required for an accurate measurement. Balances are sensitive to changes in pressure, temperature or humidity, which can already introduce errors that are too large. One solution already applied in thin film deposition methods is the use of quartz crystal microbalances (QCMs). Their resonance frequency is dependent on their mass, and thus, we can use the frequency shift during deposition to measure a mass change. The Sauerbrey equation that is used for that conversion must be valid, so a special method was developed to comply with all of its conditions. A concentric circular pattern of Au nanoparticles was printed on 10 MHz QCMs to measure the mass deposition rate. It was found that the deposition rate scales linearly with the generation current of the spark, as expected from theory, but also that the losses in the system are either constant or scale linearly too. The film density was surprisingly constant for all tested synthesis and deposition settings, at 15.95 g cm⁻³, corresponding to a porosity of φp = 0.18. The density was compared to models presented in literature, and it is proposed that the impaction energy likely compacts the porous structure during deposition until this density is reached. The QCM method can be applied for process monitoring using commercially available equipment and open-source software.
The first applications of printed conducting nanoparticle films are discussed in chapter 4. It describes the conductive properties of such films and the effect of annealing on their conductivity. It was found that an untreated Au film conducts 22 times worse than bulk Au. Several applications are then discussed. Here it was demonstrated that printed Au nanoparticle lines can be applied as interconnect materials as an alternative to wire bonding. Next, a method was presented to miniaturize the deposits even further by using lithography and lift-off. This reached a line width of 1.2 µm, the minimum of the available lithography equipment, without significantly changing the nanostructure.
Chapter 5 deals with the application of spark ablation generated nanoparticles as thermoelectric materials. It describes in detail the synthesis and characterization of Bi2Te3 nanoparticles and their thermoelectric properties. The main finding was that the thermal conductivity was drastically lower than that of bulk Bi2Te3 and comparable to the state of the art for Bi2Te3 nanostructured materials, reaching a minimum of 0.2 W m⁻¹ K⁻¹ at room temperature. Unfortunately, the electrical conductivity was reduced by at least a factor of 1000, easily undoing any efficiency gains from the reduced thermal conductivity. Suggestions are given to possibly improve this trade-off. Additionally, this chapter shows how quickly nanostructured materials like the ones in this thesis oxidize after synthesis. From the moment the sample is printed, it gains mass and loses conductivity, so this must be counteracted if a non-noble metal is to be applied.
The final chapter before the conclusions, chapter 6, showcases another application of printed nanoparticles: as a UV-sensing material. It shows the results obtained using ZnO nanoparticles to create a UV sensor that is insensitive to visible light. The nanoparticles were deposited over electrodes to fabricate a resistor whose electrical resistance drops by two orders of magnitude when exposed to 265 nm UV light. The response was slow, taking 79 seconds to reach 90% of the maximum response and 82 seconds to return to 10%. This is attributed to the adsorption and desorption of oxygen under the influence of UV light and can be prevented by packaging the sensor. The contact behaviour between the metal electrodes and ZnO nanoparticles proved to be too unpredictable to reliably create a Schottky diode, which would have had a higher response. This dissertation ends with a list of the conclusions, the answers to the research questions, and finally, some suggestions for future work.
State-of-the-art Earth system models are used for simulations, and results calculated with the EMAC model are subsequently compared with simulations performed elsewhere with the LMDZ-INCA model. The comparison to a third model, i.e. WACCM, with a very similar – but independent – model setup allows even further clarification. For model validation, satellite measurements (ozone, water vapor) and aircraft measurements (ozone, water vapor, temperature) are taken into account.
After the introduction in the first chapter, the second chapter is a general description of the Earth system including anthropogenic perturbations, in particular perturbations from subsonic, supersonic and hypersonic aircraft emissions, followed by a detailed explanation of methods and the EMAC model setup in the third chapter. A new research finding in the context of middle atmospheric chemistry is the increased methane and nitric acid oxidation following hypersonic emissions. This effect results in a (photo-)chemical net production of water vapor and eventually increases water vapor perturbations further, which is described in detail in chapter 4. In chapter 5 an analysis of atmospheric dynamics and transport of emitted trace gases in the middle atmosphere underlines the importance of the Brewer-Dobson circulation and shows the impact of polar stratospheric clouds on water vapor perturbations during polar winter. The evaluation of multiple hypersonic aircraft designed for different cruise altitudes shows that their climate impact increases with cruise altitude and can be approximately 10-20 times as much as that of a conventional aircraft (chapter 6). Emissions at different hypersonic cruise altitudes and latitude regions show that the climate impact can vary more with latitude of emission than with altitude of emission (chapter 7). With rf_of_hypersonic_trajectories(), a software tool was developed to estimate the climate impact of aircraft design and flight trajectory/network options in seconds, based on robust results from Earth system modelling. Using the software it is shown that a cruise altitude optimization loop can reduce the overall climate impact of a state-of-the-art aircraft design (chapter 8).
There are two methodological highlights to mention in the context of the EMAC model. The first is a new MESSy submodel, H2OEMIS, which was created as part of this thesis. H2OEMIS is an interface to include water vapor emissions in EMAC model simulations, which was not possible before. This submodel will generally be of interest for future evaluations with EMAC of, e.g., any vehicles emitting water vapor and the impact of volcanic eruptions. The second methodological highlight is the application of a novel speed-up technique during simulation runs, which reduces the simulated years by two-thirds. To conclude the summary, the following four points are important to take away. This thesis brought:
• A new research finding on middle atmospheric chemistry: the identification of a chemical feedback that enhances the water vapor perturbation lifetime despite an increasing chemical water vapor destruction
• A robust estimate of the climate impact of hypersonic aircraft for both specific aircraft designs and general atmospheric and radiative sensitivities showing a large altitude and latitude dependence
• An easily accessible tool for researchers and companies to estimate the climate impact of new hypersonic aircraft designs quickly and at low cost
• An estimate of how the development of hypersonic aircraft, compared to conventional aircraft, would contribute to a road map towards a climate-optimal aircraft industry
acoustodynamics (cQAD) are used to develop quantum acoustic devices that are coupled to superconducting qubits. cQAD has enabled demonstrations of quantum ground-state cooling of mechanical objects, the generation of mechanical Fock states, and Schrödinger cat states of motion. This makes quantum acoustic devices appealing candidates for applications such as quantum metrology, information processing, and quantum memory.
This thesis focuses on the coupling between a planar superconducting transmon qubit and a high-overtone bulk acoustic resonator (HBAR) and explores its possibilities. Here, experimental demonstrations are shown in which the transmon is used to drive the HBAR into a phonon lasing state, making it a superconducting single-atom phonon laser. Furthermore, the transmon-HBAR device is used to probe the nature of ghost modes observed in strongly driven nonlinear systems.","cQAD; cQED; HBAR; High-overtone bulk acoustic resonator (HBAR); Quantum acoustics; superconducting qubit; transmon","en","doctoral thesis","","978-90-8593-588-9","","","","","","","","","QN/Steele Lab","","",""
"uuid:15f5628b-9175-4ef3-8f39-41a97cb7749a","http://resolver.tudelft.nl/uuid:15f5628b-9175-4ef3-8f39-41a97cb7749a","System behaviour in prestressed concrete T-beam bridges","Ensink, S.W.H. (TU Delft Concrete Structures)","Hendriks, M.A.N. (promotor); Lantsoght, E.O.L. (promotor); Delft University of Technology (degree granting institution)","2024","About 70 prestressed concrete T-beam bridges, constructed in the Netherlands between 1953–1977, are still in use today with many located in the main highway network. This type of bridge consists of prefabricated and prestressed T-shaped beams, with an integrated deck slab, cross-beams and transverse prestressing. Even if these bridges are well maintained, two important factors demand the current need for assessment: (1) increased traffic loading and (2) potential lack of shear resistance. Using traditional assessment methods it was concluded that about 50% of these bridges do not fulfil the current design code requirements. However, this does not automatically imply that these bridges are structurally unsafe, since some potentially significant additional load-transfer mechanisms are not taken into account in a traditional assessment. This is strengthened by the observation that, in general, these bridges do not show any signs of distress....","compressive membrane action; arching action; T-beam bridge; collapse test; assessment; Nonlinear Finite Element Analysis","en","doctoral thesis","","","","","","","","","","","Concrete Structures","","",""
"uuid:8570eb94-279e-4a3c-b662-b999fdac517c","http://resolver.tudelft.nl/uuid:8570eb94-279e-4a3c-b662-b999fdac517c","Reader-friendly Edible Binarycodes and Sensors Based on Smart Hydrogel","Zhang, M. (TU Delft Engineering Thermodynamics)","Mendes, E. (promotor); Eral, H.B. (copromotor); Delft University of Technology (degree granting institution)","2024","Food and medicines are two of the most essential categories of goods for human beings, providing vital nourishment and healthcare. However, as these products are commercialized and distributed on a global scale, consumers face the threat of counterfeit and deteriorated products. In response, this dissertation presents four prototypes consisting of On-Dose-Authentication (ODA) binarycodes and battery-less indicators based on smart hydrogel that are edible and reader-friendly to address these issues.
First, a microfluidic platform for continuous synthesis of hydrogel microparticles with superparamagnetic colloids (SPCs) embedded at prescribed positions has been established. The shape of the cross-linked microparticle is independently controlled by stop-flow lithography, whereas the position of trapped SPCs is dictated by virtual magnetic molds made of 2D nickel patches facilitating magnetic trapping. The spatial positions of trapped SPCs collectively function as a binary code matrix for product authentication. The proposed magnetic microparticles will contribute to the development of soft matter-inspired product quality control, tracking, and anti-counterfeiting technologies. (Chapter 2)
Second, a Physical Unclonable Function (PUF) algorithm was developed to enhance the safety level of the ODA binary codes. This algorithm exploits the diameter and coordinates of spheres as input, abandoning color and intensity as inputs, enabling imaging using common illumination and low-magnification microscopy, hence lifting the constraint of reading in advanced labs that is common in other current graphical PUF systems. Two sets of Poly(ethylene glycol) diacrylate ODA-PUF tags that can be read out via this algorithm were fabricated: a single-diameter PUF leveraging randomly distributed superparamagnetic colloids of identical diameters, and a multiple-diameter PUF utilizing vortexed sunflower oil drops of various diameters. The performance of the single-diameter system was investigated. It passed the NIST Statistical tests, demonstrating sufficient randomness, ideal bit uniformity, Hamming distance, and device uniqueness. The encoding capacity of the single-diameter system was found to be 9.2×10^18, which is sufficient to label the annual output of Aspirin. (Chapter 3)
Third, a humidity indicator has been created that mechanically bends and rolls itself irreversibly upon exposure to high humidity conditions. The indicator is made of two food-grade polymer films with distinct ratios of a milk protein, casein, and a plasticizer, glycerol, that are physically attached to each other. Based on the thermogravimetric analysis and microstructural characterization, the bending mechanism is a result of hygroscopic swelling and consequent counter diffusion of water and glycerol. Guided by this mechanism, the rolling behavior, including response time and final curvature, can be tuned by the geometric dimensions of the indicator. As the proposed indicator is made of food-grade ingredients, it can be placed directly in contact with perishable products to report exposure to undesirable humidity inside the package, without the risk of contaminating the product or causing oral toxicity in case of accidental ingestion - features that commercial inedible electronic and chemo-chromatic sensors cannot provide presently. (Chapter 4)
Finally, an alginate TTI bead that encapsulates betacyanin, a natural colorant extracted from purple pitaya, is proposed to continuously monitor and reflect the temperature history of the perishable products to diagnose the storage conditions. The instability of betacyanin is exploited to demonstrate undesirable temperature abuse through visual color changes. The thermochromic change of the purple pitaya extract and the pitaya-extract-encapsulated bead was investigated under various temperatures, pH, and gaseous atmosphere conditions. Experimental results show that the proposed TTI exhibits an irreversible thermochromic change under a wide operation temperature range up to at least 100 °C with negligible disturbance from the gaseous composition. (Chapter 5)","Anti-counterfeiting; Binary code; Smart hydrogel; PUF; Humidity indicator; Time-temperature indicator","en","doctoral thesis","","978-94-6366-812-5","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:5c1b165c-4708-45fb-b12a-96b3f4f86f15","http://resolver.tudelft.nl/uuid:5c1b165c-4708-45fb-b12a-96b3f4f86f15","Supporting seat design for Smartphone use during travel","Udomboonyanupap, S. (TU Delft Applied Ergonomics and Design)","Vink, P. (promotor); Boess, S.U. (copromotor); Delft University of Technology (degree granting institution)","2024","This study investigates the impact of smartphone use on passengers' comfort during travel, focusing on train trips. The literature review reveals that smartphones have become the primary activity for train passengers, leading to discomfort and potential musculoskeletal issues, particularly in the neck, shoulders, arms, and back. The study aims to enhance the vehicle seat environment to alleviate these issues.
A questionnaire was administered to passengers, revealing that the main smartphone activities include listening to music, watching videos, reading, and texting. Most passengers prefer using smartphones with arm support, although a high discomfort score related to armrest use was noted. The study suggests exploring smartphone holders for watching videos and improving armrests for texting.
Passenger needs for the seating environment were collected through context mapping and co-creation techniques. Different age groups showed varied preferences in smartphone activities, with younger passengers and employees primarily using smartphones for entertainment, while older individuals engaged in diverse activities. The study emphasizes the importance of arm support, charging facilities, Wi-Fi, and considerations for special passenger groups like the disabled in future interior designs.
Chapters 5, 6, and 7 discuss design aspects to enhance smartphone comfort. An adjustable armrest is recommended, and experiments suggest an optimal trunk angle for smartphone use. A specially designed armrest reduced neck discomfort but increased discomfort in the upper arms, emphasizing the need for adjustable height. Chapter 8 provides specific recommendations for armrest height levels during various smartphone activities and proposes the use of smartphone holders.
In conclusion, the study suggests implementing adjustable armrests, smartphone holders, and considering the duration of smartphone use in future vehicle interior designs. These improvements aim to enhance body posture and reduce discomfort for passengers during smartphone usage. Further testing with end-users is recommended to validate these proposed solutions.","passengers; seat; comfort; design; smartphone use","en","doctoral thesis","","978-94-6366-811-8","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:5f21aff9-85e5-435e-8402-704263064e66","http://resolver.tudelft.nl/uuid:5f21aff9-85e5-435e-8402-704263064e66","Channel response of an engineered river to climate change and human intervention","Ylla Arbos, C. (TU Delft Rivers, Ports, Waterways and Dredging Engineering)","Blom, A. (promotor); Schielen, R.M.J. (copromotor); Delft University of Technology (degree granting institution)","2024","Humans have intervened in rivers for centuries. River engineering measures have aimed at protecting populations against flooding, ensuring reliable and safe navigation, providing freshwater for drinking, domestic and industrial use, irrigation, and energy supply, and providing opportunities for recreation. All around the world, measures such as channelization (i.e., channel narrowing and shortening), dam construction, or channel diversion have allowed for the proliferation of human settlements, technological progress, and an improved quality of life.
Despite the various socio-economic benefits of human intervention in rivers, engineering measures have side effects, often unaccounted for, or simply unknown before they manifest. This is because, by modifying the channel characteristics (geometry, planform, size of the bed surface sediment), or its controls (water discharge, sediment supply, base level), engineering measures alter the equilibrium state of a river. In response, rivers adjust toward the new equilibrium state through bed incision or aggradation, changes in channel width or sinuosity, or changes in the bed surface grain size distribution. This response may extend over hundreds of kilometers, and develop during decades to centuries....","rivers; channel adjustment; climate change; human intervention; Rhine","en","doctoral thesis","","978-94-6366-808-8","","","","","","","","","Rivers, Ports, Waterways and Dredging Engineering","","",""
"uuid:ef90d088-f7ac-4767-a926-3b4bac9497e9","http://resolver.tudelft.nl/uuid:ef90d088-f7ac-4767-a926-3b4bac9497e9","Alkaliphilic Life: Adaptation strategies by Caldalkalibacillus thermarum","de Jong, S.I. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); McMillan, D.G.G. (copromotor); Delft University of Technology (degree granting institution)","2024","Alkaliphiles thrive in environments with a pH of 8.5 or above, while maintaining an internal pH closer to neutral. Thus, alkaliphilic microorganisms have a proton gradient inverted with respect to the normal orientation. Intuitively, this would nullify the potential to generate energy via respiration with regularly oriented respiratory chains that rely on proton-coupled ATP synthases. Yet, alkaliphilic respiratory chains are oriented traditionally and are actively used. The question therefore is how they are able to create conditions conducive to such behaviour. In addition, attempts to answer that question will hopefully also clarify how alkaliphiles acidify their cytoplasm with respect to the exterior milieu in the first place. This thesis details methods required to study these questions and provides some answers regarding alkaliphilic life. This thesis focuses on a single category of alkaliphiles: the low-salt Gram-positive alkaliphiles. These microbes have just a single membrane, the proteins therein, and a cell wall to generate conditions suitable for energy generation and other transport mechanisms. In short, it can be regarded as the most basic system to study an alkaline, or basic, problem....","Alkaliphile; Membrane; Genomics; Proteomics; Lipidomics","en","doctoral thesis","","978-94-6361-963-9","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:5a44e49e-6df3-469b-b9a6-f19085188280","http://resolver.tudelft.nl/uuid:5a44e49e-6df3-469b-b9a6-f19085188280","Drivers’ Behaviour on Freeway Curve Approach: Different Angles, Different Perspectives","Vos, J. (TU Delft Transport and Planning)","Hagenzieker, Marjan (promotor); Farah, H. (promotor); Delft University of Technology (degree granting institution)","2024","This dissertation explores what road characteristics trigger drivers’ speed adjustments when approaching freeway curves. It combines speed prediction modelling and human factors research methods. The results show that drivers primarily consider visible cues such as the preceding roadway, deflection angle, and the number of lanes, as opposed to traditional factors like horizontal radius or speed signs, when starting to decelerate. The study advocates for integrating driver perspectives into road design.","Geometric freeway design; human factors; Curve driving","en","doctoral thesis","","978-90-5584-340-4","","","","","","","","","Transport and Planning","","",""
"uuid:a26f9d02-74db-4f87-b99b-5b226065c598","http://resolver.tudelft.nl/uuid:a26f9d02-74db-4f87-b99b-5b226065c598","Applications of Dynamic Covalent Bonds in Chemical Reaction Networks","Spitzbarth, B. (TU Delft ChemE/Advanced Soft Matter)","Eelkema, R. (promotor); van Esch, J.H. (promotor); Delft University of Technology (degree granting institution)","2024","Nature has inspired countless researchers in their quest to understand the phenomena we observe and utilise their findings to develop new technologies. This becomes especially apparent in systems chemistry, which heavily draws inspiration from natural systems in its pursuit for the understanding and development of chemical reaction networks (CRNs) with interesting properties. Today, CRNs play a big role in many sensors, amplification systems, transient materials, and more. Despite major advances in the field of CRNs, there is still a need for additional robust, versatile chemistries to allow for more diverse applications, both within systems chemistry and in other fields beyond, such as material science. This thesis aims to explore new applications of Dynamic Covalent Chemistry (DCvC)—typically utilised to make self-healing materials—in CRNs to allow for new applications drawing from the versatile chemistry used in DCv systems.","Chemistry; dynamic covalent chemistry; Catalysis; Self-healing material; Chemical reaction network","en","doctoral thesis","","","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:e202a1f3-9c73-42d1-b7f6-d45f9631df74","http://resolver.tudelft.nl/uuid:e202a1f3-9c73-42d1-b7f6-d45f9631df74","Data assimilation in a LOTOS-EUROS chemical transport model for Colombia using satellite measurements","Yarce Botero, A. (TU Delft Atmospheric Remote Sensing)","Heemink, A.W. (promotor); Quintero Montoya, O.L. (promotor); Delft University of Technology (degree granting institution)","2024","When considering air quality, notably in South America, it seems that we are falling behind more developed regions in addressing the issue. This shortfall serves not just as an observation, but as a warning, as air quality problems here are rapidly escalating. Nevertheless, by examining how other countries have addressed similar issues, we can prepare ourselves to tackle our own challenges. In this thesis we demonstrate how, utilizing Data Assimilation (DA), we can reduce the uncertainty in some uncertain model parameters in an air quality model such as the LOTOS-EUROS Chemical Transport Model (CTM)....","Data Assimilation; Chemical Transport Model; Ensemble-based methods; Satellite data assimilation; Low-cost in situ measurements","en","doctoral thesis","","978-90-834024-2-0","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:3b11d3cc-96ad-469f-9f57-9a4050f1ec5a","http://resolver.tudelft.nl/uuid:3b11d3cc-96ad-469f-9f57-9a4050f1ec5a","The Potential of Small, Low-carbon, Zero-energy Housing: A Multidimensional Approach","Souaid, C. (TU Delft Urban Development Management)","Elsinga, M.G. (promotor); Visscher, H.J. (promotor); van der Heijden, H.M.H. (copromotor); Meijer, A. (copromotor); Delft University of Technology (degree granting institution)","2024","This thesis examines the potential of small, low-carbon, (near) zero-energy dwellings as a solution that would both address sustainability challenges and answer to the growing housing shortage in North-West Europe. It adopts a multidimensional outlook that encompasses institutional, social and technical aspects surrounding the dwellings. The institutional aspect is addressed through an investigation of financial, legislative, technical and cultural barriers to the implementation and uptake of small, low-carbon, zero-energy dwellings. A context-specific approach is adopted, taking into account contextual peculiarities for the formulation of more refined policy suggestions. The social dimension is addressed first from the perspective of market supply through an investigation of the perceptions of housing professionals. The distinction between perceived versus actual barriers identified by housing professionals is made, highlighting a potential desynchronization between policy developments and local practice. Accordingly, the study calls for innovation in information dissemination between policy and local practice and between housing professionals themselves. The social dimension is then addressed from the perspective of market demand through an investigation of consumers’ current housing preferences.
The assumption that, due to an increase in smaller, elderly, and lower-income households, current housing preferences are leaning towards smaller dwellings is refuted, underlining the importance of distinguishing between smallest and smaller dwelling sizes. Lastly, the technical dimension is addressed through conducting a partial life cycle assessment that focuses on the embodied carbon of the dwellings. Both downsizing and the use of low-carbon materials such as timber are investigated as embodied carbon reduction strategies. Together, the three dimensions provide a holistic evaluation of the potential of small, low-carbon, zero-energy dwellings as a solution while addressing the complexity in reaching sustainable outcomes.","Small housing; Institutional barriers; Perceived barriers; Multi-attribute utility theory; Housing preferences; Life Cycle Assessment (LCA); Embodied carbon emissions","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-810-1","","","","","","","","","Urban Development Management","","",""
"uuid:1019957f-906a-4ca3-9abe-e1ae9479ef4f","http://resolver.tudelft.nl/uuid:1019957f-906a-4ca3-9abe-e1ae9479ef4f","Coherent manipulation of normal and Andreev fermions","Vilkelis, K. (TU Delft QRD/Wimmer Group)","Akhmerov, A.R. (promotor); Wimmer, M.T. (copromotor); Delft University of Technology (degree granting institution)","2024","A large part of condensed matter physics concerns itself with understanding the behaviour of electrons in solids and finding ways to control them. However, in mesoscopic systems (i.e., systems with nanometer to micrometre scale), the behaviour of electrons is difficult to predict through the Schrödinger equation. Instead, it is often more fruitful to use an approximate semiclassical theory that re-introduces the concept of particle trajectories into the quantum world. These trajectories not only depend on the applied external fields but also on the Fermi surface of the material itself. Control over the Fermi surface makes it possible to engineer electron trajectories not present in classical physics and therefore leads to novel phenomena. For example, in highly anisotropic materials with open Fermi surfaces, the semiclassical trajectories of electrons in a magnetic field are no longer closed but instead move in an oscillating open trajectory that travels from one sample edge to the next. These open trajectories result in magnetoresistance oscillations with a period proportional to the flux passing through the sample—similar to the Aharonov–Bohm effect. However, unlike the Aharonov–Bohm effect, the magnetoresistance oscillations here are not due to interference effects....","Andreev bound states; semiclassical methods; Majorana bound states; hybrid superconducting devices; Quantum transport","en","doctoral thesis","","978-94-6384-535-9","","","","","","","","","QRD/Wimmer Group","","",""
"uuid:3b23b63b-69be-4c0e-90c0-3312eae1d871","http://resolver.tudelft.nl/uuid:3b23b63b-69be-4c0e-90c0-3312eae1d871","Distributed Acoustic Sensing using straight, sinusoidally and helically shaped fibres for seismic applications","Al Hasani, M.M.K. (TU Delft Applied Geophysics and Petrophysics)","Drijkoningen, G.G. (promotor); Wapenaar, C.P.A. (promotor); Delft University of Technology (degree granting institution)","2024","Distributed Acoustic Sensing (DAS) is a versatile dynamic strain sensing method that has been adopted for a wide range of seismic applications. In DAS, optical fibres are interrogated and used as sensors, where a strain or strain-rate measurement is made along a specific length of the fibre, called the gauge length. Its main appeal is the spatially dense data over long distances. The main limitations of DAS, however, are that it is mainly sensitive along the axial direction of the fibre and that the signal-to-noise ratio is worse than that of standard geophones. The first issue limits its adoption in surface reflection seismic when the fibre is deployed horizontally. Also, due to the very nature of the measurement (i.e. elongation and contraction of the fibre), it is commonly considered as a single-component measurement, therefore it lacks the information from the other components.
This thesis studies the potential of obtaining multi-component information from DAS as well as investigating the use of combined fibre configurations for surface-seismic applications. We approach this by examining several fibre-shaping approaches with static and dynamic strain measurements. First, the concept of the sinusoidally shaped fibre is examined to make a directional strain sensor in a direction other than the fibres’ axial direction using a static-strain approach. Secondly, the combined use of straight and helically wound fibres for obtaining multi-component information from DAS data is investigated in a surface-seismic setting, along with an assessment of the usefulness of such a combination.
Using the sinusoidally shaped fibre, two approaches are investigated. The first approach involves the use of the sinusoidally shaped fibre embedded in a homogeneous material. An analytical model is presented to describe what happens to the deformed fibre in three main directions, which was validated via a finite-element model. Along with the model, loading experiments were performed on a sinusoidally shaped fibre embedded in a polyurethane-type (Conathane®) strip in the following directions: in-line (i.e. transversal in-plane with the sinusoidal fibre), broadside (i.e. perpendicular to the sinusoidal fibre), and along-strip (i.e. along the strip’s longest dimension). We saw that the fibre is mainly sensitive to the in-line and broadside directions, and it is slightly more sensitive in the in-line direction relative to the broadside direction. We also saw that the geometrical parameters of the fibre, as well as the mechanical properties of the embedding material, affect its directional sensitivity. This is exploited in the second approach, where the embedding material is adapted to a low Poisson’s ratio metamaterial, along with further adaptations in the geometry of the fibre, aiming to create a unidirectional strain sensor. Experimental results showed improvements in the sensitivity, but not as much as predicted by the analytical or numerical modelling.
Using DAS in field settings, multiple configurations of straight (SF) and helically wound fibres (HWF) with different wrapping angles (α) were buried in a 2-m trench in farmland in the province of Groningen in the Netherlands. Significant amplitude differences are observed between the straight and helically wound fibres. It is observed that shaping the fibre into a helix dampens the amplitude of the surface wave significantly. Also, a polarity flip is observed with the use of the HWF with a wrapping angle of 30°. This hints that there is a contribution of the vertical component to the response measured by the HWF, as also supported by the theoretical models. The reflection response is also examined using a set of engineered SF and HWF fibres. The main seismic reflections are present in both fibres, with higher amplitude in the SF compared to the HWF, contrary to what was expected. Also, using post-stack images we see that the SF and HWF provide reflection structural images comparable to surface-deployed geophones, but with an (expected) lower signal-to-noise ratio. We show that the combined use of SF and HWF is useful, as the SF showed the reflections in the shallow section better, whereas the HWF provided better reflections in the deeper sections. Furthermore, we discuss the effect of gauge length on the retrieval of surface waves, along with the use of different fibre shapes, using active and passive sources.
With the active-source data, we show that the gauge length plays an essential role in the retrieval of surface waves depending on their wavelength range, as it might cause distortions in the waveform, which appear as notches in the (frequency, horizontal-wavenumber)–domain, and complicates picking the dispersion curves of these waves. On the other hand, the helically wound fibres might require a longer gauge length to retrieve the surface wave properly. This decreased sensitivity of the helically wound fibres is also shown from virtual shots obtained by passive interferometry as well as a recorded earthquake in the area.","Acquisition; Distributed Acoustic Sensing; shaped fibres; field experiments","en","doctoral thesis","","978-94-6384-531-1","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:8fa9a391-46e0-41bc-abde-fb858409ca7f","http://resolver.tudelft.nl/uuid:8fa9a391-46e0-41bc-abde-fb858409ca7f","Continuous Chromatography of Biopharmaceuticals: Next Generation Process Development","Picanço Castanheira Da Silva, T. (TU Delft BT/Bioprocess Engineering)","Ottens, M. (promotor); Eppink, M. (promotor); Delft University of Technology (degree granting institution)","2024","The biopharmaceutical industry is moving from a batch to a continuous mode of manufacturing. This shift promises to reduce costs and manufacturing footprint while improving productivity and consistency of the product. This thesis implements miniaturized and automated high-throughput screening techniques alongside a mathematical chromatography model for the development of an integrated continuous chromatography process. The model is used for in-silico optimization of a capture and polishing step of a monoclonal antibody (mAb). The optimization focusses on chromatographic processes that would have to deal with higher titer solutions.
The transition to Integrated Continuous Biomanufacturing (ICB) is welcomed by industry and regulatory agencies, which are working together to accomplish this shift. Process development plays a crucial role in defining new processes or adapting existing processes to different modes of operation. High-Throughput Process Development (HTPD) has been used in the biopharmaceutical industry to accelerate and reduce the costs of process development, by using miniaturized assays and performing computer-aided studies. However, the industry experiences gaps and sees opportunities for improvement in the HTPD tools that can help the transition to ICB. These gaps, together with the state of the art of HTPD for ICB, are presented in Chapter 2. Experts in the field identified microfluidics and modeling as the most promising technologies to fill in the gaps in process development for ICB.
Subsequently, an overview of the state of the art of automation and miniaturization for biopharmaceutical process development is given in Chapter 3. The focus is on different degrees of miniaturization and automation of the technologies for process development, for both Upstream and Downstream processing (USP and DSP, respectively). Liquid-Handling Stations (LHS) are the epitome of automation for process development and have seen great adoption over the past decades. Examples of the use of this tool for USP and DSP process development are provided. A greater emphasis is placed on the often-overlooked microfluidics and how it can also be used as a screening tool, and a SWOT analysis of LHS and microfluidics as potential process development tools is provided.
Further comparison between HTS tools for chromatographic process development is needed, since process development efforts for chromatography mostly rely on LHS-based experiments. Three methodologies are selected for this comparison: LHS, microfluidics, and Eppendorf tubes (Chapter 4). To achieve this, protein equilibrium adsorption isotherms are determined with each of the aforementioned methodologies. The microfluidics chip produced in-house provides a platform for resin screening that achieves liquid and resin volume reductions of 15- and up to 200-fold, respectively. Accurate resin volume determination is ensured with image analysis software, and resin consumption is at most 200 nl in the microfluidics system. After validating the HTS methodologies, a cost consideration study aims at fairly comparing the three methodologies for their chromatographic process development potential. Although at a lower Technology Readiness Level, microfluidics can be a viable alternative tool when the protein to be studied is very expensive or scarce (such as in early stages of process development), due to the high degree of miniaturization. Furthermore, the possible applications of the different methodologies in chromatographic process development are discussed.
The HTS methodologies developed paved the way for the implementation of a HTPD approach for the study and optimization of continuous chromatography (Chapters 5 and 6). A large database on the adsorption equilibrium isotherms of mAbs to different protein A (ProA) and Cation-Exchange (CEX) resins is generated from experiments with a LHS. This database is then used to further reduce the resin candidates to be used in subsequent experiments. Four resin candidates are used to study the equilibrium adsorption isotherms of mAb to ProA ligands with a clarified cell culture supernatant (harvest). It is shown that pure mAb experiments reflect the same adsorption behavior as harvest experiments for all resin candidates, reducing the need to duplicate experiments in the future. The parameters determined are further used in a mechanistic Lumped Kinetic Model (LKM), which is used for the in-silico study of column chromatography (Chapter 5). The LKM uses a lumped overall mass transfer parameter that is linearly dependent on feed concentration, in line with mass transfer theory. The hybrid approach to HTPD emphasizes the importance of computational, experimental, and decision-making stages in chromatographic process development.
The LKM described is further developed for the study of continuous chromatography. The continuous model is used for the in-silico optimization of a 3-Column Periodic Counter-current Chromatography (3C-PCC) capture and polishing step, for the purification of mAbs from high-titer solutions (Chapter 6). The model maximizes Productivity and Capacity Utilization (CU) while keeping the yield high (99%), with the flow rate and the percentage of breakthrough achieved in the interconnected phase as constraints. The shape of the breakthrough curve plays an important role in the optimization of continuous chromatography. The optimization results are validated for three different ProA resins, from which the best resin candidate is selected to continuously capture mAb from a harvest solution. The eluates of this operation are pooled and used as input for the continuous CEX step. The experimental results show very good agreement with the model’s predictions (less than 7% deviation), and the proposed methodology helps to develop and optimize a continuous chromatography process in a short amount of time.
In summary, this thesis presents the exciting journey of process development for continuous chromatography, from the conceptualization and selection of screening techniques to the final result: an optimized continuous chromatographic step for the successful capture and polishing of a mAb.","","en","doctoral thesis","","978-94-6366-802-6","","","","","","2024-02-02","","","BT/Bioprocess Engineering","","",""
"uuid:90f0c7fe-34db-45f3-bd2b-7fec91075d20","http://resolver.tudelft.nl/uuid:90f0c7fe-34db-45f3-bd2b-7fec91075d20","Learning Human Preferences for Physical Human-Robot Cooperation","van der Spaa, L.F. (TU Delft Learning & Autonomous Control)","Kober, J. (promotor); Babuska, R. (promotor); Delft University of Technology (degree granting institution)","2024","Physical human-robot cooperation (pHRC) has the potential to combine human and robot strengths in a team that can achieve more than a human and a robot working on the task separately. However, how much of the potential can be realized depends on the quality of cooperation, in which awareness of the partner’s intention and preferences plays an important role. Preferences tend to be highly personal, and additionally depend on the cooperation partner and the cooperation itself. They can be hard to define in terms a robot would understand, and may change over time. This thesis focuses on learning ‘useful models’ from observed behavior, to let our robot adapt its behavior to better match its human partner’s preferences, and thus improve the cooperation.
The aim is to capture personalized approximate models of human preferences –how a person likes to do something– from very few interactive observations, providing only small amounts of imprecise data, such that the robot can use the model to improve each user’s comfort. First, we learn a model to predict and optimize the human ergonomics in a pHRC task, such that our robot can propose a plan, for both the human and itself, to solve the task in a way that is more ergonomic for its human partner. However, people do not necessarily prefer to act ergonomically, nor do we want to impose on them what a robot thinks best. Therefore, next, we apply inverse reinforcement learning (IRL), to capture less restrictive preference models: 1) path and velocity preferences for motion planning, and 2) on a higher level of abstraction, which (grasp or motion) action to initiate for proactive physical support. For learning to take the correct action in cooperation, we developed the disagreement-aware variable impedance (DAVI) controller to smoothly transition between providing active guidance and allowing the human to demonstrate alternative behavior.....","Physical Human-Robot Interaction; Human-Robot Collaboration; human preferences; human-centered planning; Inverse Reinforcement Learning","en","doctoral thesis","","978-94-6483-764-3","","","","Dr. M. Gienger contributed significantly to the realization of the dissertation.","","","","","Learning & Autonomous Control","","",""
"uuid:5f831793-2dbc-4b24-9d92-1441f2d8ba16","http://resolver.tudelft.nl/uuid:5f831793-2dbc-4b24-9d92-1441f2d8ba16","Optimizing Routing and Fleet Sizing for Flash Delivery Operations","Kronmüller, M. (TU Delft Learning & Autonomous Control)","Alonso Mora, J. (promotor); Babuska, R. (promotor); Delft University of Technology (degree granting institution)","2024","In recent years, Flash Delivery services have gained great popularity. Flash Delivery is a service where goods of daily need can be ordered on-demand and subsequently are delivered to the customer within a short time window, for example, in the next ten minutes. Operational efficiency and cost management are vital for sustainability in this competitive landscape, especially in the long term. To this end, this thesis aims to improve operational planning for Flash Delivery Operations. It focuses on two fundamental questions critical for the success of Flash Deliveries: the associated Vehicle Routing Problem and the associated Fleet Sizing Problem. The Vehicle Routing Problem aims to determine how to best utilize a given fleet of vehicles to deliver the requested orders efficiently, while the Fleet Sizing Problem involves finding the optimal number of vehicles required to serve the given demand. The primary objective of this dissertation is to provide algorithmic contributions, specifically focusing on optimizing vehicle routing and fleet sizing for Flash Delivery services.
First, the Flash Delivery Problem is formally defined and modeled as a Markov Decision Process. This serves as the basis for the dissertation's research and subsequent investigations. The thesis then proposes a novel routing algorithm for Flash Deliveries from multiple depots, which effectively handles multiple depots for order pick-up and dynamically determines the optimal depot for each order. The depots are distributed within the city, for example, using existing stores; this differs from other logistical processes that use large warehouses outside of the city. Additionally, this approach allows vehicles to visit depots to load additional orders before distributing their loaded ones, resulting in more agile planning. The scalability of this method is demonstrated in scenarios involving thousands of orders and tens of vehicles.
The proposed routing method is then extended to accommodate heterogeneous vehicles and heterogeneous modes of transportation. Experiments using a fleet featuring trucks and drones demonstrate that this approach serves more orders while requiring less total traveled distance compared to a state-of-the-art method for heterogeneous vehicles. The effects of fleet size and fleet composition between drones and trucks are also examined. More drones were able to deliver more requests at the cost of an increase in traveled distance.
The Fleet Sizing Problem represents the second major challenge addressed in this dissertation. The balance between having too many vehicles, which can be very expensive, and having too few, which leads to unmet promises and undelivered orders, is crucial for operational success. Typically, the Fleet Sizing Problem involves a fixed set of tasks with no flexibility in their execution. However, this thesis introduces a novel problem, adding flexibility in time through the allowance of slight delays in individual transportation tasks. We propose modeling and solving the novel problem as a Mixed Integer Linear Program. By incorporating this flexibility, the problem opens up a broader trade-off space between the required number of agents, traffic, and added delays. As a result, fleet sizes can be significantly decreased. To illustrate the practical application of this algorithm, a case study involving taxi rides in Manhattan is presented.
To conclude this thesis, fleet sizing is combined with the previously proposed routing methods for Flash Delivery, resulting in a novel approach. Our method groups individual delivery requests and generates optimized operational plans using a variation of the earlier proposed routing techniques. These plans are then used for fleet sizing. To assess the effectiveness of our approach, we compare it against applying routing and fleet sizing separately. The results clearly demonstrate the value of our proposed method.
Our experimental analysis is based on a real-world dataset provided by a Dutch retailer, allowing us to gain valuable insights into the design of Flash Delivery operations.
In summary, this thesis makes significant contributions to the operational optimization of Flash Delivery services by addressing key challenges in vehicle routing and fleet sizing. We propose novel methods to improve efficiency and effectiveness in planning Flash Delivery operations.","","en","doctoral thesis","","978-94-6384-533-5","","","","","","","","","Learning & Autonomous Control","","",""
"uuid:e5f2d517-f492-4f85-9725-72e9884549b3","http://resolver.tudelft.nl/uuid:e5f2d517-f492-4f85-9725-72e9884549b3","Seismic acquisition analysis and design using multiple reflections","Revelo Obando, B.A. (TU Delft Applied Geophysics and Petrophysics)","Verschuur, D.J. (promotor); Wapenaar, C.P.A. (promotor); Blacquière, G. (promotor); Delft University of Technology (degree granting institution)","2024","Seismic survey design deals with determining the acquisition parameters that lead to the best possible imaging and characterization of the subsurface. The design of the survey is constrained by health, safety and environmental considerations and the available budget, seeking a balance between quality and cost. Because seismic exploration is a widely used geophysical method for revealing underground resources, information about the subsurface is available in many areas. Therefore, it can potentially be used for purposes supplementary to exploration, such as the monitoring of producing fields and fluid injection. However, the available budget for these purposes is usually lower than for exploration, and it becomes a priority to maximize the benefits derived from a potentially cheaper acquisition. In this thesis, we propose new methods for the analysis and design of seismic surveys that are based on previous knowledge from existing subsurface models and that aim to maximize image quality with the lowest acquisition effort.","Seismic acquisition; Imaging; Inversion; Optimization","en","doctoral thesis","","978-94-6384-532-8","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:e92fbd81-5b7e-4d40-8c22-75f56486639e","http://resolver.tudelft.nl/uuid:e92fbd81-5b7e-4d40-8c22-75f56486639e","Anaerobic membrane bioreactor (AnMBR) for the treatment of lipid-rich dairy wastewater","Szabo Corbacho, M. (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); Hernandez, Hector Garcia (promotor); Delft University of Technology (degree granting institution)","2024","The ongoing growth of the global population has led to increased resource consumption, particularly in the realm of water resources, resulting in potential shortages and environmental concerns. The surge in industrialization has intensified the demand for freshwater, consequently causing significant contamination of global water sources through the discharge of industrial wastewater. This wastewater contains harmful contaminants, such as heavy metals and organic compounds, which pose significant threats to both aquatic ecosystems and human health (Corcoran, 2010). To effectively address this issue, it is imperative to strengthen regulatory measures, promote industrialized initiatives for wastewater reduction and treatment, and foster technological advancements in wastewater management.
Lipids within wastewater systems present both opportunities and challenges. Their high energy content holds promise for bioenergy conversion, yet they can also disrupt anaerobic wastewater treatment processes. Consequently, it is often advisable to extract lipids before commencing biological treatment processes (Alves et al., 2009). Lipids are commonly referred to as fats, oils, and grease (FOG) (Cavaleiro et al., 2008). At the core of FOG composition are triglycerides, formed through the esterification of glycerol with long-chain fatty acids (LCFA) (Alves et al., 2009). Within lipid-rich wastewaters, the prevailing LCFAs identified include palmitic acid (C16:0) and oleic acid (C18:1), as highlighted by Hwu et al. (1996). Anaerobic digestion (AD) plays a central role in advancing various sustainable development objectives by seamlessly integrating energy and resource recovery from organic residues and wastewater, all while effectively managing pollution. AD's ability to produce renewable gaseous energy, recycle essential nutrients, and minimize excess sludge production, combined with an enhanced understanding of microbiology and ecophysiology, has propelled AD technologies to the forefront. These technologies now serve as environmentally friendly treatment options for a wide range of wastes and wastewaters, as evidenced by their widespread adoption at the global level (van Lier et al., 2020). Sustainable and efficient conversion of these waste lipids into methane within anaerobic reactors is met with impediments including adsorption, sludge flotation, washout, and inhibition. However, these complications can be circumvented through feeding protocols, optimized mixing, and adept solid separation methods, underpinned by cutting-edge reactor designs and operational methodologies. 
More recently, developments such as the anaerobic membrane bioreactor (AnMBR) and flotation-based bioreactors have emerged as solutions tailored for lipid-intensive wastewater treatment (Cavaleiro et al., 2008). AnMBR, a nexus of anaerobic digestion and membrane filtration, has proven particularly adept for dairy wastewater treatment. It alleviates the challenges tied to gravity-based separation, yielding effluents devoid of suspended solids and of superior quality (Judd, 201).
The central focus of this research was the assessment of solids retention time (SRT) and its critical role in the operational parameters of AnMBR. This was accomplished by studying sludge filterability and membrane filtration performance. Additionally, we investigated how the acclimatization of biomass impacted the transformation of long-chain fatty acids (LCFA) in lipid-rich wastewater. Initial evaluations emphasized the role of SRT on AnMBR efficiency during the treatment of synthetic dairy wastewater laden with lipids. Employing two distinct AnMBR configurations with SRTs of 20 and 40 days, both systems manifested approximately 99% efficiency in waste removal at an organic loading rate of 4.7 g COD L-1 d-1. Significantly, lipid sedimentation was absent, facilitating their continued anaerobic degradation. LCFA accumulation was minimal in both systems, with the 40-day SRT configuration showing slightly enhanced biological conversion and stability. Subsequently, the study delved into the effects of SRT on the filtration efficacy of AnMBR using lipid-rich synthetic dairy wastewater. At the 40-day SRT, the system encountered elevated pressures and resistances, presumably due to escalated contaminant levels, including fats, oils, and LCFAs. While both systems showcased analogous filterability, the 20-day configuration exhibited superior membrane performance, suggesting potential membrane operational refinements for the 40-day SRT. Lastly, the influence of LCFA on anaerobic sludge processes was investigated. Three distinct sludge samples, two lipid-acclimated and one non-acclimated, were exposed to varying oleic and palmitic acid concentrations, ranging from 50 to 600 mg COD/L. Oleic acid showed superior degradation compared to palmitic acid across all samples, with heightened methane production. Lipid-acclimated sludges demonstrated augmented LCFA degradation potential.
However, upon reaching LCFA concentrations beyond 400 mg/L, degradation of both acids into intermediate products was inhibited, albeit without affecting methane production. Intriguingly, specific bacterial taxonomies associated with LCFA degradation were identified in lipid-acclimated sludge samples, underscoring the potential of sludge adaptation strategies in enhancing anaerobic treatment of lipid-rich effluents.
In this doctoral research, we elucidated the prospects and challenges associated with the utilization of AnMBR for treating lipid-rich dairy wastewater. We highlighted the critical importance of Solid Retention Time (SRT), a key operational parameter that exerts a profound influence on both the biological and membrane aspects of the system.
Furthermore, our study underscored the paramount role played by the two most prevalent Long-Chain Fatty Acids (LCFAs), namely oleic and palmitic acid, within the domain of anaerobic digestion.","","en","doctoral thesis","","978-90-73445-58-1","","","","","","","","","Sanitary Engineering","","",""
"uuid:140e7b0d-5b24-4e1f-8aa8-fe6edcfd735d","http://resolver.tudelft.nl/uuid:140e7b0d-5b24-4e1f-8aa8-fe6edcfd735d","Optimizing quantum error correction for superconducting qubit processors","Varbanov, B.M. (TU Delft QCD/Terhal Group)","Terhal, B.M. (promotor); DiCarlo, L. (promotor); Delft University of Technology (degree granting institution)","2024","The theory of quantum mechanics describes many phenomena that may initially seem to be counter-intuitive and, in some cases, impossible, given the understanding of classical mechanics that most of us are more intimately familiar with. Following its initial introduction, there was a great deal of debate among scientists regarding the predictions made by this theory. The strange nature of quantum mechanics has led to many memorable quotes and the use of “spooky” to describe some of these predictions. Since its initial introduction, quantum mechanics has been rigorously tested and has proven to be quite a successful theory. Quantum mechanics has found many different applications and has led to the existence of devices and technologies we use daily. Another potential application of quantum mechanics is quantum computation, which Richard Feynman first put forward as an idea in 1982. Quantum computers have the potential to solve specific problems that can be infeasible for even the most powerful (classical) supercomputers and have potential applications in many different areas, such as quantum chemistry, cryptography, and optimization. However, performing a quantum computation is challenging and requires overcoming the inherent fragility of quantum systems. Storing information in a quantum system requires it to be well isolated from the environment to avoid any unwanted interactions that can corrupt the stored data. Unfortunately, at the same time, we need the ability to control this system, make it interact with other such systems, and ultimately measure it for us to perform an actual computation. 
This is a universal issue and all of the systems we have so far developed to be used as quantum bits (qubits) have been plagued by noise. Each operation applied to the qubit or even the act of leaving the qubit idling for some time generally leads to an error with a non-negligible probability. The impact of this noise has so far prevented quantum computers from performing any practical computation. While substantial efforts have been made to reduce these physical error rates over the past several years, we are still far from the universal fault-tolerant quantum computers we ultimately strive for. Fortunately, quantum error correction can help us reach the low error rates necessary for quantum computers to realize their potential applications in the future. This can be achieved by storing the quantum information in a logical qubit instead of a noisy physical one. When using a stabilizer code, which will be the focus of this dissertation, this logical information is distributed over many (noisy) physical qubits, referred to as data qubits. Another set of qubits, the so-called ancilla qubits, is used to perform indirect parity measurements, which do not destroy the stored information but give some information about whether an error has occurred. We then try to interpret this information to identify what errors have happened and correct them, which is done by a classical algorithm referred to as the decoder. Increasing the number of physical qubits used to encode the logical qubits allows more physical errors to be detected and corrected. The number of correctable errors is captured by the distance of the code, defined as the minimum number of physical single-qubit errors that constitute a logical error. One of the critical properties of error correction is the ability to reduce the logical error rate by increasing the code distance, which requires the physical error rates to be below some threshold value. 
The valiant experimental effort over the years has led to several recent experiments that implement various error-correcting codes and demonstrate the reduction of the error rates promised by error correction. In particular, these experiments (and the experiments leading up to them) identified several noise sources that had not been explored in sufficient detail and could significantly impact the logical performance of the code. In this dissertation, we explore the impact of the noise encountered in transmon-qubit devices on the performance of error-correcting codes, namely the surface code. Transmon qubits are, in practice, multi-level systems, and only the lowest two energy levels are used for computation. Unfortunately, they are also weakly anharmonic, leading to the applied operations having some probability of exciting the qubit outside of this computational subspace, referred to as a leakage error. We explore the impact of leakage in both simulations and experiments and develop schemes to mitigate it. We also consider other approaches to improve the logical performance or to reduce unwanted interactions. In Chapter 2, we develop a realistic model of leakage induced by the two-qubit gates between flux-tunable transmon qubits. We show that leaked qubits effectively spread errors on their neighboring qubits, which are then detected by the parity measurements. We show that a Hidden Markov model can detect the increased error rate due to leakage. This enables us to post-select out runs during which any qubit has leaked to restore the code performance. Unfortunately, post-selection is ultimately not scalable. Instead, it is desirable to have operations that return leaked qubits to the computational subspace. These operations are called leakage-reduction units and convert leakage into a regular error. 
In Chapter 3, we propose a leakage-reduction scheme, which does not require any overhead in the time needed to perform the parity measurements or an overhead in the quantum hardware. For data qubits, we propose an operation that transfers the leakage to a dedicated readout resonator, where it can quickly decay. This operation is designed to not disturb the computational states, allowing it to be applied unconditionally. For the ancilla qubit, we use the fact that measurements can determine if a qubit is in the leaked state. We then apply a conditional operation to return the qubit to the computational subspace whenever it is measured to be leaked. Using detailed density-matrix simulation, we show that this scheme can be easily implemented to remove qubit leakage from the system, mitigating its impact on the logical performance of the code. In Chapter 4, we realize the data-qubit leakage reduction unit in an experiment and show it can also be used to remove ancilla-qubit leakage, removing the need for fast conditional operations and readout that distinguishes the leaked states. We show that these operations can remove most of the leaked population in about a hundred nanoseconds while having a negligible impact on the computational subspace. We also demonstrate that these operations decrease the number of observed errors by a two-qubit parity check, showing that the effect of leakage can be mitigated. Chapter 5 considers an architecture employing two types of superconducting qubits, the transmon qubit and the fluxonium qubit. These qubits have very different frequencies, making it unclear whether these qubits can even interact with each other in the first place. We show that the interactions with the higher-excited states can be utilized to perform operations between them, and we propose two types of gates. In practice, qubit frequencies are targeted with only a certain precision in fabrication. 
In certain cases, this can lead to unwanted interactions between qubits that increase the physical error rates, referred to as frequency collisions. We show that the large detuning between these qubits reduces the occurrence of frequency collisions, thereby increasing the expected fabrication yield. In Chapter 6, we realize a distance-two surface code experiment and perform repeated parity measurements to detect and post-select errors, given that it is impossible to correct them when using such a small code. We implement a suite of logical operations for this code, including initialization, measurement, and several single-qubit gates. In the context of error detection, a logical operation is said to be fault-tolerant if the errors produced by each operation are detectable. We show that fault-tolerant variants of operations perform better than non-fault-tolerant ones. We also characterize the impact of various noise sources on the code performance. In Chapter 7, we look at another small-distance code, in this case, the distance-seven repetition code. We show that increasing the distance weakly suppresses the logical error rate of the code. We investigate the limiting factors behind the observed logical performance by analyzing the correlation between the observed parity measurements and performing simulations using noise models parameterized by the measured physical error rates. Chapter 8 considers a decoder that can perform the error inference more accurately. In particular, we implement a neural network decoder and investigate how it performs on experimental data from surface code experiments. We show that the accuracy of this decoder approaches what can be achieved by an optimal but computationally inefficient tensor network decoder. Transmon measurement produces analog outcomes. These are then typically converted to binary ones, leading to some information loss.
We show how a neural network can also use this analog information to improve the achieved logical performance further. We have investigated the impact of non-conventional errors in simulation and in several experiments, demonstrating the importance of characterizing and mitigating these errors. We expect the methods introduced in this dissertation to lead to lower logical error rates. In the short term, this can aid in demonstrations of the usefulness of error correction. In the long term, addressing such errors is important to ensure the ability to suppress logical error rates to sufficiently low levels. We finish this dissertation with a brief conclusion of each chapter. We also outline several potential challenges that can impact future error-correction experiments, namely how to reduce the larger qubit overhead needed for fault-tolerant computation and several error sources that might become a limiting factor for future error-correction experiments.","superconducting qubits; quantum error correction; leakage; decoders","en","doctoral thesis","","978-94-6384-527-4","","","","","","","","","QCD/Terhal Group","","",""
"uuid:266e6da7-0f85-45f5-ad72-0d81ff5f7bcb","http://resolver.tudelft.nl/uuid:266e6da7-0f85-45f5-ad72-0d81ff5f7bcb","A Study of ICT Firm Innovativeness in Indonesia: Influencing Conditions and Design of a Change Strategy","Syamsuri, L.M. (TU Delft Economics of Technology and Innovation)","Roosenboom-Kwee, Z. (promotor); van Geenhuizen, M.S. (promotor); Delft University of Technology (degree granting institution)","2024","This PhD study investigates the challenges of and proposes potential solutions to the relatively low innovativeness of small and medium-sized enterprises (SMEs) in the ICT sector in Indonesia. Since there is not much understanding of apparent ‘missed opportunities’ in Indonesia's ICT sector, there is a need to investigate internal conditions that affect innovativeness at the firm level (firm-specific managerial and competence factors) as well as external factors, such as networks’ knowledge spillovers and foreign direct investment (FDI). Low innovativeness also indicates the urgency for the country to take necessary actions, such as improving ICT education to stimulate more ICT talent, enhancing strategies to attract more investment in the ICT industry, and reducing the digital divide between regions. Considering the geographical and cultural uniqueness of Indonesia, this thesis further proposes a set of change strategies to improve the innovativeness of the ICT sector in the country.
The study starts with the introduction and problem statement (Chapter 1). This is followed by a discussion of theories on Resource-Based View, Dynamic Capability, Agglomeration and Entrepreneurial Ecosystem, Culture, and Multi-actor theory (Chapter 2). Such a broad approach is taken to enable a theory-underpinned broad scan of empirical reality. In this chapter, several hypotheses are formulated that will be investigated in the empirical chapters that focus on the firm level. Next, Chapter 3 discusses the problematic situations and opportunities in the ICT sector in Indonesia (sector-level study). Although the ICT sector is a fast-growing sector in Indonesia, one of the problematic situations is that Indonesia is still a net importer of ICT, which draws attention to the innovativeness of domestic firms. In addition, the disparity of ICT infrastructure within the country is relatively wide between the western and eastern regions. The sector-level study in Chapter 3 is followed by a discussion on a set of conditions of ICT innovativeness at the firm level, including specific internal management conditions, and external and entrepreneurial ecosystem conditions in Chapter 4. The empirical results in this chapter are derived from an e-survey among 260 ICT firms (mainly small- and medium-sized), spread over Indonesia, and from estimation of multiple regression models. The findings suggest that firm capabilities and external knowledge spillovers positively influence firm innovativeness only after having reached relatively high values, as indicated by a quadratic relation. Moreover, the country’s entrepreneurial culture faces a ‘strong power distance’ or hierarchy that needs to be transformed to foster innovation. Chapter 5 examines the development differences between the Jakarta area (core region) and the rest of Indonesia (non-core regions) and how each of the conditions influences innovativeness in these regions. 
The study in Chapter 5 indicates that core and non-core regions in the country show differences in the entrepreneurial ecosystem and firm capabilities in various aspects. In the non-core regions, the innovativeness relationships with the management conditions and entrepreneurial ecosystem seem weaker than those in the core region. The most pressing outcome for non-core regions is that they have relatively modest firm-internal capabilities as well as limited potential in the entrepreneurial ecosystem. The non-core regions also need to expend more effort on increasing innovativeness in terms of ICT skills and managerial cognitive capability. Next, through change strategy formulation and in-depth understanding of innovativeness based on the empirical findings in Chapters 3, 4 and 5, the design of innovation change strategies in the ICT sector in Indonesia is explained (Chapter 6). This chapter provides direction for a set of solutions following empirical analysis at the firm level in the ICT sector for the entire country and two different regions. Chapter 6 also presents the elaboration of collaborative policymaking to improve policy implementation in Indonesia’s ICT sector, including more attention to consultation and deliberation between stakeholders and to evaluation. Chapter 7 discusses suggestions for making the study transferable in practice and the key contributions of the study. Chapter 8 concludes the study with reflections on the whole PhD study and discussions of the limitations of the research and suggestions for future research.
Three key conclusions from the empirical part of the study can be mentioned as follows. First, compared to larger firms, small firms in Indonesia have to put extra effort into learning to increase innovativeness. In this regard, the study found some non-linear relations (mostly quadratic) in management capabilities, especially in ICT skills. This situation calls for improvement of small firms’ management capabilities, in particular ICT skills combined with market-related skills. Second, a relatively weak positive influence of the urban environment and a somewhat stronger positive influence of clusters can be found in the study. For example, the study could support theoretical ideas of agglomeration advantages (e.g., benefits of knowledge spillovers in metropolitan areas). The findings confirm the positive influence of networks within clusters. As the third conclusion, firm innovativeness tends to have a non-linear relationship with FDI, suggesting increasing returns (benefits), despite firms’ limited ability to use FDI opportunities fully. In addition, the study found that the core and non-core regions in Indonesia differ in most firm-internal conditions, including management and entrepreneurial ecosystem conditions. For instance, the ICT skill level is much higher in the core region than in the non-core regions.
The key scientific contribution of this PhD study lies in extending general innovation theories to a developing country like Indonesia, which is densely populated in parts and characterised by a low technological level and low innovativeness, mainly among small firms. The study reveals the extent to which phenomena in developing countries confirm or refute what has been postulated for developed countries, for example concerning ambitions to be innovative and power structures within firms. As a policy contribution, the study suggests a new (policy) approach to respond to the many challenges in Indonesia, namely improving policymaking concerning conditions for innovation. This approach is collaborative policymaking, including all stakeholders involved, in particular those at the level of practical policy implementation, with more emphasis on consultation and deliberation between them. The study also suggests a new approach at the firm level, ‘co-creation of inventions with customers’, which is relatively new in innovation practice in Indonesia.
Further, some limitations were inevitable due to financial and time constraints during this PhD study, including the survey tools and the representation of particular regions (e.g., Papua), though attempts were made to overcome these limitations by interviewing practitioners and experts. The study provides a number of suggestions for future research. First, to tackle the reluctance of SMEs to act as respondents, future research may extend and complement the survey in this PhD study through other data collection techniques, e.g., via professional surveyors. Second, future research may conduct an in-depth survey complemented with interviews to identify other important qualitative aspects that remained beyond the study, for instance cultural influences on innovativeness. Third, advanced model assessment techniques, such as Structural Equation Modelling (SEM), could be used to evaluate whether theoretical models, including complex interactions between influencing factors, are plausible when compared to observed data. Fourth, an agglomeration index could be used to evaluate the intensity of spatial agglomeration in a single sector and to make comparative analyses among different sectors. Fifth, improvements in management conditions could be pursued through a cascading strategy, because the cascading process allows a firm to carry the strategy throughout the organisation and to create a supporting strategy for the firm’s entire value chain of activities, ensuring the execution of management change. Sixth, collaborative experimentation is recommended to identify best practices, e.g., in co-creation.
Overall, this PhD research fills gaps in innovation studies in Indonesia, such as the focus of existing studies on a specific part of the country (i.e., western Indonesia) and the limited follow-up with policy solutions in practice. To the best of our knowledge, this PhD study is one of the few that covers large regions of Indonesia while focusing on the ICT sector and also proposing policy and management solutions; it thereby starts to bridge these gaps.","","en","doctoral thesis","","978-94-93330-57-3","","","","","","","","","Energie and Industrie","","",""
"uuid:8d2b92dc-51e9-4d8e-b1d8-c9e7bac211e7","http://resolver.tudelft.nl/uuid:8d2b92dc-51e9-4d8e-b1d8-c9e7bac211e7","Dynamic wind farm flow control using free-vortex wake models","van den Broek, M.J. (TU Delft Team Jan-Willem van Wingerden)","van Wingerden, J.W. (promotor); Sanderse, Benjamin (copromotor); Delft University of Technology (degree granting institution)","2024","In the current state of model-based wind farm flow control, the implementation of yaw-based wake steering based on steady-state models has demonstrated potential for improving wind farm power production. However, for realistic, time-varying wind directions, the dynamics of wake propagation may impact the effectiveness of wake redirection. This dissertation presents the development of an economic model-predictive wind farm flow control strategy and assesses the potential for improved power production from wake steering in wind farms under time-varying conditions.
At the core of such a model-based control strategy is a control-oriented model of the wind farm flow. A free-vortex wake model is formulated based on an actuator-disc representation of the wind turbine rotor. A validation study is included for power predictions in the mid to far wake of turbines operating under yaw misalignment using data from wind tunnel experiments. Finally, a distributed strategy for control optimisation is constructed to provide a scalable solution for dynamic wind farm flow control which is tested in a large-eddy simulation environment under realistic conditions. This novel controller yields additional gains in power production during wind direction transients and reduces the increase in yaw actuator usage from wake steering.","wake steering; yaw misalignment; wind farm flow control; adjoint optimisation; economic model-predictive control; free-vortex wake","en","doctoral thesis","","978-94-6366-798-2","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:3010e904-f171-4a37-b75a-143be8750ab2","http://resolver.tudelft.nl/uuid:3010e904-f171-4a37-b75a-143be8750ab2","Towards Energy-Efficient Residential Buildings In Jeddah, Saudi Arabia: Exploring Energy Retrofitting Options And Assessing Their Feasibility","Felimban, Ahmed Abdulazeem (TU Delft Architectural Technology)","Knaack, U. (promotor); Klein, T. (promotor); Konstantinou, T. (copromotor); Delft University of Technology (degree granting institution)","2024","The thesis explores energy retrofitting options for enhancing the energy efficiency of residential buildings in Jeddah, Saudi Arabia. It identifies and validates cost-effective energy retrofit schemes that have the potential for energy savings. The thesis also assesses the feasibility of energy retrofitting scenarios for building envelopes and their impact on reducing energy consumption, improving thermal comfort, and mitigating the environmental impact of buildings. The results of this research can guide architects and decision-makers on energy-saving measures for residential buildings in Saudi Arabia, with Jeddah serving as a representative case study.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-803-3","","","","","","","","","Architectural Technology","","",""
"uuid:cd0f2c4b-45b8-44ca-b0b0-d58222254375","http://resolver.tudelft.nl/uuid:cd0f2c4b-45b8-44ca-b0b0-d58222254375","Hydrogen energy storage in porous media","Hashemi, L. (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); Hajibeygi, H. (promotor); Delft University of Technology (degree granting institution)","2024","The demand for sustainable and clean energy sources has become increasingly vital in addressing the challenges of climate change and energy security. Hydrogen, with its high energy density and potential for carbon-free energy conversion, has emerged as a promising candidate for future energy systems. Efficient storage and retrieval of hydrogen are crucial for its widespread utilization, for which a promising approach is underground hydrogen storage in geological porous media. This thesis aims to explore and advance the understanding of hydrogen storage in geological porous media, specifically focusing on pore-scale modeling and contact angle analysis.
This research aims to overcome the limitations of current hydrogen storage methods and develop more efficient energy storage systems. Porous materials like sandstones have special characteristics that make them suitable for storing hydrogen underground. To design and operate underground hydrogen storage on a large scale, it is important to understand how fluids move through these materials. The way hydrogen is stored and released is influenced by complex processes happening at a very small scale (μm). To accurately simulate these processes, we need to study how fluids move in the pores, including factors like capillary pressure (the pressure difference between the nonwetting and wetting phases, one of the main forces acting in pore-scale transport) and relative permeability (how easily fluids flow through the pores when other fluids are also present).
Pore-scale modeling is a useful tool for simulating and understanding how hydrogen behaves in the tiny pore spaces of porous materials. These models help us see how hydrogen moves, spreads out, and interacts with the pore walls at a very small level. Another important aspect is studying the contact angles in the system of hydrogen, water, and porous material. These angles tell us about the way these substances interact at the interfaces between solids, liquids, and gases. By studying these processes and measuring contact angles, we can gain a better understanding of how hydrogen is stored and released, considering factors like pressure, temperature, the type of material, and how easily fluids flow through the pores. This knowledge will help us design better systems for storing hydrogen energy in porous materials on a larger scale.
The primary objectives of this thesis are as follows: to develop pore-scale models for simulating and understanding underground hydrogen storage in geological porous media; to investigate the contact angle in hydrogen-brine-sandstone systems and its influence on storage and release mechanisms; to analyze the contact angle for a hydrogen-methane mixture in the brine/sandstone system and assess its implications for hydrogen storage; to develop a dynamic pore network model to capture the dynamic behavior of hydrogen in geological porous media; and to draw conclusions from the findings and propose future research directions in the field of hydrogen energy storage.
Despite the benefits provided by the MMC-based MTDC system, various technical problems emerge. For example, in case of a DC fault on HVDC transmission lines, the DC voltage suffers a deep sag and the fault current rises to its peak value within several milliseconds, seriously affecting system stability. The fault currents can easily damage the power electronics and may lead to a collapse of the entire system if the faults are not cleared promptly. Thus, it is crucial to implement a fast, selective, and reliable DC fault protection technology in the system for fault detection. Once the fault is cleared, it is important to know the exact fault location to repair the faulty sections and restore the system. Hence, an accurate DC fault location technique is of utmost importance for the MTDC system, as it would significantly minimize electricity loss and expedite the system restoration process in the event of power outages. In addition, there is a lack of standardization in MMC control, and the majority of HVDC projects are constructed in a vendor-specific manner. As of today, it is unclear how MMC converters from different manufacturers will interoperate with each other. These issues pose new challenges to the performance of HVDC protection and MMC control and need to be addressed to manage, safeguard, and accelerate the practical feasibility of this system.
The research in this thesis aims to address shortcomings that have not been resolved in the state of the art, mainly related to the challenges arising when DC faults occur in MMC MTDC systems, and as such could provide promising solutions for future practical MTDC applications. The main topics are MMC control & interoperability, protection, and fault location for the MMC-based MTDC system. The thesis deals with designing a robust protection scheme, a fault location method, and an investigation of interoperable MMC controllers...
To date, however, the potentially confounding effects of both internal and, particularly, external water dynamics in vegetation on radar backscatter have not been adequately addressed. Existing studies have indeed illustrated the effects of SCW on radar backscatter, but the degree to which it influences different frequencies and polarizations, and the subsequent impact on crop bio-geophysical parameters remains unclear. Therefore, the main goal of this thesis is to expand our knowledge of the relationship between radar backscatter, vegetation dynamics, and surface canopy water (SCW) in agricultural monitoring. In this thesis we utilized statistical analysis and radiative transfer modeling in combination with fully polarimetric L-band data from a truck-mounted scatterometer and C-band data from Sentinel-1, along with extensive field data…
The research focuses on magnetic-dipole electromagnetic induction (EMI) and direct-current electrical resistivity tomography (ERT) due to their sensitivity to the electrical resistivity of the subsurface, which correlates with other geotechnical properties. Both methods are easily deployable and can cover large areas relatively quickly.
Chapter 2 delves into the theoretical aspects of EM data acquisition in dikes, demonstrating that EMI devices with far-offset receivers can capture large anomalies at a much faster rate than ERT. However, both methods perform poorly in detecting small, detrimental features such as thin layers, and are affected by groundwater salinity.
Chapter 3 proposes a method to estimate the geometric variability of soil layers using geophysical tomograms. EMI and ERT tomograms are employed to estimate the orientations of soil layers, enabling an accurate estimation of geometric variability with reduced exploration effort.
Chapter 4 highlights the value of high-resolution ERT in estimating the spatial variability of properties within homogeneous soil units. This method serves as an efficient alternative for estimating internal variability in geotechnical analyses of water defense structures and other geotechnical infrastructure.
An essential contribution of this thesis is the proposal of quantitative and reproducible methods for characterizing subsurface heterogeneity in the context of water defenses. These insights help reduce uncertainties and optimize resource allocation for dike reinforcement. The integration of geophysical methods with other geotechnical site data enhances the understanding of the subsurface. Chapter 5 summarizes the main findings of this research regarding the geotechnical schematization of dikes.","Geophysics; Geostatistics; Heterogeneity; Dikes","en","doctoral thesis","","","","","","","","","","","Geo-engineering","","",""
"uuid:e39336b5-6943-47b4-909f-9fc83e215b7c","http://resolver.tudelft.nl/uuid:e39336b5-6943-47b4-909f-9fc83e215b7c","Model-Based Hydrodynamic Leveling: An Impact Study on the European Vertical Reference Frame","Afrasteh, Y. (TU Delft Physical and Space Geodesy)","Klees, R. (promotor); Verlaan, M. (promotor); Slobbe, D.C. (copromotor); Delft University of Technology (degree granting institution)","2024","Establishing an accurate global unified vertical reference frame (VRF) is a long-standing objective of geodesy. However, that objective has still not been achieved. One particular application where the lack of such a VRF is evident is the improvement of hydrodynamic models by assimilating total water levels acquired by tide gauges. Indeed, a straightforward assimilation requires that both the observed and modeled water levels refer to the same vertical datum. The required accuracy is high; it is expected to be on the order of 1 centimeter. The best alternative VRF for the area of interest, the northwest European continental shelf, is the European Vertical Reference Frame 2019 (EVRF2019). The EVRF2019, however, still lacks complete coverage and the required accuracy. The key reason is that it is solely based on geopotential differences from spirit leveling/gravimetry, which are not available between benchmarks separated by large water bodies. This thesis exploits model-based hydrodynamic leveling to provide these differences. The specific objective is to assess the potential of including these data in realizing the European Vertical Reference System (EVRS).","Model-based hydrodynamic leveling; Hydrodynamic model; Height system realization; European Vertical Reference System; European Vertical Reference Frame; Tide gauge; Empirical noise model","en","doctoral thesis","","978-94-6384-525-0","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:e19a0363-d9cf-4f33-8936-757c268f27a1","http://resolver.tudelft.nl/uuid:e19a0363-d9cf-4f33-8936-757c268f27a1","Leveraging Factored State Representations for Enhanced Efficiency in Reinforcement Learning","Suau, M. (TU Delft Interactive Intelligence)","Oliehoek, F.A. (promotor); Spaan, M.T.J. (promotor); Delft University of Technology (degree granting institution)","2024","Reinforcement learning techniques have demonstrated great promise in tackling sequential decision-making problems. However, the inherent complexity of real-world scenarios presents significant challenges for their application. This thesis takes a fresh approach that explores the untapped potential of factored state representations as a means to enhance the efficiency of reinforcement learning.
Factored representations involve variables describing various features of the environment. These variables, along with their possible values, define the agent’s states. Unlike standard representations, factored representations provide a unique perspective that enables us to gain deeper insights into the underlying structure of the environment and refine our understanding of the problem at hand.
By analyzing variable dependencies, we can abstract simplified representations of the environment states and construct computationally lightweight models. To do so, we will explore potential factorizations of key functions governing the reinforcement learning problem, such as transitions, rewards, policies, or value functions. These factorizations can be achieved by exploiting variable redundancies and leveraging relations of conditional independence.
This thesis proposes a set of methods that are shown to improve the efficiency and scalability of reinforcement learning in complex scenarios. We hope that the findings of this research contribute to showcasing the potential of factored representations and serve as inspiration for future research in this direction.
The first theme considers hyperelastic material modelling, with a focus on developing wrinkling models under large strains. The shell model employed in this dissertation is based on the isogeometric analysis paradigm. Specifically, the Kirchhoff–Love shell model is used, which leverages the higher-order continuity of underlying spline spaces. Chapter 3 extends hyperelastic material formulations to stretch-based materials, enabling the use of the isogeometric analysis paradigm for rubber-like shells. Since the modelling of wrinkling patterns imposes physical scales limiting element mesh sizes, chapter 4 introduces a hyperelastic isogeometric membrane element that incorporates an implicit wrinkling model, thus avoiding explicit modelling of wrinkling amplitudes.
The second theme addresses adaptive methods. On the one hand, spatial adaptivity enhances the local detail in a numerical simulation. Chapter 5 presents an adaptive isogeometric analysis framework based on intuitive goal functions, such as wrinkling amplitudes, to guide adaptive meshing routines. On the other hand, temporal or quasi-temporal adaptivity serves to enhance the efficiency of dynamic or quasi-static simulations. Chapter 6 introduces an adaptive parallel arc-length method. The method's adaptivity arises as a by-product of parallelisation efforts aimed at reducing computational times for quasi-static simulations.
The advantage of the smoothness inherent in the spline spaces used in isogeometric analysis is limited to simple topologies. To benefit from this smoothness in complex geometries, the third theme of this dissertation focuses on complex domain modelling. Chapter 7 presents a qualitative and quantitative comparison of unstructured spline constructions for multi-patch modelling using isogeometric analysis. This chapter offers insights and suggestions for future developments related to unstructured spline constructions.
The final theme of this dissertation concerns the reproducibility of the developed methods. In this section, design considerations are presented for an open-source software library, along with small examples, aimed at ensuring easy reproducibility and supporting future research in the three themes mentioned earlier.
In summary, this dissertation offers a wide range of methods for the isogeometric analysis of structural instabilities in thin-walled structures, including the modelling of wrinkling. The concepts developed in terms of hyperelasticity expand the applicability of wrinkling models to encompass large strains. The concepts developed in terms of adaptivity provide intuitive error estimators that drive local refinement in space, as well as a novel continuation method that eliminates the inherently serial arc-length methods. Through the use of unstructured splines, complex domains become accessible for the analysis of structural stabilities. By creating an open-source, forward-compatible software library, these concepts are made available for future developments in the field of isogeometric analysis of wrinkling.
Among the auxiliary subsystems, the Environmental Control System (ECS) is the largest consumer of non-propulsive power, accounting for up to 3-5% of the total fuel burn. The replacement of the conventional Air Cycle Machine (ACM) with an electrically-powered ECS based on the Vapor Compression Cycle (VCC) system could enable: i) a substantial decrease in fuel consumption; ii) a finer regulation of the relative humidity in the air distribution system, leading to improved air quality in the cabin and flight deck; iii) a reduction in maintenance costs and an increase in system reliability, due to the removal of the maintenance-intensive bleed system. However, the adoption of VCC systems in the aerospace sector has been historically very limited, due to safety concerns regarding the ozone depleting potential, toxicity and flammability of the working fluids used as refrigerants, as well as because of a lack of research specifically targeting airborne applications.
This dissertation documents research work performed as part of the NEDEFA project, which entails the investigation of VCC-based ECS architectures powered by oil-free high-speed centrifugal compressors. The first objective is to advance the state of the art regarding high-speed compressors operating with gas bearings, i.e., the key technological enablers of airborne VCC systems. The second target is to develop a methodology for the integrated design of aircraft ECS, namely, a design philosophy in which the system and the main components are optimized simultaneously.
The main outcomes of this work are the development of a preliminary design model for high-speed compressors, extensively validated with experimental data and computational fluid dynamics simulations, and the implementation of an integrated design framework for aircraft ECS, embedding a multi-point and multi-objective optimization strategy. The compressor model has been applied to derive design guidelines for single-stage and twin-stage machines operating with arbitrary working fluids, as well as to perform the fluid dynamic design optimization of the compressor that will be installed in the IRIS (Inverse organic Rankine Integrated System) test rig of the Propulsion and Power Laboratory. Furthermore, the integrated design method has been used to size and compare the performance of two alternative ECS configurations for a single-aisle, short-haul aircraft resembling the configuration of an Airbus A320, i.e., a bleedless ACM and an electrically driven VCC. The results reveal that the optimal VCC system could be both more efficient and lighter than the corresponding ACM architecture, leading to potential fuel savings in the order of 20% for the prescribed application.
A cell, around 10 micrometers in diameter, contains approximately two meters of DNA, divided into 46 chromosomes. These chromosomes need to be tightly folded to fit within the cell, but during interphase they must also remain accessible for protein interactions. In mitosis, chromosomes take on their characteristic X-shape, driven by a molecular machine called condensin, part of the SMC complex family, which is essential for DNA folding at different stages of the cell cycle.
Condensin, known for shaping mitotic chromosomes, comes in two types in humans: condensin I and condensin II. This thesis explores the functions of condensin II beyond mitotic chromosome structuring. In Chapter 3, research on 24 organisms reveals diverse chromosome organization during interphase, with condensin II influencing the transition from chromosome territories to Rabl-like organization. Removal of condensin II in human cells shifts their organization.
Chapter 4 examines the impact of condensin II removal on chromosome territories in human cells, concluding that different levels of genome organization operate independently. Removing condensin II minimally affects gene expression, suggesting that chromosome territories play a limited role in gene regulation.
Chapter 5 investigates how condensin II prevents Rabl-like organization and centromere clustering, finding its specific role during or after mitosis. The data indicates that condensin's role in shortening the chromosome axis is crucial in preventing centromere clusters.
Chapter 6 contextualizes findings, proposing a model on how condensin II may control interphase organization based on data from Chapters 3 to 5.
Chapter 7 shifts focus to condensin II's negative regulator, MCPH1, inhibiting condensin II in interphase. Removing MCPH1 leads to interphase condensation, affecting DNA distribution during cell division. Condensin II, typically working with topoisomerase 2, encounters difficulties in untangling DNA knots when MCPH1 is absent.
This dissertation highlights the importance of balancing condensin II, investigating the consequences of its loss and over-activation. Both scenarios significantly impact cell function, emphasizing condensin II's broader role beyond mitotic chromosome organization. The research contributes fundamental insights into condensin biology, offering potential for new discoveries in this field.","Chromosome biology; Molecular biology; SMC complexes; Condensin; Cohesin; chromosome condensation; Genome organization; Chromosome","en","doctoral thesis","","978-94-6483-646-2","","","","","","2025-01-18","","","BN/Benjamin Rowland Lab","","",""
"uuid:b54561bd-1141-429f-83e3-a94b966c7a07","http://resolver.tudelft.nl/uuid:b54561bd-1141-429f-83e3-a94b966c7a07","Global impacts of aircraft emissions on air quality and nitrogen deposition","Domingos de Azevedo Quadros, F. (TU Delft Aircraft Noise and Climate Effects)","Snellen, M. (promotor); Dedoussi, I.C. (copromotor); Delft University of Technology (degree granting institution)","2024","Global passenger air traffic has doubled in the 13 years prior to 2019, and is expected to double again over the next 20 years or so. Growing demand for aviation is met by a corresponding increase in jet fuel being burned by aircraft, releasing multiple pollutants into the atmosphere. Besides disturbing the Earth’s radiative balance, these emissions also lead to excessive deposition of reactive nitrogen, and to a degradation of air quality. Anthropogenic nitrogen deposition damages vulnerable ecosystems, while degraded air quality is associated with increases in human mortality rates. These last two environmental impacts can be very localized, but, owing to the high altitude of emissions, they also occur over intercontinental distances. This thesis aims to evaluate the magnitude of air quality and nitrogen deposition due to emissions from civil fixed-wing aircraft at a global scale, and how these impacts might change in the coming decades.","Aviation; Air quality; Air pollution; Aircraft emissions; Nitrogen deposition; Intercontinental pollution; Public health; Atmospheric chemical transport model","en","doctoral thesis","","","","","","","","","","","Aircraft Noise and Climate Effects","","",""
"uuid:5fb0dfe6-94cc-40f1-8773-91367b5e2fba","http://resolver.tudelft.nl/uuid:5fb0dfe6-94cc-40f1-8773-91367b5e2fba","Analysis and Design of Lens Antenna Systems for Applications at Millimeter and Sub-millimeter Wavelengths","Zhang, H. (TU Delft Tera-Hertz Sensing)","Llombart, Nuria (promotor); Neto, A. (promotor); Delft University of Technology (degree granting institution)","2024","In recent decades, dielectric lens antennas have been increasingly adopted and developed for sensing and imaging applications at sub-millimeter (sub-mm) wavelengths because they can achieve high gain while keeping their physical size and weight acceptable at these wavelengths. More recently, as low-loss and low-cost lens materials have become available and lens fabrication is becoming easier and more accurate, lens antennas are attracting more interest for a variety of applications at millimeter (mm) wavelengths, such as high-data-rate wireless communication and automotive radars. However, the analysis and design of lens antennas at mm and sub-mm wavelengths present different challenges. In this thesis, we propose to use a field correlation technique to analyze lens antennas in reception and then optimize their aperture efficiency for different scenarios. Based on this optimization methodology, three examples of lens antenna systems are described at 28 GHz, 180 GHz, and beyond 200 GHz for the applications of 5G communication, wide field-of-view security imaging, and future mm-resolution THz imaging, respectively.
The proposed methodology and design provide possible solutions for the potential challenges and can be used as guidelines for designing lens antennas at mm and sub-mm wavelengths.....","Equivalent circuits; focal plane arrays; field correlation; geometrical optics; lens antennas; leaky-wave antennas; lens shaping; millimeter waves; photoconductive antennas; quasi-optical systems; sub-millimeter waves; sparse array; time-domain analysis; ultra wideband; wide field-of-view","en","doctoral thesis","","978-94-6384-524-3","","","","","","","","","Tera-Hertz Sensing","","",""
"uuid:04e4fe71-d257-4cac-aaa6-56390b3d80f9","http://resolver.tudelft.nl/uuid:04e4fe71-d257-4cac-aaa6-56390b3d80f9","Analysis of the slow-moving landslides in the Mazar Region in southeast Ecuador","Urgilez Vinueza, A.R. (TU Delft Water Resources)","Bakker, M. (promotor); Bogaard, T.A. (promotor); Delft University of Technology (degree granting institution)","2024","Landslide activity in the Andes remains an ongoing natural hazard with significant implications for regional development. Slow-moving landslides, while not typically resulting in catastrophic outcomes, can still cause substantial damage to critical infrastructure, including roads, buildings, crops, and hydropower dams. In southeast Ecuador, slow-moving landslides threaten the stability and functionality of the Mazar dam and reservoir. This thesis aimed to address these challenges by characterizing the slow-moving landslides in the Mazar region and developing a systematic approach to identify changes in their displacement rates, understand their physical causes, and assess the influence of hydrometeorological forcings.....","Slow-moving landslides; Hydro-geology; Hydro-Meteorology; Accelerations-decelerations; Multiple regression","en","doctoral thesis","","978-94-6384-526-7","","","","","","","","","Water Resources","","",""
"uuid:bd9a6840-9c69-43a3-9720-730d5879d4b6","http://resolver.tudelft.nl/uuid:bd9a6840-9c69-43a3-9720-730d5879d4b6","Developing places for human capabilities: Understanding how social sustainability goals are governed into urban development projects","Janssen, C. (TU Delft Practice Chair Urban Area Development)","Verdaas, J.C. (promotor); Daamen, T.A. (copromotor); Delft University of Technology (degree granting institution)","2024","Although social objectives are frequently part of the pursuit of sustainable urban development, how such social sustainability goals can be achieved in urban development practices remains a largely unsolved puzzle. While scholars increasingly acknowledge that urban social sustainability is a plural concept that needs to be specified in different situations, thus far very few social sustainability studies have focused on the processes in which such specifications take place – i.e., the implementation processes in which policies are brought into practice in urban areas or neighborhoods. This dissertation develops an understanding of how institutionalized governance processes affect the implementation of policy goals related to social sustainability in area-based urban development projects. The research draws on Sen’s Capability Approach (CA) to construct a capability-centered evaluation of such efforts. More than other normative approaches that primarily focus on the distribution or quality of spatial goods, the principles of the CA focus on the fact that different people have different experiences. Unique personal, social, and environmental circumstances per individual imply that people have different capabilities: the actual freedoms to do or be what one considers valuable for a dignified life. A promising role is reserved for the CA to investigate how exactly the diversity of human beings can be incorporated into urban development and planning processes. 
This provides a sincere response to the calls of social sustainability scholars that more ‘human-centered’ approaches are needed. The dissertation hypothesizes that governance processes around urban development projects hold various elements that affect the implementation of social sustainability in contemporary cities, and subsequently, influence whether ‘capability-centered urban outcomes’ are achieved or not. In that way, this dissertation analyzes how governance processes in urban development practice relate to capability-centered evaluations of urban social sustainability outcomes. Whereas these two aspects are often investigated separately – i.e., studies often either focus on analyzing the mechanisms within governance processes or on describing and evaluating social outcomes in the urban environment – this dissertation explicitly brings these together. The governance process is investigated from a collaborative governance perspective to analyze which activities and interactions between the different stakeholders affect capability-centered social sustainability outcomes in urban environments, and, complementarily, from an institutionalist perspective that explores which less-visible yet structural elements of governance condition the emergence of capability-centered governance activities.","social sustainability; urban development projects; collaborative governance; institutions; capability approach","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-799-9","","","","","","","","","Practice Chair Urban Area Development","","",""
"uuid:2dd874e6-bb94-4e3d-848d-3b54a0bc856a","http://resolver.tudelft.nl/uuid:2dd874e6-bb94-4e3d-848d-3b54a0bc856a","Hydrodynamics for the integration of fermentation and separation in the production of diesel and jet biofuels","Sousa Pires da Costa Basto, R.M. (TU Delft BT/Bioprocess Engineering)","van der Wielen, L.A.M. (promotor); Mudde, R.F. (promotor); Delft University of Technology (degree granting institution)","2024","Over the years, various technologies have been developed to produce and separate advanced biomolecules via the fermentative route. The target products range from complex terpenoids for pharmaceuticals and flavors to commodity chemicals and fuels. These compounds are often poorly water-soluble, phase-splitting organic compounds, or inhibitory and unstable, necessitating the addition of an extractive second liquid phase for product removal. The turbulent conditions in the multiphasic fermentation, coupled with the presence of surface-active compounds in the medium, create a stable emulsion that is difficult to separate in conventional systems. Technologies such as centrifugation and de-emulsifiers have been used to separate the emulsion and recover the product. However, these types of recovery processes are expensive, drastically increase the final product’s environmental footprint, and often hamper cell recycling.","","en","doctoral thesis","","978-94-6483-631-8","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:6abb764c-a884-4da9-9f89-7140ee8b097b","http://resolver.tudelft.nl/uuid:6abb764c-a884-4da9-9f89-7140ee8b097b","Dynamics of the Pitch-able VAWT: A Study of the Dynamics of the Vertical Axis Wind Turbine with Individual Pitch Control","LeBlanc, B.P. (TU Delft Wind Energy)","van Bussel, G.J.W. (promotor); Ferreira, Carlos (promotor); Delft University of Technology (degree granting institution)","2024","Society is finally entering a new age of renewable energy development. For the first time it is truly conceivable to power a vast majority of global energy use with a combination of wind, solar, and other forms of low-carbon, renewable power. The rise of electrification and so-called “Power to X”, where renewable energy is used to create other more condensed and potentially storable sustainable fuels, will require a significant increase in the capacity of electrical grid networks worldwide in the coming decades. One of the fastest-growing sectors in renewable energy is offshore wind power. With farms in operation for over two decades, offshore wind has been predominantly deployed in relatively shallow water in the North Sea of Europe. While expanding to global markets is possible with fixed-bottom machines, the resource is relatively limited based on the strict seabed requirements. Moving to floating offshore wind platforms, demonstrated in pilot projects like Hywind Scotland, has the potential to vastly expand the potential wind resource and open markets in the Americas and Asia which would otherwise be unreachable...","Vertical Axis Wind Turbine; Pitch Control; PIV; Wake Steering; Structural Dynamics","en","doctoral thesis","","","","","","","","","","","Wind Energy","","",""
"uuid:0a34b7ef-d18b-4f86-be16-62128a52dd7c","http://resolver.tudelft.nl/uuid:0a34b7ef-d18b-4f86-be16-62128a52dd7c","Design of a High Voltage Arbitrary Wave Shape Generator for Dielectric Testing","Ganeshpure, D.A. (TU Delft High Voltage Technology Group)","Vaessen, P.T.M. (promotor); Bauer, P. (promotor); Ghaffarian Niasar, M. (copromotor); Delft University of Technology (degree granting institution)","2024","The integration of wind and solar energy through power electronic converters has introduced new challenges to High Voltage (HV) equipment in the electrical power system. Switchgear, cables, and transformers are now subject to higher dV/dt stress and complex wave shapes due to solid-state switching. This poses a threat to the reliability of the grid by weakening the dielectric material of these assets. Existing HV test sources face limitations in generating complex wave shapes and have restricted current capabilities. Building a customized test setup is time-consuming when combining multiple HV test sources for complex waveforms.
To overcome these challenges, an Arbitrary Wave shape Generator (AWG) for dielectric testing of HV grid assets is proposed. The Modular Multilevel Converter (MMC) topology is chosen for its modular structure, low harmonic content, and scalability to higher voltage levels. The initial focus is on dielectric testing of Medium Voltage (MV) class equipment, with the ultimate goal being the development of a modular prototype as part of a PhD project.
HV test requirements and procedures for conventional tests of MV class equipment are compiled, along with specifications for non-standard wave shapes in consideration of the hybrid grid. Two main HV test requirements are addressed in the PhD thesis: the output voltage range of 10 kV to 100 kV with a load capacitance range of 50 pF to 10 nF and a large-signal bandwidth up to 2.5 kHz. The second requirement involves generating steep pulses with a rise time of a few microseconds for a voltage magnitude of 250 kV across a capacitive load of 10 nF.
Despite the maturity of MMC technology for HVDC transmission, adapting it for HV AWG applications presents unique challenges. The thesis explores design trade-offs related to MMC parameters such as the number of Submodules (SMs) per arm, arm inductance, arm resistance, modulation technique, SM capacitance, and control system. Design criteria are developed and demonstrated through simulation models and a scaled-down prototype.
The control hardware of the HV AWG is addressed using a commercially available Real Time Simulator (RTS) named Typhoon-HIL. This choice is based on its flexibility to program arbitrary waveforms in the FPGA without coding in any special hardware description language. The performance is demonstrated in the scaled-down prototype, achieving sinusoidal waveforms up to 5 kHz reference frequency with THD less than 5%.
The second HV test requirement, steep pulse generation, is investigated with the MMC topology. It is found that the series-connected SMs of MMC make it challenging to obtain a short rise time across a large capacitive load. To address this, an integrated hybrid circuit of MMC and Marx generator circuit is proposed for complex waveforms with a rise time faster than 100 μs. Proper guidelines for choosing circuit parameters are provided and experimentally validated with a scaled-down prototype.","Dielectric Testing of Grid Assets; Arbitrary Wave shape Generator; modular multi-level converter (MMC); Marx generator; PD measurement","en","doctoral thesis","","","","","","","","2024-12-31","","","High Voltage Technology Group","","",""
"uuid:2c93f1af-bf49-4353-b9b9-c6ed8d62d3c9","http://resolver.tudelft.nl/uuid:2c93f1af-bf49-4353-b9b9-c6ed8d62d3c9","Epidemics on Static and Adaptive Networks","Achterberg, M.A. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Kooij, Robert (promotor); Delft University of Technology (degree granting institution)","2024","The COVID-19 pandemic has had a disruptive impact on healthcare systems and the everyday life of the majority of people around the globe. Despite many years of research on network epidemiology, many key aspects of disease transmission, and in particular the response of people to the spread of a disease, remain poorly understood. At the basis of epidemiological modelling lie the Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Recovered (SIR) models. In this dissertation, we aim to improve the understanding of the spread of contagious diseases, with an emphasis on the interplay between disease spread and personal behaviour, applied to the SIS and SIR models. The first part starts with the analysis of the eigenvalue spectrum of the infinitesimal generator of the Markovian SIS model with self-infections (Chapter 2). Based on the eigenvalue spectrum, which we believe encodes the majority of the dynamics, we derive an alternative definition of the epidemic threshold. We show that the epidemic threshold approximately coincides with the effective infection rate for which the third-largest eigenvalue is minimal. Contrary to the SIS process, where only an eigenvalue analysis is possible, the SIR process is completely solved on an arbitrary, heterogeneous network (Chapter 3). The benefit of the exact solution is demonstrated by analytically computing the time when the number of infections is maximal. The second part concerns the interplay between the spread of a disease and the response of people to the disease spread.
We develop the Generalised Adaptive SIS (G-ASIS) model to describe how individuals break and create links in the contact graph. The decisions for breaking or creating links are based on the viral state of the nodes attached to that link. For all 36 instances of the G-ASIS model, we analyse the relation between the epidemic threshold and the effective link-breaking rate (Chapter 4). We derive the first-order and second-order mean-field approximations of the G-ASIS model (Chapter 5) and illustrate that the second-order approximation qualitatively approximates the Markovian model more accurately than the first-order approximation. The G-ASIS mean-field model is extended to arbitrary link-breaking and link-creation responses, which are not only related to the number of susceptible and infectious neighbours of a node, but may also depend on the presence of the virus in the whole population (Chapter 6). For all possible link-breaking and link-creation responses, epidemic waves cannot occur in the mean-field adaptive SIS process. In the final part, we develop the Network-Inference-based Prediction Algorithm (NIPA) for forecasting the spread of contagious diseases on heterogeneous networks (Chapter 7). The contact graph is assumed to be unknown and is inferred by NIPA from the number of reported cases. NIPA is a hybrid method, combining epidemiological knowledge, machine learning, and networks. Network-based forecasting, and NIPA in particular, seems favourable for predicting epidemic outbreaks, which is demonstrated by showing that NIPA outperforms many other forecasting algorithms for estimating the spread of COVID-19.","Mathematical epidemiology; Adaptive networks; Markov processes","en","doctoral thesis","","978-94-6384-514-4","","","","","","","","","Network Architectures and Services","","",""
"uuid:4fa3a292-477c-4ff0-b01a-e7d90b66ec2a","http://resolver.tudelft.nl/uuid:4fa3a292-477c-4ff0-b01a-e7d90b66ec2a","Exploring Active Inference and Model Predictive Path Integral Control: A Journey from Low-Level Commands to Task and Motion Planning","Pezzato, C. (TU Delft Robot Dynamics)","Wisse, M. (promotor); Hernández, Carlos (copromotor); Delft University of Technology (degree granting institution)","2024","In an ever-evolving society, the demand for autonomous robots equipped with human-level capabilities is becoming increasingly imperative. Various factors, such as an aging population and a shortage of labor for repetitive and physically demanding tasks, have underscored the need for capable autonomous robots to assist us in our daily activities. However, despite the recent advancements in robotics, the field still faces significant challenges in delivering on its promises of developing general-purpose robots with human-level capabilities for everyday tasks. This thesis aims to develop control algorithms at different levels of abstraction to achieve more robust, adaptive, and reactive robot behavior for long-term tasks in dynamic environments.
Since our ultimate goal is to achieve human-level performance, a natural starting point is to investigate theories of human intelligence and how they can be applied to real robots, such as mobile manipulators. In this regard, one prominent theory is Active Inference, a popular and influential concept that can explain a wide range of cognitive functions, from motor control to high-level decision-making. Active Inference was developed based on the free-energy principle, providing an explanation for embodied perception-action loops. While the free-energy principle and Active Inference have garnered significant attention among neuroscientists, their application to robotics remains largely unexplored, presenting an exciting avenue for research in this thesis. At the same time, it is also important to recognize that we should not confine ourselves solely to theories of human intelligence and their inherent limitations. Machines and humans are built upon fundamentally different structures, which opens up possibilities for alternative approaches. Consequently, this thesis also investigates the use of Model Predictive Path Integral Control (MPPI), which stems from a different formulation of free-energy that is not bound to biological assumptions. By exploring the application of Active Inference to low-level robot control and task planning, as well as the utilization of MPPI for motion planning, this thesis provides advancements in robot control at different levels of abstraction. More concretely, this thesis contributes to the following four areas: 1) Low-level adaptive and fault-tolerant control, 2) Reactive high-level decision making, 3) Contact-rich motion planning, and 4) Reactive task and motion planning (TAMP)…
In the Technical branch, Ambient Intelligence (AmI) was coined in the late 90s to describe a cohesive vision of a future digital living room, a built-environment whose computing hardware and software technology imbued its dwelling space with serviceable intelligence to the benefit of its occupant(s) [8]. Also salient in this branch was Ambient Assisted Living—or Active and Assisted Living—(AAL), which framed its inquiry around the promotion of quality of life as well as the prolongation of independence with respect to Activities of Daily Living (ADLs) [9] among the elderly via technical assistance [10].
In the Architectural branch, Cedric Price’s pioneering Generator Project and corresponding programs by John and Julia Frazer [11] in the late 70s, explored notions of interaction between human and non-human agents in the built-environment. In Price’s project, architecture was conceived as a set of interchangeable sub-systems integrated into a unifying computer system, which enabled a reconfigurability sensitive to function. Price and the Frazers intended for the system to suggest its own reconfigurations, denoting non-human agency.
The promise of solutions yielded by both AmI/AAL and IA/AA is limited by the rigid and increasingly outdated assumptions in their approaches. As they currently stand and are currently developing, it is not possible to combine AmI/AAL and IA/AA to yield a unified and cohesive approach. This is because the sophistication of a system depends on that of its mutually complementing subsystems; and two or more subsystems may not mutually complement, sustain, and/or support one another properly if their levels of development and sophistication do not correspond [12]. That is: at present, the architectural does not correspond to the technically predominant AmI/AAL, while the technical does not correspond to the architecturally predominant IA/AA. Consequently, a different design approach is required in order to enable comprehensively and cohesively intelligent built-environments with corresponding levels of technical and architectural sophistication. What could such an approach look like?
In this thesis, an alternative approach that conceives of the intelligent built-environment as a Cyber-Physical System (CPS) is presented and demonstrated. Under this approach, ICTs and Architectural considerations in conjunction instantiate intelligence fundamentally—i.e., unlike existing AmI/AAL or IA/AA approaches, the present approach subsumes enabling technologies into the very core of the built-environment, where a solution does not exist as such without either of its informational and physical constituents deliberately conceived for each other (if not formally, at least conceptually and operationally with respect to instantiated services).
In this thesis, the general potential and promise of the presented approach is illustrated via its application to a constrained use-case—i.e., that of intelligent built-environments for elderly assistance and care (also informally referred to as smart homes or environments). Twelve proof-of-concept demonstrators (see Chapter 5), each showcasing an intelligent product and/or a service—or combinations and sets thereof—integrated into the built-environment and/or its ecosystem, are developed. Eight established parameters (see Section 3.2)—four pertaining to Indoor Environmental Quality (IEQ) and four to Quality of Life (QoL)—define the purpose and inform the design of each demonstrator’s setup and development within four types of demo environments (see Chapter 4)—two Physical (Hyperbody and Robotic Building) and two Virtual (Digital Twin and Non-descript). Each demonstrator, while presented as a discrete proof-of-concept, builds on the same core System Architecture and is intended to be viewed as part of a collection of systems and services expressed within the same hypothetical environment. That is to say, all come together to represent the intelligent built-environment as CPS.
All demonstrators are functionally and physically developed and involve human participation to test and to validate both the feasibility and success of the concept. Success is determined if the developed products and services indeed provide added value to a user and/or occupant of the space—i.e., if they promote and contribute to well-being by assisting, facilitating, or enhancing. Accordingly, the tangible nature of the process and results promote—albeit in a limited scope—the presented approach in very real terms, and—hopefully—situate it as an alternative to existing modes of imbuing intelligence in the built-environment.
Through operational analysis of the pilot plant, it has been determined that replacing half of the primary raw material with galvanized steel scrap as a secondary source in the HIsarna process is feasible. This substitution would result in a significant reduction in the injection of fine iron ore. Another advantage is the continuous evaporation of zinc from the scrap surface, accumulating in the off-gas dust, which can later be separated and recovered. In contrast to the blast furnace route, the zinc element does not form a circulating loop inside the reactor but is converted to the oxidized/ferrite form, ultimately ending up in the dust bag and filters.
However, plant measurements and laboratory analysis of the HIsarna dust reveal that the evaporated zinc primarily reacts with available oxygen and iron oxides to form zinc ferrite. This necessitates additional pre-processing steps before feeding into the zinc smelting unit, incurring extra costs. Consequently, the formation of ferrite is deemed undesirable.
In a nutshell, this thesis focuses on developing a precise computational fluid dynamics (CFD) model to predict the behaviour of the HIsarna off-gas system. This model is crucial for predicting temperature and composition profiles within the off-gas system, particularly in zones where data are not measured at the pilot plant. The possibility of reducing zinc ferrite formation in the off-gas system is investigated using plant measurements, CFD data analysis, and thermodynamic calculations. Furthermore, the developed CFD model is utilized to propose modifications and optimizations of the process: reducing iron ore dust escaping the system, reducing post-combustion oxygen consumption, optimizing the post-combustion lance, and scaling up the off-gas system.
Chapter 1 of the thesis is dedicated to a brief history of ironmaking and introduces the HIsarna process in detail, as well as the research focus and thesis structure. Chapter 2 focuses on establishing and validating a CFD model and offers a detailed description. Chapter 3 provides an extensive discussion of the model selection and sensitivity analysis. This chapter primarily delves into critical insights regarding the reasons behind the choice of sub-models within the CFD model. Flow analysis of the off-gas system is presented in Chapter 4, and in Chapter 5, the behaviour of the escaped ore entering the off-gas system is investigated, and potential solutions to mitigate injected ore losses from the off-gas system are discussed. The modified geometry introduced in Chapter 5 is subjected to analysis using the same validated CFD model, ensuring its effective operation within the entire off-gas system. These findings are discussed in Chapter 6 of the thesis. In Chapter 7, the formation of zinc oxide and zinc ferrite is investigated in the original and modified geometry of the off-gas system, and possible solutions to reduce the ferrite formation are proposed. In Chapter 8, a modification to the oxygen lance is proposed to enhance the combustion of the CO-H2 mixture. This modification involves using a fluidic oscillator instead of injecting oxygen through a conventional nozzle. The results demonstrate an improvement in CO-H2 combustion in the reflux chamber. The proposed geometry is constructed and implemented in the reflux chamber for further evaluation and is discussed in detail.
In Chapter 9 (Part 3), the CFD model developed for the pilot plant is employed to conduct a CFD-based scale-up of the off-gas system to the industrial scale. Within this chapter, the optimized geometry and recommended operating conditions are presented. Conclusions, remarks, and recommendations are presented in the final chapter of the thesis (Chapter 10).","Computational Fluid Dynamics; Discrete Element Method; Finite Element Method; Discrete Phase Model (DPM); HIsarna Iron Making; Particle flow modelling; CFD-assisted scale up; Zinc ferrite formation; Thermodynamic analysis; Post combustion chamber; Combustion; Fluidic oscillator","en","doctoral thesis","","978-94-6384-517-5","","","","","","","","","Team Yongxiang Yang","","",""
"uuid:7d3c6107-812e-458e-bd11-04f5c1e5931a","http://resolver.tudelft.nl/uuid:7d3c6107-812e-458e-bd11-04f5c1e5931a","Driver and Pedestrian Mutual Awareness for Path Prediction in Intelligent Vehicles","Roth, M. (TU Delft Intelligent Vehicles)","Gavrila, D. (promotor); Kooij, J.F.P. (copromotor); Delft University of Technology (degree granting institution)","2023","This thesis addresses the sensor-based perception of driver and pedestrian to improve joint path prediction of ego-vehicle and pedestrian based on mutual awareness in the domain of intelligent vehicles.
According to the World Health Organization (WHO), more than half of global traffic deaths are among Vulnerable Road Users (VRUs), such as pedestrians and riders, and human error is still a major cause of accidents. This motivates paying special attention to pedestrians and drivers while they are interacting in traffic. For the foreseeable future, the reality on the road (and the accident numbers) will largely be determined by Advanced Driver-assistance Systems (ADAS), where the driver is still required to keep their eyes on the road. To that end, the scope of this thesis resides within ADAS and driving automation up to (and including) autonomy level 3 as defined by the Society of Automotive Engineers (SAE). While current ADAS consider pedestrians and the driver individually, their mutual awareness has not been leveraged to improve path prediction and thereby road safety. This thesis presents a framework that estimates driver head pose from driver camera images, estimates pedestrian location and orientation from exterior camera images and lidar point clouds, uses this information over time to reason about driver and pedestrian mutual awareness, and performs joint probabilistic path prediction of ego-vehicle and pedestrian to assess collision risk.
Deep neural networks demand a large training set to tune the vast number of parameters. This thesis introduces DD-Pose, the Daimler TU Delft Driver Head Pose Benchmark, a large-scale and diverse benchmark for image-based head pose estimation and driver analysis. It contains 330k measurements from multiple cameras acquired by an in-car setup during naturalistic drives. Large out-of-plane head rotations and occlusions are induced by complex driving scenarios. Precise head pose annotations are obtained by a motion capture sensor and a novel calibration device. The new dataset offers a broad distribution of head poses, comprising an order of magnitude more samples of rare poses than a comparable dataset.
Utilizing the dataset, this thesis presents intrApose, a novel method for continuous 6 degrees of freedom (DOF) head pose estimation from a single camera image without prior detection or landmark localization. intrApose uses camera intrinsics consistently within the deep neural network and is crop-aware and scale-aware: poses estimated from bounding boxes within the overall image are converted to a consistent pose within the camera frame. It employs a continuous, differentiable rotation representation that simplifies the overall architecture compared to existing methods. Experiments show that leveraging camera intrinsics and a continuous rotation representation (SVDO+) results in improved pose estimation compared to intrinsics-agnostic variants and variants with discontinuous rotation representations. Driver head pose in naturalistic driving is biased towards close-to-frontal orientations. Training with an unbiased data distribution, i.e., a more uniform distribution of head poses, further reduces rotation error, specifically for extreme orientations and occlusions.
In addition to considering the inside of the vehicle, this thesis also focuses on the outside environment and presents a method for 3D person detection from a pair of camera image and lidar point cloud in automotive scenes. The method comprises a deep neural network that estimates the 3D location, spatial extent, and yaw orientation of persons present in the scene. 3D anchor proposals are refined in two stages: a region proposal network and a subsequent detection network. For both input modalities high-level feature representations are learned from raw sensor data instead of being manually designed. To that end, the method uses Voxel Feature Encoders to obtain point cloud features instead of widely used projection-based point cloud representations. Experiments are conducted on the KITTI 3D object detection benchmark, a commonly used dataset in the automotive domain.
Finally, the outputs provided by the methods of the previous chapters, namely driver head pose and 3D person locations, are leveraged by a novel method for vehicle-pedestrian path prediction that takes into account the awareness of the driver and the pedestrian of each other’s presence. The method jointly models the paths of ego-vehicle and a pedestrian within a single Dynamic Bayesian Network (DBN). In this DBN, subgraphs model the environment and entity-specific context cues of the vehicle and pedestrian (incl. awareness), which affect their future motion. These subgraphs share a latent state which models whether the vehicle and pedestrian are on collision course. The method is validated with real-world data obtained by on-board vehicle sensing, spanning various awareness conditions and dynamic characteristics of the participants. Results show that at a prediction horizon of 1.5 s, context-aware models outperform context-agnostic models in path prediction for scenarios with a dynamics change while performing similarly otherwise. Results further indicate that driver attention-aware models improve collision risk estimation compared to driver-agnostic models. This illustrates that driver contextual cues can support a more anticipatory collision warning and vehicle control strategy.
The main conclusions and findings of this thesis are: using a measurement device with a per-subject calibration procedure simplifies the data acquisition process to obtain a broad distribution of head poses. Using an intrinsics-aware head pose estimation method with a continuous rotation representation allows for a simple architecture that yields robust head pose estimates across a broad spectrum of head poses. Modeling of both driver and pedestrian mutual awareness in a unified DBN improves joint probabilistic path prediction compared to driver-agnostic models. Additionally, it provides explainability for model parameters and interpretability of the internal decision-making process. Further research can be conducted to understand the behavior of humans inside and outside an intelligent vehicle. Two major trends go towards integrating uncertainties into the components and combining them into a system that can be trained end-to-end from raw sensor data to predicted paths. Future work would greatly benefit from representative, worldwide, naturalistic, multi-sensor, temporal data which cover the outside environment as well as the inside of the vehicle - ideally shared across research institutions and companies.","Head pose estimation; Head pose dataset; Person detection; Ego-vehicle path prediction; Pedestrian path prediction; Intelligent vehicles; Automated driving","en","doctoral thesis","","978-94-6384-502-1","","","","","","2023-12-20","","","Intelligent Vehicles","","",""
"uuid:a68190dd-6e9c-426b-bbcb-4ea1f5910c82","http://resolver.tudelft.nl/uuid:a68190dd-6e9c-426b-bbcb-4ea1f5910c82","Exploring the use of Extended Reality for user experience design in product-service systems","Li, M. (TU Delft Applied Ergonomics and Design)","van Eijk, D.J. (promotor); Albayrak, A. (copromotor); Delft University of Technology (degree granting institution)","2023","This dissertation aims to explore the use of extended reality (XR) as an approach to developing user experience (UX) for product-service systems. It includes eight chapters to explore the research question: “How can designers use extended reality to develop the user experience for product-service systems?”
Chapter 1 introduces three immersive experiences in user experience studies as examples and explains three relevant research topics - Product-service systems, User Experience, and Extended Reality. By reviewing XR applications both in design practice and in the literature, the author proposes the aim, the research question, and six sub-questions of this dissertation, followed by an explanation of the theoretical backgrounds and research methodologies.
Chapter 2 answers sub-question 1 about the essence of immersive experience from users’ and designers’ viewpoints, and proposes a user-centered model of immersive experience based on literature and case analysis; the author then maps currently available XR platforms onto the categories of experiences.
Chapter 3 first answers sub-question 2 by reviewing state-of-the-art XR technologies for UX studies; the author then proposes a process to prototype experiences via XR to develop positive experiences for product-service systems.
Chapter 4 investigates three case studies to understand how to ideate concepts via XR at the early design stage, specifically in conceptualization. In addition, the studies compare the influence of different viewpoints and ways of interaction on the perception of “being comfortable”.
Chapter 5 examines how to assess experiences via XR across user groups and concentrates on competence-related experiences. This chapter contains three case studies in the context of true-to-life surgical training, where a successful surgery depends on both proficient psycho-motor skills and mature self-management of surgeons. In addition, these studies observe the influences of proficiency, cultural background, and technology familiarity on the perception of competencies.
Chapter 6 scrutinizes how to facilitate remote collaboration via XR. This chapter covers two studies in the context of remote teamwork. Given relatedness as a universal need, these studies focus on the influence of different interfaces, either immersive or non-immersive, on the perception of co-location, as well as on task load, usability, and presence.
Chapter 7 first reviews the lessons learned from the case studies and then probes how design teams integrate immersive experiences into their practices. To this end, four co-creation studies were developed in line with the conceptual process in Chapter 3. Sections 7.2 to 7.5 focus on designers’ intention, designerly thinking, prototyping, and co-design via XR, respectively.
Chapter 8 reflects on each sub-question from an overarching perspective, and then summarizes three sets of recommendations for design stakeholders who are interested in integrating immersive experiences in their work. This chapter then envisions a concept of a co-design community via immersion - ‘Design Metaverse’. At the end, the limitations of this work are discussed, as well as future research directions.","extended reality (XR); user experience design; Product-Service Systems (PSS)","en","doctoral thesis","","978-94-93353-48-0","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:7b9881a9-8dc7-43f5-b0c5-2700b07c0b09","http://resolver.tudelft.nl/uuid:7b9881a9-8dc7-43f5-b0c5-2700b07c0b09","The Impact of Public Transport Disruptors on Travel Behaviour","Geržinič, N. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Cats, O. (promotor); van Oort, N. (promotor); Delft University of Technology (degree granting institution)","2023","Public transport systems have been and continue to be shaped by disruptive forces, impacting individuals’ travel behaviour and how they interact with public transport. This thesis analyses the impact of disruptors on travel behaviour, the perception and use of public transport, enabling operators and policymakers to design appropriate measures and policies in order to improve the quality of service, the sustainability of transport and the liveability of our environment.","","en","doctoral thesis","","978-90-5584-339-8","","","","","","","","","Transport and Planning","","",""
"uuid:19232f14-6765-417e-8bb6-198f91a4a8a6","http://resolver.tudelft.nl/uuid:19232f14-6765-417e-8bb6-198f91a4a8a6","Using Biomass-derived Carbon Catalysts for Electrochemical CO2 Reduction","Fu, S. (TU Delft Large Scale Energy Storage)","de Jong, W. (promotor); Kortlever, R. (copromotor); Delft University of Technology (degree granting institution)","2023","This dissertation explores the integration of clean energy and electrochemical CO2 reduction to address environmental issues. Metal-free nitrogen-doped carbon materials, derived from renewable biomass, emerge as efficient catalysts for CO2 reduction, offering sustainability and cost-effectiveness. Chapters delve into methods of N-doped biochar production, activation strategies, structure-performance relationships, catalyst performance in the presence of impurities, and the use of N-doped biochar as a carbon support for Ni-N-C catalyst synthesis. Results highlight the importance of physicochemical properties in enhancing CO2 reduction performance. The catalysts demonstrate resilience to SO2 impurities, outperforming benchmark electrodes, and showcase promise for sustainable CO2 reduction.","Electrochemical CO2 Reduction; Electrocatalyst; N-doped carbon; Biomass","en","doctoral thesis","","978-94-6384-515-1","","","","","","","","","Large Scale Energy Storage","","",""
"uuid:db745835-aead-40c0-9b54-0ed2f2c6e7cc","http://resolver.tudelft.nl/uuid:db745835-aead-40c0-9b54-0ed2f2c6e7cc","Target-oriented seismic imaging and inversion with marchenko redatuming and double-focusing","Shoja, Aydin (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Slob, E.C. (promotor); Delft University of Technology (degree granting institution)","2023","Reflection seismology aims to estimate the Earth's subsurface elastic parameters for further investigation by geologists and engineers. This involves generating elastic waves using seismic sources and recording the Earth's response with receivers. The subsurface model is typically considered a combination of a background model and a short-wavelength reflectivity model. There are two main paths to estimate these parameters: non-linear waveform inversion to directly compute the elastic parameters or depth migration to estimate a structural image or reflectivity of the subsurface.
Reverse-Time Migration (RTM) is a common depth migration technique that migrates recorded wavefields from the space-time domain to the space-depth domain. It utilizes the Born approximation and the adjoint of the Born operator to produce an RTM image. However, RTM can suffer from errors, such as noise, temporal and spatial limitations, and multiple reflections.
Least-Squares Reverse-Time Migration (LSRTM) is used to overcome some of these errors. LSRTM involves resolving the reflectivity model by least-squares inversion, which is computationally expensive. Gradient-based optimization algorithms are often employed to reduce the computational burden, but they still require solving the wave equation and its adjoint for a large model in multiple iterations. One way to reduce the computational cost is by limiting the computational domain to a target region of interest.
Target-oriented LSRTM, known as TOLSRTM, focuses on the wavefield just above the target by bypassing the overburden. This approach proves beneficial when the overburden generates strong internal multiple reflections that obscure the reflections from the target area. However, a redatuming method is required to predict all orders of multiples. Marchenko redatuming is a data-driven technique that predicts the Green's functions at the boundary of the target region, incorporating all orders of internal multiples. It allows for double-sided redatuming, considering both the source and receiver perspectives. By combining the LSRTM algorithm and Marchenko double-focusing, a target-oriented LSRTM algorithm is devised that can predict interactions between the target and overburden and remove the effects of the overburden in the image. Predicting these interactions results in an artifact-free image, a better convergence rate, and a high-resolution image of the target.
Target-oriented migration algorithms typically consider only the upper horizontal boundary of the region of interest (ROI), neglecting wavefields entering the ROI from the medium beneath the lower boundary. To address this, a target-enclosed LSRTM algorithm is proposed, including both the ROI's upper and lower boundaries. Including the lower boundary provides transmission information and can improve inversion convergence. In addition, this algorithm is adapted for virtual receivers created by Marchenko redatuming. In the case of physical receivers at the boundaries of the target zone, the target-enclosed algorithm can incorporate the transmission information emanating from the lower boundary to the upper one. Consequently, when the initial model is far from the actual model, the resulting image partly recovers the long-wavelength part of the model, in agreement with the Born approximation criteria. Moreover, when an initial model closer to the actual model is used, the algorithm can partially recover the vertical interfaces of the perturbation. In the case of virtual receivers at the boundaries of the target zone, since the Marchenko redatuming is performed in the initial background model, the redatumed wavefields at the lower boundary suffer from kinematic errors. Therefore, the algorithm cannot recover the long-wavelength part of the model.
The thesis concludes with a discussion of the results obtained from applying the algorithms to marine datasets. The images resulting from the Marchenko double-focusing based target-oriented LSRTM algorithm show improvements in both resolution and artifact reduction by suppressing the overburden generated internal multiple effects. Moreover, the double-focusing enables the user to reduce the computational costs of the LSRTM algorithm and choose finer spatial sampling for the image.
An appendix proposes a formulation for integrating the target-oriented algorithms with non-linear inversion methods such as Full Waveform Inversion (FWI). The results of this proposed algorithm demonstrate its effectiveness by reducing internal-multiple-related artifacts, increasing resolution, and accelerating convergence.","Marchenko method; Redatuming; Target-oriented; Least-squares migration; Seismic imaging","en","doctoral thesis","","978-94-6366-785-2","","","","Dr. ir. J.R. van der Neut of Delft University of Technology has contributed greatly to the preparation of this dissertation.","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:be41d02b-a120-4191-96c6-1fb06e88e7c2","http://resolver.tudelft.nl/uuid:be41d02b-a120-4191-96c6-1fb06e88e7c2","Towards a circular building industry through digitalisation: Exploring how digital technologies can help narrow, slow, close, and regenerate the loops in social housing practice","Çetin, Sultan (TU Delft Real Estate Management)","Gruis, V.H. (promotor); Straub, A. (promotor); Delft University of Technology (degree granting institution)","2023","The concept of Circular Economy (CE) has emerged as a promising alternative to the current linear economy, decoupling economic activity from the depletion of natural resources and promoting a restorative and regenerative system. The transition of the building industry to a circular one can be achieved through four core resource principles: Narrow (minimising the use of primary resources), slow (extending the lifetime of buildings and products), close (regaining post-use and construction waste through reuse or recycling), and regenerate (minimising toxic substances and maximising the use of renewable resources). These principles provide a framework for exploring the role of digitalisation in the transition of social housing organisations (SHOs) toward circular housing practices, with a focus on European SHOs, particularly those in the Netherlands. This thesis follows a structured format comprising six chapters, with four of them encapsulating the author’s published articles. Chapter 1 serves as the introduction, providing a contextual foundation for the research. It outlines the overarching theme of the thesis, which revolves around the intersection of CE, digitalisation, and the built environment, with a specific focus on SHOs. The chapter sets the stage by identifying the gaps in existing literature, emphasising the need for a comprehensive conceptualisation of this emerging research field. 
It further delves into essential methodological aspects, the problem statement, and the broader significance of the research. Chapter 2 explores the current state of CE implementation in Dutch SHOs and provides insights into pressing barriers and potential enablers. A Delphi study conducted with 21 social housing professionals reveals that, as of 2020, SHOs were in an experimental phase, incorporating circular construction techniques in pilot projects. Barriers encompass organisational priorities, operating within a linear system, and a lack of awareness. Financial challenges related to the costs of circular materials also emerge as significant hurdles. Chapter 3 develops a framework, the Circular Digital Built Environment Framework, through an exploratory qualitative research approach. This conceptual model integrates CE principles with digital technologies to provide an understanding of their potential applications within the built environment. The framework is constructed through expert workshops, literature reviews, and evaluations of current research and practices, resulting in the identification of over ten key digital technologies. These technologies encompass a broad spectrum, including big data analytics, blockchain technology, and material passports. The framework not only informs subsequent empirical studies but also serves as a valuable guide for scholars and industry practitioners navigating the intersection of digitalisation and circularity in the building industry. Chapter 4 presents an analysis of how the enabling digital technologies identified in Chapter 3 are practically employed in real-life practices, specifically within circular new build, renovation, maintenance, and demolition projects of forerunner Dutch SHOs. Employing a multiple-case study approach, the chapter gathers empirical evidence from three large-scale SHOs through semi-structured interviews, desk research, and extensive data analysis. 
The within-case and cross-case analyses reveal insights into the types of digital technologies being deployed, their impact on circular practices, and the challenges encountered in their adoption. By examining real-world examples, Chapter 4 contributes to the evolving domain of digitalisation for a circular building industry. Chapter 5 addresses the challenges associated with data (identified in Chapter 4), with a specific focus on material passports as a crucial tool for circularity in existing housing stock. Employing a multiphase mixed-method research design, the chapter utilises the SCOPIS method (Supply Chain-Oriented Process to Identify Stakeholders) for user and data mapping. This approach results in a data template outlining the requirements of users for material passports. Subsequently, the study tests this template through a case study, identifying critical data gaps and proposing a material passports framework to address these gaps. By leveraging both digital technologies and human expertise, Chapter 5 offers solutions to enhance data management in the pursuit of circularity within the building industry. The findings contribute to ongoing industry and policy initiatives. Chapter 6, the concluding chapter, consolidates the exploration conducted throughout the thesis. It presents the overarching contributions of the research, offering a summary of the scientific and practice contributions and recommendations derived from the entire study.","circular economy; building; digitalisation; material passports; circular buildings; social housing","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-786-9","","","","","","","","","Real Estate Management","","",""
"uuid:9d3d7180-d021-4067-b14c-05bec9bf5756","http://resolver.tudelft.nl/uuid:9d3d7180-d021-4067-b14c-05bec9bf5756","Deformation Prediction and Autonomous Path Planning for Robot-Assisted Endovascular Interventions","Li, Z. (TU Delft Medical Instruments & Bio-Inspired Technology)","Dankelman, J. (promotor); De Momi, Elena (promotor); Delft University of Technology (degree granting institution)","2023","Endovascular interventions, as emerging medical therapies, utilize blood vessels as conduits to access anatomically challenging regions deep within the body. Within endovascular interventions, one of the prominent challenges involves maneuvering the instrument tip by coordinating insertion, retraction, and torque actions at the proximal end of the instrument. This intricate task is hindered by the presence of a complex mapping between input actions and resulting motion, rendering precise control and accurate targeting of the desired area difficult. Thanks to the introduction of robotic assistance and the steerability of robotic catheters, the complexity of endovascular interventions has been mitigated.
The integration of steerable catheters and navigation guidance has the potential to reduce the level of expertise required for endovascular interventions. By leveraging autonomous navigation, path-related complications, such as perforation, embolization, and dissection, arising from excessive interaction forces between interventional tools and the vessels, can be effectively addressed and potentially reduced. Within the context of robotic catheters navigating through narrow, delicate, and deformable vessels, path planning presents significant challenges, particularly under complex operating conditions, stringent safety constraints, and the inherent limitations on catheter steering capability. Furthermore, the intricate interplay between the steerable catheter and vessel walls, coupled with the deformable nature of the vessels, intensifies the complexity of achieving reliable and real-time path planning, rendering it a hard problem to solve.
This dissertation aims to develop a safe, accurate, and efficient path planner for steerable robotic catheters. First, this dissertation provides a systematic literature analysis of path planning techniques, collating the findings from the most significant research contributions in the field, employing the PRISMA method. In the first part of this dissertation, a novel path planning approach named BFS-GA is proposed, which effectively adheres to the robot curvature constraint while keeping the catheter's path as close to the vasculature's centerline as possible. This path planner is capable of swiftly calculating obstacle-free trajectories that conform to the patient's vasculature, while incorporating the inherent limitations of the catheter, such as maximum curvature.
A major challenge during autonomous navigation in endovascular interventions is the complexity of operating an instrument in a deformable but constrained workspace. To address this, two methods are proposed in the second part of this dissertation to provide a realistic and dynamic environment for path planning. Specifically, a realistic, auto-adaptive, and visually plausible simulator is developed. This simulator has the capability to accurately predict the interplay between catheters and vessel walls. Additionally, it accounts for the deformable nature of the vessels induced by the cyclic heartbeat motion. Furthermore, a novel deformable model-to-image registration framework is designed to reconstruct comprehensive intra-operative vessel structures from medical imaging data, while accurately accounting for deformations.
Given the dynamic vascular environments generated as above, a robust path planner named C-GAIL for steerable catheters is proposed in the third part of this dissertation. This path planner ensures higher precision and robustness by accounting for both the deformable properties of vessels and the catheter's steering capabilities. The in-vitro experiments demonstrate that the path generated by the proposed C-GAIL path planner aligns better with the actual steering capability of robotic catheters. Thereafter, the dissertation presents an in-depth exploration of path planning assistance utilizing various interactive modalities based on augmented reality. Three interactive control modalities for steering robotic catheters are introduced, and their impact on human-in-the-loop robot-assisted cardiac catheterization is investigated. The path guidance is facilitated by the previously discussed C-GAIL path planning method. A user study is conducted, which demonstrates the feasibility of harnessing the capabilities of a gaming joystick for catheter teleoperation and the practicality of utilizing a head-mounted display to receive 3D visual feedback.","Path planning; Medical robotics; Augmented reality; Simulator development","en","doctoral thesis","","978-94-6384-520-5","","","","","","2024-11-30","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:d848c617-2eae-491d-aae1-d524495e9e65","http://resolver.tudelft.nl/uuid:d848c617-2eae-491d-aae1-d524495e9e65","Hydrogenated nanocrystalline silicon-based layers for silicon heterojunction and perovskite/c-Si tandem solar cells","Zhao, Y. (TU Delft Photovoltaic Materials and Devices)","Weeber, A.W. (promotor); Isabella, O. (promotor); Zeman, M. (promotor); Delft University of Technology (degree granting institution)","2023","Large-scale deployment of photovoltaic (PV) technology is imperative for realizing a future sustainable and electrified energy system. Over the past decades, technological advancements that enhance the efficiency of PV technologies have been one of the crucial aspects for significantly reducing the cost of PV-generated electricity. Among various crystalline silicon (c-Si) PV technologies, silicon heterojunction (SHJ) solar cells, which have achieved the highest efficiency of single-junction c-Si solar cells, hold great promise for advancing the energy transition facilitated by PV technologies even further. Moreover, notable efficiency enhancements, which are well beyond the theoretical efficiency limit of single-junction c-Si solar cells, have been experimentally demonstrated by combining SHJ solar cells with semi-transparent perovskite solar cells in tandem configurations. This thesis focuses on addressing the challenges of efficient deployments of doped hydrogenated nanocrystalline silicon-based (nc-Si:H-based) layers for high-efficiency front/back-contacted (FBC) SHJ solar cells and applications of FBC-SHJ bottom-cells in two-terminal (2T) and four-terminal (4T) tandem devices with perovskite top-cells, supported by advanced opto-electrical simulations.","","en","doctoral thesis","","978-94-6473-313-6","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:a482f922-4c23-4b0e-bdbf-594b0c2b6142","http://resolver.tudelft.nl/uuid:a482f922-4c23-4b0e-bdbf-594b0c2b6142","Characterization and Modeling of Time-Varying Networks","Ceria, A. (TU Delft Multimedia Computing)","Hanjalic, A. (promotor); Wang, H. (copromotor); Delft University of Technology (degree granting institution)","2023","The interconnected nature of our daily lives, both virtually and physically, highlights the importance of understanding temporal networks in the context of epidemic and information spread. This dissertation aims to address this challenge by proposing characterization methods for temporal networks. In Chapter 2, the analysis reveals that close temporal contacts are generally close in topology, with virtual contacts showing a stronger correlation, suggesting the potential for social contagion. However, a limitation is acknowledged, as the methodologies assume interactions only occur between pairs of nodes.
Chapter 3 extends the focus to characterize temporal higher-order networks involving groups of nodes larger than pairs. Findings demonstrate differences between collaboration and physical interaction networks, with physical contacts exhibiting strong correlation between topological distance and temporal delay. In contrast, collaboration networks show weak or absent correlation.
Considering temporal networks as spreading processes, Chapter 4 introduces a methodology to identify underlying spreading processes among nodes, specifically exploring the congestion contagion of airports in the U.S. air transportation network. The proposed heterogeneous Susceptible-Infected-Susceptible (SIS) spreading process effectively reproduces nodal vulnerability and outperforms a homogeneous model.
The dissertation concludes with reflections on the insights gained and suggests future research directions in the field of temporal network characterization.","Temporal Networks; Higher-Order Networks; Network Models; Network Characterization Methods","en","doctoral thesis","","978-94-6469-703-2","","","","","","","","","Multimedia Computing","","",""
"uuid:72116acd-c5aa-4b3b-8fc7-52f1b2fa9958","http://resolver.tudelft.nl/uuid:72116acd-c5aa-4b3b-8fc7-52f1b2fa9958","Towards data-driven turbulence modeling for wind turbine wakes","Steiner, J. (TU Delft Wind Energy)","Viré, A.C. (promotor); Watson, S.J. (promotor); Dwight, R.P. (copromotor); Delft University of Technology (degree granting institution)","2023","The Dutch energy strategy expects renewable energy sources like wind and solar to provide around 70% of the yearly electricity by 2030. In order to achieve these targets, models that efficiently and accurately capture the flow around wind turbines would be immensely helpful for both planning and operation of wind farms.
For wind turbine wake interaction, computationally cheap and simple engineering models fail to capture the more complex flow physics, whereas LES-based models do very well but are computationally too expensive. An alternative is to use Reynolds-Averaged Navier-Stokes (RANS) solvers, which lie somewhere between LES and engineering models. However, these models have structural shortcomings for many applications, and the development of better models has stalled in recent decades.
More recently, data-driven techniques have been used to try and derive better, application-specific models. In this work, a combined methodology between a baseline RANS model and a data-driven correction is presented. The resulting models give significantly better predictions than the baseline model for both velocity and turbulent kinetic energy. Similar to traditional Nonlinear Eddy Viscosity Models, the models initially showed numerical instability, but a pragmatic solution was found for this.
The novelty of the results presented in this thesis lies in the application of the methodology to higher Reynolds numbers and 3D test cases. However, much remains to be done before data-driven models can be useful in industrial practice. This would require larger datasets and more efficient algorithms for both training and testing of the data-driven corrections to the baseline turbulence model.
Four crucial hydraulic quantities are involved in the core-annular flow study, namely: pressure drop, holdup ratio, watercut, and total flow rate. The pressure drop can be non-dimensionalised into a Fanning friction factor, which demonstrates the lubricating strength of CAF, since its value is comparable to that of water-only pipe flow at the same mixture-based Reynolds number. The holdup ratio indicates the apparent slip between the oil and water; its value lies between 1 and 2, because water accumulates somewhat in the core-annular flow. In both the numerical simulations and the lab experiments, two parameters are set as input and two parameters appear as output. Understanding the correlation between the four parameters can help to properly design the pipe flow system. The study of the correlation between these four parameters is presented in Chapter 3 and in the Appendix.
The effect of gravity on CAF depends on the inclination of the pipe. For horizontal pipe flow, gravity acts perpendicular to the pipe wall and introduces a buoyancy force on the oil core. Our simulation starts with concentric oil-water CAF with a flat interface. This flow configuration is unstable for a horizontal pipe and finally develops into an eccentric oil core with a wavy interface. The waves create a downward force that balances the buoyancy force. Due to the movement of the oil core, a secondary flow appears in the water layer. From our simulation results, we found how the inertia effect redistributes the pressure on the interface, creating a net downward pressure force that balances the buoyancy force and prevents the oil core from touching the upper pipe wall. This part is illustrated in Chapters 2 and 3. For a vertical pipe, gravity acts in the streamwise direction. Detailed DNS simulations were presented by Kim & Choi (2018). In Chapter 5, we repeat the work of Kim & Choi using RANS, and find rather good agreement for the Fanning friction factor and holdup ratio between RANS and DNS. A difference is that the waves in the RANS simulations are more regular and that RANS predicts higher turbulence than DNS...
Among different research methods, the Discrete Element Method (DEM) is utilized to analyse the behaviour of granular materials at the particle level, making it suitable for railway ballast-related research. A DEM model allows for the description of interactions between particles at a mesoscopic level, while presenting the overall performance of the assembly at a macroscopic level. However, the large number of elements in a model, along with the complex algorithms, leads to high computational effort, resulting in low efficiency of DEM models. This problem limits the number of elements acceptable in a model, which means that only a limited amount of material at a limited scale can be generated and analysed. Considering the calculation time, the acceptable number of elements in a model depends on the simulated particle sizes (e.g., soil, sub-ballast, ballast) and the simulated model size (e.g., box model, full-scale model). It also constrains the simulated loading condition, e.g. static loading or cyclic loading....","","en","doctoral thesis","","","","","","","","","","","Railway Engineering","","",""
"uuid:f910c01b-6a03-42ba-b967-6a0e4dc3480f","http://resolver.tudelft.nl/uuid:f910c01b-6a03-42ba-b967-6a0e4dc3480f","Safety Risk Assessment of Unmanned Aircraft System Operations for Urban Air Mobility","Jiang, C. (TU Delft Air Transport & Operations)","Blom, H.A.P. (promotor); Sharpanskykh, Alexei (copromotor); Delft University of Technology (degree granting institution)","2023","Technological developments have enabled Unmanned Aircraft Systems (UAS) to be adopted for various applications, including Urban Air Mobility (UAM) – an air transportation system for passengers and cargo in and around urban environments. The operation of UAS in urban environments inevitably raises concerns about their safety impact.
The operational characteristics of UAS differ substantially from those of conventional commercial aviation, which brings novel safety issues for which the safety learning process has only just started. To address these novel safety issues of UAS operations, it is essential to systematically study them within a formal setting of safety risk assessment.
Safety risk assessment involves a process that comprises risk indicators, risk analysis, and risk evaluation. In recent years, regulators and researchers have dedicated significant efforts to developing risk assessment methods for UAS operations. These approaches are largely adopted from the safety risk assessment of commercial aviation. However, it is essential to recognize that UAS operations differ substantially from commercial aviation; therefore, shortcomings remain and improvements can be made to the risk assessment of UAS operations.
This thesis addresses the further development of risk assessment methods for UAS operations for Urban Air Mobility (UAM). The main risk posed by UAM is third party risk (TPR) posed to people on the ground. Therefore, the focus of this thesis is on improving risk assessment methods for ground TPR.
The first study focuses on the TPR indicators for UAS operations. Building on the TPR indicators of commercial aviation, novel TPR indicators and nine separate third party fatality terms are identified. Subsequently, current UAS regulations are evaluated regarding their coverage of these nine third party fatality terms. By doing so, the research provides a more comprehensive understanding of the overall third party risk posed by UAS operations.
The second study aims to develop a safety risk assessment method for the novel ground TPR indicators proposed in the first study. To achieve this, a Monte Carlo simulation-based risk assessment approach is proposed and applied to a hypothetical UAS urban parcel delivery case. The results show that the proposed annual ground TPR model and indicators provide an accumulated understanding of the risk posed to people on the ground. The non-negligible level of uncertainty in the models adopted highlights the need for further development of more accurate sub-models for UAS ground TPR assessment.
The third study aims to improve the accuracy of the common ground TPR model, where a key limitation lies in the assumption that the impact probability of fatality (PoF) and the size of the impact area are independent of each other. To address this, an improved characterization is developed and evaluated using dynamical simulation of an MBS model of a UAS impacting a human body. The comparison of the novel approach with existing approaches shows significant advantages of the newly developed approach.
The fourth study applies the novel approach developed in the third study to an urban parcel delivery UAS, weighing 15 kg, equipped with an airbag and a parachute. A key motivation is that existing models do not address the risk-mitigating effects of equipping a UAS with a combination of airbag and parachute. For the UAS equipped with an airbag, Multi-Body System (MBS) and Finite Element (FE) models are developed. Subsequently, these models are used to assess ground TPR for different cases with and without airbag and parachute. This analysis shows that the method developed in the third study is able to quantify the risk-reducing effects of the combination of parachute and airbag.
This series of four interrelated studies has developed novel insights and methods in third party risk assessment of UAS operations. These novel insights and methods can provide enhanced safety feedback to a UAS design process, and can stimulate further development of UAS regulation.
This dissertation focuses on identifying and resolving three key challenges related to scheduling in modular production. The first challenge revolves around the definition and utilization of modules. Factors such as resource requirements, project sequencing influenced by module size, and project-specific variations in module usage are crucial considerations. The second challenge pertains to inventory management, where reduced production time increases the impact of long lead times, and standardized components spread inventory costs across multiple projects. The third challenge involves stochastic scheduling, leveraging the structural similarities among products in a modular production system to optimize schedules for future projects.
To address these challenges, the dissertation explores the Resource Constrained Project Scheduling Problem with a flexible Project Structure (RCPSP-PS). It introduces a Mixed Integer Linear Programming (MILP) model and a solution method, demonstrating its superiority over existing methods. Given the NP-hardness of the problem, heuristic methods, including group graphs, hybrid differential evolution, and ant colony optimization algorithms, are proposed to quickly find feasible solutions.
The scope expands to the production of a product family through the Resource Constrained Project Scheduling Problem with Modular construction and new Project arrivals (RCPSPMP). This extended problem incorporates stochastic project arrivals and inventory allocation, modeling the pre-assembly of modules. A Progressive Hedging (PH) algorithm is introduced to consider future project arrivals, ultimately aiming to create a profitable product family rather than individual products.
Finally, stochastic project arrivals are considered for the standard Resource Constrained Project Scheduling Problem (RCPSP). Simulation optimization is initially employed, but a data-assisted method using neural networks is introduced to significantly reduce computational costs while maintaining solution quality.
In conclusion, this dissertation presents comprehensive methods for scheduling in modular shipbuilding, addressing challenges related to flexible project structures, nonrenewable resources, resource allocation, and stochastic project arrivals. The versatility of these methods extends their applicability beyond shipbuilding to various industries.","Modular shipbuilding; Project scheduling; Resource constrained project scheduling problem; optimization","en","doctoral thesis","","978-94-6473-319-8","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:372c1678-f3b2-4e39-8aef-3c05bf954d76","http://resolver.tudelft.nl/uuid:372c1678-f3b2-4e39-8aef-3c05bf954d76","Structural health monitoring of the additively manufactured structures with embedded fiber optic sensors","Xiao, Y. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Rans, C.D. (copromotor); Delft University of Technology (degree granting institution)","2023","Additively manufacturing can bring opportunities and risk factors to the aerospace industry. On one hand, additive manufacturing allows the manufacturing of structures with geometries that are difficult or impossible to fabricatewith conventional machining procedures. This geometry flexibility may lead to components with a greater strength-to-weight ratio, which can enhance the aircraft’s fuel efficiency. On the other hand, possible defects in the additively manufactured parts can lead to reduced strength and increased fatigue susceptibility. In addition, it is very difficult to apply traditional nondestructive testing techniques to additively manufactured specimens with complex geometry due to limited accessibility.....","Structural health monitoring; fiber optic sensor; additive manufacturing; crack detection; machine learning","en","doctoral thesis","","978-94-6384-500-7","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:a15a3a3b-79c3-4cee-add4-bdf727606d06","http://resolver.tudelft.nl/uuid:a15a3a3b-79c3-4cee-add4-bdf727606d06","Advances in PIV Uncertainty Quantification: Towards a Comprehensive Framework","Adatrao, S. (TU Delft Aerodynamics)","Scarano, F. (promotor); Sciacchitano, A. (copromotor); Delft University of Technology (degree granting institution)","2023","Particle Image Velocimetry (PIV) is a leading technique that allows flow velocity measurements in two- and three- dimensional domains. PIV is a full-field, non-intrusive and quantitative technique. However, due to the complexity of the measurement chain, PIV results are often affected by errors from various sources. It is therefore necessary to identify these errors and quantify the uncertainties. The available PIV uncertainty quantification (UQ) approaches are limited in estimating systematic uncertainties in the measurements and mostly focus on the random uncertainty. In order to exploit full benefits of PIV, the knowledge of the full uncertainty comprising both random and systematic uncertainties is necessary. The present work proposes a comprehensive PIV-UQ framework which not only quantifies the systematic uncertainties but is also universal as it can potentially be used for any measurement irrespective of the measurement setup (e.g. planar PIV, tomographic PTV, large scale PIV or microscopic PTV) or the output quantity (e.g. mean velocity or higher order statistics).","","en","doctoral thesis","","978-94-6384-510-6","","","","","","","","","Aerodynamics","","",""
"uuid:1314fbd5-794a-47d7-8bc6-ed73a84d8a6d","http://resolver.tudelft.nl/uuid:1314fbd5-794a-47d7-8bc6-ed73a84d8a6d","Realizing superconducting spin qubits","Pita-Vidal, Marta (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Andersen, C.K. (promotor); Delft University of Technology (degree granting institution)","2023","Josephson junctions implemented in semiconducting nanowires proximitized by a superconductor exhibit intricate physics arising from the interplay of electron-electron interactions, superconductivity, spin-orbit coupling, and the Zeeman effect. This thesis explores these phenomena through a series of experiments conducted using circuit quantum electrodynamics techniques.
After establishing the fundamental theoretical concepts and experimental methodologies, we introduce a crucial element for probing our devices with microwaves: magnetic field-compatible resonators. We then describe various experiments conducted over the past years in which superconducting resonators and other circuits are used to explore the physics of nanowire Josephson junctions.
In an initial experiment, we develop a magnetic-field-resilient fluxonium circuit that incorporates an InAs semiconducting nanowire at its core. We show that the device’s spectrum is highly dependent on both the electrostatic gate voltage and the magnetic field strength, allowing us to detect signatures of non-conventional phenomena in semiconducting Josephson junctions.
The bulk of this thesis revolves around a second set of experiments, where a quantum dot is electrostatically defined within the nanowire Josephson junction. This time, we use a transmon circuit to investigate singlet-doublet ground state transitions and their dynamics. The two spinful doublet states of the junction define a novel type of qubit with intriguing properties: a superconducting (or Andreev) spin qubit (ASQ). Thus, we then shift our focus to the doublet states and explore their magnetic field dependence with transmon spectroscopy. Subsequently, we turn to directly investigating the spin-flip transition and the coherence properties of the two spin states. We find that the intrinsic coupling between the spin state and the supercurrent through the junction enables
strong coupling between the ASQ and the transmon qubit in which it is embedded.
In a final experiment, we connect two such Andreev spin qubits in parallel and investigate their supercurrent-mediated longitudinal coupling. We find that the qubits are strongly coupled and their coupling strength can be switched on and off by adjusting the magnetic flux. Notably, given that the spins are placed micrometers apart, this mechanism enables interaction between distant spins. Building on these promising characteristics, we end by introducing a proposal that outlines our vision for scaling up ASQs. The proposed architecture, where multiple ASQs are connected in parallel, enables the selective coupling of any pair of qubits in the system, regardless of their spatial separation, through flux control.
This thesis concludes by outlining potential future experiments that could be conducted with devices and techniques similar to those investigated here.","","en","doctoral thesis","","978-90-8593-584-1","","","","","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:1ff19056-6af0-4b96-b278-84907c20ec77","http://resolver.tudelft.nl/uuid:1ff19056-6af0-4b96-b278-84907c20ec77","Functional tip-sample interactions in STM","Gobeil, J. (TU Delft QN/Otte Lab)","Otte, A. F. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2023","","Scanning tunnelling microscopy; Tip functionalisation; Image potential states; Field emission resonance; Quantum magnetism; Frustrated magnetism","en","doctoral thesis","","978-94-6366-790-6","","","","","","","","","QN/Otte Lab","","",""
"uuid:6ea4a13b-ff8e-4c4b-866d-168cdccd880a","http://resolver.tudelft.nl/uuid:6ea4a13b-ff8e-4c4b-866d-168cdccd880a","Learning Automata for Network Behaviour Analysis","Pellegrino, G. (TU Delft Cyber Security)","van den Berg, Jan (promotor); Verwer, S.E. (promotor); Delft University of Technology (degree granting institution)","2023","","Automata Inference; Intrusion Detection; Passive Learning; Behavioral Fingerprinting","en","doctoral thesis","","978-94-6469-686-8","","","","","","","","","Cyber Security","","",""
"uuid:b0405182-6bef-47c7-9f61-84d0e29c70bc","http://resolver.tudelft.nl/uuid:b0405182-6bef-47c7-9f61-84d0e29c70bc","On Unitary Positive Energy and KMS Representations of Some Infinite-Dimensional Lie Groups","Niestijl, M. (TU Delft Analysis)","van Neerven, J.M.A.M. (promotor); Janssens, B. (copromotor); Delft University of Technology (degree granting institution)","2023","In this dissertation, we study (projective) unitary representations of possibly infinite dimensional locally convex Lie groups, in the sense of Bastiani, that either satisfy a positive energy condition, or a KMS(Kubo-Martin-Schwinger) condition. Both of these are motivated by physics. The main purpose of this thesis is to gain general understanding for these classes of representations, and more specifically to develop general tools by which they can be studied in systematic fashion. These tools are consequently applied to specific cases of interest, demonstrating that these conditions are typically extremely restrictive, that the classification of these classes of representations is feasible in various cases, and that these tools can be effectively applied towards achieving such a classification....","","en","doctoral thesis","","978-94-6473-297-9","","","","","","","","","Analysis","","",""
"uuid:71b4a574-184e-4b4d-929b-11f7c1e1b2db","http://resolver.tudelft.nl/uuid:71b4a574-184e-4b4d-929b-11f7c1e1b2db","Phenolic and petrochemical wastewater treatment in AnMBR under extreme conditions: High salinity, high temperature, and high concentration of toxic compounds","Garcia Rea, V.S. (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); Spanjers, H. (promotor); Delft University of Technology (degree granting institution)","2023","Anaerobic digestion (AD) is a biochemical process in which organic matter is converted into biogas in a series of biochemical reactions. The development of high-rate anaerobic reactors (HRAR) led to the breakthrough of full-scale applications of AD for the treatment of industrial wastewater. HRARs, such as the upflow anaerobic sludge blanket (UASB) or the expanded granular sludge bed (EGSB) reactors are characterized by long solids retention times obtained by the gravitational separation of the solids from the liquid. Enhanced biomass retention is facilitated by the formation and growth of granular methanogenic biomass in EGSB and, most commonly also, in UASB reactors treating industrial wastewaters....","Anaerobic digestion; AnMBR; salinity; phenol; phenolic compounds; acetate; thermophilic; p-cresol; syntrophic acetate oxidation; resorcinol; carbon and energy sources; bitumen fume condensate; IC50; microbial community","en","doctoral thesis","","978-94-93353-44-2","","","","","","","","","Sanitary Engineering","","",""
"uuid:c31d254c-045c-4f4d-bd82-0d24ef8d48fa","http://resolver.tudelft.nl/uuid:c31d254c-045c-4f4d-bd82-0d24ef8d48fa","Digitally Intensive Frequency Synthesis and Modulation Exploiting a Time-mode Arithmetic Unit","Gao, Z. (TU Delft Electronics)","Babaie, M. (promotor); Staszewski, R.B. (promotor); Delft University of Technology (degree granting institution)","2023","Reducing power consumption is becoming increasingly important for the sustainability of the communication industry because it is expected to consume a significant portion of the global electricity in the face of the exponentially increasing demands on the volume and rate of data transmission. As the scope narrows to the individual wireless device level, the reduced power consumption helps to extend the lifetime of battery-powered devices, thereby leading to improved user experience and enabling the development of innovative applications. The quest for the lower power consumption will profoundly shape the wireless transceiver design, i.e., each critical block in the system should constantly reduce its drained power without sacrificing the performance. With this background, the thesis focuses on the phase-locked loops (PLL) that generate RF clocks for wireless transceivers, and develops low-power techniques suppressing the fractional-spur levels when the PLL generates unmodulated carrier, and the phase modulation (PM) error when the PLL additionally serves as a two-point modulator...","time-mode arithmetic unit (TAU); digital-to-time converter (DTC); phase-locked loop (PLL); fractional spur; process voltage and temperature (PVT); spur cancelation; self-interference; synchronous interference; interference mitigation; PLL-based modulator; phase modulator; two-point modulation; non-uniform clock compensation (NUCC); phase-domain digital pre-distortion (DPD); LC-tank nonlinearity","en","doctoral thesis","","978-94-6366-779-1","","","","","","2024-12-07","","","Electronics","","",""
"uuid:300b839f-78fc-4d1b-a7d6-007d080e902e","http://resolver.tudelft.nl/uuid:300b839f-78fc-4d1b-a7d6-007d080e902e","Advancing non-rigid 3D/4D human mesh registration for ultra-personalization","Tajdari, F. (TU Delft Emerging Materials; TU Delft Mechatronic Design)","Song, Y. (promotor); Huysmans, T. (copromotor); Delft University of Technology (degree granting institution)","2023","Personalized designs bring significant added value to the products and the users. However, they also pose challenges on the product design process. For instance, for products for personalized fit ,each may differ subject to each user’s body shape and preference. Presently, there exist knowledge and methods which suppor designing personalized products/services ,with sample applications in the fields of medical products, shoes, clothing industry, etc. Meanwhile, the major steps in these methods are manual or semi-automated, thus designing Ultra Personalized Products and Services(UPPS) can be a tedious and time-consuming task. Furthermore ,the design process is usually not optimized and most applications are employing ad-hoc approaches. Designers need a systematic approach to designing UPPS...","personalized design; human body shapes; 3D mesh registration; 4D scanning; computational design framework","en","doctoral thesis","","978-94-6366-792-0","","","","","","","","","Mechatronic Design","","",""
"uuid:65a92e32-3830-4c4e-a244-ad5bbcb7af89","http://resolver.tudelft.nl/uuid:65a92e32-3830-4c4e-a244-ad5bbcb7af89","Advancements in Optical Diagnostics for Experimental Aeroelasticity: Benchmarking the cylinder-foil system","Gonzalez Saiz, G. (TU Delft Aerodynamics)","Scarano, F. (promotor); Sciacchitano, A. (copromotor); Delft University of Technology (degree granting institution)","2023","Experimental aeroelasticity has been hindered by the intrusivity of measurement equipment and complexity of the experimental setups. Advancements in hardware development have been encouraging optical tracking techniques to replace cross correlation approaches in experimental aerodynamics. However, digital image correlation (DIC) is still the standard optical technique in structural diagnostics. The first experiment of the dissertation assesses the accuracy of an established tracking technique, such as Shake-the-Box, for tracking surface markers on a moving panel, comparing it to DIC. Despite being outperformed in terms of spatial resolution by DIC, surface marker tracking resulted in the same order of accuracy, making structural tracking suitable for large-scale applications. Such results imply the feasibility of characterizing experimentally aeroelastic problems by tracking, in a simultaneous manner, flow tracers and surfacemarkers....","experimental aeroelasticty; fluid-structure interaction; unsteady aerodynamics; Lagrangian particle tracking; Collar Triangle; force estimation","en","doctoral thesis","","978-94-6366-787-6","","","","","","","","","Aerodynamics","","",""
"uuid:73271cad-d4c1-4dbb-97b6-3682b2c1c9c4","http://resolver.tudelft.nl/uuid:73271cad-d4c1-4dbb-97b6-3682b2c1c9c4","Microscopic 3D plant imaging with high-resolution optical coherence tomography","de Wit, J. (TU Delft ImPhys/Computational Imaging; TU Delft ImPhys/Kalkman group)","Kalkman, J. (promotor); Stallinga, S. (promotor); Delft University of Technology (degree granting institution)","2023","","Optical coherence tomography (OCT); plant imaging; spectral estimation; functional OCT; refocusing; aberration correction","en","doctoral thesis","","978-94-6384-509-0","","","","","","2024-04-06","","","ImPhys/Computational Imaging","","",""
"uuid:c706c198-d186-4297-8b03-32c80be1c6df","http://resolver.tudelft.nl/uuid:c706c198-d186-4297-8b03-32c80be1c6df","Aero-structural Design and Optimisation of Tethered Composite Wings: Computational Methods for Initial Design of Airborne Wind Energy Systems","Candade, A.A. (TU Delft Wind Energy)","van Bussel, G.J.W. (promotor); Schmehl, R. (promotor); Delft University of Technology (degree granting institution)","2023","Airborne wind energy (AWE) is an emerging renewable energy technology that harnesses wind energy using tethered flying systems. The extra degrees of freedom allow these systems to harvest wind resources at altitudes currently unrealisable by conventional turbines. These flying devices, often resembling kites or drones, are typically divided into two classes. The first converts kinetic energy into electricity using onboard generators and transmits it to the ground via a conductive tether. The second class transfers aerodynamic forces via the tether to the ground, where the mechanical energy is converted into electrical energy using an electrical machine.As the tether’s length constrains the system, once the flying device reaches this tether length limit, some energy must be used to retract it back to its initial position. This cycle of traction and retraction is known as a pumping cycle. Therefore, AWE systems must be designed to maximise the harvesting or traction phase while minimising the retraction phase to ensure a net positive power output.
From the AWE system landscape, this thesis is based on tethered aircraft-style fixed-wing systems. Typically, such systems utilise composite structures owing to their high stiffness-to-weight ratios. Designing these composite structures demands special attention due to their anisotropic nature, which results in complex load-deflection couplings. Here, a multi-disciplinary simulation framework for tethered composite aircraft wings is developed. The research focuses on methods used during the iterative phases of initial (conceptual and preliminary) design that are commonly employed in a spiral system engineering approach. The proposed framework integrates computational methods for the design of the aerodynamic A, bridle B, and structural S domains. The bridle is a system of segments of tether and pulleys that distribute the tether forces into the wing structure. The aerodynamic and structural domains are divided into 2D and 1D models, which are then integrated to determine the 3D response of the wing. A nonlinear vortex-lattice method (VLM) is utilised for the aerodynamic domain.
For the structural domain, an anisotropic 1D finite element (FE) model is developed that is coupled with a 2D FE sectional solver. In addition, methods are proposed that enable detailed topology optimisation. For tailless swept-wings, like those used by EnerKíte, the aero-structural-bridle interactions are crucial. The developed framework is used to investigate the impacts of different wing and bridle configurations to determine the sufficient level of fidelity required at the initial design phases. Typically, such aeroelastic phenomena are captured during detailed design stages wherein full 3D structural and aerodynamic simulations are employed. However, this mandates design knowledge typically unknown at the initial design stages. This motivates a multi-fidelity modelling approach to include these coupling effects while abstracting the composite ply level details during the design exploration. This is achieved by combining geometric discretisation approaches with lamination parameters. Thus, the framework aims to provide viable design options during the initial stages while considering aero-structural-bridle couplings.","AWE; Initial Design Methods; Composite Structures; Aeroelasticity","en","doctoral thesis","","978-94-6384-508-3","","","","","","","","","Wind Energy","","",""
"uuid:efee1e53-d081-4fcb-9a3c-39650de2a13e","http://resolver.tudelft.nl/uuid:efee1e53-d081-4fcb-9a3c-39650de2a13e","Precipitation extremes around the world: Unraveling historical extremes and future changes","Gründemann, Gaby J. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); van der Ent, R.J. (copromotor); Delft University of Technology (degree granting institution)","2023","Improved understanding of historical precipitation extremes is important to better explain their behavior, predict future occurrences, and inform planning and engineering design. The intensity, seasonality, and timing of these extremes have far-reaching consequences, and require a comprehensive analysis of both historical trends and projected future changes. By integrating historical observations, statistical methods, and climate model projections, this research provides valuable insights into precipitation extremes on the global domain.","extreme precipitation; extreme value distribution; climate change; seasonality","en","doctoral thesis","","978-94-6473-310-5","","","","","","","","","Water Resources","","",""
"uuid:cb598569-af98-4cef-8115-9939fa5ed256","http://resolver.tudelft.nl/uuid:cb598569-af98-4cef-8115-9939fa5ed256","Towards Safe and Just Work Environments for System Administrators: A Qualitative Sociotechnical Investigation into System Administration","Kaur, M. (TU Delft Information and Communication Technology)","Janssen, M.F.W.H.A. (promotor); Fiebig, T. (copromotor); Delft University of Technology (degree granting institution)","2023","Technological systems and infrastructures form the bedrock of modern society and it is system administrators (sysadmins) who configure, maintain and operate these infrastructures. More often than not, they do so behind the scenes. The work of system administration tends to be unseen and, consequently, not well known. After all, do you think of your IT help-desk when everything is working just fine? Usually, people reach out for help when something is not working as expected or when they need something. A lot of work and effort goes into ensuring that systems are working as expected most of the time and, paradoxically, this smooth functioning results in the invisibilization of the work and effort that went into it.
This PhD research focuses on system administration work and what that entails in day-to-day tasks. Instead of proposing technical and social solutions, we try to better understand the “problem” that these proposed solutions are meant to solve. Drawing from safety science research and feminist research approaches, we perform a qualitative exploration of sysadmins’ work. We center their experiences via an in-depth interview investigation and a focus group study. We identify and describe the coordination mechanisms and gender considerations embedded in their work. We shed light on care work as part of sysadmin work and the phenomenon of double invisibility that is experienced by sysadmins who are not cis men. The thesis wraps up with a set of recommendations for moving toward safe and equitable work environments for sysadmins.
In many cases, a statistician has a belief about the true value of the parameter before even starting the experiment. The Bayesian paradigm is an attractive method of combining the new information coming from observations with this prior belief. It gives a sound mechanism, namely the posterior distribution, to update the beliefs about the truth.","Statistics; Bayesian; Inverse Problems; Posterior contraction rate; Bernstein–von Mises theorems; Distributed methods; Asymptotics; Misspecification","en","doctoral thesis","","","","","","","","","","","Statistics","","",""
"uuid:59760454-301a-43cf-ad96-005b6484a062","http://resolver.tudelft.nl/uuid:59760454-301a-43cf-ad96-005b6484a062","Spatial approaches to a circular economy: Determining locations and scales of closing material loops using geographic data","Tsui, T.P.Y. (TU Delft Environmental & Climate Design; TU Delft Design & Construction Management)","van Timmeren, A. (promotor); Peck, David (copromotor); Wandl, Alex (copromotor); Delft University of Technology (degree granting institution)","2023","Rapid urbanization and a growing world population has exerted unsustainable pressures on the environment, exacerbating climate change through unrestrained material usage and greenhouse gas (GHG) emissions. Since the turn of the century, transitioning to a circular economy (CE) has been seen by policy makers as a potential solution for resource scarcity and climate mitigation. Cities, which possess a high density of human activities, material stock, and waste production, are major contributors to emissions. This is especially true due to the concentration of construction activities in cities – the industry is responsible for 38% of CO2 emissions and 40% energy consumption globally. On the other hand, cities can also facilitate the implementation of circular strategies, thanks to increasing availability of data on space, people, and materials in cities. While the importance of cities for the circular transition is recognized in literature, earlier studies and policy documents on “circular cities” focus on urban governance strategies. Scholars have therefore called for a deeper understanding of the spatial aspects of CE since the late 2010s, engendering the recent integration of spatial disciplines, such as urban planning, regional economics, and geography, into the study of CE. 
Moreover, the increasing availability of spatial data, especially on the location of material stocks and flows, provides an unprecedented opportunity to develop a data-driven understanding of where, and how far, materials should travel in a CE. This research therefore asks the question, “what determines the locations and scales of closing material loops in a circular economy?” The question was answered in 5 chapters (chs. 3-7), using both quantitative and qualitative spatial analysis methods, as well as present- and future-oriented perspectives. The research scope moves from general to specific, with earlier chapters (chs. 3-6) analysing 10 material types for the whole country of the Netherlands, and later chapters (chs. 6-7) focusing on construction materials in the city of Amsterdam and its surrounding region. Two novel data sources were used throughout the research. Waste statistics from the Dutch National Waste Registry provided current locations of waste reuse; and a prediction dataset from the Dutch Environmental Assessment Agency provided locations for future supply for construction waste and future demand for construction materials. In chapter 3, a theoretical foundation for understanding locations and scales for closing material loops was constructed by identifying the drivers, barriers, and limitations of circular urban manufacturing - processes that produce goods using local secondary resources. By conducting a literature review and interviewing experts, it was found that there were several caveats to closing material loops at a local scale. Factors that determine the locations of circular urban manufacturers were identified from three perspectives: space, people, and flow. In chapter 4, the factors affecting locations of waste reuse in the Netherlands were identified using spatial correlation. The previously identified space, people, and flow factors were translated into quantitative spatial factors that could affect the location of waste reuse. 
Correlations were found for flow and space-related factors, but not for people-related factors, which suggests that actors within the waste-to-resource supply chain tend to attract each other and cluster together to form agglomerations, and that locations of waste reuse are not related to attributes of the local population, such as local income, skills, or education. In chapter 5, the location and scale of waste reuse clusters in the Netherlands were then identified using spatial statistical methods. This answered the main research question from a spatial econometric perspective, identifying industrial clusters for closing material loops. It was found that all the studied materials except for glass and textiles formed statistically significant spatial clusters. To determine the scale of spatial clustering, the grid cell sizes for data aggregation were varied, to find the cell size that had the strongest spatial clustering. The best fit cell size is ~7 km for materials associated with construction and agricultural industries, and ~20–25 km for plastic and metals. In chapter 6, to answer the question from a spatial planning perspective, spatial parameters were identified for circular construction hubs - facilities that close material loops by collecting, storing, and redistributing demolition waste as secondary construction materials. Using the Netherlands as a case study, spatial parameters were extracted from two sources: Dutch governmental policy documents, and interviews with companies operating circular hubs. Four types of circular construction hubs were identified: urban mining hubs, industry hubs, local material banks, and craft centers. The spatial requirements for the four hub types were translated into a list of spatial parameters and analysis methods required to identify future locations - site selection, spatial clustering, and facility location. 
Finally, in chapter 7, spatial optimization was used to identify the optimal scale and location for circular timber hubs in Amsterdam and its surrounding region, answering the main research question from the perspectives of industrial ecology and logistics. The optimal scale was defined as a scale that is most cost effective, minimizing costs and maximizing emissions reductions through timber reuse. The optimal number of hubs for the study area was 29, with an average service radius of 3 km. The cost effectiveness was affected mostly by transportation and storage costs, while emissions savings had minimal effect. As an overall conclusion, five tensions were identified for determining locations and scales for closing material loops, because of the diverse and sometimes misaligned spatial perspectives. The first three tensions are conceptual, addressing contrasting perspectives for defining closing material loops - as urban manufacturing or urban mining; for their locations - as clusters or hubs; and for the factors that affect locations and scales - as spaces, people, or materials. The final two tensions are methodological, addressing contrasting approaches to time - looking at the present or the future; and to methods - quantitative or qualitative.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-782-1","","","","","","","","","Environmental & Climate Design","","",""
"uuid:abd28906-04fb-4faa-8b4e-aef8dd893c70","http://resolver.tudelft.nl/uuid:abd28906-04fb-4faa-8b4e-aef8dd893c70","Monitoring Dynamic Properties of Railway Tracks Using Train-borne Vibrometer Measurement","Zeng, Y. (TU Delft Railway Engineering)","Li, Z. (promotor); Nunez, Alfredo (copromotor); Delft University of Technology (degree granting institution)","2023","","Railway tracks; Structural health monitoring; Laser Doppler vibrometer; Speckle noise; Vibration measurement; Modal analysis; Transfer function","en","doctoral thesis","","978-94-6384-513-7","","","","","","2024-11-09","","","Railway Engineering","","",""
"uuid:45895388-2e1c-41de-88c3-fa06d6ab29ea","http://resolver.tudelft.nl/uuid:45895388-2e1c-41de-88c3-fa06d6ab29ea","Hardware and Protocol Optimization in Quantum-Repeater Networks","Horta Ferreira da Silva, F. (TU Delft QID/Wehner Group)","Wehner, S.D.C. (promotor); Hanson, R. (promotor); Delft University of Technology (degree granting institution)","2023","The future quantum internet promises to enable users all around the world to, among other applications, generate shared secure keys and perform distributed quantum computations. To do so, entanglement must be distributed between remote users. One way of doing this is by sending photons through optical fiber, which allows for reusing some existing classical infrastructure. However, the probability of photons being absorbed in optical fiber grows exponentially with the distance covered, rendering entanglement generation at larger-than-metropolitan scales unfeasible. One possible approach to enable distributing entanglement over larger distances is to employ quantum repeaters, devices that can in theory mitigate the effects of fiber loss by splitting the total distance to be covered into smaller segments. Despite recent advances, the required technology is still under development. In this dissertation we aim to contribute to a swifter realization of fiber-based quantum-repeater networks.
To this end, we introduce a methodology combining quantum-network simulations and genetic-algorithm-based optimizations that allows for determining hardware requirements for quantum repeaters. Using this methodology we translate quantum-network-application-derived performance metrics into specific requirements on the quantum repeaters used to implement the quantum network. This indicates not only how good hardware must be in order to enable given applications, but also in what specific ways state-of-the-art hardware must be improved to do so.
We also investigate the effects of using existing fiber infrastructure for the deployment of near-term quantum networks. Doing so would be a cost-effective way of constructing quantum networks. However, existing infrastructure also imposes constraints, namely on where quantum hardware can be placed. We quantify to what extent such constraints affect quantum-network performance, as well as how these effects can be mitigated by optimizing repeater placement.
Finally, we contribute to answering the question of how to extract the best possible performance out of imperfect hardware. For a given hardware quality, making the right choices with regards to what protocols are executed by the nodes and where nodes are placed can result in significant boosts in performance. We perform a joint hardware-protocol optimization and find that good hardware choices can significantly relax hardware requirements, as well as highlight multiple possible paths to functional quantum-repeater networks. We also provide tools for the discovery of entanglement generation protocols.
This thesis provides a comprehensive understanding of the lateral failure of the pile foundation by full-scale quay wall experiments and it proposes a computational model to predict the resistance against this failure mechanism.
To gain a comprehensive understanding of the lateral failure mechanism, a unique and extensive experimental program has been conducted on an existing historic quay wall, founded on timber piles. The quay is located at Amsterdam Overamstel and dates back to 1905. Experiments have been conducted at three different system levels. At level 1, four-point bending experiments have been performed on individual piles to obtain the bending material properties. At level 2, lateral pile group experiments have been conducted on two 3x4 pile groups to study the pile-soil-pile interactions. At level 3, proof load experiments have been carried out on entire full-scale quay wall sections, to study the overall behaviour of the quay. As part of the experimental program, an extensive geotechnical site investigation has been performed. The experimental approach chosen enables a stepwise validation and calibration for computational quay wall models.
Through the experimental program, it is demonstrated that among all potential failure mechanisms, the lateral failure mechanism is most likely to occur when a quay wall is subjected to large surface loading at its backside. Examples of such loads in practice are parked or moving cars, heavy vehicles or goods. The mechanism is triggered by an increase in soil stresses at the backside of the quay, which pushes the foundation towards the water. This, in turn, results in the bending of the timber piles, accompanied by the development of bending stresses. State-of-the-art models (ABAQUS, PLAXIS and spring models) were used to predict the failure surface load of the Overamstel quay, with an estimated value of approximately 20 kPa. However, in reality, the quay demonstrated significantly greater strength, as failure was not observed even for loads as high as 55 kPa. While part of this underprediction can be attributed to experiment-specific effects not considered in the prediction analysis, the substantial underprediction of the failure load still emphasizes the conservatism in current modelling approaches.
Clear indicators of the lateral failure mechanism include the inclined position of the top of the piles, broken piles, settlements at the backside of the quay, and lateral deflection of the foundation. These indicators can effectively be monitored, as demonstrated by the employed monitoring plan in the experiments. Elements of this plan, such as inclination sensors mounted on the pile caps, can be implemented in Amsterdam’s city centre to detect signs of lateral failure. The foundation piles experience fracture when they reach a state of full yielding, which occurs when the bending stresses in the timber surpass the modulus of rupture across the entire cross-section of the pile. Bending experiments conducted on timber piles indicate a substantial variance in both the modulus of rupture (coefficient of variation of 0.26) and the modulus of elasticity (coefficient of variation of 0.3). Consequently, the piles exhibit a wide range of flexural stiffnesses and bending moment capacities. These discrepancies stem from natural variability and biological degradation of the timber, which lead to the formation of a weakened outer layer or “soft shell” starting at the perimeter of the piles, going inward. The soft shell thickness is approximately 10% of the external pile diameter and it does not contribute to the structural strength of the piles.
The substantial variations in load carrying capacities within a timber pile group can be primarily attributed to the variations in pile stiffness and bending capacity. Surprisingly, typical pile group effects, such as in-line and side-by-side positioning, pile free height, and pile diameter, contribute little to the observed variations in individual lateral pile resistance. When multiple piles are considered together, significant variations between individual piles compensate each other, leading to a group resistance that was almost identical in the two pile group experiments. This finding is advantageous from a computational modelling and risk assessment standpoint. Within the tested pile groups at the Overamstel site, with 200-300 mm diameter piles, partial yielding starts at approximately 100 mm of group deflection. The first pile breakages are expected to initiate at 140 mm of deformation; however, due to the redistribution of lateral loads among the piles, these breakages do not directly result in group failure. Nevertheless, when deformations exceed 200 mm, a majority of the piles will break, leading to group failure. It is vital to emphasize that the transition from the initial onset of yielding to group failure requires merely a slight additional lateral load of 15%.
An analytical quay wall model has been developed to predict the resistance against lateral failure of historic quay walls. This model comprises a framework of elastic beams embedded in an elastic foundation, which is externally loaded by a linear elastic soil model based on Flamant’s theory. The framework is made up of multiple Euler-Bernoulli beams, connected to each other by boundary and interface conditions. The stiffness of the connection between piles and headstock is described by a pile-headstock interface model. The elastic foundation is represented by a series of independent p-y springs, approximated with a bilinear elastic-perfect-plastic model. A method is developed to include the pile-soil-pile interaction and the influence of a sloping surface by adjusting the plastic branch of the p-y springs. This method has been validated through multiple experiments documented in literature in which steel piles were used, eliminating material property uncertainties. The analytical quay wall model has been validated and calibrated with the Overamstel quay wall experiments, employing the stepwise approach. In the first step, the bending properties of the timber piles were obtained from the level 1 bending experiments. Subsequently, in the second step, the model’s capability to describe laterally loaded pile groups was validated through the level 2 pile group experiments. Finally, the Flamant soil model and the model’s ability to describe a historic quay were validated using the level 3 quay experiments. As a final step, the model was compared with finite element computations, demonstrating a good agreement in displacements and forces. The analytical quay wall model accurately predicts lateral displacement, pile bending moments, and bending stresses at various depths, allowing for the assessment of pile fracture under specific surface loads. 
Its key advantages over state-of-the-art finite element modelling software include robustness, computational speed, feedback loops (e.g. force- and displacement-dependent pile-headstock connection stiffnesses), minimal input requirements, and no numerical stability issues at large deformations. The model is highly suitable for trend analysis, sensitivity studies, and probabilistic analysis due to its computation time of seconds, compared to complex three-dimensional FEM software that takes minutes to hours. The effectiveness and potential of the validated analytical quay wall model have been demonstrated in two “follow-up” studies, described below.
In the first study, the quay model has been employed to investigate the failure of the Grimburgwal. With the model it was demonstrated that bending stresses in the timber piles exceeded the modulus of rupture as a consequence of local deepening of the canal in front of the quay. It therefore provides valuable insights for Amsterdam’s historical centre. The analyses have served as an additional validation step for the analytical quay wall model developed in this thesis, specifically for applications to the quay walls of Amsterdam’s historical centre.
In the second study, the quay model has been used to effectively showcase the potential of Bayesian updating by incorporating evidence of survived loading situations and corresponding deformations. This approach enables refinement of the reliability predictions and parameter distribution uncertainties, leading to a more accurate prediction of the resistance against the lateral failure mechanism of quay wall foundation piles. Depending on the type of evidence, an a-priori reliability prediction for a quay wall that fails to meet safety standards can be updated to any of the three consequence classes (CC3, CC2, and CC1b) outlined in NEN8700. In a fictitious case study, a quay wall with an a-priori reliability of β = 1.5 has been increased to β = 3.2 by including evidence of an extreme survived load of 10 kN/m2 that resulted in displacements of less than 4 mm. This is a decrease in failure probability by two orders of magnitude, showing the potential impact of using observational information in combination with Bayesian updating.
The main practical implication of this thesis has been the improvement in modelling accuracy, as a result of the Overamstel experiments. The revised “gain” in modelling accuracy for bending moments and deflection was 43% and 37% respectively. This improvement can be attributed to advancements in modelling techniques, such as accurately simulating pile-soil-pile interaction and modelling the pile-headstock connection, as well as utilizing precise location-specific geotechnical and structural material properties as model input. The improved modelling accuracy results in a less conservative evaluation of the quay walls, leading to a reduction in the number of unnecessarily rejected quay walls for the Amsterdam quay wall domain.
The most practical recommendations for Amsterdam are: a) to develop accurate techniques for mapping quay wall configurations, b) to implement comprehensive quay wall monitoring systems in the city centre, c) to utilize the analytical model in future studies and assessments, d) to prioritize geotechnical site investigations before making model predictions, and e) to perform non-destructive tests in the city centre and incorporate this information in the assessment.
The methods and insights developed in this dissertation enhance the understanding of the lateral failure of historic quay walls and enable more precise predictions of their resistance against such failures. As such, the model can be effectively used to support decisions on their safe use, remaining service life, and the need for their replacement.","Historic quay walls; Experiments; Bending tests; Overloading tests; Lateral pile group experiments; Quay wall modelling; Analytical models; Forensic engineering; Bayesian updating; Reliability updating; Amsterdam","en","doctoral thesis","","978-94-6469-656-1","","","","Analytical quay wall model open source: https://doi.org/10.4121/4fd90d71-ffd9-4db2-a358-8576f5b19a32","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:d2e1100d-af4e-4af7-8124-41ca5fa881c1","http://resolver.tudelft.nl/uuid:d2e1100d-af4e-4af7-8124-41ca5fa881c1","Classification of Human Activities with Distributed Radar Systems","Guendel, Ronny (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Fioranelli, F. (promotor); Delft University of Technology (degree granting institution)","2023","This thesis introduces the relevance of radar systems in the realm of human activity recognition (HAR) in Chapter 1. The study touches upon the complex understanding of continuous human activities and the existing challenges and gaps in current methodologies, hinting at the innovative technical approaches that are to be detailed in the following chapters.
The technical foundation of the research is given in Chapter 2 by introducing distributed ultrawideband (UWB) radar systems. These systems, especially when spatially distributed, bring a depth of information by integrating data from multiple radar nodes and spatial perspectives. There is a significant emphasis on how different fusion techniques, both late and early, play a crucial role in harnessing data effectively, particularly in the context of HAR.
A critical contribution in the study is the potential to deviate from conventional radar data domains, such as micro-Doppler spectrograms for activity recognition. The research in Chapter 3 highlights an alternative approach, rooted in the radar phase information from a high-resolution range-time map, which bypasses the limitations of common FFT-based radar data domains. This methodology, paired with the histogram of oriented gradients (HOG) algorithm, showcases promising results that can be particularly interesting for real-time applications with computational constraints.
The research in Chapter 4 underlines the efficacy of employing a network of spatially distributed UWB radars for continuous HAR. These networks address the downsides of using a single sensor, like unfavorable aspect-angle observations. The study delves into fusion methodologies and their implementation in classifying activities, particularly using recurrent neural networks. To assess these continuous recognition systems, novel evaluation metrics are proposed, offering a deeper insight into the practicality and effectiveness of such systems with temporal classification capabilities.
Indoor radar networks often face multipath challenges. The study in Chapter 5 not only identifies this challenge, but also exploits the multipath components, leveraging these typically unwanted phenomena to enhance classification capabilities. Through a pipeline that isolates, determines, and analyzes different propagation pathways, there is an evident boost in the network’s perception. This novel approach showcases a significant performance improvement, especially when employing convolutional neural networks.
Chapter 6 of the research focuses on the complexities of HAR in crowded environments. The study introduces the challenges of differentiating the activities of walking versus standing idle for multiple individuals simultaneously. The investigation shows promising initial results using synthetic data generated from experimental recordings, employing a regression-based approach and leveraging diverse techniques such as LSTM, CNN, SVM, and linear regression.
In conclusion, the research offers a reflective glance at the breakthroughs achieved in the domain of radar-based HAR in Chapter 7. The significant contributions and advancements of the study are highlighted. Looking ahead, the chapter identifies research areas for exploration and further improvement.","radar signal processing; ultra wideband radar; radar sensor network; distributed radar; human activity recognition; microDoppler signatures; deep learning","en","doctoral thesis","","978-94-6366-769-2","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:d641823e-eaab-4fca-b856-fe3b0a500f88","http://resolver.tudelft.nl/uuid:d641823e-eaab-4fca-b856-fe3b0a500f88","The Septin Circus: An Unforgettable Show of Cellular Choreography","Castro Linares, G. (TU Delft BN/Gijsje Koenderink Lab)","Koenderink, G.H. (promotor); Jakobi, A. (copromotor); Delft University of Technology (degree granting institution)","2023","The world we inhabit has long been home to an immense multitude of living beings: small bacteria and other microscopic organisms that elude our vision, as well as plants, fungi, and animals. Despite these vast differences in size and appearance, we classify all of these entities as living. Scientists, including the author of this thesis, have been captivated by the mechanisms that sustain the diversity of life, all while maintaining fundamental characteristics that define all this diversity of organisms as alive. At the basis of this remarkable diversity and functionality of life are cells. From single cells existing as unicellular organisms to multiple cells interacting to form multicellular organisms, cells are the minimum living building block of life. Cells, between and even within single organisms, are very diverse in shape, components, functionalities, and organization. Even with these differences, they all share common traits: cells have a cell membrane that separates their interior from their exterior, possess genetic material that contains all the information needed for the cell to function, use components from their environment to fuel and renew themselves, and are able to divide to reproduce. In order to carry out these functions, cells generate a multitude of components, proteins, nucleic acids, lipids, sugars, and other small metabolites, that are stored inside the cell, making it a complex self-sustaining chemical reactor with tens of thousands of interconnected and sometimes redundant reactions. 
Understanding how the basic commonalities between cells emerge from the intricate interaction of those reactions is the key to understanding what life is and how it works...","","en","doctoral thesis","","978-94-6384-491-8","","","","","","","","","BN/Gijsje Koenderink Lab","","",""
"uuid:49eaed4b-ff4b-450d-97c9-8ed5dc5e7f22","http://resolver.tudelft.nl/uuid:49eaed4b-ff4b-450d-97c9-8ed5dc5e7f22","Reach Probability Estimation of Rare Events in Stochastic Hybrid Systems","Ma, H. (TU Delft Air Transport & Operations)","Blom, H.A.P. (promotor); Santos, Bruno F. (copromotor); Delft University of Technology (degree granting institution)","2023","This thesis conducts a series of interrelated research studies on reach probability estimation of rare events for stochastic hybrid systems. Chapter 1 explains that the motivation for these studies stems from the need to assess safety and capacity of a design for a future Air Traffic Management (ATM) concept of operations (ConOps). The safety/capacity of an ATM ConOps can be expressed in terms of the amount of traffic that can be handled in such a way that the probability of rare events remains sufficiently low. Chapter 1 also explains that the dynamic and stochastic behaviours in an ATM ConOps design can be captured by a General Stochastic Hybrid System (GSHS) model, and that the rare events to be studied can be defined as events in which the state of a GSHS model reaches an unsafe set. In ATM safety studies, an unsafe set often considered is the closed subset in the GSHS state space where the physical shapes of two aircraft overlap. The state of a GSHS model consists of two components: i) a Euclidean-valued component, and ii) a discrete-valued component. These two components influence each other's evolution; therefore a GSHS model can capture various types of dynamic and stochastic behaviours, including Brownian motion and spontaneous jumps. In contrast to forced jumps, which happen when the GSHS state reaches a boundary in the hybrid state space, spontaneous jumps occur according to a Poisson point process. 
A mathematically important property of GSHS is that a GSHS execution satisfies the strong Markov property...","Interacting Particles; Factorization; Rare event; Reach Probability; Stochastic Hybrid System","en","doctoral thesis","","978-94-6384-501-4","","","","","","","","","Air Transport & Operations","","",""
"uuid:40f2a4ef-2080-4cb1-b14a-f0981aeb0cc9","http://resolver.tudelft.nl/uuid:40f2a4ef-2080-4cb1-b14a-f0981aeb0cc9","Delocalization transitions in disordered media","Spring, H. (TU Delft QN/Akhmerov Group)","Akhmerov, A.R. (promotor); Wimmer, M.T. (copromotor); Delft University of Technology (degree granting institution)","2023","The study of crystalline solids and condensed matter physics at large concerns itself with the new behaviors and phases of matter exhibited by elementary particles, atoms, and molecules by virtue of being assembled into a structure. These phases arise from complex microscopic behaviors, which makes it difficult to establish rigorous quantitative models. The analysis of certain phases is greatly simplified in the presence of symmetries. These symmetries can be non-spatial (time-reversal, particle-hole) or spatial (rotation, inversion, reflection). For example, topological phases of matter are easily characterized and classified by the symmetries of the system. Symmetries constrain the band structure of a system, and as a result produce certain quantized responses, such as surface modes on an otherwise insulating bulk. Since these surface modes are related to the symmetry of the bulk, this phenomenon is known as bulk-edge correspondence. So long as the symmetries protecting the topological phase are respected and the energy gap of the insulating bulk remains open, bulk-edge correspondence persists in the presence of disorder. This disorder can be non-structural (applied magnetic field), involve part of the structure (impurities) or the entire structure, such as in amorphous systems...","disorder; topology; phase transition; condensed matter; physics","en","doctoral thesis","","978-94-6366-771-5","","","","","","","","","QN/Akhmerov Group","","",""
"uuid:d4bbe182-8021-4562-a56a-bcb7652032ef","http://resolver.tudelft.nl/uuid:d4bbe182-8021-4562-a56a-bcb7652032ef","Nonlinear coupling and dissipation in two-dimensional resonators","Keşkekler, A. (TU Delft Dynamics of Micro and Nano Systems)","Steeneken, P.G. (promotor); Alijani, F. (promotor); Delft University of Technology (degree granting institution)","2023","Micro and nanomechanical resonators are essential to state-of-the-art communication, data processing, timekeeping, and sensing systems. The discovery of graphene and other two-dimensional (2D) materials has been a profound source of inspiration for the next generation of these devices, owing to their exceptional mechanical, electrical, and thermal properties. However, alongside their advantages, the atomically thin nature of these resonators also presents its own unique challenges, as the dynamic response of these resonators rapidly becomes nonlinear, where nonlinear coupling and dissipation processes manifest. To unleash the full potential of these resonators, a comprehensive understanding of the emerging nonlinear phenomena is crucial. In this pursuit, this thesis studies nonlinear dissipation pathways in 2D material resonators that arise from the coupling of their internal mechanical modes to each other as well as to their microscopic physics. The thesis consists of six chapters.","nanomechanics; nonlinear dynamics; graphene; two-dimensional materials; internal resonance; mode coupling; nonlinear damping; frequency combs; nonlinear reduced-order modelling; NEMS; laser interferometry; magnetic phase transition","en","doctoral thesis","","978-94-6366-781-4","","","","","","2024-05-31","","","Dynamics of Micro and Nano Systems","","",""
"uuid:ed1d3e3b-5f21-401f-88c1-f991958baf00","http://resolver.tudelft.nl/uuid:ed1d3e3b-5f21-401f-88c1-f991958baf00","A Flexible Behavioral Framework to Model Mobility-on-Demand Service Choice Preferences","Dubey, S.K. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Cats, O. (promotor); Delft University of Technology (degree granting institution)","2023","Understanding economic decision-making is essential for impactful policy design. In the literature, two main modelling paradigms exist: compensatory and non-compensatory. In this work, we advance the field of decision theory by developing a flexible choice model capable of approximating two modelling paradigms without imposing any a-priori assumptions. Furthermore, through the use of the proposed model, we empirically identify the decision strategy involved in the choice of Mobility-on-demand (MoD) services. Finally, independent of the modelling paradigm, we propose and empirically validate a framework to model the effect of interpersonal network on choice behaviour.","Non-compensatory behaviour; Context-aware survey; Word-of-mouth effect","en","doctoral thesis","","978-90-5584-338-1","","","","","","","","","Transport and Planning","","",""
"uuid:6d51ae3a-ffd8-48f1-96ee-89358d3cb5b7","http://resolver.tudelft.nl/uuid:6d51ae3a-ffd8-48f1-96ee-89358d3cb5b7","Experimental characterisation and mechanical modelling of connection details in traditional Groningen houses","Arslan, O. (TU Delft Applied Mechanics)","Rots, J.G. (promotor); Messali, F. (copromotor); Delft University of Technology (degree granting institution)","2023","Post-earthquake structural damage shows that out-of-plane wall collapse is one of the most prevalent failure mechanisms in unreinforced masonry (URM) buildings. This issue is particularly critical in Groningen, a province located in the northern part of the Netherlands, where low-intensity ground shaking has occurred since 1991 due to gas extraction. The majority of buildings in this area are constructed using URM and were not designed to withstand earthquakes, as the area had never been affected by tectonic seismic activity before. Hence, the assessment of URM buildings in the Groningen province has become of high relevance.
Out-of-plane failure mechanisms in brick masonry structures often stem from poor wall-to-wall, wall-to-floor or wall-to-roof connections that provide insufficient restraint and boundary conditions. Therefore, studying the mechanical behaviour of such connections is of prime importance for understanding and preventing damage and collapse in URM structures. Specifically, buildings with double-leaf cavity walls constitute a large portion of the building stock in the Groningen area. The connections of the leaves in cavity walls, which consist of metallic ties, are expected to play an important role. Regarding the wall-to-floor connections, the connections traditionally used for URM structures in Dutch construction practice are either a simple masonry pocket connection or a hook anchor as-built connection, both of which are expected to be vulnerable to out-of-plane excitation. However, until now, little research has been carried out to characterise the seismic behaviour of connections between structural elements in traditional Dutch construction practice.
This thesis investigates the seismic behaviour of two types of connections: wall-to-wall connections between cavity wall leaves and wall-to-floor connections between the masonry cavity wall and timber diaphragm, commonly found in traditional houses in the Groningen area. The research is divided into three phases: (1) inventory of existing buildings and connections in the Groningen area, (2) performance of experimental tests, and (3) proposal and validation of numerical and mechanical models. The thesis explores the three phases as follows:
(i) An inventory of connections within URM buildings in the Groningen area is established. The inventory includes URM buildings of Groningen based on construction material, lateral load-resisting system, floor system, number of storeys, and connection details. Specific focus is given to the wall-to-wall and wall-to-floor connections in each URM building. The thickness of cavity wall leaves, the air gap between the leaves and the size and spacing of timber joists are key aspects of the inventory.
(ii) Experimental tests are performed on the most common connection typologies identified in the inventory. This phase consists of two distinct experimental campaigns:
o The first experimental campaign took place at the laboratory of the Delft University of Technology to provide a comprehensive characterisation of the axial behaviour of traditional metal tie connections in cavity walls. The campaign included a wide range of variations, such as two embedment lengths, four pre-compression levels, two different tie geometries, and five different testing protocols, including both monotonic and cyclic loading. The experimental results showed that the capacity of the wall tie connection is strongly influenced by the embedment length and the tie geometry, whereas the applied pre-compression and the loading rate do not have a significant influence.
o The second experimental campaign was carried out at the laboratory of the Hanze University of Applied Sciences to characterise the seismic behaviour of timber joist-masonry cavity wall connections, reproducing both as-built and strengthened conditions. Twenty-two unreinforced masonry wallets were tested, with different configurations, including two tie distributions, two pre-compression levels, two different as-built connections, and two different strengthening solutions. The experimental results highlighted the importance of cohesion and friction between joist and masonry since the type of failure mechanism (sliding of the joist or rocking failure of the masonry wallet) depends on the value of these two parameters. Additionally, the interaction between the joist and the wallet and the uplift of the latter activated due to rocking led to an arching effect that increased friction at the interface between the joist and the masonry. Consequently, the arching effect enhanced the force capacity of the connection.
(iii) Mechanical and numerical models are proposed and validated against the performed experiments or other benchmarks. Models for the cavity wall ties and mechanical models for the timber joist-masonry connections were developed and verified against the experimental results to predict the failure mode and the strength capacity of the examined connections in URM buildings.
o The mechanical model for the cavity wall tie connections considers six possible failures, namely tie failure, cone break-out failure, pull-out failure, buckling failure, piercing failure and punching failure. The mechanical model is able to capture the mean peak force and the failure mode obtained from the tests. After being calibrated against the available experiments, the proposed mechanical model is used to predict the performance of untested configurations by means of parametric analyses, including higher strength of mortar for calcium silicate brick masonry, different cavity depth, different tie embedment depth, and the use of solid bricks in place of perforated clay bricks.
o The results of the experimental campaign on cavity wall ties were also utilised to calibrate a hysteretic numerical model representing the cyclic axial response of cavity wall tie connections. The proposed model uses zero-length elements implemented in OpenSees with the Pinching4 constitutive model to account for the compression-tension cyclic behaviour of the ties. The numerical model is able to capture important aspects of the tie response, such as strength degradation, unloading stiffness degradation, and pinching behaviour. The mechanical and numerical modelling approach can be easily adopted by practising engineers seeking to model wall ties more accurately when assessing URM structures against earthquakes.
o The mechanical model of timber-masonry connections examines two different failure modes: joist-sliding failure, including joist-to-wall interaction, and rocking failure due to joist movement. Both mechanical models have been validated against the outcomes of the experimental campaigns conducted on the corresponding connections. The mechanical model is able to estimate the contribution of each studied mechanism. Structural engineers can use the mechanical model to predict the capacity of the connection for the studied failure modes.
This research contributes to a better understanding of typical Groningen houses by identifying the most common connections used at wall-to-wall and wall-to-floor junctions in cavity walls, characterising the identified connections, and proposing mechanical models for the studied connections.","Masonry buildings; Cavity walls; Timber floors; Connections; Experimental characterization; Mechanical Modelling; Seismic retrofitting; Arching effect","en","doctoral thesis","","978-94-6473-302-0","","","","External advisor: Ihsan Engin Bal","","","","","Applied Mechanics","","",""
"uuid:7e1ad6f0-ef77-4cef-8f9b-b6fde94988b0","http://resolver.tudelft.nl/uuid:7e1ad6f0-ef77-4cef-8f9b-b6fde94988b0","Application of additive manufacturing in vascular self-healing cementitious materials","Wan, Z. (TU Delft Materials and Environment)","Šavija, B. (promotor); Schlangen, E. (promotor); Delft University of Technology (degree granting institution)","2023","Self-healing concrete has great potential to enhance the durability of concrete structures without significantly increasing the initial costs. Among the self-healing approaches, vascular self-healing cementitious composites are capable of supplying healing agents to the cracked region continuously or multiple times. However, the use of brittle materials as vasculature makes it difficult to create vascular networks with complicated geometry. The recent development of additive manufacturing (AM, also known as 3D printing) facilitates the fabrication of complicated vascular systems for vascular self-healing materials. However, the application of AM in vascular self-healing cementitious materials is relatively rare. Therefore, this study focuses on understanding the behavior of 3D-printed vascular self-healing concrete with different printing parameters or vascular configurations...","Additive manufacturing; Self-healing; Cementitious materials; Machine learning; Optimization; Printing parameters","en","doctoral thesis","","978-94-6384-503-8","","","","","","","","","Materials and Environment","","",""
"uuid:039445ea-ce0e-445d-b07c-5e369fe7e708","http://resolver.tudelft.nl/uuid:039445ea-ce0e-445d-b07c-5e369fe7e708","Probabilistic Labeling in Radar Track-before-Detect Processing: Algorithms for tracking closely-spaced and/or interacting targets","Moreno León, C. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Driessen, J.N. (copromotor); Delft University of Technology (degree granting institution)","2023","Radar tracking of low-observable targets such as drones suffers from low detection performance. In this type of application, it is desirable to avoid data thresholding in order to preserve the weak target signal in the raw sensor data. This thesis considers the Multiple Object Tracking (MOT) problem in the context of radar Track-before-Detect (TrBD) processing, where the raw radar data is fed into the filtering process without previous compression into a finite set of detections/plots...","Multiple target tracking; Radar Track-before-detect; Bayesian Inference; Tracking of interacting/closely-spaced/unresolved targets; non-linear filtering; detection of target anomalous behaviour; Particle filtering; data-association free tracking; Sequential Monte Carlo methods","en","doctoral thesis","","978-94-6384-512-0","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:56cadf8e-cc7f-4f7b-b6b0-696dd4ecb65d","http://resolver.tudelft.nl/uuid:56cadf8e-cc7f-4f7b-b6b0-696dd4ecb65d","Advances in actuation techniques for wind farm flow control","van der Hoek, D.C. (TU Delft Team Jan-Willem van Wingerden)","van Wingerden, J.W. (promotor); Ferreira, Carlos (promotor); Delft University of Technology (degree granting institution)","2023","Offshore wind farms suffer substantial energy losses due to the interference of wind turbine wakes, with estimates indicating losses of approximately 10%. This thesis aims to advance the state of the art of wind farm flow control techniques for maximizing wind farm power performance. Wind farm flow control can be divided into three categories: static induction control (SIC), wake steering control (WSC), and wake mixing control (WMC). With SIC, upstream turbines are derated to create higher velocity wakes. The losses that are incurred by derating are subsequently compensated by downstream turbines. WSC redirects the wake away from downstream turbines by misaligning a turbine with respect to the incoming wind direction. Using WMC, the wake mixing process is enhanced by continuously adjusting the operating conditions of the wind turbine. Generally, this is achieved through a periodic pitching motion of the blades.
This thesis covers all three categories of wind farm flow control. First, field experiments on an onshore wind farm were carried out to examine the effectiveness of SIC. Measurements indicated a 3.3% increase in power production, as well as a significant decrease in experienced turbulence intensity during favorable ambient conditions. Second, a framework was developed for improving the energy estimates for WSC obtained with analytical steady-state wake models. Using Gaussian process regression, the framework combines the results from an analytical wake model and large eddy simulations with varying ambient conditions, resulting in a 76% increase in estimated annual energy production with respect to the analytical wake model. Finally, a set of wind tunnel experiments was carried out to study the wake of a scaled wind turbine model operating with WMC using Particle Image Velocimetry (PIV). The PIV measurements showed enhanced levels of wake recovery with WMC compared to normal operation. Furthermore, a recent TU Delft innovation called ‘the helix approach’, which induces a helical velocity profile in the turbine wake, was shown to be capable of increasing the power of a two-turbine array by as much as 15%.","wind farm flow control; wind farm power maximization; static induction; wake mixing; wake steering; Helix approach; wind tunnel experiment; Particle image velocimetry","en","doctoral thesis","","978-94-6366-765-4","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:7d073c83-1da2-47e1-9244-06c7e26129b1","http://resolver.tudelft.nl/uuid:7d073c83-1da2-47e1-9244-06c7e26129b1","Keep the pitcher’s elbow load in the game: Biomechanical analysis of injury mechanisms in baseball pitching towards injury prevention","van Trigt, B. (TU Delft Biomechanical Engineering)","Veeger, H.E.J. (promotor); van der Helm, F.C.T. (promotor); Hoozemans, Marco J.M. (promotor); Delft University of Technology (degree granting institution)","2023","In baseball pitching, high performance is closely related to injuries. The baseball pitch is a rapid, full-body throwing motion that culminates in a ballistic motion of the throwing arm, creating high ball velocity but exposing the elbow to significant loads. As a result, injuries to the medial side of the elbow involving the Ulnar Collateral Ligament (UCL) are currently a major concern in baseball pitchers at all levels of play. UCL injuries are increasingly prevalent among youth pitchers, and injury rates have gradually increased over the years. It is important to prevent injuries in (youth) pitchers, not only to attain healthy pitching performance but also to avoid injuries at older ages. The general aim underlying the present dissertation is to establish biomechanical injury mechanisms related to the Ulnar Collateral Ligament in baseball pitchers. Knowledge of these mechanisms can eventually be used to develop an ‘early warning system’ to safeguard baseball pitchers from UCL injuries. This dissertation is divided into three parts.","","en","doctoral thesis","","978-94-6469-687-5","","","","","","","","Biomechanical Engineering","","","",""
"uuid:bd8a8edb-3ccb-4f2f-9212-bc6e14af41bb","http://resolver.tudelft.nl/uuid:bd8a8edb-3ccb-4f2f-9212-bc6e14af41bb","High Speed Electron Microscopy: Engineering of a commercial multi-beam scanning electron microscope with transmission imaging","Zuidema, W. (TU Delft ImPhys/Hoogenboom group)","Kruit, P. (promotor); Hoogenboom, J.P. (copromotor); Delft University of Technology (degree granting institution)","2023","In this thesis, the design and engineering considerations for a multi-beam scanning electron microscope (MBSEM) are discussed. This microscope can benefit biological research in various ways. It can give new insights into the inner workings of a multitude of biological systems that were hard to obtain using previously existing instrumentation. For instance, a higher throughput gives the option to do statistical analysis of multiple samples instead of drawing conclusions from only one. The goal of this thesis was to get from a proof of principle to a final system that can actually be used to do the research. It is divided into five chapters showing a step-by-step process of getting to the final system as it is now on the market. Chapter 1 is an introduction to the subject showing the current state of the art with respect to high-throughput imaging. Chapter 2 describes a novel imaging method in scanning electron microscopes. This chapter does not focus on the multi-beam application but shows the method in the context of the often-used backscatter imaging. In this method, we place the tissue section directly on top of a thin scintillator screen (thinner than 200 μm) which is coated with a conductive layer. The light signal generated by the electrons transmitted through the sample is collected by a high-NA objective lens and the light is imaged onto a photon detector outside of the vacuum chamber. A noise model is created to calculate the signal-to-noise ratio and the contrast-to-noise ratio of this imaging method. 
It shows that the best images are generated around a landing energy of about 5 keV. There are some dependencies on sample thickness, staining level, and light collection efficiency, which are also explored. This method does lower the resolution in the image to some extent (by a factor of 2 at low energies and thick sections), which is shown at the end of the chapter. Chapter 3 goes into the considerations that have to be taken into account when dealing with the imaging method from Chapter 2. This chapter is applicable to a single-beam SEM as much as an MBSEM. A list of possible light detectors is given, from which silicon photomultipliers are selected as the best candidate for the MBSEM. Combined with the light detector, multiple options for a scintillator are discussed, from which YAG:Ce is selected. Organic scintillators are discarded due to their bleaching behavior under electron beam irradiation. The surface of the scintillator and the coating layer are shown to have a large impact on image quality. For this reason, the scintillators are ion-beam polished and coated with a boron layer. Unexpected behavior in the form of scintillator saturation is observed, which is then described by a model and connected to the noise model from Chapter 2. Chapter 4 gives an analysis of all the hardware requirements for an MBSEM. First, a measurement of the crosstalk as a function of landing energy and pitch is presented. It is found that a crosstalk of at least 10% is to be expected in the system. Next, an overview is given of all the parameters that are related to the stage and the light optics. These are then related to the final throughput of the system. Two imaging strategies are described: in one, the beams scan in one direction and the stage moves in the other; in the other strategy, the beams scan like in a regular SEM and are subsequently descanned in the light-optical system. 
It is found that with a step-and-scan approach in combination with planned beam shifts, the maximum throughput that can be achieved is around 420 Mpix/s. Chapter 5 shows results from the final prototype system. Alignments are of great importance in any SEM, but even more so in the MBSEM. Therefore, a large part of this chapter is dedicated to describing this alignment. This starts with the electron-optical alignment of the source and the beam through the column. The grid of beams has to be optimized to show as little distortion as possible to improve system throughput. The scan and descan have to be aligned to the grid axes and the amplitude has to be precisely correct. The beams have to be perfectly aligned to the detector array. On the processing side, a description is given of how varying dark and gain levels in the detector array can be compensated for. In the end, a final image is shown, consisting of 400 megapixels. Chapter 6 describes the valorization of the project and all the challenges and choices that were involved.","Electron Microscopy; High-Throughput; Multibeam; Transmission imaging","en","doctoral thesis","","","","","","","","","","","ImPhys/Hoogenboom group","","",""
"uuid:31ea7d92-e9d9-4029-8637-e36fa0ff2d6c","http://resolver.tudelft.nl/uuid:31ea7d92-e9d9-4029-8637-e36fa0ff2d6c","Designed to fit: The use of 3D anthropometric data of children’s heads and faces in mask design","Goto, L. (TU Delft Applied Ergonomics and Design)","Goossens, R.H.M. (promotor); Molenbroek, J.F.M. (copromotor); Delft University of Technology (degree granting institution)","2023","When designing products like bicycle helmets or oxygen masks, achieving a good fit is crucial for optimal functioning, usability, safety, and comfort. Integrating anthropometric data in the development and design of products, workplaces, and environments whilst understanding the variations in anthropometric measurements amongst users will improve the usability, comfort, efficiency and interaction of products, subsequently enhancing the overall user experience.
Thus, accurate and detailed measurements of the human body shape in general, and for a specific target population in particular, are essential for designing products that require a close fit. Therefore, designers should integrate relevant properties of the body, especially anthropometric dimensions, in their design process to optimize the fit between the product and the relevant body part. Recent advancements in 3D imaging technologies have made it possible to collect anthropometric data faster, with higher accuracy and reproducibility. This has led to the increasing use of 3D imaging technologies in anthropometric surveys worldwide, providing detailed anthropometric information for the design of products that closely conform to the human body.
Although various anthropometric tools are available, both in 2D and 3D, designers often rely on traditional 1D anthropometric information when designing and sizing products due to the familiarity, ease of use, and cost-efficiency of these tools. However, traditional anthropometric information may not provide sufficient detail about the human body shape required for developing products with an optimal fit. While there are advantages to using 3D anthropometric data, there are challenges in integrating it into the design process. The complexity and large quantity of the data make it challenging to sort and analyse, both quantitatively and qualitatively. Additionally, there is a lack of established procedures on how to effectively use 3D anthropometric data in product sizing, and limited research has been conducted on its application in the design process and on the needs of designers themselves...","3D anthropometry; Dutch children; head and face; representative face model; parametric design; virtual fit testing; mask design","en","doctoral thesis","","978-94-6384-507-6","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:73945ebd-7e39-459f-a566-50d446b745b2","http://resolver.tudelft.nl/uuid:73945ebd-7e39-459f-a566-50d446b745b2","Biogas-Solid Oxide Fuel Cell (SOFC) Energy System for Rural Energy Supply: A field based study on the role of local materials on operation and capital system cost","Wasajja, H. (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); Lindeboom, R.E.F. (copromotor); Aravind, P.V. (copromotor); Delft University of Technology (degree granting institution)","2023","Biomass is the predominant source of energy in the global south. It is the most readily available source of energy there and is used in rural households in the form of wood, charcoal and agricultural residues. However, biomass is not utilised in the most efficient way, and hence there is still a gap in achieving the SDG 7 target. The growing global population has increased the global demand for energy and other basic resources like water and food, but has also resulted in an increased need for sanitation services, which are not readily provided to rural communities...","Anaerobic digestion; Biogas impurities; Sorbent cleaning systems; Biogas-SOFCs; Dry reforming; In-situ H2S reduction; Biochar; Techno-economic analysis","en","doctoral thesis","","978-94-6384-506-9","","","","","","","","","Sanitary Engineering","","",""
"uuid:ca239097-b130-4614-b8c8-bb4d1d4d06d9","http://resolver.tudelft.nl/uuid:ca239097-b130-4614-b8c8-bb4d1d4d06d9","From Theory to Practice: Surgical Process Modeling and Technological Integration","Gholinejad, M. (TU Delft Medical Instruments & Bio-Inspired Technology)","Dankelman, J. (promotor); Loeve, A.J. (copromotor); Delft University of Technology (degree granting institution)","2023","The vital role of surgery in healthcare requires constant attention for improvement. Surgical process modelling is an innovative and rather recently introduced approach for tackling the issues in complex surgeries. The goal of this thesis is to structure the strategies in surgical process modelling and to seek the applications of surgical process models (SPMs) with computer-based technologies to address various challenges in different surgeries. These challenges include surgical training, introduction of new technology and tools, surgery planning, prediction of surgical activities and surgery outcome, and intra-operative guidance of surgeons.
This thesis is composed of two main parts. The first concerns the strategies for the establishment of the process models. The second focuses on the application of surgical process modelling techniques to surgery improvement...","","en","doctoral thesis","","978-94-6366-760-9","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:81e5e8a8-2bee-4af5-b1bc-c7b210f9cb55","http://resolver.tudelft.nl/uuid:81e5e8a8-2bee-4af5-b1bc-c7b210f9cb55","Small Reservoirs in Northern Ghana: Monitoring, Physical Processes, and Management","Annor, F.O. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); Delft University of Technology (degree granting institution)","2023","The importance of small reservoirs for the livelihoods of people in the Upper East Region of Ghana cannot be over-emphasized. They are used for many purposes, including fishing, livestock watering, construction, irrigation, recreation, drinking water, and other domestic uses. The reservoirs were most often built close to communities to support them with dry-season water use, since the region has a mono-modal rainfall pattern (April – October). The best time to assess the full storage capacity of small reservoirs is therefore at the beginning of November.
This study was carried out in the Volta basin, focusing on the Upper East Region, as part of a larger Challenge Program for Water and Food and the EU H2020 TWIGA project. The shallowest reservoirs (with a maximum depth of less than 2 m) in the northern part of the Volta basin are often dry at the start of the Harmattan season (December – February), when they are most needed. The perception was that this was mainly a result of high rates of evaporation caused by high temperatures (up to 41 °C) in that part of the basin in the dry season (November – April). Unfortunately, most of the reservoirs are ungauged, making their management challenging. Remote sensing methods have been used to monitor the reservoirs, but mainly with regard to their distribution and capacities (surface areas).
In this research, we studied the filling and emptying of the reservoirs with a combination of remote sensing and in situ data, offering better insight into the components of the water balance and energy budget of small reservoirs and thereby the possibility to manage them better. Aside from the abstraction of water from reservoirs, evaporation is considered to be the main component of a reservoir's water balance. Accurate estimation of evaporation is required for irrigation management and water resources planning. Knowledge of hydrologic fluxes, including evaporation, is required for monitoring and understanding hydrological and ecological processes. It is, however, expensive to directly measure evaporation energy fluxes in the field continuously for a long period of time using the Eddy Covariance method. Following this study, a cost-effective and reliable way of measuring evaporation flux is proposed, using a TAHMO-like meteorological station and the FAO-56 Penman-Monteith method in CropWat.
The main findings from the research are as follows:
• Water abstraction for irrigation, including through small reservoirs, of up to 10 m3/s from the Volta river will have minimal impact on hydropower generation at Akosombo and Kpong. However, increasing irrigation and small-reservoir abstraction (or storage) rates to about 38 m3/s would mean that the water demand for hydropower in some years will not be fully met (a shortage of about 0.1 percent may be experienced). This means the one-village-one-dam project might not create many problems for hydropower generation downstream if the reservoirs are well managed (gains not offset by high water losses).
• Evaporation from small reservoirs is not as high as expected. The average actual evaporation rate is about 5 mm/day, instead of the reference evaporation of about 10 mm/day estimated using meteorological variables from distant (>3 km) weather and climate stations.
• Even though evaporation from small reservoirs is low, the evaporation rate is higher in shallow and smaller reservoirs. The management of small reservoirs will therefore require better land-use planning and water allocation to make them fit for the purpose for which they were constructed: use in the dry season.
• A combination of hydro-meteorological data from TAHMO-like stations and remote sensing offers a better way to monitor and manage water use in small reservoirs.
• Small reservoirs are good for community water management and not as inefficient as often thought.
The focus of this thesis is on investigating the limits of quality factors in nanomechanical resonators operating at room temperature. The study revolves around four main facets: addressing limitations in fabrication techniques, addressing limitations in design strategies, exploring the impact of aspect ratio on quality factor enhancement, and investigating the potential for temperature sensing.
Firstly, we address the limits imposed by current fabrication techniques on realizing high-aspect-ratio resonators, such as stiction and collapse due to interfacial forces like capillary forces. To overcome these challenges, we develop and characterize an SF6 plasma etching technique that enables a quick and controllable release of nanomechanical resonators. The high fidelity achieved through this approach allows the use of advanced optimization strategies to design resonators with exceptional quality factors.
In doing so, we tackle the limits of design strategies, which have primarily relied on human intuition until now. By harnessing the power of Bayesian Optimization and taking inspiration from nature, we discover a strategy to increase the quality factor of low-order modes via a torsional soft-clamping mechanism. The experimental validation of the resulting spiderweb resonators confirms quality factors surpassing 1 billion at room temperature in the kHz frequency range. Notably, these resonators contain no features smaller than 1 micrometer, ensuring fast and cost-effective fabrication.
Expanding on these findings, the thesis explores the limits of aspect ratio in quality factor enhancement. By bridging nanomechanics and macromechanics, we create nanomechanical resonators with centimeter-scale lateral sizes. Utilizing multi-fidelity Bayesian Optimization alongside stiction-free fabrication techniques, our strategy allows us to reduce the computational cost and to suspend the fragile structures with a fabrication yield approaching 100%, leading to quality factors above 6 billion.
Finally, the thesis investigates the potential of high quality factor nanomechanical resonators for temperature sensing. We develop a primary noise thermometer to detect temperature across a wide range. The elevated quality factor enables the detection of the effect of the Brownian motion on the resonator’s motion. However, it also poses limitations on the measurement scheme due to the narrow linewidth of the resonators.
Combining all these aspects, this thesis explores and pushes the boundaries of quality factors in nanomechanical resonators at room temperature. It presents novel fabrication techniques, advanced design strategies, and sensing capabilities of high quality factor resonators. The findings offer valuable insights and open up new possibilities for applications in precision sensing, quantum mechanics, and beyond.","high Q factor; low dissipation; nanomechanical resonators; Bayesian optimization; spiderweb; room temperature; temperature sensing","en","doctoral thesis","","978-94-6419-985-7","","","","","","","","","Dynamics of Micro and Nano Systems","","",""
"uuid:faf12bf9-8dc5-42ae-8025-b22b6d88e97e","http://resolver.tudelft.nl/uuid:faf12bf9-8dc5-42ae-8025-b22b6d88e97e","Quantum transport in hybrid semiconductor-superconductor nanostructures","Levajac, V. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Wimmer, M.T. (copromotor); Delft University of Technology (degree granting institution)","2023","Quantum technology is a developing field of science where devices possess novel and superior functionalities thanks to their quantum-mechanical behaviour at the nanometer scale. A typical example is a quantum computer, where information is stored in the quantum states of its quantum bits. By manipulating entangled and superposition states of these qubits, quantum computers can achieve exponential speed-ups in calculation and therefore solve currently unsolvable problems within polynomial computational times. This powerful advantage of quantum computers is particularly difficult to achieve in practice due to decoherence - the tendency of quantum objects to lose their quantum-mechanical properties when interacting with their environment. Qubit decoherence cannot be avoided entirely, because the control of a quantum computer inevitably causes couplings to the environment. To mitigate decoherence, fault-tolerant implementations of quantum computing need to be developed.
Topological quantum computing has been proposed to achieve fault-tolerance, since significant robustness to decoherence is inherent in the quantum-mechanical nature of topological qubits. The building blocks of a topological qubit are Majorana zero modes (MZMs) – zero-energy quasiparticles that possess non-Abelian anyonic exchange statistics and are localized at the boundaries of a topological superconductor. In sufficiently large topological superconductors, MZMs exhibit no overlap and can therefore host non-local fermions in pairs. By braiding non-overlapping MZMs, the information stored in the non-local fermions is manipulated while remaining insensitive to local noise. In this way one can perform computation that is topologically protected against local sources of decoherence.
In 2010, III-V semiconductor nanowires proximitized by s-wave superconductors were proposed as a suitable candidate platform for the realization of topological superconductors. A topological superconducting phase occurs in such a hybrid nanowire due to an interplay among the large spin-orbit interaction, s-wave superconductivity, controllable electron density, and the large Zeeman energy introduced by an external magnetic field. Consequently, the nanowire bulk undergoes a band inversion and two MZMs appear at the two nanowire ends. The first signatures of MZMs were reported in 2012, and since then much effort has been put into fully demonstrating them. Despite huge improvements in materials and measurement techniques, conclusive evidence of MZMs in hybrid nanowires is still missing. This is because disorder in hybrid nanowires can also cause the observed signatures of MZMs and make the topological scenario indistinguishable from trivial ones. Therefore, further improvements and more detailed studies are needed, and this thesis shows some recent examples of these...
The urgency to understand the behavior of terrestrial ice shelves under environmental forcing is driven by the ongoing climate crisis. Antarctica is experiencing a rapid loss of mass, primarily due to increasing ocean-induced melting at the base of its ice shelves in response to global warming. The release of glacier meltwater into the world’s oceans contributes to raising the global sea level. However, the rate and magnitude of sea-level rise are highly uncertain, and the potential ice mass loss from Antarctica could significantly accelerate sea-level rise throughout this century due to the instability of its ice shelves. Thus, accurately projecting Antarctica’s contribution to global sea level necessitates a better understanding of the processes behind the loss of its ice shelves.
In this dissertation, I examine the thinning of Antarctic ice shelves caused by enhanced melting at their base due to warming oceans. Intrusion of ocean heat beneath the ice shelves indeed plays a crucial role in projecting their future. Through idealized ocean modeling using the Massachusetts Institute of Technology general circulation model (MITgcm), I simulate ocean dynamics under the ice, investigating the impact of fractures and ice front retreat on the sub-shelf ocean circulation. Results indicate that fractures may act as barriers, inhibiting the intrusion of warm water towards the inland sections of the ice shelves, and thereby reducing basal melt. Furthermore, I examine the impact of the separation of iceberg A-68 from the Larsen C ice shelf in July 2017 on the sub-shelf ocean dynamics. This specific retreat event leads to the redistribution of heat under the ice, resulting in enhanced melting in specific sections of the ice shelf, suggesting future destabilisation of Larsen C. These findings highlight the importance of considering updated ice-shelf coastlines to accurately project ocean circulation and its implications for ice shelf stability.
Furthermore, this dissertation explores the dynamics of specific lineament features observed on the surface of Europa, which are identified as ice fractures. Although limited observations restrict our understanding of ice fracturing events on this moon, insights from studying terrestrial ice sheets provide valuable knowledge. By extending an existing terrestrial-based numerical model of fracture propagation on ice shelves, I show that some lineaments on the surface of Europa exhibit a behavior that is similar to ice fractures on Antarctic ice shelves. The model depicts the evolution of these lineament features as bursts of fracture propagation events interspersed with periods of inactivity, which is a typical behavior of fractures on terrestrial ice shelves.
Overall, this dissertation shows the potential for synergy between Earth and planetary science. By leveraging advances in our understanding of physical processes on Earth, terrestrial-based models and theories contribute to expanding our knowledge of physics on other celestial bodies. This interdisciplinary approach, supported and validated by remote sensing and in-situ missions, is fundamental in order to advance our understanding of ice fractures, their interaction with the surrounding environment and their dynamics throughout the Solar System. On Earth, a better understanding of the dynamics of Antarctic ice shelves is imperative to correctly project Antarctica’s contribution to global sea level.","physical oceanography; fracture mechanics; ice-ocean interactions; ice shelves; ice rifts; icy moons","en","doctoral thesis","","978-94-6419-986-4","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:f5f62ff1-28d8-4c5f-93f5-3f9d90d06d28","http://resolver.tudelft.nl/uuid:f5f62ff1-28d8-4c5f-93f5-3f9d90d06d28","Self-Supervised Neuromorphic Perception for Autonomous Flying Robots","Paredes-Vallés, Federico (TU Delft Control & Simulation)","de Croon, G.C.H.E. (promotor); de Wagter, C. (copromotor); Delft University of Technology (degree granting institution)","2023","In the ever-evolving landscape of robotics, the quest for advanced synthetic machines that seamlessly integrate with human lives and society becomes increasingly paramount. At the heart of this pursuit lies the intrinsic need for these machines to perceive, understand, and navigate their surroundings autonomously. Among the senses, vision emerges as a cornerstone of human perception, providing a wealth of information about the world we inhabit. Thus, it comes as no surprise that equipping robots with vision-based perception capabilities, or computer vision, has captivated researchers for decades. Recent breakthroughs, fueled by the advent of deep learning, have propelled computer vision to new heights. However, challenges persist in leveraging the power of deep learning, as its hunger for computational resources poses hurdles in the realm of robotics, particularly for small flying robots with their inherent limitations of payload and power consumption.
This dissertation embarks on a journey that begins at the intersection of two groundbreaking technologies with the potential to revolutionize computer vision and enhance its accessibility to small robots: event-based cameras and neuromorphic processors. These two technologies draw inspiration from the information processing mechanisms employed by biological brains. Event-based cameras output sparse events encoding pixel-level brightness changes at microsecond resolution, while neuromorphic processors leverage the power of spiking neural networks to realize a sparse and asynchronous processing pipeline.
Throughout this dissertation, comprehensive investigations have been conducted, presenting innovative solutions and advancements in the fields of computer vision and robotics. The thesis begins by presenting the winning solution of the 2019 AIRR autonomous drone racing competition, which showcases a monocular vision-based navigation approach specifically designed to address the limitations of conventional sensing and processing methods. Moreover, it explores the bridging of the gap between event-based and frame-based domains, enabling the application of conventional computer vision algorithms on event-camera data. Building upon these achievements, the thesis introduces a pioneering spiking architecture that enables the estimation of event-based optical flow, with emergent selectivity to local and global motion through unsupervised learning. Additionally, the thesis presents a framework that addresses the practicality and deployability of the models by training spiking neural networks to estimate low-latency, event-based optical flow with self-supervised learning. Finally, this dissertation culminates with a demonstration of the integration of neuromorphic computing in autonomous flight. By utilizing an event-based camera and neuromorphic processor in the control loop of a small flying robot for optical-flow-based navigation, this research showcases the practical implementation of neuromorphic systems in real-world scenarios.
Overall, our studies demonstrate the benefits of incorporating neuromorphic technology into the vision-based state estimation pipeline of autonomous flying robots, paving the way for the development of more power-efficient and faster neuromorphic vision systems.","Artificial neural networks; Autonomous drone racing; Deep learning; Event-based cameras; Flying robots; Neuromorphic computing; Optical flow; Self-supervised learning; Spiking neural networks; Unsupervised learning","en","doctoral thesis","","978-94-6366-755-5","","","","","","","","","Control & Simulation","","",""
"uuid:a3070931-7512-44fa-833e-4fdc9e33da4a","http://resolver.tudelft.nl/uuid:a3070931-7512-44fa-833e-4fdc9e33da4a","AI-Assisted Design & Optimization for Predictive Maintenance: A Case Study using Deep Learning and Search Metaheuristics for Structural Health Monitoring in Aviation","Ewald, Vincentius (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Groves, R.M. (promotor); Delft University of Technology (degree granting institution)","2023","One of the classical solutions for maintaining aircraft structural integrity is to rely on the analysis of a non-destructive testing (NDT) inspector using various inspection methods. However, it is relatively expensive in terms of time and cost to train human resources until certification is reached. Furthermore, in the majority of scheduled and unscheduled aircraft maintenance cases, most detected damage is far below the damage tolerance limit and is therefore considered a costly false positive, because such inspections generally require additional downtime. Structural Health Monitoring (SHM) tries to reduce wasted resources in the maintenance, repair, and overhaul (MRO) industry by signaling such false positives during the maintenance process, as the monitoring system becomes an integral part of the structure itself.
On the other hand, there has been an increase in the use of artificial intelligence (AI) methodologies such as computational heuristics and machine learning in many areas of human civilization, including voice and face recognition, language translation, and automated driving. There has been a lot of interest in implementing AI to assist SHM in maintaining airworthiness while driving costs down. Nevertheless, the maintenance of airworthiness (such as, but not limited to, EASA Part 145/M and FAA CFR Part 21) is a heavily regulated area and is not easily changed.
The current state of the art was captured in the literature review. This includes recent developments in guided-wave-based SHM and its parameter optimization, as well as recent trends and advances in artificial intelligence such as machine and deep learning. The findings from the state of the art were used as the basis to determine the research problem and to propose the solution.
The first part of the proposed solution consisted of a short review of the damage-growth assumption within the damage tolerance framework and the methodology used to generate and capture Lamb wave signals within a Finite Element (FE) environment. This methodology is a deterministic solution that can be partially used for solving continuous optimization in the deterministic sensor placement problem. It was further expanded with a semi-stochastic approach to address unpredictable damage locations, incorporating metaheuristic searches such as genetic algorithms and swarm intelligence. The first part of the solution was ultimately a compromise between the deterministic and semi-stochastic actuator-sensor topologies.
The second part of the proposed solution investigated whether deep learning can be used to treat the Lamb wave signal given the configuration obtained from the first part of the proposed solution. To do so, an assumption based on converging probability measures and the generalization bound in deep learning must be made. The approach is then to represent the entirety of the captured Lamb wave signal in the time-frequency domain, either as randomly sampled spectrograms or as layers of joined spectrograms. After training, the hypothesis was validated with A/B testing.
Then, the research was expanded to understand the scalability of deep learning for SHM given data size, model parameters, and restrictions on physical memory. In this sense, the signal representations were trained sequentially, as exemplified by a hybrid convolutional-recurrent network. The investigation focused on the stability behavior of convolutional-recurrent modelling for variable spectrogram lengths and on the experimental validation of the model for classification of Lamb wave spectrogram signals.
The thesis examines active flow control techniques derived from the spanwise wall oscillation concept. The latter involves introducing a time-dependent spanwise motion to the wall over which a turbulent boundary layer is present. The current work relies on the experimental investigation using particle image velocimetry to quantify the effect of the active control techniques.....","Quantitative flow visualization; Particle image velocimetry; Skin-friction reduction; wall bounded turbulence","en","doctoral thesis","","978-94-6366-773-9","","","","","","","","","Aerodynamics","","",""
"uuid:02e43ae2-b025-45e6-8436-5b7f041f2507","http://resolver.tudelft.nl/uuid:02e43ae2-b025-45e6-8436-5b7f041f2507","High-Performance Phase-Locked Loops for Quantum Computing Applications","Gong, J. (TU Delft QCD/Babaie Lab)","Babaie, M. (promotor); Sebastiano, F. (promotor); Delft University of Technology (degree granting institution)","2023","Quantum computers have gained widespread interest from both industry and academia in the last decade as they are very promising for solving problems intractable by classical computers. However, there is a limited number of qubits in current quantum processors, which impedes the practical applications of a quantum computer. To increase the number of qubits and scale up a quantum computer, a classical electronic interface is required to control and read out the quantum processor operating at cryogenic temperatures....","phase-locked loops; jitter; phase detectors; phase noise; cryogenic electronics; low-power electronics; quantum computing; voltage-controlled oscillators; calibration; digital-analog conversion; oscillators","en","doctoral thesis","","978-90-8593-583-4","","","","","","","","","QCD/Babaie Lab","","",""
"uuid:f9c4d874-92cc-4a45-bd2f-a37f98f8fb78","http://resolver.tudelft.nl/uuid:f9c4d874-92cc-4a45-bd2f-a37f98f8fb78","Photonic topological edge states: A nanoscale investigation","Arora, S. (TU Delft QN/Kuipers Lab)","Kuipers, L. (promotor); Caviglia, A. (promotor); Delft University of Technology (degree granting institution)","2023","The aim of this thesis is to investigate the impact of symmetry on light and how it alters its characteristics. Our research centers around the examination of complex photonic crystals rooted in the concept of photonic topological insulators, which are analogs of topological insulators initially introduced in condensed matter physics. Unlike typical insulating materials, these possess a unique ability to conduct along their surface or edges. Leveraging this fundamental property, photonic topological insulators have gained attention for designing transport circuits resistant to back-reflection and scattering mechanisms....","topological photonics; optics; near-field microscopy; silicon-on-insulator","en","doctoral thesis","","978-90-8593-580-3","","","","","","","","","QN/Kuipers Lab","","",""
"uuid:cbe936ed-32a4-4084-ae9d-a8f6a2b36480","http://resolver.tudelft.nl/uuid:cbe936ed-32a4-4084-ae9d-a8f6a2b36480","Movement of Thumb-Base Joints: In-Vivo anatomy and biomechanics to support Implant Design","Yuan, T. (TU Delft Emerging Materials)","Goossens, R.H.M. (promotor); Song, Y. (promotor); Kraan, G.A. (copromotor); Delft University of Technology (degree granting institution)","2023","The thumb is indispensable for an independent daily life. Implant replacement, which aims to restore joint mobility and functionality, is one of the surgical treatments for patients with osteoarthritis at the thumb base. However, current designs and the biomechanical understanding of the thumb-base joint are inadequate. Small bone size, deep location, and a high degree of freedom challenge the investigation of this exquisite joint. Taking advantage of 4D CT scanning, this dissertation examined bone shape, joint contact, and the active motion boundary of the thumb-base joint among participants without signs of joint degeneration. In detail, the analysis compared the joint movement between females and males for the etiology of thumb-base osteoarthritis. The deeper insights gained into the structure and mechanics of the asymptomatic thumb-base joints provide the baseline understanding of the thumb-base joints, which can help researchers and healthcare professionals improve and develop more effective treatments for patients with thumb-base osteoarthritis. Furthermore, the exploration of connecting information between the skeletal and skin systems opens up possibilities for future research perspectives.","Trapeziometacarpal Joint (TMCJ, CMC-1); 4D CT scanning; Joint Biomechanics; Implant Design; Osteoarthritis (OA)","en","doctoral thesis","","978-94-6366-766-1","","","","","","","","","Emerging Materials","","",""
"uuid:fb0cc4b7-a67b-474b-9570-96eb054a39ec","http://resolver.tudelft.nl/uuid:fb0cc4b7-a67b-474b-9570-96eb054a39ec","WebDSL: Linguistic Abstractions for Web Programming","Groenewegen, D.M. (TU Delft Programming Languages)","van Deursen, A. (promotor); Erdweg, S.T. (promotor); Delft University of Technology (degree granting institution)","2023","Information systems store and organize data, and manage business processes concerned with that data. Information systems aim to support operations, management and decision-making in organizations. Web applications are ideal for implementing information systems. Although existing web frameworks provide abstractions for creating web applications, there are three major issues with current web frameworks. Insufficient or leaky abstraction: web programming concerns are not sufficiently covered or abstractions contain accidental complexity. Lack of static verification: application faults are not removed during development. Security flaws: web application security issues are not sufficiently addressed in the framework, web programmers are exposed to many possible security faults.
How can the benefits of web frameworks be provided for web programming while avoiding the major issues of abstraction, static verification, and security? We propose a domain-specific language (DSL) solution. The challenge is to design a language that provides abstractions for all kinds of web programming tasks with the web framework issues in mind. We designed multiple sublanguages to address web programming concerns, and integrated them to form the WebDSL web programming language. WebDSL incorporates better abstraction for web programming concepts, has static checks on the application code with accurate error reporting, and automatically addresses security concerns in the code generation and runtime.
The primary concerns in web programming are user interfaces and data handling. Which features do we need from a user interface language? These features include both the rendering of data persisted in the database, as well as providing input-handling components to enter new data and update existing data. Additionally, data invariants need to be enforced by the system. How can a DSL provide these features in an integrated way? These are language-design challenges that are investigated in this dissertation. The user interface sublanguage of WebDSL contains several unique improvements compared to existing approaches: form submits that are safe from hidden data tampering; prevention of input identifier mismatch in action handlers; safe composition of input templates; automatic enforcement of Cross-Site Request Forgery protection; expressive data validation; and partial page updates without explicit JavaScript or DOM manipulation.
Access control is essential for the security and integrity of interactive web applications. Existing solutions for access control often consist of libraries or generic implementations of fixed policies. These rarely have clear interfacing capabilities, and they require manual extension and integration with the application code, which is error-prone. WebDSL provides a declarative access control sublanguage, which is entirely integrated with other language components and automatically weaves checks into the application code. Errors related to inconsistent application of access control checks are avoided. The access control language shows that various policies can be expressed with simple constraints, allowing concise and transparent mechanisms to be constructed.
Our work on abstractions for web programming resulted in several scientific and software contributions: The design and implementation of a linguistically integrated domain-specific language for web programming that combines abstractions for web programming concerns covering transparent persistence, user interfaces, data validation, access control, and internal site search. Sublanguages for the various concerns are integrated through static verification to prevent inconsistencies, with immediate feedback in the integrated development environment (IDE) and error messages in terms of domain concepts. WebDSL is the largest programming language created with the Stratego program transformation language and the Spoofax language workbench, in which the DSL compiler and IDE have been iteratively developed. This iterative development is a recurring pattern of discovering new abstractions, domain-specific language abstraction, and reimplementation using new core abstractions tailored to the language. To validate WebDSL, we have created several real-world applications in the domain of research and education for external clients.
In our research we aim to create solutions for problems in web engineering and language engineering by developing concepts, methods, techniques, and tools. We aim to create more than just prototypes by continuing maintenance and development beyond the proof of concept. For over 10 years, we have developed WebDSL, and created and operated practical applications for external clients. For example, EvaTool is a course evaluation application that supports processes for analyzing student feedback by lecturers and other staff. WebLab is an online learning management system with a focus on programming education (students complete programming assignments in the browser), with support for lab work and digital exams, used in dozens of courses at TU Delft. Conf Researchr is a domain-specific content management system for creating and hosting integrated websites for conferences with multiple co-located events, used by all ACM SIGPLAN and SIGSOFT conferences. MyStudyPlanning is an application for composition of individual study plans by students and verification of those plans by the exam board, used by multiple faculties at TU Delft.","Programming Languages; Programming Language Design; Domain-Specific Languages; Web information systems; compilers; Information Systems; Access control; Data validation; User Interfaces; Persistence; Object-Relational Mapping; practical impact","en","doctoral thesis","","978-94-6419-976-5","","","","Prof.dr. E. Visser (Delft University of Technology) was the original promotor and supervisor of this research until his untimely passing on April 5th, 2022.","","","","","Programming Languages","","",""
"uuid:b4091579-66ea-4401-9277-dffe5a83ab90","http://resolver.tudelft.nl/uuid:b4091579-66ea-4401-9277-dffe5a83ab90","Upstream process development for cultured red blood cell production","Gallego Murillo, Joan Sebastián (TU Delft BT/Bioprocess Engineering)","van der Wielen, L.A.M. (promotor); Wahl, S.A. (promotor); von Lindern, Marieke (promotor); Delft University of Technology (degree granting institution)","2023","Production of cultured red blood cells (cRBCs) holds the promise of being a potentially unlimited source of cells that could cover the increasing demand for RBCs for transfusion purposes, while allowing more control over the quality and safety of the cells compared to the current donor-dependent system. cRBCs could also be used for novel therapies in which cells are used as carriers of therapeutic molecules. Scaling up cRBC manufacture is essential to produce the large number of cells needed for such applications. However, scaling up the current static culture systems for the production of erythroblasts (RBC precursor cells) would be prohibitively labor-intensive, requiring large volumes of medium and a high footprint. The work presented in this thesis aims to develop solutions to some of the key challenges in the scaling up of cRBC manufacture.
Stirred tank bioreactors (STRs) are the standard for the large-scale production of biopharma therapeutics, including monoclonal antibodies and vaccines. Agitation in this type of reactors can reduce the concentration gradients of essential nutrients compared to static culture systems such as culture dishes. STRs also offer active control of critical operating parameters in the culture, such as dissolved oxygen concentration, pH and temperature. We therefore developed a culture protocol for the proliferation and differentiation of erythroblasts in STRs (Chapter 2). To define the operating conditions that sustain erythroblast proliferation in STRs, the effect of agitation, aeration strategy, and dissolved oxygen concentration was evaluated using 0.5 L STRs. Using this knowledge, the cultivation process could then be scaled up to 3 L bioreactors.
Erythroblasts lose their replication capacity when transitioning from proliferation to differentiation culture conditions. Thus, efficient proliferation of erythroblasts is essential to produce the large number of cells required for cRBC manufacture. Growing erythroblasts under proliferative conditions is typically performed following a repeated-batch cultivation strategy, in which the culture is diluted every 24 hours with fresh medium to a fixed lower cell concentration. To reduce culture volumes, it is desirable to use higher cell concentrations. However, at increasing cell densities we observe a decrease in growth (Chapter 3). The observed growth limitations of erythroblast cultures at high cell densities appeared to be caused by depletion of low molecular weight nutrients (molecular mass <3 kDa) in the spent medium. We quantified consumption rates of amino acids, major contributors to biomass synthesis in proliferating mammalian cell cultures. Although the concentration of some amino acids decreases considerably over time, supplementation with additional amino acids did not improve growth. Following an untargeted metabolomics approach, we identified multiple pathways that indicate an excess of oxidative stress in erythroblast proliferation cultures.
Perfusion proved to be a successful alternative cultivation strategy to overcome growth limitations due to depletion of nutrient components (Chapter 3). Increasing the maximum cell concentration in erythroblast cultures leads to an increase in the volumetric productivity (number of cells produced per reactor volume per culture time), which decreases the reactor volume needed to produce the same amount of cRBCs. However, large volumes of medium would still be required to sustain those cultures. Currently, the cost of culture medium for erythroid cultures makes cRBC manufacture economically unfeasible. Growth factors and proteins added to the medium are major contributors to the cost of the medium. Holotransferrin, an iron-carrying protein, is the main cost driver in erythroblast differentiation medium. We show that holotransferrin in erythroblast cultures can be replaced by a GMP-compatible iron chelator (deferiprone; Def), bound to ferric ion (Def3⋅Fe3+; Chapter 4). Addition of Def3⋅Fe3+ to the culture medium resulted in similar final cRBC yields during proliferation and differentiation of erythroblast cultures compared to optimal holotransferrin concentrations. During differentiation, Def3⋅Fe3+ fully supported enucleation and hemoglobinization. We did not observe toxic effects of Def3⋅Fe3+.
Finally, the main conclusions of this thesis are discussed, providing also an overview of the next developments that are required to make the production of cRBCs at large scale technically and economically feasible (Chapter 5). A multidisciplinary approach is needed to further reduce media cost, optimize medium composition to improve cell yields, and to improve the bioreactor culture system developed in this work.
This thesis focuses on performance flight-testing methods for conventionally-configured helicopters, i.e., those that employ a single main rotor to generate lift and thrust, and a single tail rotor to counteract the torque effect of the main rotor. More specifically, the scope of this research was limited to gas-turbine available-power testing, power required for out-of-ground-effect (OGE) hover, and power required for level flight (also known as cruise flight). The research was limited to the execution of up to ten flight-test sorties on two types of helicopters: the Bell Jet-Ranger and the MBB BO-105, both normally used for training at the National Test Pilot School (NTPS) in Mojave, California.
The goal of this thesis is to develop new and improved flight-test methods to rectify existing problems associated with the conventional methods. The conventional method for the maximum available power of a gas turbine relies on three independent, single-variable polynomials that often yield poor prediction accuracy and sometimes even defy basic engineering concepts. The conventional method for OGE hover performance is overly simplified and neglects important non-linear blade effects, resulting in inaccurate empirical models for hover performance representation. The conventional flight-test method for level-flight performance incorporates several drawbacks which not only make the execution of flight-test sorties inefficient and time-consuming, but also compromise the level of accuracy achieved. This conventional level-flight method fails to specifically address non-linear effects such as blade-tip compressibility and drag divergence, which often results in inaccurate predictions, especially at high-altitude and low-air-temperature conditions.
The research intended to develop new flight-test methods for the available power of a gas-turbine engine and for the power required for hover and level flight. Both new methods are based on a multivariable polynomial approach. The research was initiated with the development of a new method for the maximum available power of a gas-turbine engine. A novel method, referred to as ‘Multivariable Polynomial Optimization under Constraints’ (MPOC), was developed. This method seeks a third-order multivariable polynomial to describe the engine output power as a function of the other three engine variables (compressor speed, temperature, and fuel flow). The maximum available engine power is realized by solving an optimization problem of maximization under constraints. For this optimization, the Karush-Kuhn-Tucker (KKT) method was used successfully. For the exemplary BO-105, the standard deviation of the output-power estimation error was reduced from 13 hp (conventional method) to only 4.3 hp using the proposed method. Expanding the flight-test database to include seven different engines reveals that the multivariable polynomial approach of the proposed method performed much better with all seven engines than the conventional single-variable approach. The maximum average prediction error was only 0.2%, as compared to a maximum average prediction error of 1.15% yielded by the conventional method.
The research effort on OGE hover performance concluded successfully with the development of the novel “Corrected Variables Screening using Dimensionality Reduction” (CVSDR) method. This method uses fundamental dimensional analysis to generate a list of candidate corrected variables (CVs) representing the hover-performance problem, and then screens for the most essential ones by means of dimensionality reduction, implemented via singular value decomposition (SVD). This phase of the research comprised four sorties on the Bell Jet Ranger helicopter and produced five conclusions. The most significant was that power predictions of the CVSDR method were 1.9 times more accurate than those of the conventional method: at the 95% confidence level, the CVSDR method deviated by an average of only 0.9 hp (0.3% of the maximum continuous power of the example helicopter) from the actual power required to hover, whereas the conventional method deviated by an average of 1.7 hp.
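A minimal sketch of the screening idea behind CVSDR, under stated assumptions: the candidate corrected variables below are invented stand-ins, and the 95%-variance cut-off and participation score are arbitrary illustrative choices; the thesis's actual candidate list and screening criterion may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 100
# hypothetical raw hover-test measurements: power, rotor speed, density ratio
P, omega, sigma = rng.uniform(0.8, 1.2, size=(3, m))

# candidate corrected variables (one column each); definitions are
# illustrative stand-ins generated by dimensional analysis
C = np.column_stack([P / sigma, P / (sigma * omega**3),
                     omega / np.sqrt(sigma), sigma * omega])
C = (C - C.mean(axis=0)) / C.std(axis=0)  # standardize before the SVD

# dimensionality reduction: keep the modes explaining 95% of the variance
U, s, Vt = np.linalg.svd(C, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.95)) + 1

# screen: rank candidates by their participation in the leading modes
scores = (Vt[:k] ** 2).sum(axis=0)
ranking = np.argsort(scores)[::-1]  # most essential corrected variables first
```

The squared entries of the leading right singular vectors indicate how strongly each candidate variable participates in the dominant directions of the data, which is one common way to turn an SVD into a variable-screening tool.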
The final phase of the research concentrated on developing a new flight-test method for the level-flight regime and spanned five distinct sorties with the BO-105 helicopter. Concepts used for hover-performance testing were expanded and adapted for level flight; the CVSDR method for level-flight performance can be regarded, abstractly, as an extension of the CVSDR method for OGE hover into a higher-dimensional space. This phase addressed five research questions and yielded ten conclusions. The top three were that (1) the power-prediction accuracy achieved with the CVSDR method for level flight was nearly 21% better, on average and at the 95% confidence level, than that of the conventional method; (2) the CVSDR method made planning and execution of flight-test sorties more efficient and time-saving, with an estimated reduction in flight time for data gathering of at least 60%; and (3) the CVSDR method is not restricted by the high-speed approximation, hence is also appropriate for the low-airspeed regime, and can potentially bridge the empirical-modelling gap between the hover and level-flight regimes.
The novel flight-test methods developed within this research (MPOC for the available power of a gas-turbine engine and CVSDR for OGE hover and level-flight performance) are recommended for use by the helicopter flight-testing community, as they were shown to increase accuracy and improve execution efficiency.
This thesis concludes with six recommendations for future work. These include expanding the CVSDR method into further areas of performance testing, such as vertical and forward-flight climb, partial-power and unpowered descent. Another recommendation concerns the applicability and efficiency of the CVSDR method for vertical-lift aircraft that combine both rotary-wing (RW) and fixed-wing (FW) characteristics. It is also recommended that continued research investigate the potential and feasibility of employing the CVSDR method for empirical modelling used by Health and Usage Monitoring Systems (HUMS) installed in helicopters.
Despite extensive theoretical knowledge of nonclassical gas dynamics, which predicts phenomena such as rarefaction shock waves (RSWs), there is still a lack of compelling experimental evidence supporting their existence. The motivation for the research documented in this dissertation is two-fold: firstly, it is crucial to conduct experiments that can provide empirical validation of nonclassical gas dynamics, with a specific focus on observing RSWs, which have proven elusive in previous attempts; secondly, accurate measurements of fluid properties in the dense-vapour thermodynamic regime have the potential to improve the thermodynamic models of Bethe-Zel'dovich-Thompson (BZT) fluids, or of fluids made of complex organic molecules in general. This in turn can contribute to a more accurate characterisation of flows in practical applications involving these fluids, such as turbine flows in Organic Rankine Cycle (ORC) systems or compressors in high-temperature heat pumps.
This research work aimed to provide conclusive experimental evidence for the existence of nonclassical expansion shock waves in flows of a candidate BZT fluid, siloxane D6. For this purpose, two novel test facilities, the Asymmetric Shock Tube for Experiments on Rarefaction Waves (ASTER) and the Organic Vapour Acoustic Resonator (OVAR), were conceived, designed, built and commissioned at TU Delft. Relevant theoretical studies were performed to complement the experimental observation of nonclassical effects. Novel measurements of fluid properties in the nonclassical gasdynamic region of the candidate BZT fluid were executed, the outcomes of which are useful for the improvement and optimisation of thermodynamic models for this fluid.
Magnetic coupler design
The key performance indicators of an IPT system include power-transfer capability, power density, power efficiency, and misalignment tolerance. Because these indicators conflict with one another, the design of IPT charging pads must be formulated as a multi-objective optimization (MOO) problem. Finite-element (FE) models can compute the magnetic field properties of a coupler; however, calculating the aligned and misaligned power losses at rated power requires not only the magnetic field but also the compensation strategy, which determines the load-matching method used to calculate the optimal load condition and the rated winding currents. The compensation strategy should therefore also be considered in magnetic coupler design. Once the magnetic field distribution is known, the power losses in the AC link can be calculated with existing analytical methods.
This thesis develops a MOO method that maps the design search space of magnetic couplers onto a performance space, in which Pareto fronts can be obtained for different conflicting optimization objectives. The study shows that the AC-link power efficiency can be calculated analytically when the magnetic field is accurately computed at the rated condition. More importantly, the DC-DC power efficiency of the final prototype reaches 97.2%, demonstrating that MOO-based design is essential to making full use of IPT technology.
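Once the performance space has been sampled, extracting a Pareto front reduces to finding the non-dominated designs. The sketch below is a generic illustration, not the thesis's optimizer; the two objectives (efficiency and misalignment tolerance) and the sample values are hypothetical.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated designs when every objective is maximized."""
    keep = []
    for i, p in enumerate(points):
        # p is dominated if some q is at least as good everywhere and
        # strictly better somewhere (q == p never dominates itself)
        dominated = any(np.all(q >= p) and np.any(q > p) for q in points)
        if not dominated:
            keep.append(i)
    return keep

# columns: AC-link efficiency, misalignment tolerance (both to maximize)
designs = np.array([[0.95, 0.20],
                    [0.97, 0.10],
                    [0.90, 0.40],
                    [0.93, 0.15]])
front = pareto_front(designs)  # the trade-off curve among sampled couplers
```

Here the fourth design is dominated by the first (worse in both objectives), so only the first three lie on the front. This brute-force check is O(n²); practical MOO toolchains use faster non-dominated sorting, but the definition is the same.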
Prediction and control of transient behaviors
IPT systems require capacitive and inductive components on both sides to form resonant circuits that improve the power-transfer capability and power efficiency, but these compensation components also raise the order of the resonant stage. As a result, the analytical dynamic models of IPT systems are complex and mostly intractable in the time domain.
This thesis proposes a new reduced-order dynamic modeling method that describes the transient behavior of a resonant stage from an energy point of view. The order of the resulting dynamic model is one-fourth that of conventional models for series-series (SS) compensated IPT systems. A model predictive control (MPC) controller is then designed based on the proposed dynamic model. Simplifying the dynamic model is shown to help explain how circuit parameters influence transient behaviors and to facilitate the application of advanced control strategies in IPT systems.
Reduction of power fluctuation
The most obvious difference between static and dynamic IPT is the change in magnetic coupling. In dynamic IPT (DIPT) applications, the magnetic coupling fluctuates from its maximum down to a barely usable level as EVs move, so one of the main challenges of DIPT is stabilizing the pick-up power, especially in systems using segmented Tx coils, where the magnetic coupling changes more frequently. The conventional remedies are either to overlap the Tx coils or to add extra Rx-side sets, both of which are expensive to build.
This thesis presents the design of a segmented DIPT system using a multiphase Tx side. The Rx coil consists of two sub-windings connected in series with a relatively large spatial offset in the direction of EV motion. One advantage of the proposed design is that the Tx coils are deployed loosely, reducing the building cost; the other is that the pick-up power is delivered seamlessly with only a small ripple, measured experimentally at 24.9%.
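The smoothing effect of the two offset sub-windings can be illustrated with made-up sinusoidal coupling profiles; real profiles are not sinusoidal, which is why a practical system retains a residual ripple such as the 24.9% reported above.

```python
import numpy as np

x = np.linspace(0, 4, 2000)                  # EV position, in Tx-segment pitches
single = 1.0 + 0.5 * np.cos(2 * np.pi * x)   # coupling profile of one sub-winding
shifted = 1.0 + 0.5 * np.cos(2 * np.pi * (x - 0.5))  # sub-winding offset half a pitch

def ripple(p):
    # peak-to-peak fluctuation relative to the mean pick-up level
    return (p.max() - p.min()) / p.mean()

# series connection sums the induced contributions; the half-pitch offset
# puts the two cosine ripples in antiphase, so they cancel (perfectly for
# these idealized profiles, only partially in a real coupler)
combined = single + shifted
```

For these idealized profiles the single-winding ripple is about 100% of the mean while the combined profile is flat, which conveys the design intuition even though the numbers are not the thesis's.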
Detection of EVs and FOs
To minimize Tx-side power losses and magnetic field radiation, the detection of EVs and foreign objects (FOs) should be implemented in DIPT systems. Considering the integration of the detection equipment into the charging pads, PCB coils are the most suitable candidates for sensing the magnetic field. However, EV detection and FO detection are mostly discussed separately in the literature; there is a need to achieve both detection functions with one set of PCB coils.
This thesis presents the design of detection equipment consisting of PCB coils installed on the charging pads and a detection resonant circuit (DRC) connected to the Tx-side PCB coils. Both EVs and FOs can be detected by measuring the variation of the magnetic field caused by their intrusion, and the PCB coils show good performance in measuring this change, with the DRC amplifying the detection signals.
This thesis focuses on the second approach: exploring the implementation of algorithms on modern quantum processors. Throughout these implementations, we study the errors that cause the algorithms to deviate from their ideal results. We attempt to understand, quantify and control these errors, in the hope of providing useful insights into how to design algorithms for modern hardware.
This thesis starts by introducing the topic of superconducting quantum processors and modern algorithms in the first two chapters. We then move on to the three experiments, one chapter each, detailing our findings.
The first experiment covers a digital-analog implementation of a quantum simulation of light-matter interaction. The implementation makes use of both digital (gate) and analog (evolution) blocks. The accuracy of the Trotterization technique is studied in detail, as is the capability to probe the photon population in the resonator. We implement up to 90 Trotter steps and reproduce the behaviour of the ultra-strong coupling regime.
The second experiment presents an error-mitigation technique applied to a problem of great interest to the field: molecular simulation. This application is fully digital, within the very active topic of variational algorithms for ground-state preparation. The mitigation technique, developed by our own team (see the referenced theoretical works), reduces the algorithm error by over an order of magnitude. To demonstrate this level of control, we quantify the error through accurate simulations of the quantum process and independent quantification of the parameters involved.
The third experiment presents another variational algorithm, this time producing thermal states rather than ground states. Again, we pursue a detailed study of the many error mechanisms involved, in order to quantify and match the results obtained. We go beyond incoherent errors and add a coherent error mechanism common to our hardware architecture: the residual ZZ coupling.
Finally, in the closing chapters we reflect on how to continue towards implementations that make the most of modern, noisy hardware.","quantum; quantum computing; quantum simulations; experimental physics; condensed matter physics; superconducting devices; superconducting qubits","en","doctoral thesis","","978-94-6419-969-7","","","","","","","","","QCD/DiCarlo Lab","","",""
"uuid:c3f5e791-2eb5-426a-8067-a1e3b488bd0a","http://resolver.tudelft.nl/uuid:c3f5e791-2eb5-426a-8067-a1e3b488bd0a","Theory of Chirality Induced Spin Selectivity in Two Terminal Transport","Huisman, K.H. (TU Delft QN/Thijssen Group)","Thijssen, J.M. (promotor); van der Zant, H.S.J. (copromotor); Delft University of Technology (degree granting institution)","2023","In this thesis we perform a theoretical study on the Chirality Induced Spin Selectivity effect in the context of two-terminal measurements for realistic parameters. In twoterminal measurements on chiral molecules one of the leads is magnetized and the current is measured for opposite magnetizations. In experiment it is found that the currents for opposite magnetizations are different for finite bias voltage. We call this finite difference a magnetocurrent. The magnetocurrent is odd in bias voltage and the size of the effect of the order of a few percent. Our aim is to explain this effect through modeling junctions with interactions and the spin-orbit where we always use choose realistic parameters.","Magneto-transport; Onsager reciprocity; Büttiker reciprocity; Voltage probes; Coulomb interactions; Mean-field theory; (non-)collinear Hubbard one approximation; Vibrational modes; Spin-orbit coupling; Stray fields; Chirality Induced Spin Selectivity","en","doctoral thesis","","978-90-8593-579-7","","","","","","","","","QN/Thijssen Group","","",""
"uuid:0be72865-8064-4120-8103-c57b1321a3f0","http://resolver.tudelft.nl/uuid:0be72865-8064-4120-8103-c57b1321a3f0","Embedding design practices in local government: A case study analysis","Kim, A. (TU Delft Methodologie en Organisatie van Design)","Lloyd, P.A. (promotor); Mulder, I. (promotor); van der Bijl-Brouwer, M. (copromotor); Delft University of Technology (degree granting institution)","2023","Design approaches are increasingly being employed by governments worldwide to address public service and policy issues. This book explores the evolution of these design practices within the context of local government, shedding light on the value they can create and how they become stabilized in six local government organizations.","Design for policy; Local government; Public sector innovation; Design management","en","doctoral thesis","","978-94-6366-758-6","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:62a12e6d-d7d2-4244-8df9-89413ec133da","http://resolver.tudelft.nl/uuid:62a12e6d-d7d2-4244-8df9-89413ec133da","Development of Nickel-Titanium Shape Memory Alloys via Laser Power Bed Fusion","Zhu, Jia-Ning (TU Delft Team Vera Popovich)","Popovich, V. (promotor); Hermans, M.J.M. (promotor); Delft University of Technology (degree granting institution)","2023","Shape memory alloys (SMAs), such as nickel-titanium (NiTi) alloys or Nitinol, possess remarkable properties, including superelasticity and shape memory effects, which are attributed to the reversible martensitic transformation. However, traditional manufacturing of NiTi SMAs is challenging due to its high ductility and reactivity, which limits NiTi applications to simple geometries. In this context, laser powder bed fusion (L-PBF), an additive manufacturing technique, emerges as a promising solution capable of overcoming these limitations and introducing the concept of four-dimensional (4D) printing. This approach enables the creation of morphing shapes that can be activated by external stimuli, such as heat or stress, particularly beneficial for SMAs.","Nickel-Titanium; shape memory alloys; additive manufacturing; laser powder bed fusion; superelasticity","en","doctoral thesis","","978-94-6469-631-8","","","","","","","","","Team Vera Popovich","","",""
"uuid:c74cc90a-e55d-489c-b3bb-4c8c4a6dd7e6","http://resolver.tudelft.nl/uuid:c74cc90a-e55d-489c-b3bb-4c8c4a6dd7e6","Design Patterns for Detecting and Mitigating Bias in Edge AI","Hutiri, Wiebke (TU Delft Information and Communication Technology)","Janssen, M.F.W.H.A. (promotor); Ding, Aaron Yi (copromotor); Delft University of Technology (degree granting institution)","2023","From smart phones to speakers and watches, Edge Al is deployed on billions of devices to process large volumes of personal data efficiently, privately and in real-time. While Edge Al applications are promising, many recent incidents of bias in Al systems caution that Edge Al too, may systematically discriminate against groups of people based on their gender, race, age, accent, nationality and other personal attributes. More so, as the physical restrictions of Edge Al, together with the complexity of its heterogeneous and decentralised operating environment pose trade-offs when deploying Al to the edge.
This thesis is motivated by the societal demand for trustworthy AI, by the propensity of AI systems to be biased, and consequently by the need to detect and mitigate bias in diverse Edge AI applications. To address this need, this thesis develops design patterns for detecting and mitigating bias in the development of Edge AI systems. The design patterns present a generalisable approach for capturing established practices to detect and mitigate bias in machine learning. They make this knowledge readily accessible to researchers and practitioners who develop Edge AI but have limited prior experience with detecting and mitigating bias.","Edge AI; Edge Intelligence; Trustworthy AI; Responsible AI Design; Bias; Fairness; Design Patterns; Speech Technology; Speaker Verification; Keyword Spotting","en","doctoral thesis","","978-94-6419-932-1","","","","","","","","","Information and Communication Technology","","",""
"uuid:b3d264ce-e7dc-4e67-b0e1-94f3cc7831ca","http://resolver.tudelft.nl/uuid:b3d264ce-e7dc-4e67-b0e1-94f3cc7831ca","Geomechanical Study of Underground Hydrogen Storage","Ramesh Kumar, K. (TU Delft Reservoir Engineering)","Hajibeygi, H. (promotor); Jansen, J.D. (promotor); Delft University of Technology (degree granting institution)","2023","With the rise of renewable energy and the drive to achieve net-zero emissions, energy storage has become a crucial component of the energy sector to address the challenges of intermittency. The vast subsurface environment offers significant storage potential, capable of accommodating terawatt-hour (TWh) capacities. One approach to leverage this storage capacity involves converting renewable energy into hydrogen and storing it underground within salt caverns and depleted porous reservoirs. This stored hydrogen can then be utilized as needed. However, this cyclic injection and production of hydrogen will exert repeated stress on the subsurface, resulting in periodic changes in pressure.
One critical aspect that requires investigation for the safe storage of hydrogen (H2) is the field of geomechanics, which becomes essential in both salt caverns and depleted reservoirs. To gain a better understanding of this, a comprehensive review of the geomechanics involved in underground hydrogen storage was conducted to examine existing knowledge and identify research gaps. To delve deeper into the influence of geomechanics, particularly regarding the inelastic creep deformation of rocks in salt caverns and depleted porous reservoirs, numerical simulations were employed. Given the potential costliness of fine-scale simulations, multiscale simulations were carried out using algebraic multiscale methods. Constitutive models were utilized to analyze deformation patterns in and around the reservoir, assessing their impact on subsidence or uplift.
In order to further comprehend the effects of cyclic loading on rocks, constitutive models were developed based on extensive experimental data obtained from sandstone rocks subjected to long-term stress conditions. These models aided in uncovering the underlying physics of rock behavior when exposed to different stress regimes during prolonged cyclic loading. Subsequently, these models were integrated into finite element method (FEM) simulations to observe their impact on field-scale scenarios, with a synthetic Bergermeer case study serving as an example.
To enhance the computational efficiency of multiscale methods, unsupervised machine learning techniques were applied to optimize the formation of computational grids, utilizing graph theory techniques such as Louvain and random walk algorithms. These optimized grids were then compared with the grids generated from METIS to evaluate the computational performance of pressure solvers in a commercial scale simulator.","","en","doctoral thesis","","978-94-6366-759-3","","","","","","2023-11-01","","","Reservoir Engineering","","",""
"uuid:4d4a1cfa-836f-415f-a255-84d49a4797a0","http://resolver.tudelft.nl/uuid:4d4a1cfa-836f-415f-a255-84d49a4797a0","Shearography non-destructive testing and defect characterisation of thick composite structures","Tao, N. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Groves, R.M. (promotor); Anisimov, A. (copromotor); Delft University of Technology (degree granting institution)","2023","THICK composite materials, e.g., thickness of more than 50 mm, are increasingly being used across diverse industry sectors owing to their significant advantages of weightsavings, superiormaterial properties and load-carrying capability. These materials tend to be adopted in safety-critical applications such as large primary or secondary load-bearing structures, where mechanical failures would result in serious consequences. However, various defects and damage may occur in thick composites that endanger structural integrity and safety severely. Hence to improve the maintenance, safety and reliability of these structures, it is crucial to develop inspection methods capable of defect detection and characterisation for composite structures of significant thickness. To date, the nondestructive testing and evaluation (NDT&E) of thick composite structures still remain an urgent challenge due to their material and structural complexity, significant thicknesses, and the presence of various manufacturing and in-service defects....","Digital shearography; Speckle interferometry; Strain characterisation; Thick composite inspection; Composite laminates; Non-destructive testing and evaluation; Defect detection and characterisation; FEM-assisted inspection; Spatially and temporally modulated heating","en","doctoral thesis","","978-94-6384-499-4","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:a3d42b01-b207-4524-94c6-1c8cf642f687","http://resolver.tudelft.nl/uuid:a3d42b01-b207-4524-94c6-1c8cf642f687","On-trip Behavior of Truck Drivers on Freeways: New mathematical models and control methods","Sharma, Salil (TU Delft Transport and Planning)","van Lint, J.W.C. (promotor); Tavasszy, Lorant (promotor); Snelder, M. (promotor); Delft University of Technology (degree granting institution)","2023","Congestion, a frequent problem on freeways, is often considered a major challenge for the operations of road freight transport. Trucks, the main choice for road freight, not only suffer from congestion but they also contribute to it. Consequently, billions of dollars are lost worldwide in trucking operations, which also impedes economic growth and prosperity. Understanding driving behavior and on-trip decision-making of truck drivers are critically important to design measures that mitigate the impacts of congestion on truck traffic, and vice versa, to design measures that mitigate the impacts of truck traffic on congestion. In this respect, the on-trip behavior of truck drivers can be decomposed—like driving behavior in general—into strategical, tactical, and operational behavior, depicting route choice, short-term path-planning (e.g. merging, lane changing), and the steering & accelerating of the vehicle, respectively. Whereas these on-trip behaviors have been studied in-depth for drivers of passenger cars, there are larger gaps in our knowledge when it comes to strategical, tactical and operational behavior of trucks. Furthermore, our limited insight into the driving behavior of truck drivers inhibits the design of appropriate traffic control and management measures.
To improve freight and traffic operations on freeways, this dissertation focuses on obtaining insights into the on-trip behavior of truck drivers and influencing this behavior for congestion relief. To this end, this dissertation develops new mathematical models and control methods for the strategic, tactical and operational behavior of truck drivers by analyzing emerging datasets and designing novel cooperative intelligent transportation system (C-ITS) applications.....","","en","doctoral thesis","","978-90-5584-337-4","","","","","","","","","Transport and Planning","","",""
"uuid:03d907d1-f44d-45e9-a1ea-5110ccff91f2","http://resolver.tudelft.nl/uuid:03d907d1-f44d-45e9-a1ea-5110ccff91f2","Heritage Beyond Singular Narratives: Embracing Diversity in Participatory Heritage Planning Empowered by Artificial Intelligence","Foroughi, M. (TU Delft Heritage & Architecture)","Pereira Roders, A. (promotor); Wang, T. (copromotor); Delft University of Technology (degree granting institution)","2023","This PhD thesis explores the evolving field of heritage planning, focusing on the cultural significance of heritage properties. It advocates for a value-based approach that recognizes the diverse perspectives of stakeholders, including experts, policymakers, and users. While participatory heritage aims to foster consensus-building, tensions may arise due to varying cultural significance conveyed by different stakeholder groups. Conventional research methods are time-consuming and costly, limiting their effectiveness in heritage planning. To address this gap, this research aims to utilize Artificial Intelligence (AI) models and information repositories, such as social media platforms, to understand the cultural significance of built heritage from different stakeholder groups’ perceptions.
This research presents a theoretical framework that examines the factors influencing consensus-building on heritage values and attributes. Based on this framework, a public participation methodology empowered by AI is developed and tested in the case study of windcatchers in Yazd, Iran. This study compares the perceptions of three stakeholder groups: experts, policymakers, and users. The findings reveal consensus on the value of windcatchers while highlighting differing interpretations of their significance.
The AI-empowered methodology proves effective in uncovering stakeholder groups' understanding of cultural significance. This framework can be replicated in other case studies, facilitating participatory heritage practices. The thesis contributes to knowledge in public participation, cultural significance, and AI in heritage planning, offering insights for practitioners and policymakers to promote inclusive heritage practices. It emphasizes the importance of stakeholders' contributions and advocates for a more diverse and inclusive approach to heritage planning.
The effectiveness of adjoint-based error estimation is initially demonstrated using linear advection-diffusion problems. An adjoint-based AMR strategy is further developed and analysed for unsteady 1D Burgers problems with a multi-frequency forcing term. Then we introduce a Reduced-Order Representation (ROR), which uses the Proper Orthogonal Decomposition (POD) to replace full-order primal solutions when solving the adjoint problem backward in time. Numerical results demonstrate the effectiveness of using RORs for adjoint-based AMR.
An enhanced online algorithm for POD analysis is proposed to deal with high-dimensional LES data, which arise because the nonlinearity and unsteadiness require storing a time history of primal states for solving the adjoint problem. The enhanced algorithm is based on the incremental Singular Value Decomposition and exploits the decomposition of full-order solutions into reconstructed and truncated parts. Two lower-bound estimators are proposed to equip the enhanced algorithm with a posteriori error analysis. Numerical experiments demonstrate that the algorithm can significantly improve the computational efficiency of online POD analysis while accuracy is maintained, provided an appropriate number of POD modes is retained. Furthermore, the enhanced algorithm scales well in parallel, and the improvement in computational efficiency is independent of the number of processors.
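The building block of such online POD analysis is a single incremental SVD update, which folds a new snapshot into an existing thin SVD without revisiting the stored history. The sketch below is a generic, untruncated Brand-style step, not the thesis's enhanced algorithm (which additionally truncates modes and tracks the resulting error bounds).

```python
import numpy as np

def isvd_update(U, s, c):
    """Fold a new snapshot column c into the thin SVD U @ diag(s) @ V.T
    (right singular vectors are dropped, as only POD modes are needed)."""
    p = U.T @ c                      # coordinates of c in the current basis
    r = c - U @ p                    # residual outside the basis
    rho = np.linalg.norm(r)
    j = r / rho                      # new basis direction (assumes rho > 0)
    # small (k+1) x (k+1) core matrix whose SVD rotates the enlarged basis
    K = np.block([[np.diag(s), p[:, None]],
                  [np.zeros((1, s.size)), np.array([[rho]])]])
    Uk, sk, _ = np.linalg.svd(K)
    return np.column_stack([U, j]) @ Uk, sk

# stream 5 synthetic snapshots one at a time
A = np.random.default_rng(2).normal(size=(50, 5))
U, s, _ = np.linalg.svd(A[:, :1], full_matrices=False)
for i in range(1, 5):
    U, s = isvd_update(U, s, A[:, i])
# U, s now match the left singular vectors / values of A, computed online
```

Without truncation this update is exact up to round-off; the efficiency gain in an LES setting comes from truncating the small singular values after each step, which is exactly where the algorithm's lower-bound error estimators come in.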
The unsteady adjoint problem is investigated for 2D and 3D cylinder flows. Using RORs significantly reduces the memory requirement for storing primal flow solutions in both cases. Dynamic features of the adjoint field are well captured when using RORs, although differences appear in regions around and upstream of the cylinder when a small number of POD modes is used. Error distributions can be well predicted with POD-based RORs, especially in regions with large errors. The exponential growth of adjoint solutions in the 3D turbulent flow is found to be attenuated when RORs are used for solving the adjoint problem.
"uuid:03f73e4c-dd70-4fed-95e5-7bb28927afd0","http://resolver.tudelft.nl/uuid:03f73e4c-dd70-4fed-95e5-7bb28927afd0","Thermo-mechanics of energy piles: fine-grained soils, cycles, and interfaces","Golchin, A. (TU Delft Geo-engineering)","Vardon, P.J. (promotor); Hicks, M.A. (promotor); Delft University of Technology (degree granting institution)","2023","In the serviceability lifespan of thermo-active geo-structures such as energy-piles, soils surrounding these structures are exposed to a combination of mechanical and thermal loads. These loads are often complex (including cycles) and, depending on the state of the soils, the response of the surrounding soil to these loads may differ. Since the performance and safety of the soil-structure system directly depends of the response of the surrounding soil, it is important to understand and quantify the thermomechanical behaviour of soils. These objectives can be achieved by performing laboratory-scale element tests to gain knowledge on the fundamental response of the material and by developing numerical tools which can be used to simulate the complete soil--structure system under various complex load paths.
To date, many laboratory tests have been conducted to study the thermomechanical behaviour of soils. A large portion of these have been triaxial tests, and many thermomechanical constitutive models for soils have been developed based on the phenomenological findings from them. While these models can capture the general thermomechanical behaviour of soils, none have been formulated to ensure that they unconditionally satisfy the principles of thermodynamics. Therefore, under complex loading paths certain phenomena may not be captured or predicted, while others may be spuriously predicted. On the other hand, only a very limited number of tests have been conducted on soil-structure interfaces, so the available knowledge of the thermomechanical behaviour of soil-structure interfaces has until now been limited.
The objective of this thesis is to fill in the gaps mentioned above by investigating the main mechanisms governing the thermomechanical behaviour of soils and soil-structure interfaces, as well as by developing thermomechanical constitutive models built on a sound foundation (i.e. thermodynamics) and numerical algorithms that can be used in boundary-value solvers such as finite-element methods.
First, the phenomenological temperature effects observed in laboratory-scale tests are combined with principles of thermodynamics to develop a ""base"" thermomechanical constitutive model, defined in triaxial stress space, that can capture the main thermomechanical behaviour of fine-grained soils. This base model has a single, flexibly shaped yield surface. The base model is then upgraded to a ""two surface/bubble"" thermomechanical model by introducing an additional yield surface, which translates within the admissible stress space via a temperature-dependent kinematic rule. This enables the model to capture additional thermomechanical features, such as the shakedown behaviour of soils subjected to thermal cycles, which the single-yield-surface model was not able to capture or predict.
The main value of constitutive models is realised when they are efficiently embedded within boundary-value solvers, such as a finite-element method solver. One such efficient method is to use an implicit stress integration scheme. However, many constitutive models fail to converge within these schemes. One possible reason, as demonstrated in this thesis, is the existence of undesired elastic nuclei or domains with erratic divergence. A new yield function (which can also be used as a plastic potential function) is proposed, which is flexible and unique, and overcomes the aforementioned drawbacks. The single surface thermomechanical model (defined in triaxial space) is then modified by incorporating the newly proposed yield surface formulation, with the addition of Lode angle dependency and generalisation to three-dimensional stress space, prior to being implemented in a finite-element context. Since the non-linear thermo-elastic relationships of the model were derived from a Gibbs-type energy potential, a new numerical algorithm was designed to accommodate this feature when implementing the model in a finite-element context using an implicit stress integration scheme.
The thermomechanical behaviour of soil-structure interfaces is experimentally investigated using a temperature-controlled direct shear apparatus. Several thermomechanical stress paths, covering a wide range of stresses, temperatures and boundary conditions, analogous to those an interface element experiences during the serviceability lifetime of an energy pile, were designed and performed. Unique observations, including the coupling effect of initial shear stress and thermal cycles, were recorded, which enhanced the knowledge of the thermomechanical behaviour of soil-structure interfaces. The main impact on soil-concrete interfaces was seen to be the mechanical cyclic loads arising from the heating and cooling of the concrete pile, rather than direct thermal effects. Thermal creep was identified as a phenomenon not previously reported.","Constitutive model; Implicit stress integration algorithm; Laboratory tests; Soil-structure interface; Thermo-mechanics; Yield function","en","doctoral thesis","","978-94-6473-261-0","","","","","","","","","Geo-engineering","","",""
"uuid:d58256b5-2532-462c-b2a0-9c95e1dc6cef","http://resolver.tudelft.nl/uuid:d58256b5-2532-462c-b2a0-9c95e1dc6cef","Comparing automated vehicles with human drivers: Improving motion comfort with motion planning and suspension control","Zheng, Y. (TU Delft Intelligent Vehicles)","Shyrokau, B. (promotor); Keviczky, T. (copromotor); Delft University of Technology (degree granting institution)","2023","This dissertation is dedicated to understanding the potential of improving the motion comfort of automated vehicles and explores multiple options that serve this purpose. Comfort is usually prioritized behind factors such as safety and efficiency but is nevertheless influential to the acceptance of automated vehicles. The goal of enhancing motion comfort overlaps with the need to overcome challenges brought by the motion sickness phenomenon. Motion sickness is found to impact a significant portion of travelers in all types of transport. It tends to develop faster among occupants who are not engaged in the driving task. Its symptoms can cause difficulties for non-driving-related tasks (NDRTs) to be performed effectively by the passengers. Therefore, a part of the research in this dissertation is directed specifically toward mitigating motion sickness in automated vehicles...","","en","doctoral thesis","","","","","","","","","","","Intelligent Vehicles","","",""
"uuid:9363fddf-aeed-4fcc-82bd-23bcced5cc6d","http://resolver.tudelft.nl/uuid:9363fddf-aeed-4fcc-82bd-23bcced5cc6d","Incorporating Congestion Phenomena into Large Scale Strategic Transport Model Systems","Brederode, L.J.N. (TU Delft Transport and Planning)","Pel, A.J. (promotor); Hoogendoorn, S.P. (promotor); Delft University of Technology (degree granting institution)","2023","Strategic traffic assignment (TA) models assess long-term effects of policies on route choices of travelers. To meet stability requirements, current strategic TA models lack modelling of queues. This thesis develops two TA models that include queue modelling whilst satisfying stability requirements along with a method to fuse observed link flows, congestion patterns and -delays. All methods are shown to be applicable in the large-scale strategic application context.","Traffic assignment; strategic transport planning; spatial assumptions; temporal assumptions; behavioral assumptions; fundamental diagram; model capabilities; STAQ; large scale; congested networks; static; semi-dynamic; model; User Equilllibrium; travel demand; matrix estimation; strict capacity constraints; big data; floating car data; ANPR data; Bluetooth data; congestion patterns; route travel times; prior OD matrix; mathematical properties","en","doctoral thesis","","978-90-5584-330-5","","","","","","","","","Transport and Planning","","",""
"uuid:ec3f81c3-1bb4-4724-9f97-fff46e3c1c66","http://resolver.tudelft.nl/uuid:ec3f81c3-1bb4-4724-9f97-fff46e3c1c66","Miniaturization of Process Analytical Technology: from Concept to Reality","Neves Sao Pedro, M. (TU Delft BT/Bioprocess Engineering)","Ottens, M. (promotor); Eppink, M. (promotor); Delft University of Technology (degree granting institution)","2023","Continuous biomanufacturing is considered the future phase for the optimization of production processes in the biopharmaceutical industry. Productivity, product quality and consistency are greatly improved while production costs and environmental footprint are drastically reduced. The manufacturing of monoclonal antibodies (mAbs), an important biopharmaceutical in the treatment of cancers, autoimmune disorders, and, more recently, COVID-19, is eligible for this continuous processing due to patent expiration and the subsequent need to lower manufacturing costs...","Antibody aggregation; Continuous Biomanufacturing; Process Analytical Technology (PAT); Microfluidic Sensor; Fluorescent Dyes","en","doctoral thesis","","978-94-6384-473-4","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:02485fcb-1445-4479-9308-518b579e3a6d","http://resolver.tudelft.nl/uuid:02485fcb-1445-4479-9308-518b579e3a6d","Solar resource modelling and shading tolerant modules for the urban environment","Calcabrini, A. (TU Delft Photovoltaic Materials and Devices)","Zeman, M. (promotor); Isabella, O. (promotor); Manganiello, P. (copromotor); Delft University of Technology (degree granting institution)","2023","The deployment of photovoltaic (PV) systems in urban environments has the potential to supply a significant share of the urban energy demand and help us reduce greenhouse emissions in the hope of alleviating the consequences of climate change. Moreover, recent advancements in the integration of PV technology into multi-functional architectural elements, offer the possibility to deploy solar cells almost on every surface of the urban fabric.
In this dissertation, the solar energy potential of urban environments and approaches for improving the performance of urban PV systems are investigated. The first part of this thesis focuses on computational models to evaluate the solar radiation reaching a PV system in complex geometric environments. The second part delves into the effects of partial shading on the electrical output of PV systems and proposes strategies to increase the shading tolerance of PV modules....","","en","doctoral thesis","","978-94-6384-492-5","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:2c713a33-8cc6-42d3-8682-7e6872f6d422","http://resolver.tudelft.nl/uuid:2c713a33-8cc6-42d3-8682-7e6872f6d422","Extreme aerated water-wave impacts on floating bodies: The relevance of air content in water on ship design loads","van der Eijk, M. (TU Delft Ship Hydromechanics)","Boersma, B.J. (promotor); Wellens, P.R. (copromotor); Delft University of Technology (degree granting institution)","2023","A deeper understanding of physics is required when the complexity of events increases. A complex event consists of many detailed interacting processes. The complete picture asks for an understanding of each of the processes individually.
Numerical computing in the maritime industry is becoming more relevant due to the increase in usability and relatively low costs compared to experiments. The numerical results allow for analysis at the required level of detail. The complexity of water-wave impacts on offshore structures necessitates innovative numerical approaches because conventional analytical techniques fall short of representing the non-linearity in these events....","","en","doctoral thesis","","978-94-6473-260-3","","","","","","","","","Ship Hydromechanics","","",""
"uuid:bd755894-1386-4490-9ba0-c53aef05fb3f","http://resolver.tudelft.nl/uuid:bd755894-1386-4490-9ba0-c53aef05fb3f","Inelastic Deformation in Metals and Contacts: Comprehensive Treatises on Yielding and Hardening, the Yield Phenomenon and Dissipation","van Dokkum, J.S. (TU Delft Team Erik Offerman)","Sietsma, J. (promotor); Offerman, S.E. (promotor); Bos, C. (copromotor); Delft University of Technology (degree granting institution)","2023","Inelastic deformation is a common but often neglected phenomenon in experimental analysis of metal deformation and in contacts. This neglect leads to degraded measurement accuracy of material properties. Therefore a need arises for material models that a priori incorporate inelasticity. These material models must be simple and comprehensive to have the highest impact in society. This thesis addresses three main sources of inelasticity, namely anelasticity and plasticity in metals and viscosity in contacts. Inelasticity is a dissipative mode of deformation that is mechanically recoverable for anelasticity and viscosity, and irrecoverable for plasticity. We connect the fundamental properties and structures of metallic and soft matter constituents with experimentally accessible measures. The presented models will aid in the development of materials with specific properties that meet the needs of industry.
Chapter 2 presents an analytical model of the tensile test tangent moduli and yield points for single-crystallite metals with spatially uniform and nonuniform dislocation distributions across slip systems. The moduli and the onset of plastic flow show a notable dependence on initial dislocation character, spatial dislocation distribution, and loading direction with respect to crystallographic orientations. An improved methodology accounts for elastic compressibility and anisotropy, and the geometric structure of crystal lattices when one measures dislocation network geometry in single metallic crystallites.
Chapter 3 contains a seamless, unified stress-strain treatment of dislocation-driven deformation. This treatment combines the three deformation mechanisms of elastic bond stretching, stable dislocation glide, and unstable dislocation glide. The model’s yield criterion connects the bowing out of local dislocation links and global dislocation multiplication. A semi-empirical relation is constructed for the evolution of the dislocation network structure with uniaxial loading.
Chapter 4 formulates a macromechanical model of the yield point phenomenon under invariant plane conditions. The heterogeneous stress state across the Lüders front and the plastic flow inside the Lüders band are accounted for. The Lüders band orientation with respect to the tensile direction is not unique; the orientation changes with material properties and tensile specimen geometry by the stress concentration at the front. The model serves to approximate constitutive parameters independent of the test conditions.
Chapter 5 elucidates the interplay between adhesion and roughness by modelling the retraction of rigid, wavy indenters from viscoelastic substrates. Viscoelasticity governs adhesive hysteresis across all loading rates, and even in the presence of roughness-induced mechanical instabilities. This confirms the central role that viscoelasticity must play in experimental measurements in the presence of adhesive interfaces in soft matter contacts.
Chapter 6 examines the static, quasi-static, and dynamic trajectories of a base-excited mass-spring-damper system in the presence of friction. The differences between the dynamic and the quasi-static solution in engineering problems with viscous, static, and dry friction are assessed. The omission of inertial contributions will under-predict dissipation at both low and high excitation frequencies. This chapter is a guide for future (multi-scale) numerical modelling efforts on adhesion and interface friction, and the hysteretic deformation of metals.
Chapter 7 is a general discussion on the impact of inelasticity in metals that follows from Chapters 2 and 3, the measurement of the yield point phenomenon in Chapter 4, and numerical modelling of dissipative contacts in Chapters 5 and 6. The four models presented in Chapters 2-6 are readily applicable in experimental measurements and future numerical models. The importance of accounting for inelasticity in experimental measurement and modelling of the yield strength in metals, and of adhesive dissipation in soft matter contacts, is emphasised. Finally, the state of the art in research on the three main sources of inelasticity and potential applications of the presented models are enumerated, which serve as starting points for future research.","Inelasticity; Anelasticity; Plasticity; Viscoelasticity; Yield; Yield Point Phenomenon; Hysteresis in Contacts; Quasi-Static Solution","en","doctoral thesis","","978-94-6384-479-6","","","","","","2024-10-26","","","Team Erik Offerman","","",""
"uuid:ab63eb8b-a2d9-4d8a-9134-b9af2df1f62b","http://resolver.tudelft.nl/uuid:ab63eb8b-a2d9-4d8a-9134-b9af2df1f62b","Responsible Innovation for Wicked Societal Challenges: An Exploration of Strengths and Limitations","Wiarda, M.J. (TU Delft Economics of Technology and Innovation)","Doorn, N. (promotor); van de Kaa, G. (promotor); Yaghmaei, E. (copromotor); Delft University of Technology (degree granting institution)","2023","Innovators are increasingly called upon to help resolve societal challenges such as pandemics, climate change, and social injustice. The complexity, uncertainty, and contestation associated with such wicked problems require them to leverage approaches that help navigate normative and epistemic considerations for decision-making. A large number of scholars and practitioners believe that the procedural approach of Responsible Innovation could offer this. Responsible Innovation aims to align innovations with societal values and worldviews through forms of anticipation, inclusion, reflexivity, and responsiveness. Early anticipatory and reflexive deliberations subsequently provide an understanding of what decisions and outcomes are deemed ethically acceptable in light of uncertainty. This dissertation explores the usefulness of some approaches applied by Responsible Innovation in tackling wicked problems. It suggests that Responsible Innovation paradoxically fosters collaborations while also revealing contestation, and that innovators will need to leverage boundary objects and combine complementary approaches to deal with the (multi-scalar) conflict that is attributed to societal challenges.","","en","doctoral thesis","","978-94-6366-747-0","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:bbf1eb72-9cc7-4e64-adf0-d869954c1750","http://resolver.tudelft.nl/uuid:bbf1eb72-9cc7-4e64-adf0-d869954c1750","Poison to Products: On harnessing the power of microorganisms to convert waste streams into new chemicals","Allaart, M.T. (TU Delft BT/Environmental Biotechnology)","Kleerebezem, R. (promotor); Sousa, Diana (promotor); Delft University of Technology (degree granting institution)","2023","One of the main challenges society currently deals with is the depletion of fossil fuels. To navigate this issue, we must embrace the concept of circularity and turn waste into a resource. Waste streams are omnifarious and their conversion into new chemical building blocks is not always trivial. Luckily, we can take a look at nature’s problem solving skills to help us out. Because nature, in due time, always finds a solution and there is a (micro)organism for everything.
But... we can also give nature a hand by simplifying the problem. The diversity and complexity of waste streams can be reduced by using gasification, where the waste is combusted at a high temperature with small amounts of oxygen. This yields syngas, a mixture consisting mainly of carbon monoxide, carbon dioxide and hydrogen gas. Syngas can be converted chemically into, e.g., ethanol, but the success of this process highly depends on the ratios of CO, CO2 and H2 and the absence of impurities in the gas. Microorganisms can deal with much more variability, making them a promising biocatalyst for the conversion of syngas to chemical building blocks. Yet, we have to understand the microorganisms to be able to work together with them in combating climate change. The work in this thesis is aimed at increasing our understanding of two specific types of microorganisms that can help us to turn waste into new chemicals: syngas-fermenting bacteria and chain-elongating bacteria. Together, they can form a team that turns a C1 molecule (carbon monoxide) all the way into a C6 molecule (hexanoate). To make the team as effective as possible, we studied both team members in detail. The syngas-fermenting bacterium we studied goes by the name Clostridium autoethanogenum, and is already being used at industrial scale by the company LanzaTech. For its chain-elongating counterpart, however, we used a mixed community of microorganisms that was specifically selected to perform chain elongation. We used this mixed community because the single, optimal partner for C. autoethanogenum has yet to be found.
It has been established previously, by other researchers, that producing a lot of hexanoate is easiest when you feed chain elongating organisms a substrate with a high ethanol-to-acetate ratio. C. autoethanogenum naturally produces ethanol and acetate, but usually in a low ethanol-to-acetate ratio. In Chapter 2 we use a theoretical framework based on thermodynamics, as well as data from literature to understand what triggers C. autoethanogenum to make ethanol. We found that acetate conversion into ethanol is a stress response used to deal with a (too) high load of CO, which can be classified as overflow metabolism. We show that this behavior not only takes place when feeding CO alone, but also in the presence of both CO and H2, underlining its relevance in syngas fermentation processes. The stress response can be induced by tuning the operational parameters of the bioreactor, such as the CO supply rate or the growth rate.
In Chapter 3 we quantify this effect in the laboratory ourselves. We use a steady-state culture of C. autoethanogenum in a chemostat bioreactor and repeatedly disturb it for periods of one hour with increasing amounts of CO in the inlet gas, up to a CO partial pressure of 1.2 atm. We see that ethanol production increases with increasing CO partial pressures, and at a pCO of 0.6 atm or higher external acetate is even consumed to sustain higher ethanol production rates. This proves that the product spectrum of syngas fermentation can be directed by changing the operational conditions. Furthermore, the experimental method that we used allowed for the identification of the CO uptake rate at each CO partial pressure, directly via the off-gas measurements. We observed biomass-specific CO uptake rates of up to –119 ± 1 mmol·gx−1·h−1, which is much higher than has previously been reported for this organism. The biomass-specific uptake rate is instrumental for obtaining an accurate mathematical description (or: kinetic model) of this microorganism, which in turn allows for more accurate bioprocess design.
Chapter 4 focusses on the chain-elongating counterpart of our syngas fermenter. C. autoethanogenum prefers to grow at a pH of 5–5.5, and most chain elongators that have been described in literature rather grow at neutral pH (around 7.0). This chapter revolves around this discrepancy. By using enrichment cultures in a sequencing batch bioreactor, we select for chain-elongating microorganisms both at pH 7.0 and pH 5.5. In doing so, we establish that chain elongators can live at pH 5.5 and that a very comparable microbial community (on genus level) develops at both pH values. However, the behavior in the bioreactors was not the same. At lower pH, a significantly smaller fraction of the supplied ethanol was converted to hexanoate. Instead, more of the C4 molecule butyrate was produced, likely because it is less toxic to the microorganisms than hexanoate. This means that pH is an important parameter to control the product spectrum of chain elongation and that establishing an effective microbial team for C1-to-C6 conversion likely requires more than finding microbes with the same preferred pH.
In Chapter 5 we delve into the biochemistry of chain-elongating microbes. They are known to be very flexible in their metabolism, and they can deal with a wide range of ethanol-to-acetate ratios. Theoretically, this ratio could even be infinite (i.e. feeding only ethanol), which would lead to the production of only hexanoate and no butyrate. We call this ethanol-only chain elongation. This is interesting from a fundamental as well as a process design perspective. Therefore, we test whether it is also possible in practice by using well-monitored batch experiments in bioreactors. We use different initial conditions: only ethanol, ethanol and a small amount of acetate, and ethanol and a small amount of butyrate. We observe in the bioreactors that ethanol-only chain elongation is possible, but that it proceeds very slowly. Besides that, the microorganisms prefer the presence of either acetate or butyrate so much that they eventually start producing these compounds from ethanol themselves when they are not available. This behavior has never been observed before, nor was it regarded as possible.
In Chapter 6 we present a dataset of well-controlled bioreactor experiments under nine different initial conditions, including the experiments described in the previous chapter. This dataset can be used to refine the current mathematical description of chain-elongating microbes. We describe the initial analysis of this dataset and how we ensure its quality and usability for kinetic modelling using data reconciliation. With this reconciled dataset we test the accuracy of the currently available kinetic model. From this overall analysis we set out the next steps for the formulation of a more accurate kinetic model of chain-elongating microbes in the future.
Chapter 7 recapitulates the significant findings from this thesis, but more importantly provides a list of questions that still remain to be answered. These questions are grouped around three different themes to provide some structure: the inner world of microorganisms, the interactions between (communities of different) microorganisms and the design of efficient (new) bioprocesses for a more sustainable world. To conclude, I reflect on the societal role of a scientist.
Assessing microbial health risks related to floodwater is a complex undertaking, hindered by factors like the intricate nature of the microorganisms involved and the susceptibility of those exposed to them. This research contributes a framework and application that combines health risk assessment, disease burden calculation, and hydrodynamic modeling to estimate the adverse health consequences of microbial pathogens in floodwater through traffic activities, a common route of exposure during floods. The case study is Ninh Kieu District (Can Tho City, Vietnam), located on the western side of the Hau River, a Mekong tributary. The study focuses on the health risk and disease burden due to rotavirus A in floodwater in Ninh Kieu through traffic activities, especially for motorcyclists. This research is one of the first to consider the input parameter concentrations and the number of exposed people in order to reduce the health impact of flood risk. It reveals that mitigation measures should not only focus on reducing urban floods but also on raising awareness among local people of microbial health risks in floodwater. The disease burden is considered the prime variable of the health indicator to represent the social dimension in the assessment of flood vulnerability.","","en","doctoral thesis","","978-90-73445-54-3","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:bdc47b1e-68dd-40a9-ad36-abe8537e9f90","http://resolver.tudelft.nl/uuid:bdc47b1e-68dd-40a9-ad36-abe8537e9f90","Structure and supramolecular assembly in multi-component organogels","Ghanbari, E. (TU Delft ChemE/Advanced Soft Matter)","van Esch, J.H. (promotor); Picken, S.J. (copromotor); Delft University of Technology (degree granting institution)","2023","This thesis reports on the “structure and supramolecular assembly in multi-component organogels”. It guides readers how the aim of this research has been achieved by division of the main question into subgoals in different chapters. This introduction chapter gives a brief overview on the research theme, it is followed by the second chapter extracted from our literature review on “From molecular assembly to gel formation: what is going on behind the scenes of supramolecular gel formation”. This tutorial review discusses three different assembly mechanisms in molecular gels namely: supramolecular polymerization, crystallization, and spinodal decomposition. The second chapter of this thesis is based on the section on the crystallization mechanism from the larger tutorial review paper, since crystallization is found to be the dominant mechanism of gel formation in bisamide systems throughout our research. It provides a general background on molecular gels followed by how crystallization can lead to the order in the gel network. The third chapter elaborates the study of single bisamide gelators in the solid state. It aims at understanding how odd-even spacer length in the chemical structure affects the complementarity of hydrogen bonding which determines the molecular structure and gelator properties. The fourth chapter describes the supramolecular arrangement and rheological properties of single bisamide gels. In the fifth chapter of this thesis, we explain how we developed and validated the DSCN(T) analytical model. 
This model empowered our research toolkit to quantitatively analyze the experimental data obtained from DSC. This reliable analysis enabled us to understand the phase behavior of bisamide molecules in the solid state (chapter 3), gel state (chapter 4), and binary systems (both solid and gel state in the subsequent chapter). The last chapter (chapter 6) focuses on the ultimate goal of this thesis: to develop design rules to control the supramolecular assembly pattern in the solid and gel state of multi-component systems. In the course of this phase of research, we made an attempt to understand how compound formation/co-assembly and phase separation/self-sorting impact the rheological properties of bisamide gels. The summary of this scientific journey is provided at the end of this thesis.","","en","doctoral thesis","","","","","","","","2023-10-24","","","ChemE/Advanced Soft Matter","","",""
"uuid:1bd92bed-2d18-45fa-8b1c-cc40f6d7f4ae","http://resolver.tudelft.nl/uuid:1bd92bed-2d18-45fa-8b1c-cc40f6d7f4ae","The Potential of Anaerobic Digestion combined with Dissolved Air Flotation (AD-DAF) for Wastewater Treatment","Piaggio, A.L. (TU Delft Sanitary Engineering)","de Kreuk, M.K. (promotor); Lindeboom, R.E.F. (copromotor); Delft University of Technology (degree granting institution)","2023","In the context of a worldwide scenario characterized by a progressively expanding human population, the combining effects of climate change, escalating water stress, and the degradation of freshwater resources, water reclamation has emerged as a viable solution to alleviate the critical issue of water scarcity. Several streams around the world are subjected to a wide range of pollutants concentration and water-born pathogens, like antibiotic-resistant bacteria (ARB), due to human activity. The latter can be considered as a global emerging threat, due to its potential to deteriorate the human health system. Thus, adequate treatment of these polluted streams is needed to overcome water scarcity. While anaerobic membrane bioreactors (AnMBR) systems are a promising anaerobic digestion (AD) technology to treat municipal and concentrated wastewater, the application of membranes to separate solids from the bioreactor broth also has considerable constraints. An alternative physical separation method could be used to overcome the AnMBR limitations. Replacing the membrane unit of an AnMBR with a dissolved air flotation (DAF) system, and returning the flotation layer to the anaerobic reactor, may ensure high total suspended solids (TSS) retention while overcoming the membrane limitations. However, the oxygen-saturated flotation layer and the overall introduction of oxygen into the reactor through the DAF may negatively impact the anaerobic conversion process. 
This dissertation investigates the potential of using an AD coupled with a DAF system (AD-DAF) as a pre-treatment technology, specifically for the treatment of drain- and wastewater that mimics the ever-changing conditions of the Barapullah drain in New Delhi. Since testing an AD-DAF system on a laboratory scale is not practically feasible, due to the constraints in downscaling a DAF unit, the implications of coupling these two technologies were assessed in two different systems: a column bench-scale DAF unit, and a lab-scale micro-aerated anaerobic membrane bioreactor (MA-AnMBR). To begin with, a data-driven experimental DAF model was developed to predict TSS removal. Input values for the experimental model were particle and bubble characteristics. The experimental model outcomes were verified in a bench-scale column DAF and two full-scale DAF systems. Results showed that the predicted TSS removal aligned with the measured values for Delft canal water, anaerobic sludge, and DAF2 influents: 68 ± 1% vs. 66-96%, 77 ± 3% vs. 68-92%, and 98 ± 1% vs. 96 ± 1%, respectively. Afterwards, the bench-scale DAF was used to investigate the removal of suspended solids under four different influent conditions and seven independent DAF control variables (influent TSS, pH, temperature, DAF particle residence time, white water pressure, coagulant and flocculant concentration, and mixing time). The influents simulated the Barapullah drain conditions under 1) dry and 2) monsoon times, and 3) close or 4) far from the pollution source. The results obtained indicated that the TSS removal efficiency of the bench-scale DAF unit could mimic a full-scale system and that a DAF can remove over 90% of TSS for the four different tested influents.
On the other hand, the effect of the performance variables varied depending on the influent type, with pressure showing a positive influence on the separation efficiency. Secondly, to assess the effect of coupling the DAF system with AD, a lab-scale AnMBR system was subjected to an oxygen load similar to the one used in a DAF unit. The effects of the oxygen load were compared to a fully anaerobic system, and the MA-AnMBR performance was assessed for removal of organic matter, biogas production, nutrient concentration, operation and maintenance, and removal of two antibiotics (sulfamethoxazole, SMX, and trimethoprim, TMP). Results showed a slight but significant increase in COD removal, from 98.2 to 98.5%, and an increase of 35% in the ammonium concentration in the MA-AnMBR permeate, which indicated improved hydrolysis. Furthermore, biogas production decreased by 27%, but the methane concentration in both the MA-AnMBR and AnMBR was high (85%). Micro-aeration of the AnMBR had no negative effect on the removal of the tested antibiotics, which have a preferred anaerobic degradation pathway. TMP was rapidly adsorbed onto the sludge biomass and then degraded due to the long solids retention time (27 days). SMX adsorption was minimal, but the system hydraulic retention time of 2.6 days allowed its biodegradation. The addition of SMX and TMP led to an increase in the relative abundance of all studied antimicrobial resistance genes (ARGs) (sul1, sul2, and dfrA1) and one mobile genetic element (intI1) in the MA-AnMBR sludge. Furthermore, the presence of antibiotic-resistant bacteria and antibiotic-resistance genes in the reactor permeate indicated that further treatment was needed.
The outcomes obtained in this dissertation showed that an AD-DAF system has the potential to effectively remove total suspended solids under different influent conditions, and that the added oxygen load could improve hydrolysis with minimal impacts on the anaerobic conversion processes.","","en","doctoral thesis","","978-94-6366-751-7","","","","","","","","","Sanitary Engineering","","",""
"uuid:a2e6b6ca-c4c1-4993-8254-991506ad6cd8","http://resolver.tudelft.nl/uuid:a2e6b6ca-c4c1-4993-8254-991506ad6cd8","Characterization and Mitigation of Speckle Noise in Laser Doppler Vibrometer on Moving Platforms (LDVom)","Jin, J. (TU Delft Railway Engineering)","Li, Z. (promotor); Dollevoet, R.P.B.J. (promotor); Delft University of Technology (degree granting institution)","2023","A laser Doppler vibrometer (LDV) is a vibration-detecting instrument for noncontact and non-destructive measurement. It is superior to classic contact transducers in terms of its wide frequency range and high measurement resolution. LDV on moving platforms (LDVom) is an LDV measurement technology that scans the vibrating surface in a single pass, making it applicable to large-scale measurements such as railway tracks. Speckle noise is a significant signal issue for LDV technologies, especially for LDVom. It distorts the local vibration signal dramatically and reduces the overall signal-to-noise ratio to a very low level. The one-way scanning nature of LDVom makes it impossible to simply average the signals for noise removal. In view of the speckle noise issue of LDVom, the goal of this dissertation is to acquire new understanding of the problem and to propose de-speckling solutions based thereon. Three aspects are investigated to achieve the research goal: 1) numerical simulation of speckle noise and characterization of noise behaviours, which provides insight into how speckle noise changes in response to relevant variables and offers possible tools for minimizing noise strength; 2) the theoretical Fourier spectrum of speckle noise series, whose resulting frequency-domain characteristics can help design the de-noising signal filter accordingly; 3) development of de-speckling algorithms, both based on classic approaches and newly designed...","Laser Doppler vibrometer; speckle noise; signal processing","en","doctoral thesis","","","","","","","","2024-10-24","","","Railway Engineering","","",""
"uuid:97f000f7-1176-410f-8213-2bd67ce7406f","http://resolver.tudelft.nl/uuid:97f000f7-1176-410f-8213-2bd67ce7406f","Down the nanoparticle hole: 103Pd:Pd/Fe-oxide theranostic agents for image-assisted thermo-brachytherapy as alternative cancer treatment","Maier, A. (TU Delft BT/Biocatalysis)","Djanashvili, K. (promotor); Denkova, A.G. (promotor); Delft University of Technology (degree granting institution)","2023","Cancer is one of the leading causes of death worldwide, and the number of cases is expected to keep increasing in the coming years. Even though the cancer treatments most employed in clinical practice nowadays (surgery, chemo-, and radiotherapy) are effective, they are still associated with multiple limitations and side effects. The main pitfall lies in their non-specificity to tumour cells, which leads to damage to healthy tissues. Therefore, alternative treatments that can overcome the oncologic challenges of the current treatment regimens by specifically treating only the cancer cells, while being minimally invasive and limiting short- and long-term side effects, are highly needed. As the number of patients diagnosed with cancer at incipient stages is constantly increasing, such alternative treatments are currently even more attractive. Due to the advances in nanotechnology, cancer nanomedicine is a fast-advancing field, employing nanoparticles to both diagnose and treat cancer, an approach known as nanotheranostics. Nanobrachytherapy is brachytherapy delivered via injection of radioactive nanoparticles into the tumour. The great advantage is that nanobrachytherapy retains the characteristics of brachytherapy, such as precise and targeted dose delivery, while allowing a less invasive administration and a more uniform dose distribution in the tumour. However, the radio-resistance exhibited by tumour cells can hinder the success of nanobrachytherapy; the synergetic combination of cell-damaging agents, such as radioactivity and heating, is well known.
Thermal treatments, such as hyperthermia, offer hyperthermic radiosensitization, making the tumour more susceptible to irradiation, while thermal ablation can serve as a replacement for surgery. Furthermore, thermal treatments can be delivered by injecting colloidal suspensions of magnetic nanoparticles (MNPs) into tumours and heating them via exposure to an externally applied alternating magnetic field. An additional advantage of such magnetic nanoparticles is their ability to ensure visualization via magnetic resonance imaging (MRI), a non-invasive technique helpful in monitoring the treatment effects. This thesis aims to develop a nanotheranostic agent able to deliver therapeutic effects via radiation and heating, with additional imaging via MRI. We envision the nanotheranostic as a core-shell hybrid nanoparticle in the form of 103Pd:Pd/Fe-oxide. The palladium core is radiolabelled with the 103Pd radioisotope, responsible for the required radiation dose, whereas the iron oxide coating ensures hyperthermia/thermal ablation and imaging…","","en","doctoral thesis","","978-94-6384-489-5","","","","","","","","","BT/Biocatalysis","","",""
"uuid:ad7c33be-979f-4e27-8edd-c2eb193389ae","http://resolver.tudelft.nl/uuid:ad7c33be-979f-4e27-8edd-c2eb193389ae","Predicting and preventing in-plane shear induced fiber angle deviations during automated handling of non-crimp fabrics","de Zeeuw, C.M. (TU Delft Delft Aerospace Structures and Materials Laboratory)","Benedictus, R. (promotor); Peeters, D.M.J. (copromotor); Bergsma, O.K. (copromotor); Delft University of Technology (degree granting institution)","2023","Over the years the use of composites as an aircraft structural material has significantly increased. Currently, the industry still relies largely on manually manufactured components. Automated manufacturing can, however, bring advantages such as reduced manufacturing costs and a more consistent and higher-quality end product. An attractive automated option for the handling of reinforcements is the pick-and-place process, which involves the picking up, moving, and placing down of objects. The pick-and-place process makes it possible to place layers of reinforcement as a whole and brings opportunities for the handling of multiple layers and/or large layers of reinforcement. Literature shows countless different strategies to execute a pick-and-place operation, with research typically focusing on developing more highly specialized concepts. This generally involves demonstrating the feasibility of the concept but does not include reporting on its accuracy. Not taking the accuracy of the pick-and-place process and the quality of the reinforcement during handling into account might result in inconsistent or substandard final products.","","en","doctoral thesis","","","","","","","","","","","Delft Aerospace Structures and Materials Laboratory","","",""
"uuid:58a8de05-9b30-4009-93bb-1b671eed3bee","http://resolver.tudelft.nl/uuid:58a8de05-9b30-4009-93bb-1b671eed3bee","Prediction of Particulate Fouling in Reverse Osmosis Systems: MFI-UF Method Development and Application","Abunada, M.B.M. (TU Delft Sanitary Engineering)","Kennedy, M.D. (promotor); Dhakal, Nirajan (promotor); Delft University of Technology (degree granting institution); IHE Delft Institute for Water Education (degree granting institution)","2023","The application of reverse osmosis (RO) membranes in water treatment has grown rapidly over the last few decades thanks to continuous advancements in both design and operation. However, RO membrane fouling remains a key challenge. Fouling can cause a decline in membrane permeability, which requires higher operational energy and more frequent membrane cleaning/replacement to maintain stable water production, eventually resulting in increased O&M costs. Particulate fouling, due to the deposition of particles and colloids onto RO membranes, is one of the types of fouling persistently experienced in RO systems. Therefore, there is a real need for a reliable method to predict particulate fouling in order to effectively monitor and control the performance of RO systems....","","en","doctoral thesis","","978-90-73445-55-0","","","","","","2024-04-18","","","Sanitary Engineering","","",""
"uuid:b0adc65b-301a-49dc-aac5-03f3c55f7f2a","http://resolver.tudelft.nl/uuid:b0adc65b-301a-49dc-aac5-03f3c55f7f2a","On the real-world security of cryptographic primitives: From theory to practice","Najm, Z. (TU Delft Cyber Security)","Hartel, P.H. (promotor); Picek, S. (copromotor); Delft University of Technology (degree granting institution)","2023","","Cyber Security; Information Security; Side Channel Attack; Cryptography; Implementations","en","doctoral thesis","","978-94-6384-497-0","","","","","","","","","Cyber Security","","",""
"uuid:1b163f58-5570-4915-8999-8f5dd644ed7b","http://resolver.tudelft.nl/uuid:1b163f58-5570-4915-8999-8f5dd644ed7b","Modeling and characterization of non-ideal compressible flows in unconventional turbines","Tosto, F. (TU Delft Flight Performance and Propulsion)","Colonna, Piero (promotor); Pini, M. (copromotor); Delft University of Technology (degree granting institution)","2023","The vast majority of energy conversion systems currently make use of fossil fuels, whose combustion generates harmful greenhouse gases. Transitioning to renewable energy sources is thus paramount to limiting the environmental impact of human activities on the climate. In this regard, the harvesting of wasted thermal energy constitutes a promising strategy to increase the efficiency of industrial processes and mobile engines. For instance, technologies such as organic Rankine cycle (ORC) systems enable the energy discarded to the atmosphere during conversion processes to be repurposed to generate CO2-neutral electricity or additional mechanical work.
The efficiency of such systems is subordinate to that of each of the components, among which is the turbine. Designing more efficient ORC turbines inherently leads to a higher thermodynamic cycle efficiency. However, these turbines operate with complex organic compounds, and part of the expansion process often occurs in the dense vapor state, where the thermodynamic properties exhibit significant deviations from the variations predicted by the ideal gas law. As a consequence, available guidelines for the design of turbomachinery operating with air or steam cannot be used, as they would lead to incorrect sizing and erroneous performance estimates. The development of generalized guidelines for turbine design is possible only through a thorough investigation of the non-ideal compressible flow inside the vane passage, and by accurately discerning all the possible loss sources.
The research outlined in this thesis aims at characterizing non-ideal compressible internal flows of dense vapors and developing new guidelines for the design of unconventional turbines operating with organic fluids, such as those operating in organic Rankine cycle power systems.
The influence of both the complexity of the fluid molecules and the thermodynamic state on the flow field is evaluated for some paradigmatic one-dimensional flow configurations. For these processes, loss mechanisms and relevant trends in flow variables are both qualitatively and quantitatively estimated. Moreover, a detailed analysis of the viscous dissipation in turbulent wall-bounded flows of dense vapor is performed by resorting to direct numerical simulations (DNS). Results are compared against those from an in-house reduced-order model (ROM) code solving the two-dimensional boundary layer equations.
The combined effects of the working fluid, its thermodynamic state, and the flow compressibility on the flow deviation downstream of turbine cascades are then investigated by means of Reynolds Averaged Navier-Stokes (RANS) calculations on a representative geometry. The results obtained from the simulations are compared against those estimated with reduced-order physical models. Finally, an investigation of the influence of both compressibility and fluid molecular complexity on the optimal solidity of axial turbines is performed using RANS calculations. New design guidelines for the selection of optimal solidity in the preliminary design of non-conventional turbomachinery are proposed and discussed.
Results show that turbines operating with compounds characterized by a high complexity of the molecular structure are arguably subjected to higher losses in the mixing region, as well as exhibiting larger viscous dissipation at a given Reynolds number. Moreover, the fluid strongly affects the operational range of the turbine, as well as its design.","dense vapor; organic fluid; turbine; loss breakdown; boundary layer; solidity; CFD; direct numerical simulation","en","doctoral thesis","","978-94-6419-941-3","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:05fe4340-31bb-4c24-a827-69189aa2622b","http://resolver.tudelft.nl/uuid:05fe4340-31bb-4c24-a827-69189aa2622b","Towards Artificial Social Intelligence in the Wild: Sensing, Synthesizing, Modeling, and Perceiving Nonverbal Social Human Behavior","Raman, C.A. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Loog, M. (promotor); Hung, H.S. (promotor); Delft University of Technology (degree granting institution)","2023","Over the last three decades, the social roots of human intelligence have come to influence the development of artificial intelligence (AI). Researchers in AI have moved beyond agents operating in isolation towards developing socially situated agents that can operate in the real world. Meanwhile, researchers in the social sciences have been leveraging AI techniques to analyze and theorize about social phenomena. Both these research endeavors came to be independently termed Artificial Social Intelligence (ASI), leading to the emergence of a field spanning several subdisciplines of the social and computational sciences.
This Thesis takes a holistic view of ASI and makes contributions toward both its historical goals. Moreover, the work presented here focuses on taking ASI research into natural real-world settings in the wild. The research is organized under three themes: acquiring, modeling, and perceiving social human behavior.
The Thesis begins by addressing the challenge of data acquisition. We propose a replicable data collection concept for curating datasets of real-world social human behavior, incorporating technical innovations and ethical considerations required for the noninvasive sensing of multimodal behavioral streams. To overcome the limited availability of real-world data, we also explore the potential of synthetic training data for downstream tasks.
Next, we tackle the challenge of modeling real-world social behavioral cues. Evidence from social psychology suggests that individuals uniquely adapt their behaviors to different conversation partners to sustain interactions. How can we jointly forecast these mutually dependent future cues of conversation partners? We propose a stochastic meta-learning method that adapts its forecasts to the unique dynamics of a conversation group given example behavior sequences. Thereby, it generalizes to unseen groups in a data-efficient manner by avoiding the need for group-specific models. Further, to facilitate the integration of data-driven and hypothesis-driven research, we propose a post hoc explanation framework for identifying timesteps that are salient to a forecasting model's predictions.
Finally, we contribute to a nuanced perception of social interactions by establishing evidence of multiple conversation floors within a single conversing group, in contrast to the prevailing implicit assumption in the automatic detection of conversation groups. We also develop an instrument for measuring the perceived quality of conversations at the individual and group levels.
Through these research themes, we provide novel contributions to the field of ASI, taking important steps toward the development of socially intelligent machines that can operate effectively in complex real-world settings.","","en","doctoral thesis","","978-94-93330-33-7","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:74dd005a-cef5-4427-84c6-123cf31b5b18","http://resolver.tudelft.nl/uuid:74dd005a-cef5-4427-84c6-123cf31b5b18","Mechanics and thermodynamics of suspended two-dimensional membranes","Liu, Hanqing (TU Delft Dynamics of Micro and Nano Systems)","Steeneken, P.G. (promotor); Verbiest, G.J. (copromotor); Delft University of Technology (degree granting institution)","2023","This thesis provides a comprehensive research of both the mechanics and thermodynamics of suspended two-dimensional (2D) membranes, such as tunable mechanical resonance, membrane deformation, heat transport, phonon scattering, and energy dissipation. These characteristics make nanomechanical resonators, made of a suspended 2D membrane, promising candidates for both fundamental studies and engineering applications. This thesis is composed of eight chapters in total.","Two-dimensional materials; nanomechanics; thermal transport; acoustic phonons; tunability","en","doctoral thesis","","978-94-6469-648-6","","","","","","","","","Dynamics of Micro and Nano Systems","","",""
"uuid:c18b3f07-1bbc-4ef1-9f6d-cc116d8b4cb4","http://resolver.tudelft.nl/uuid:c18b3f07-1bbc-4ef1-9f6d-cc116d8b4cb4","Data-driven Methods to Study Individual Choice Behaviour: with Applications to Discrete Choice Experiments and Participatory Value Evaluation Experiments","Hernández, J.I. (TU Delft Transport and Logistics)","Mouter, N. (promotor); van Cranenburgh, S. (promotor); Delft University of Technology (degree granting institution)","2023","Since its origins in the 1970s, choice modelling has become an important field of study in diverse areas, including transportation, health economics, environmental economics and marketing. Choice modellers have developed several methods to collect and model individual choices. Researchers and policymakers use such methods to understand individual preferences in diverse contexts, derive economic values or predict behaviour.
Over the years, the field of choice modelling has developed in two key areas. Firstly, choice modellers have developed new data collection tools to account for more realistic forms of decision-making. While discrete choice experiments (DCEs) are still popular and highly customisable, they force respondents to choose among mutually exclusive alternatives, which may not reflect how individuals choose in real life. In response, new stated choice (SC) experiments have been proposed to incorporate more realistic forms of decision-making, such as Participatory Value Evaluation (PVE). In a PVE experiment, respondents select a combination of alternatives without surpassing resource constraints. Secondly, while theory-driven models based on utility theory, e.g., random utility maximisation (RUM) or Kuhn-Tucker, are still the norm for modelling choice behaviour, there is a broader recognition that, since individuals' behaviour is ultimately unknown from the analyst's perspective, data-driven methods can help to uncover such behaviour.
Despite the latter, to the author’s knowledge, three methodological and practical challenges are still unresolved in the literature. Firstly, no research has been done to explore the potential of data-driven methods to analyse data from SC experiments outside DCEs, and in particular for PVE experiments, either as complements to improve the specification of choice models or as standalone data analysis methods. Secondly, while data-driven methods for discrete choices (and DCEs) are available in the literature, such methods either sacrifice their flexibility to learn from the data to satisfy consistency assumptions or vice versa. Thus, a method that balances flexibility and consistency assumptions is lacking. Thirdly, there is a lack of software tools to estimate and compare data-driven methods easily and conveniently, hindering their widespread use.
Considering these challenges, this thesis further investigates how data-driven methods can be used to analyse individual choice behaviour from SC experiments, either to complement theory-driven choice models or as alternatives to them, and to develop methodological tools for such purposes, i.e., new models and software. This thesis scopes its research to two specific types of SC experiment: PVEs and DCEs.
To reach the goals of this thesis, five novel studies are proposed. The first study (Chapter 2) introduces the reader to how PVE experiments are conducted in real life and how they are conventionally analysed with theory-driven choice models. The second study (Chapter 3) proposes three procedures based on association rule (AR) learning and random forests (RF) to assist the specification and test the validity of the assumptions of theory-driven choice models for PVE experiments. The third study (Chapter 4) shows how XGBoost and SHAP (a machine learning model and an explainable artificial intelligence method, respectively) can be used to analyse PVE experiment data as an alternative to theory-driven analysis. The fourth study (Chapter 5) proposes a new discrete choice model based on artificial neural networks that balances the flexibility to learn utility functions from the data with consistency with RUM and economic theory. The fifth study (Chapter 6) introduces NP4VTT, a new software tool that provides five nonparametric models to uncover the value-of-travel-time (VTT) distribution from two-attribute-two-alternative DCEs. Together, these studies provide further evidence supporting the use of data-driven methods to analyse individual choice behaviour, and they supply specific methodological tools for such purposes.
This thesis concludes by highlighting that, while the primary research goal and sub-goals were achieved, the relevance of the findings and conclusions of this thesis should be put into perspective. Firstly, using data-driven methods, either to assist choice models or as an alternative to them, leads to “moderate-to-modest” model fit improvements. Consequently, researchers or policymakers interested in using the methods proposed in this thesis for prediction should not expect considerable differences compared with conventional choice models. Secondly, the methods proposed in this thesis provide a considerable number of new insights of behavioural interest. Choice modellers could benefit from these insights to contrast or further assist the development of choice models, while policymakers gain a wide range of new information for targeting decisions to specific policies or individuals. However, researchers should consider how to synthesise all these new insights effectively. Thirdly, this thesis made efforts to make data-driven methods more widely available by, for instance, publishing the studies in open-access journals and, when possible, making code and data publicly available. Nevertheless, there are still conceptual challenges in making these methods more approachable for researchers accustomed to the concepts and structure of the choice modelling community. As a final reflection, while data-driven methods have the potential to help choice modellers increase their understanding of individual choice behaviour, they still require more development (and easier access) to serve as a real alternative to choice models.
A key challenge in doing so is AI systems' behaviour-use interdependence (i.e., the behaviour of a system is related to the manner in which it is used) – a topic underrepresented in the extant literature. We set out to explore this issue, guided by the central research question: ""How can we design a theoretical model that facilitates early simulation of AI systems' behaviour-use interdependence using Design theories?"".
This dissertation presents both theoretical and empirical investigations into the development of such a model. These lay the foundation for what we term the Theoretical Model for Prototyping AI, or PAI model. The PAI model is defined by relationships among abduction, induction, and deduction, which offer a means to support the early simulation of the behaviour-use interdependence. Finally, devising the PAI model allows us to shed light on how Design theories could contribute to the design of better AI systems. It also allows us to extend these theories and identify potential future directions for the field of Design.
Nevertheless, as the number of microsystems within a biomedical device escalates, a pressing need emerges to interconnect these independent microsystems using an approach that meets the constraints imposed by each particular context. Wire bonding, for instance, is one of the most widely known and used methods to establish electrical connections between chips and packages. However, wire-bonded microsystems may be inadequate for applications confined by the available physical space and where aspects such as reliability and biocompatibility are paramount. Specifically deserving attention are the increased footprint and the introduction of protrusions that may jeopardize an effective interface of biomedical devices with biological systems. Therefore, it becomes essential to devise seamless connections between these microsystems for enhanced robustness, electrical performance, compactness, and improved physical conformability to biological structures.
This doctoral research was driven by the increasing demand for microsystem integration alternatives in the biomedical field and the need to develop advanced biomedical devices with improved functionality and performance. Monolithic fabrication was the principal method of establishing a seamless integration between distinct microsystems: integrated circuits—essential for the signal conditioning of transducers—and micro-electromechanical systems—excellent for implementing functionalities at the microscale via precise micromachining of delicate structures on high-quality materials. Two novel biomedical devices were devised to achieve this objective: an organ-on-a-chip system for cell-culture experimentation equipped with an analog-compatible, cost-effective, BiCMOS-based temperature sensor and a stretchable polydimethylsiloxane membrane; and an artifact-resilient optrode optimized for ultralow-noise measurements of infraslow brain activity. The latter benefited from dual-gate, low-noise, p-channel JFETs based on a BiFET technology and deep reactive ion etching on a silicon-on-insulator wafer for micromachining nonrectilinear features on the probe—essential for creating application-oriented solutions that interface better with biological structures.
Both devices were designed based on a unique awareness-oriented co-design methodology that aids the device architect in making design decisions concerning the various process-related hurdles that co-fabrication entails. This methodology, namely “holistic iterative co-design thinking”, offers an iterative co-design process that facilitates the early identification of integration obstacles related to the manufacturing process. One of the key procedures in this methodology is the functional decomposition of a multidimensional complex design problem into a set of individual one-dimensional problems that are less complex to solve. As a result, the (co-)design is iteratively readjusted, significantly saving time and resources.
This dissertation also takes a new standpoint on the existing monolithic fabrication modalities, proposes a new taxonomy, clarifies terminologies, and addresses a novel co-fabrication technique: IC-interlaced-MEMS, employed for cost-effectively co-fabricating the organ-on-a-chip system described in Chapter 4. The IC-interlaced-MEMS is similar to its “sibling” IC-interleaved-MEMS; the distinction lies primarily in their degree of process orthogonality. While the IC-interleaved-MEMS benefits from fully orthogonalizing process steps between the IC and MEMS domains, the IC-interlaced-MEMS trades orthogonality for process simplification and an enhanced lithographic pipeline workflow. These benefits promise to facilitate the construction of next-generation biomedical devices that interact with biological systems via specialized, large-area transducers.","monolithic fabrication; microsystem integration; integrated circuits; micro-electromechanical systems; organs-on-a-chip; optrodes; holistic co-design methodology","en","doctoral thesis","","","","","","","","","","","Bio-Electronics","","",""
"uuid:c4145035-8c63-4a84-b367-695d0c63f76f","http://resolver.tudelft.nl/uuid:c4145035-8c63-4a84-b367-695d0c63f76f","Impact Assessment of Train-Centric Rail Signalling Technologies","Aoun, J. (TU Delft Transport and Planning)","Goverde, R.M.P. (promotor); Quaglietta, E. (copromotor); Delft University of Technology (degree granting institution)","2023","As the deployment of new railway technologies requires official approval from local authorities and governmental agencies, a well-specified strategy can foster investment decisions for technological developments and the overall system migration process. Therefore, it is crucial to guarantee that the proposed railway technologies can enhance operational efficiency and ensure safety for passengers and freight transport. Next-generation train-centric signalling systems can provide substantial capacity benefits to railway undertakings. Moving Block (MB), or the European Rail Traffic Management System / European Train Control System Level 3 (ERTMS/ETCS L3), is a radio-based system without any trackside equipment. A Radio Block Centre (RBC) continuously receives the position of each train and issues a Movement Authority (MA) to each of them. In this signalling system, the track is not partitioned into fixed blocks, as is the case in conventional railways; instead, the trains operate under “moving blocks” with a safe distance in front determined by the absolute braking distances. As there is no trackside equipment available, it is vital that trains guarantee their integrity by means of a Train Integrity Monitoring (TIM) system. Virtual Coupling (VC) is one of the most advanced train-centric signalling concepts; it drastically reduces train headways and allows trains to move synchronously together in platoons using Vehicle-to-Vehicle (V2V) communication.
However, several uncertainties arise in the safety validation and feasibility (from the technical, financial and regulatory perspectives) of the VC technology, particularly when compared to MB.
This thesis aims at developing methodological frameworks to support science and industry in analysing, assessing and developing new complex systems and next-generation rail technologies. The proposed frameworks use interdisciplinary approaches to address complex decision-making processes such as market potential analysis, impact assessment and roadmapping. In addition, a novel methodological framework is proposed to evaluate the safety and performance of technologies and complex systems.
We first investigate the market potential and operational scenarios of VC for different segments of the railway market: high-speed, mainline, regional, urban, and freight trains. The research builds on the Delphi method, with an extensive survey to collect expert opinions about the benefits and challenges of VC as well as stated travel preferences in futuristic VC applications. Survey outcomes show that VC train operations can be very attractive to customers of the high-speed, mainline, and regional market segments, with benefits that are especially relevant for freight railways. In particular, customers of regional and freight railways are observed to be unsatisfied with current train services and willing to pay higher fares to avail of a more frequent and flexible service enabled by VC. Operational scenarios for VC are then defined by setting market-attractive service headways and defining characteristics of the rolling stock, infrastructure, and traffic management. A SWOT analysis of the strengths and weaknesses of this concept, together with business opportunities and threats, is carried out. The defined VC future scenario is set to induce a sustainable shift of customers from other travel modes to the railways.
Second, we examine the overall impact of next-generation train-centric signalling systems to identify development strategies to face the forecasted railway demand growth. To this aim, an innovative Multi-Criteria Analysis (MCA) framework is introduced to analyse and compare VC and MB in terms of relevant criteria, including quantitative (e.g., costs, capacity, stability, energy) and qualitative ones (e.g., safety, regulatory approval). We use a hybrid Delphi-Analytic Hierarchy Process (Delphi-AHP) technique to objectively select, combine and weight the different criteria to obtain more reliable MCA outcomes. The analysis has been performed for different rail market segments including high-speed, mainline, regional, urban and freight corridors. The results show a markedly different technological maturity level between MB and VC, given the larger number of vital issues not yet solved for VC. The MCA also indicates that VC could outperform MB for all market segments if it reaches a comparable maturity and safety level. The provided analysis can effectively support the railway industry in strategic investment planning of VC.
Third, developments in the railway industry are continuously evolving, and long-term transition strategies can enable an efficient implementation of signalling technologies that provide a significant increase in network capacity and operational efficiency. VC advances MB signalling by further reducing train separation to less than an absolute braking distance, using V2V communication and cooperative train control within a Virtually Coupled Train Set (VCTS). This chapter proposes a method to develop scenario-based roadmaps based on the SWOT and hybrid Delphi-AHP MCA. Step-changes are identified and initially assessed in a Swimlane diagram, based on priorities and time order collected from stakeholders through a survey and further developed in a workshop. Optimistic and pessimistic scenarios are assessed with regard to various factors and timelines. The step-changes are then enriched with the optimistic and pessimistic scenarios, and associated durations are estimated for each step-change, finally resulting in scenario-based roadmaps that stakeholders can use as an efficient tool to identify and resolve potential criticalities and risks in the deployment of VC, as well as to set up investment and development plans. The approach is applied to deliver implementation roadmaps of VC for different market segments, with a particular focus on mainline railways.
Fourth, although MB and VC rail signalling will change the current train operation paradigm by migrating vital equipment from trackside to onboard to reduce train separation and maintenance costs, their actual deployment is constrained by the need for methods to identify configurations that can effectively guarantee safe train movements even under degraded operational conditions. In this thesis, we analyse the effectiveness of MB and VC in safely supervising train separation under nominal and degraded conditions by using an innovative approach that combines Fault Tree Analysis (FTA) and Stochastic Activity Networks (SAN). An FTA model of unsafe train movement is defined for both MB and VC, capturing functional interactions and cause-effect relations among the different signalling components. The FTA is then used as a basis to apportion the signalling component failure rates needed to feed the SAN model. Effective MB and VC train supervision is analysed by means of SAN-based simulations in the specific scenario of an error in the Train Position Report (TPR) for five rail market segments featuring different traffic characteristics, namely high-speed, mainline, regional, urban and freight. Results show that the overall approach can support infrastructure managers, railway undertakings, and rail system suppliers in investigating the effectiveness of MB and VC in safely supervising train movements in scenarios involving different types of degraded conditions and failure events. The proposed method can hence support the railway industry in identifying effective and safe design configurations of next-generation rail signalling systems.
In summary, this thesis provides multiple scientific contributions to train-centric rail signalling technologies by developing several methodological frameworks to support decision-making towards the development of complex railway systems. With the rapid growth of railway demand, this thesis serves as guidance for practitioners to develop more advanced transportation systems while ensuring an improved evaluation of safety and performance.","Railway signalling; Virtual Coupling; Moving Block; Impact assessment; Roadmapping; Modelling","en","doctoral thesis","TRAIL Research School","978-90-5584-333-6","","","","","","2023-09-29","","","Transport and Planning","","",""
"uuid:5097011c-5898-4fe9-bafb-081f351c6fd1","http://resolver.tudelft.nl/uuid:5097011c-5898-4fe9-bafb-081f351c6fd1","The Role of Micellar Nanowires in Diagnosing Tropical Diseases","Hubbe, H.M.K. (TU Delft ChemE/Advanced Soft Matter)","Mendes, E. (promotor); Boukany, P. (promotor); Staufer, U. (promotor); Delft University of Technology (degree granting institution)","2023","The tropical mosquito-transferred virus diseases Dengue, Zika and Chikungunya cause hundreds of millions of infections worldwide every year. Their occurrences are often linked, partly due to their common transfer vector – the mosquito – and they are therefore of interest as a group. Areas of prevalence include vast rural areas of low- to middle-income countries such as Indonesia, often with sub-optimal access to medical facilities. An affordable, reliable and easy-to-use point-of-care test would assist in monitoring outbreaks, allowing earlier countermeasures such as mosquito extermination or the setup of temporary on-site medical aid, and would also help enable timely treatment in the often far-away hospitals.","","en","doctoral thesis","","","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:db47aff4-192e-4557-ada9-22e8b7e5be07","http://resolver.tudelft.nl/uuid:db47aff4-192e-4557-ada9-22e8b7e5be07","Quantum communication using phonons: Towards a quantum network using high frequency mechanical oscillators","Fiaschi, N. (TU Delft QN/Groeblacher Lab)","Groeblacher, S. (promotor); Verhagen, E. (promotor); Delft University of Technology (degree granting institution)","2023","Quantum communication refers to the field of science that studies the ability to connect separated quantum devices via coherent channels, i.e. via buses that coherently maintain the encoded information. The importance of this task lies on multiple levels: from secure communications to scaling quantum computers. The first can be of fundamental importance in moments like government elections or bank transactions, and even in securing the right to privacy of individuals. The latter could open the way to, for example, faster and more precise solutions to problems in chemistry (for drug development) or materials science (more environmentally friendly solar cells). At this moment, quantum communication and computing are at a similar stage to that of the early computers: the machines have very little connectivity and require large spaces and specialists to be operated. In a few years, we can expect these systems to be increasingly interconnected, scaled, and made easier to use. In this thesis, we present work done to create quantum channels using high-frequency mechanical oscillators. In chapter 1 we present recent progress in the field of quantum communication achieved with several types of systems, both over long and short distances. We also introduce how high-frequency mechanical oscillators could play an important role in this research area. We discuss the current challenges and limitations and possible future developments. In chapter 2 we perform an optomechanical quantum teleportation experiment.
In this work, we teleport a polarization-encoded telecom photon onto a quantum memory, made of two single-mode mechanical oscillators in a dual-rail configuration. This work is a step towards entanglement swapping (also referred to as ’teleportation of an entangled state’) and represents a proof of principle towards quantum repeaters (using the scheme proposed by Duan, Lukin, Cirac, and Zoller - the DLCZ scheme).
In chapter 3 we report the first experiment done with the multimode mechanical devices. These devices are formed by a single-mode optomechanical cavity coupled to a single-mode mechanical waveguide (terminated with a phononic mirror). We show that the non-classical information created in the optomechanical cavity can be guided on chip in the mechanical waveguide, using the cross-correlation between the scattered photons as a witness. However, the non-uniform spacing between the mechanical modes severely lowers the maximum value of the non-classical correlation measured. This was greatly improved with the new design of the device. With this design, we are able to entangle two traveling phonons in the mechanical waveguide, shown in chapter 4. In this way, we show that the traveling phonons can be used to distribute quantum entanglement on-chip, a first step towards connecting quantum devices on a short scale. In chapter 5, we measure over time the frequency jitter of two spectrally close mechanical modes of the same device. We demonstrate that the frequency diffusion of the modes is not correlated in time, and so the coherence length of the traveling information will ultimately be limited by the jitter. This result shows the importance of performing a detailed study of the surface defects.
Lastly, in chapter 6 we summarize the findings of these experiments and we discuss the future developments of the field.","","en","doctoral thesis","","","","","","","","2023-10-05","","","QN/Groeblacher Lab","","",""
"uuid:9263fa46-5a34-4a1d-b693-8b547daee065","http://resolver.tudelft.nl/uuid:9263fa46-5a34-4a1d-b693-8b547daee065","Sensing the Cultural Significance with AI for Social Inclusion: A Computational Spatiotemporal Network-based Framework of Heritage Knowledge Documentation using User-Generated Content","Bai, N. (TU Delft Heritage & Architecture)","Pereira Roders, A. (promotor); Nourian, Pirouz (copromotor); Delft University of Technology (degree granting institution)","2023","Social Inclusion has been growing as a goal in heritage management. Whereas the 2011 UNESCO Recommendation on the Historic Urban Landscape (HUL) called for tools of knowledge documentation, social media already functions as a platform for online communities to actively involve themselves in heritage-related discussions. Such discussions happen both in “baseline scenarios” when people calmly share their experiences about the cities they live in or travel to, and in “activated scenarios” when radical events trigger their emotions. To organize, process, and analyse the massive unstructured multi-modal (mainly images and texts) user-generated data from social media efficiently and systematically, Artificial Intelligence (AI) is shown to be indispensable. This thesis explores the use of AI in a methodological framework to include the contribution of a larger and more diverse group of participants with user-generated data. It is an interdisciplinary study integrating methods and knowledge from heritage studies, computer science, social sciences, network science, and spatial analysis. AI models were applied, nurtured, and tested, helping to analyse the massive information content to derive the knowledge of cultural significance perceived by online communities. The framework was tested in case study cities including Venice, Paris, Suzhou, Amsterdam, and Rome for the baseline and/or activated scenarios. 
The AI-based methodological framework proposed in this thesis is shown to be able to collect information in cities and map the knowledge of the communities about cultural significance, fulfilling the expectation and requirement of HUL, useful and informative for future socially inclusive heritage management processes.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-749-4","","","","","","2023-10-01","","","Heritage & Architecture","","",""
"uuid:791dca4f-d05f-4bff-b947-b6caa179d171","http://resolver.tudelft.nl/uuid:791dca4f-d05f-4bff-b947-b6caa179d171","Mass Housing Neighbourhoods and Urban Commons: Values-based Governance and Intervention Framework for New Belgrade Blocks","Dragutinovic, Anica (TU Delft Heritage & Architecture)","Pottgiesser, U. (promotor); Nikezić, Ana (promotor); Quist, W.J. (copromotor); Delft University of Technology (degree granting institution)","2023","The neglect of significance, deterioration, and consequent devaluation of post-war mass housing neighbourhoods are major challenges, both in the field of heritage conservation and management and in urban planning and design. The reasons for their deterioration are diverse, and interlinked with the socio-cultural discourse as well as the spatial characteristics of these neighbourhoods. This doctoral research addresses the challenges of those neighbourhoods, focusing on New Belgrade Blocks, one of the largest modernist post-war mass housing areas in Europe. The case is particularly important for the discourse on mass housing and ‘ordinary’ heritage management, as it encapsulates concepts, policies and practices developed in Yugoslavia, which are relevant to the contemporary discussions on community-driven approaches for urban planning and governance and participation in heritage studies. The doctoral thesis presents this legacy and reveals causalities and relations of spatial and socio-political aspects and policies, but also planning and design principles. Furthermore, it empirically studies and evaluates the blocks in the contemporary context, together with society (involving citizens), and within the current legal and organisational conditions. Finally, it develops a framework for the enhancement of the blocks, addressing current and future societal and users’ needs, while preserving the identity and values of the blocks. 
The doctoral thesis provides different findings and perspectives, contributing to the current knowledge on integrated conservation, urban planning and governance of urban heritage, and in particular mass housing neighbourhoods. It shows co-dependence of those fields and offers an integrative and cross-disciplinary approach.","","en","doctoral thesis","","978-94-6366-735-7","","","","","","","","","Heritage & Architecture","","",""
"uuid:ea94239f-5e95-4705-9deb-32196d74daaa","http://resolver.tudelft.nl/uuid:ea94239f-5e95-4705-9deb-32196d74daaa","On developers’ practices for hazard diagnosis in machine learning systems","Balayn, A.M.A.","Houben, G.J.P.M. (promotor); Bozzon, A. (promotor); Delft University of Technology (degree granting institution)","2023","Machine learning (ML) is an artificial intelligence technology that has great potential for adoption in various sectors of activity. Yet, it is now also increasingly recognized as a hazardous technology. Failures in the outputs of an ML system might cause physical or social harm. Besides, the development and deployment of an ML system itself are also argued to be harmful in certain contexts.
Surprisingly, these hazards persist in applications where ML technology has been deployed, despite the increasing amount of research performed by the ML research community. In this thesis, we take on the challenges of understanding the reasons for the persistence of hazardous output failures and of hazardous development and deployment processes in practice, and of developing solutions to further diagnose these hazardous failures (especially in the system’s outputs). To that end, we investigate the nature of the potential gap between research and the practices of the developers who build and deploy the systems. To do so, we survey major related ML research directions, surface developers’ practices and challenges, and search for types of (mis)alignment between theory and practice. There, among others, we find a lack of technical support for ML developers to identify the potential failures of their systems. Hence, we then tackle the development and evaluation of a human-in-the-loop, explainability-based failure diagnosis method and user interface for computer vision systems...
two parts: the power electronic converter and the smart charging control, including battery degradation.
A. Power Electronics
In this thesis a modular DC-integrated multi-port converter is developed. The DC integration reduces the number of power converters, thereby reducing costs while increasing efficiency and power density. All converter ports are developed for bidirectional operation to maximize flexibility. A two-level DC-AC converter is used for the bidirectional AC grid connection. Next, a 4-phase interleaved flyback converter is used for isolated EV charging. Finally, two interleaved four-switch buck-boost (FSBB) converters are used for both the PV and BES ports. All DC-DC converters utilize quasi-resonant boundary conduction mode (QR-BCM), combined with silicon carbide semiconductors, to achieve efficiencies above 99%. A novel control method for the interleaved FSBB converter is proposed to enable multi-mode QR-BCM operation. An experimental comparison with three other soft-switching modulation schemes shows that the proposed modulation and control achieve the highest efficiency (up to 99.5%) with little to no compromise in power density and control complexity.
B. Smart Charging
Next, a two-level smart charging structure is proposed to utilize the flexibility obtained from the multi-directional power electronic hardware. The first level is a non-linear programming (NLP) model that optimizes the charging powers of the EV and BES in a moving-horizon context to minimize the operational costs, including primary frequency control market participation and battery degradation. To minimize battery degradation, a literature survey has been conducted on lithium-ion ageing mechanisms and how to model them. Based on this survey, the best-suited degradation model is chosen and integrated into the NLP model. The second level of the proposed smart charging structure recalculates the setpoints based on grid frequency deviations and PV forecasting errors. Both the theoretical and experimental results show that the proposed control method is effective in reducing the lifetime system costs. In combination with optimal sizing of the components, the total lifetime system costs can be reduced by up to 460% compared to conventional non-optimal charging methods.","Energy Storage; Smart charging; Power electronic converter","en","doctoral thesis","","978-94-6366-737-1","","","","","","2023-10-04","","","DC systems, Energy conversion & Storage","","",""
"uuid:f51b273d-eac8-495b-b692-919eb54b9974","http://resolver.tudelft.nl/uuid:f51b273d-eac8-495b-b692-919eb54b9974","Ultrabroadband coherent Raman spectroscopy for reacting flows","Mazza, F. (TU Delft Flight Performance and Propulsion)","Colonna, Piero (promotor); Bohlin, G.A. (copromotor); Delft University of Technology (degree granting institution)","2023","The present dissertation covers the development of ultrabroadband femtosecond/picosecond coherent Raman spectroscopy (CRS) to measure temperature and species concentrations in gas-phase chemically reacting flows.
Since its first demonstration in 1965, CRS has been widely employed as a non-linear optical spectroscopic technique to quantify scalars in gas-phase chemically reacting flows, and it is presently regarded as a benchmark for measuring temperature and concentrations of major species in combustion environments. The commercial availability of ultrafast regenerative laser amplifiers has brought forth an astounding number of advances over the past ten years, with the development of time-resolved CRS techniques able to perform measurements on a timescale shorter than that of molecular collisions in gas-phase media. Hybrid femtosecond/picosecond (fs/ps) CRS in particular represents the current state of the art for gas-phase thermometry with unprecedented accuracy and precision, achieved with remarkable spatial and temporal resolution. The high peak power provided by amplified fs laser systems enables spectroscopy to be realised in one- and two-dimensional imaging configurations, acquiring single-shot images of the relevant scalar fields. Furthermore, the broad spectral bandwidth of fs laser pulses allows for a great simplification of the fs/ps CRS instrument. In two-beam fs/ps pure-rotational CRS, a single broadband fs laser pulse coherently excites the whole rotational energy manifold of the target molecules, resulting in the coherent scattering of a spectrally narrow ps probe pulse. Moreover, the introduction of spectral broadening techniques prompted the development of ultrabroadband fs/ps CRS, where a single temporally-compressed supercontinuum pulse can excite, in principle, all the Raman-active modes of the target molecules. Ultrabroadband fs/ps CRS thus allows for the simultaneous investigation of the rotational and vibrational motion of all the major species present in the probed volume, and could become the laser diagnostic tool for scalar determination in gas-phase chemically reacting flows, both in thermal equilibrium and in non-equilibrium conditions. For this to become a reality, however, a robust experimental protocol is needed for the implementation of ultrabroadband fs/ps CRS, one which could be reliably employed behind the thick optical windows present in many practical experiments, such as those involving pressurised combustors and enclosed chemical reactors.
In this respect, the present thesis revolves around two main experimental developments. The first concerns the implementation of fs laser-induced filamentation as the supercontinuum generation mechanism to perform ultrabroadband fs/ps CRS. The research demonstrated that fs laser-induced filamentation can be employed in situ to compress the excitation pulse directly behind thick optical windows and inside the chemically reacting flow under study. This ultrabroadband coherent light source is employed throughout the present research to perform single-shot fs/ps CRS measurements over a spectral region ranging from ∼500 to 2000 cm-1, the so-called ""vibrational fingerprint region"". Single-shot detection of four major combustion species –hydrogen, oxygen, carbon dioxide, and methane– is demonstrated in this region of the Raman spectrum, and fs/ps CRS thermometry based on each one of them is validated in a number of laboratory flames. The influence of the combustion environment on the non-linear optical phenomena underpinning fs laser-induced filamentation and on the resulting pulse self-compression is furthermore investigated, evaluating the impact of the local composition and temperature of the gas-phase optical medium. The second experimental advancement addresses the need for an accurate quantification of the resulting spectral excitation bandwidth, with the development of a novel CRS experimental protocol. The conventional protocol entails the measurement of the nonresonant (NR) CRS signal ex situ in a non-resonant gas (typically argon), sequential to the CRS experiment, to map the spectral excitation profile. The novel protocol, on the contrary, is based on the generation of the NR CRS signal in situ in the combustion environment, simultaneous to that of the resonant CRS signal, thus removing a source of systematic bias in the spectral referencing. 
In order to practically implement this protocol, a polarisation-sensitive coherent imaging spectrometer is developed, which can simultaneously record the cross-polarised resonant and NR CRS signals in two distinct detection channels. The polarisation angle required to generate the resonant and NR CRS signals with orthogonal polarisation is theoretically determined, and the same angle is proven to realise the in situ referencing of any completely depolarised Raman transition. This referencing protocol is first applied to pure-rotational CRS thermometry on N2 and O2 in the pure-rotational region of the Raman spectrum, up to ∼500 cm-1. Thereupon the protocol is employed to realise ultrabroadband CRS on H2, whose pure-rotational spectrum spans more than 1500 cm-1 at flame temperatures. The adoption of the in situ referencing protocol proves essential to perform accurate H2 CRS thermometry behind the thick optical window. The novel protocol is also demonstrated on the ro-vibrational Raman spectrum of the second vibrational mode (𝜈2) of CH4, which is completely depolarised, as are the Raman spectra associated with the least symmetric vibrations of more complex polyatomic molecules (e.g. heavier hydrocarbons). In this respect, ultrabroadband fs/ps CRS with in situ referencing of the spectral excitation efficiency could be employed not only to perform accurate thermometry in chemically reacting flows, but also to measure the concentrations of all the major molecular species in the probed volume. In parallel to these experimental developments, the present research also involves the development of time-domain models for the pure-rotational and ro-vibrational CRS signals detected in the spectral window up to 2000 cm-1. 
In particular, the CH4 𝜈2 model is, to the best of the author’s knowledge, the first of its kind to include more than 10 million spectral lines, proving the suitability of this modelling approach for complex polyatomic molecules, which could pave the way to the future application of quantitative ultrabroadband fs/ps CRS to investigate a broader set of chemically reacting flows. All in all, the results collected in the present dissertation provide a basis for the direct use of ultrabroadband fs/ps CRS for scalar measurements in numerous and diverse practical applications in the applied science and engineering domains. The possibility of simultaneously measuring temperature and the concentrations of major species in chemically reactive flows is paramount to understanding the physical and chemical processes at the base of many propulsion and power generation technologies. To name one, the in situ generation of the compressed excitation pulse provides a straightforward path to the use of ultrabroadband fs/ps CRS to perform spatially-resolved measurements of all the relevant scalar fields in high-pressure combustion chambers. On the other hand, the ability to perform quantitative spectroscopy on complex polyatomic molecules is of great interest to many chemical engineering platforms, such as chemical reactors for the reforming of CH4 into commodity hydrocarbons and carbon-neutral H2.","coherent Raman spectroscopy; gas-phase thermometry; time-resolved spectroscopy; femtosecond laser-induced filamentation; laser diagnostics; ro-vibrational spectroscopy; chemically reacting flows","en","doctoral thesis","","978-94-6366-725-8","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:0fbfd624-39a0-45f7-8046-c90e854baca6","http://resolver.tudelft.nl/uuid:0fbfd624-39a0-45f7-8046-c90e854baca6","Anaerobic Protein Degradation for Resources Recovery from Nitrogen-Loaded Residual Streams","Deng, Z. (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); Spanjers, H. (promotor); Delft University of Technology (degree granting institution)","2023","The demand for reactive nitrogen, i.e., ammonia (NH3), is constantly growing as the global population grows, especially in the nitrogen (N) fertiliser production sector. Simultaneously, the reactive nitrogen in residual streams, i.e., mainly ammonium (NH4+) and nitrate (NO3-), has caused serious environmental issues, e.g., eutrophication and species diversity loss. NH3 is produced from non-reactive nitrogen gas (N2) by means of the energy-intensive Haber-Bosch process. Typically, reactive nitrogen is converted back to the non-reactive N2 by nitrification and denitrification in wastewater treatment plants at the cost of energy. To reduce the overall energy demand and reduce the pool of reactive nitrogen in the environment, a potential solution can be the recovery and reuse of NH3 and NH4+ (Total Ammoniacal nitrogen, TAN) from N-loaded residual streams. Ongoing TAN recovery research has mainly focused on the efficiency of different available technologies from the perspective of a specific application of the recovered TAN. One important aspect, the availability of N-loaded residual streams and their compositions, is overlooked: there is a lack of identification and characterisation of the potential streams for TAN recovery…","","en","doctoral thesis","","978-94-93353-16-9","","","","","","","","","Sanitary Engineering","","",""
"uuid:83e0a177-5c50-4e1e-9c49-afbf1f0d6073","http://resolver.tudelft.nl/uuid:83e0a177-5c50-4e1e-9c49-afbf1f0d6073","Multi-Level and Learning-Based Model Predictive Control for Traffic Management","Sun, D. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Jamshidnejad, A. (copromotor); Delft University of Technology (degree granting institution)","2023","This thesis focuses on the management and control of traffic networks, including urban networks and freeway networks, in which we aim to reduce traffic congestion by minimizing the total time spent by all the vehicles in the network, and also consider green mobility by minimizing the total emissions produced by the vehicles. In this thesis, we have addressed the challenges of model predictive control (MPC) for traffic management in terms of computational complexity and model mismatches by developing several novel MPC-based control frameworks for urban and freeway traffic networks. More specifically, several multi-level and learning-based MPC frameworks are proposed. First, a novel bi-level temporally-distributed MPC framework is proposed to deal with the green urban mobility issue, which usually involves long-term (e.g., one year) emission constraints and is thus computationally intractable due to the large time window of the problem. Second, we employ a grammatical evolution method to generate parameterized control laws for parameterized MPC (PMPC), with application to urban traffic signal control. Third, we develop a novel combined MPC-deep reinforcement learning (DRL) multi-level control framework, in which the MPC module provides a basic control performance at a lower frequency based on a prediction model, and the DRL module works at a higher frequency to compensate for the model mismatches and external disturbances through learning. Fourth, we propose a synthesis framework of reinforcement learning (RL)-based adaptive PMPC. 
In this framework, all components of the PMPC scheme, such as the cost function, the prediction model, the control law, the constraint set, and the terminal set, can be parameterized and adjusted by a high-level RL agent.","Model Predictive Control (MPC); Reinforcement Learning (RL); Traffic Management; Multi-Level MPC; Learning-Based MPC","en","doctoral thesis","","978-90-5584-335-0","","","","","","","","","Team Bart De Schutter","","",""
"uuid:998441fb-72b3-4a4b-b2bb-f3a5ba06042e","http://resolver.tudelft.nl/uuid:998441fb-72b3-4a4b-b2bb-f3a5ba06042e","Derivative-free Equilibrium Seeking in Multi-Agent Systems","Krilašević, S. (TU Delft Team Sergio Grammatico)","Grammatico, S. (promotor); De Schutter, B.H.K. (promotor); Delft University of Technology (degree granting institution)","2023","Both societal and engineering systems are growing in complexity and interconnectivity, making it increasingly challenging, and sometimes impossible, to model their dynamics and behaviors. Moreover, individuals or entities within these systems, often referred to as agents, have their own objectives that may conflict with one another. Examples include various economic systems where agents compete for profit, wind farms where upwind turbines reduce the energy extraction of downwind turbines, unwanted perturbation minimization in extremum seeking control, and cooperative source-seeking robotic vehicles. Even though participants have access to only limited observable information, it is crucial to ensure that they are all content with the outcomes of these interactions. In this thesis, we choose to examine these problems within the framework of games, where each agent has its own cost function and constraints, and all costs and constraints are interconnected. Since the notion of an optimum in multi-agent problems is difficult to define, we often seek to find a Nash equilibrium, i.e., a set of decisions from which no agent has an incentive to deviate.
This thesis primarily explores the development of Nash equilibrium seeking algorithms for scenarios where agents' cost functions are unknown and can only be assessed through measurements of a dynamical system's output, referred to as the zeroth-order (derivative-free) information case. We specifically concentrate on scenarios where partial derivatives can be estimated from these measurements and subsequently integrated into a full-information algorithm. Existing approaches exhibit significant drawbacks, such as the inability to handle shared constraints, stringent assumptions on the cost functions, and applicability limited to agents with continuous dynamics.","Nash equilibrium seeking; Derivative-free; hybrid system","en","doctoral thesis","","","","","","","","","","","Team Sergio Grammatico","","",""
"uuid:6103d9e9-5c2e-487a-b77c-ae0ce9cb12f1","http://resolver.tudelft.nl/uuid:6103d9e9-5c2e-487a-b77c-ae0ce9cb12f1","DC trolleygrids as sustainable, multi-functional, and multi-stakeholder electrical infrastructures: Thinking outside of the bus","Diab, I. (TU Delft DC systems, Energy conversion & Storage)","Bauer, P. (promotor); Chandra Mouli, G.R. (copromotor); Delft University of Technology (degree granting institution)","2023","Electricity grids are increasingly congested as the world moves toward a sustainable, electrified future. Expanding and upgrading these infrastructures is costly and challenging on a technical and administrative level and even redundant when considering that some sub-parts of these grids, such as electric transportation networks, are massively underutilized. This thesis investigates, in four parts, the potential of one of these high-power infrastructures, the trolleybus grid, to become a sustainable, multi-functional, and multi-stakeholder backbone to urban power grids by integrating renewables, storage, and EV chargers, all while remaining ready for their next generation of sophisticated, high-power transport fleets such as In-Motion-Charging buses...","trolleybus; trolleygrid; transportation; electric mobility; solar systems; energy storage; Sustainable transportation","en","doctoral thesis","","978-94-6384-481-9","","","","","","2024-12-01","","","DC systems, Energy conversion & Storage","","",""
"uuid:7d3d4287-271d-4bbb-9b73-39674a0af60f","http://resolver.tudelft.nl/uuid:7d3d4287-271d-4bbb-9b73-39674a0af60f","Biopolymer nanocomposites: lessons from structure-property relationships","Pereira Espíndola, S. (TU Delft ChemE/Advanced Soft Matter)","Picken, S.J. (promotor); Zlopasa, J. (copromotor); van Loosdrecht, Mark C.M. (promotor); Delft University of Technology (degree granting institution)","2023","The urgent need to address sustainability within material science, driven by global environmental concerns over pollution, climate change, and resource scarcity, has led to a growing interest in bio-based materials. This thesis explores the potential of biopolymers as alternatives to non-renewable resources, specifically those derived from renewable and residual sources. These biomacromolecules can be harvested from plants, algae, microorganisms, and animal products, or extracted from the process waste of agricultural and urban cycles. In particular, the high stiffness (Young's modulus) exhibited by certain biopolymers, often surpassing that of standard engineering polymers, motivates this investigation. However, the biopolymers' uncontrolled chemical structure and morphology still inhibit their application in many industries. Inspired by the unique structures, properties, and functions found in biological systems, this research aimed to develop (solid-state) structure-property relationships for relevant biopolymer systems in order to predict final material properties (physicochemical, thermal, mechanical, barrier). The focus on structure-property guidelines is developed through systematic investigations of the intricate architecture and interactions found in biopolymers and bioinspired nanocomposites. 
The ultimate goal is to design bio-based materials with superior performance, such as low weight, high stiffness and strength, and added functionality, at competitive cost and with improved sustainability.","biopolymers; nanocomposites; structure-property","en","doctoral thesis","","978-94-6384-487-1","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:e45d99b7-4ea5-4e88-9e23-dabd5367aecd","http://resolver.tudelft.nl/uuid:e45d99b7-4ea5-4e88-9e23-dabd5367aecd","Comfort Experience in Air Travel: Research Methods and Design","Yao, X. (TU Delft Emerging Materials)","Vink, P. (promotor); Song, Y. (promotor); Delft University of Technology (degree granting institution)","2023","Comfort, which is defined as “a pleasant state or relaxed feeling of a human being in reaction to its environment”, plays an important role in air travel both for passengers and airlines. However, the combination of strict safety regulations, limited space, and a large variation in passenger body types makes aircraft cabins challenging environments in which to create comfort. The studies presented in this PhD thesis focus on different aspects of the comfort experience in air travel and are intended to help aircraft interior designers, as well as airlines, develop creative design solutions for in-flight comfort issues in the future...","","en","doctoral thesis","","978-94-6366-736-4","","","","","","","","","Emerging Materials","","",""
"uuid:9a149309-2339-40c5-9151-032d70b91dbe","http://resolver.tudelft.nl/uuid:9a149309-2339-40c5-9151-032d70b91dbe","Engineering and integration of pathways for anaerobic redox-cofactor balancing in yeast","van Aalst, A.C.A. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Daran, J.G. (promotor); Delft University of Technology (degree granting institution)","2023","The production of ethanol by the yeast Saccharomyces cerevisiae remains the process in industrial biotechnology with the largest product volume (ca. 100 million litres annually in 2022). Carbon losses occur due to the formation of biomass and glycerol, which can account for at least 8% of the product. Under anaerobic conditions, formation of yeast biomass and glycerol are coupled via redox-cofactor balances, as a net generation of NADH during biomass formation needs to be compensated by NADH-dependent formation of glycerol from sugar. This thesis discusses redox-engineering strategies for maximizing ethanol yields on substrate.","Redox engineering; Ethanol production; Saccharomyces cerevisiae; co-culture; Metabolic engineering; anaerobic fermentation; Electrons; industrial biotechnology","en","doctoral thesis","","978-94-6483-352-2","","","application external","","","","","","BT/Industriele Microbiologie","","",""
"uuid:34bcaaa6-ec2b-44fd-ad25-387236568911","http://resolver.tudelft.nl/uuid:34bcaaa6-ec2b-44fd-ad25-387236568911","Shadow-wall lithography as a novel approach to Majorana devices","van Loo, N. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Wimmer, M.T. (promotor); Delft University of Technology (degree granting institution)","2023","The development of quantum computers is perhaps one of the most exciting innovations of our time. The most investigated quantum computers, however, suffer from the fact that quantum information is lost due to interaction between the quantum bits and their environment. As a radically different approach, it has been proposed that one can instead use topological phases of matter to create quantum bits that are immune to environmental noise. The most prominent example of such a topological state of matter is the topological superconductor, which hosts Majorana zero modes. These quasiparticles can be used to store information non-locally, and their non-abelian exchange statistics allow for the implementation of protected quantum gates. Their postulated appearance at the edges of a one-dimensional semiconductor coupled to a superconductor has been a hot research topic over the last decade. Yet, their claimed observation in condensed-matter experiments has not been unequivocal. While the experiments produce some of the signatures of Majorana zero modes, they often exhibit significant deviations from the theory. The main obstacle here is that one of the fundamental properties of Majorana zero modes, namely their non-locality, has not yet been accessible due to the design of these experiments.
In this thesis, we have developed shadow-wall lithography as a novel approach to Majorana devices. One of the key concepts of this technique is to perform the majority of the required nanofabrication steps before the formation of the semiconductor-superconductor hybrid, which significantly improves the performance of the device. Moreover, the shallow-angle deposition of a thin superconducting film allows the hybrid section to be grounded. This facilitates the simultaneous investigation of both ends of the device, enabling the search for the predicted end-to-end correlation of the Majorana zero modes. We extend the fabrication improvements by also considering the material used in these devices. For their operation, a magnetic field is required, which quenches the superconductivity in the superconducting film due to both orbital and paramagnetic effects. The paramagnetic effects are suppressed through the use of Pt impurities, which provide spin-orbit scattering centers in the film. For the thinnest films, we are able to extend the critical magnetic field up to 7 T. We further demonstrate that the inclusion of Pt does not prevent the quantum states in the semiconductor from obtaining a Zeeman splitting. We combine the improved nanofabrication technique and material developments with novel measurement schemes, such as the use of radio-frequency reflectometry and non-local conductance spectroscopy. The former allows us to map out large regions of the available experimental parameter space while looking for the predicted end-to-end correlation of zero-energy states. We demonstrate that such correlations are lacking in these devices, indicating that they do not exhibit an extended topological superconducting phase with Majorana zero modes at their ends. With non-local measurements, we instead focus on the induced superconducting gap in the bulk of such a hybrid. 
We demonstrate a significant tunability through electrostatic gating and show a closing and reopening of the induced gap, though the absence of zero-bias peaks also indicates that this is not due to an extended topological phase transition. These experiments strongly suggest that the realization of a topological superconductor in semiconductor-superconductor hybrids requires monumental efforts in the development of better materials.
While the bulk of this thesis is devoted to the creation of topological superconductivity, the final chapters take an alternative approach. We demonstrate that these hybrids possess all the necessary ingredients to form a topological superconductor by using the shadow-wall lithography technique to realize an artificial Kitaev chain. By coupling two quantum dots via a gate-tunable proximitized quantum state in the hybrid segment, we show that the system can be brought to a sweet spot that hosts unpaired Majorana zero modes. To demonstrate the versatility of the developed platform, we finally move away from the study of Majorana zero modes and instead focus on the superconducting diode effect. We show that the tunability of the superconducting properties in a hybrid segment can be used to control the presence and magnitude of the superconducting diode effect in short nanowire Josephson junctions. These two chapters offer an inspiring perspective on the future of semiconductor-superconductor hybrid devices.
These examples demonstrate the need for reflecting on the status and significance of a term that is so widely used in academia and across the science-policy divide, but whose meaning and value are so fiercely disputed. Given that resilience is already informing many large-scale and significant societal efforts, they also raise the need to ask under which conditions such efforts could be just.
This work uses philosophical perspectives from ethics, metaethics and justice theory for revisiting recent debates on the meaning and normative status of this concept, with special emphasis on understanding the normative guidance that diverse interpretations of resilience can offer and disclosing the implications that this may have for achieving justice in and through resilience-based interventions.","resilience; risk; normativity; justice; climate adaptation","en","doctoral thesis","","978-94-6366-718-0","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:98b24420-219b-438e-a28c-4b12e5f450e6","http://resolver.tudelft.nl/uuid:98b24420-219b-438e-a28c-4b12e5f450e6","Time-lapse monitoring with virtual seismology: Applications of the Marchenko method for observing time-lapse changes in subsurface reservoirs","van IJsseldijk, J.E. (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Slob, E.C. (promotor); Delft University of Technology (degree granting institution)","2023","Monitoring time-lapse changes inside the subsurface is of great significance to many geotechnical applications, such as storage of gasses in underground geological formations. Minute differences in the seismic wavefield between an initial baseline and a subsequent monitor survey have to be detected in order to observe fluid flow inside subsurface reservoirs. This problem becomes even more challenging when the reservoir is situated underneath a series of complex, highly reflective layers. Such an overburden will generate strong multiple reflections that will interfere with the reflections of the target zone. Ideally, a methodology is designed in order to remove these internal multiples to allow a clear view of the reservoir response for time-lapse analysis. The Marchenko method can redatum the seismic wavefield to arbitrary depth levels or points in the subsurface, while accounting for all orders of internal multiple reflections. This method, therefore, has great potential to solve some of the time-lapse issues, as it is able to closely examine specific zones of interest in the subsurface without distortions from surrounding layers. Time-lapse studies are often hampered by irregular or imperfect sampling, whereas the Marchenko method relies on densely sampled, co-located sources and receivers. It is, therefore, important that the Marchenko method is able to handle more complex acquisition geometries. 
This can either be achieved by interpolating the reflection data as a pre-processing step or by correcting for errors inside the Marchenko scheme. Here, point-spread functions are introduced that describe the imperfections in the reflection data. These imperfections distort the focusing and Green’s functions retrieved from the Marchenko method. Next, each iteration of the Marchenko scheme is extended to deblur the imperfect focusing and Green’s functions by multidimensional deconvolution with these point-spread functions. Additionally, a slight modification is required to ensure stability of the new scheme. This new iterative Marchenko scheme is computationally more expensive, but removes all sampling artifacts. Finally, the migrated images of the target zone show significant improvements when using either the new scheme or interpolation as a pre-processing step...","Marchenko; Internal multiples; Time-lapse; Seismic; Reservoir Simulation","en","doctoral thesis","","978-94-6366-698-5","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:4dbd0b57-c40c-4cc2-84a7-e47e000f1437","http://resolver.tudelft.nl/uuid:4dbd0b57-c40c-4cc2-84a7-e47e000f1437","The end of the sectoral approach? Understanding the role of integration in urban water management","Nieuwenhuis, E.M. (TU Delft Sanitary Engineering)","de Bruijn, J.A. (promotor); Cuppen, E.H.W.J. (promotor); Langeveld, J.G. (promotor); Delft University of Technology (degree granting institution)","2023","Urban areas are highly dependent on their urban water systems, which provide essential services such as access to clean drinking water, public health protection, and flood control. Global developments increasingly threaten the provision of these services: changing weather patterns, ongoing urbanization processes, and depleting natural resources lead to environmental and public health issues, and increase the risk of urban flooding.
While traditional urban water systems (i.e., centralized water supply systems, sewer networks, and large-scale wastewater treatment facilities) have significantly contributed to global public health and protected cities from flooding, they are ill-equipped in the face of emerging global developments. For example, traditional systems have a limited ability to cope with extreme climate conditions, have a high net energy consumption, and lead to the deterioration of the environmental quality....","","en","doctoral thesis","","978-94-93353-05-3","","","","","","","","","Sanitary Engineering","","",""
"uuid:a356d348-add2-4995-9392-1b16daa8dbfa","http://resolver.tudelft.nl/uuid:a356d348-add2-4995-9392-1b16daa8dbfa","Pitch-Matched Integrated Transceiver Circuits for High-Resolution 3-D Neonatal Brain Monitoring","Guo, P. (TU Delft Electronic Instrumentation)","Pertijs, M.A.P. (promotor); de Jong, N. (promotor); Delft University of Technology (degree granting institution)","2023","This thesis presents the design and implementation of integrated ultrasound transceivers for use in transfontanelle ultrasonography (TFUS). Two generations of ultrasound transceiver ASICs integrated with PZT transducer arrays intended for TFUS are presented. In the first generation, a novel AFE design that combines an LNA with a continuous TGC function is realized in a bid to mitigate the gain-switching and T/R switching artifacts. In addition, a new current-mode micro-beamforming design based on boxcar integration (BI) is implemented to reduce the channel count within a compact layout. In the second generation, the AFE is derived from the first version, while the design focuses on RX backend circuitry and channel-count reduction, including a passive BI-based µBF merged with a charge-sharing SAR ADC, which digitizes the delayed-and-summed signals, and a subsequent multi-level data link, which concatenates the outputs of four ADCs. In total, a 128-fold reduction in channel count is achieved. The techniques we developed have established the groundwork and removed the initial barriers for an electronics architecture suitable for a wearable 3D TFUS device.","Ultrasound; TFUS; ASIC; TGC; Microbeamforming; On-chip digitization; Channel-count reduction; Wearable Ultrasound","en","doctoral thesis","","978-94-6469-514-4","","","","","","","","","Electronic Instrumentation","","",""
"uuid:d0c09760-c716-4e8f-8574-29ef0fc84977","http://resolver.tudelft.nl/uuid:d0c09760-c716-4e8f-8574-29ef0fc84977","How can something empty be so full: Virus-like particles as next generation vaccines","Kuijpers, L.C. (TU Delft BN/Nynke Dekker Lab)","Dekker, N.H. (promotor); Jakobi, A. (copromotor); Van der Pol, Leo (copromotor); Delft University of Technology (degree granting institution)","2023","Vaccination is the most effective strategy in humanity’s fight against viruses. The concept of vaccination was first proposed by Dr. Edward Jenner in the 18th century, and its efficacy has been proven over time, providing unparalleled protection against viral infections. Large-scale vaccination campaigns have been successful in eradicating diseases such as smallpox in 1980. In recent times, the coronavirus pandemic has brought vaccines to the forefront of public, academic, and industry interest. Apart from conventional live vaccines, which were originally designed and employed by Jenner, there has been a significant expansion in vaccine types, including mRNA, viral vector, attenuated, and inactivated vaccines, each with their own advantages and limitations. Furthermore, virus-like particles (VLPs), a novel class of vaccines, have emerged as a promising alternative to traditional vaccine design strategies, potentially offering improved efficacy and safety. VLPs are multimeric nanoparticles derived from one or more viral structures. They are typically empty or devoid of genetic material, rendering them non-replicative and non-infectious. Despite the proposed absence of genetic material, VLPs possess immune-inducing surface patterns resembling those of the native virus, making them recognizable to the immune system. This unique feature can be exploited for vaccine purposes. 
VLP-based vaccines offer a safer alternative to traditional vaccines and present an opportunity to create vaccines that elicit immunity against specific viral surface structures instead of a single protein. Large-scale production of VLPs can potentially mitigate many of the drawbacks of current vaccines, such as extreme storage conditions (mRNA), vector immunity (vector), reversion (attenuated), and high production costs (inactivated). Therefore, VLP-based vaccines hold great promise in the fight against viral infection, by providing a safer and superior product. The safety and efficacy of VLP-based vaccines have been established through extensive research. At present, the market has numerous VLP vaccines available, with HPV VLP vaccines being the most prominent. Their success has set the standard and paved the way for the development of other VLP-based vaccines. In this work, we demonstrated the feasibility of producing and purifying virus-like particles that closely resemble enterovirus A71 (EV71) and coxsackievirus A6 (CVA6). Both viruses have a positive-sense RNA genome of ~7.4 kilobases (kb), which encodes a 260 kDa polyprotein that is stepwise cleaved into eleven viral proteins. Enteroviruses, and most prominently EV71 and CVA6, are the main causative agents of hand, foot, and mouth disease (HFMD). HFMD is named after the characteristic lesions that develop on the hands, feet, mouth, and buttocks of infected individuals. In severe cases, especially among children, the disease can spread to the central nervous system (CNS), resulting in complications such as aseptic meningitis and encephalitis. By employing VLPs, a multivalent vaccine can be developed to target multiple viral strains simultaneously, providing an opportunity for the prevention and control of HFMD. We utilized the baculovirus expression vector system (BEVS) to produce the enterovirus-like particles. 
For the insect cell lines employed in the BEVS, cell counting is crucial for the maintenance and manipulation of cell cultures. It is a vital aspect of assessing cell viability and determining proliferation rates, which are critical to maintaining the health and functionality of the culture. In Chapter 2, we introduce a machine learning (ML) model based on YOLOv4, capable of performing cell counts with high accuracy (>95%) for Trypan blue-stained insect cells. The model was trained, validated, and tested using images of two distinctly different insect cell lines, Trichoplusia ni (High Five™; Hi5 cells) and Spodoptera frugiperda (Sf9). The model achieved F1 scores of 0.97 and 0.96 for live and dead cells, respectively, demonstrating substantially improved performance over other cell counters. Furthermore, the ML model is versatile, as an F1 score of 0.96 was also obtained on images of Trypan blue-stained human embryonic kidney (HEK) cells that the model had not been trained on. Our implementation of the ML model comes with a straightforward user interface and can process images in batches, which makes it highly suitable for the evaluation of multiple parallel cultures (e.g., in Design of Experiments). Overall, this approach for accurate classification of cells provides a fast, bias-free alternative to manual counting. Previous studies have shown that the expression of the viral P1 structural proteins and the 3CD protease is sufficient to produce enterovirus-like particles in various organisms. However, there has been a lack of optimization based on the interplay between the three most commonly altered infection parameters, namely multiplicity of infection (MOI), viable cell density at the time of infection (VCD), and the infection period (t_inf). In Chapter 3 we addressed this point by using Design of Experiments (DoE) to optimize the production of both EV71 and CVA6 VLPs. 
Our results indicated distinctly different preferences for infection parameters between the two types of VLPs, with EV71 VLP production preferring low MOI, low VCD, and a long infection period, and CVA6 VLP production preferring high MOI, high VCD, and a long infection period. Additionally, we developed a purification process for both VLPs, resulting in yields of 158 mg/l and 38 mg/l of culture volume for purified EV71 and CVA6 VLPs, respectively. These concentrations translate into thousands to tens of thousands of vaccines, highlighting the economic potential of enterovirus-like particles for vaccine purposes. Virus-like particles have been identified as a promising approach for the development of a multivalent vaccine. However, their stability is a major issue due to the significantly lower particle integrity lifetimes compared to inactivated vaccines. In Chapter 4, the VLPs produced using the optimized protocols described in Chapter 3 were subjected to biophysical characterization. We employed multiple biophysical techniques, such as transmission electron microscopy and atomic force microscopy, to elucidate the origins of the reduced VLP stability (on average 1.5-2-fold lower) in comparison to native virions. Contrary to previous work on enterovirus VLPs, this study demonstrates that a substantial portion (31%) of the produced VLPs were able to encapsidate viral RNA (vRNA). Additionally, this work shows that the presence of vRNA in the capsids may not be the primary factor in enterovirus capsid stability. Furthermore, vRNA may not be the sole factor responsible for triggering the stabilizing viral maturation, and other underlying mechanisms may be at play. To achieve stability comparable to that of virions, artificial methods of inducing viral maturation or alternative means of stabilizing the capsids are of the utmost importance for ensuring the success of VLPs as vaccine candidates. 
In Chapter 5, we present a protocol for the simultaneous investigation of RNA synthesis dynamics of hundreds of single polymerases with magnetic tweezers (MT). The protocol encompasses the entire process, starting from RNA construct preparation to quantitative and statistical analysis of the MT measurements of RNA synthesis kinetics. The protocol enables the measurement of hundreds of RNA tethers simultaneously, resulting in the characterization of single-molecule dynamics, which is presented in the subsequent chapter.
Chapter 6 of this dissertation showcases the potential of magnetic tweezers (MT) for the detailed mechanistic characterization of the viral RNA-dependent RNA polymerase (RdRp). By examining the pause dynamics and probabilities of each viral polymerase, we were able to decipher their individual mechanistic properties. In particular, we investigated the effects of the T-1106 triphosphate, a pyrazine-carboxamide ribonucleotide with antiviral properties, on the enterovirus A71 RdRp. Our results indicated that T-1106 incorporation into nascent RNA led to increased pausing and backtracking by the RdRp. Additionally, we identified the backtracked state as an intermediate used by the RdRp for copy-back RNA synthesis and homologous recombination, suggesting that pyrazine-carboxamide ribonucleotides function by promoting template switching and formation of defective genomes. Finally, we demonstrated that MT can screen promising antiviral candidates and indicate the most propitious ones for further development. The detailed mechanistic characterization of viral RdRp dynamics afforded by MT is a promising avenue for identifying and optimizing antiviral therapeutics. Chapter 7 of this dissertation provides concluding remarks and aims to illuminate potential avenues for subsequent studies. This work can serve as a basis for future investigations, not only from a biophysical perspective but also from a biochemical standpoint.
These concepts and vocabulary, it is argued, in both practical and metaphoric sense, should be the starting point of new urban imaginaries for Addis Ababa. Urban planning and housing projections thus, should draw inspiration from these notions, elements, and phenomena. Furthermore, lessons learnt from the trinocular and the findings are presented as new avenues for architectural research in similar, less-known, and complex urban conditions as the sefer of Addis Ababa.
Large scale utilisation of CO2 in the chemical industry is currently limited to a few applications (e.g. synthesis of urea, carboxylic acids, food industry) and generally requires high purity feedstock. Integrated processes that combine CO2 capture from diluted sources (e.g. industrial flue gases, air) and its conversion to value-added chemicals represent a solution to enhance the utilisation of CO2 and mitigate its emissions. CH4 is an abundant hydrocarbon with diversified sources ranging from fossil-based (natural gas, shale gas) to renewable ones (biomass, biogas), which can potentially substitute oil for the synthesis of valuable chemicals and fuels, including higher hydrocarbons. At the moment, however, CH4 utilisation is circumscribed to combustion for heat and energy production or energy-intensive production of H2 and syngas (H2 + CO) via steam reforming, resulting in a high carbon footprint.
In general, the thermodynamic stability of CO2 and CH4 molecules imposes severe limitations on their exploitation as chemical feedstocks, in terms of low conversion efficiencies and limited control over product selectivity. Their efficient conversion requires harsh reaction conditions (high temperatures and pressures, highly chemically reactive substances) at which the stability of the desired products is threatened, resulting in low selectivity. In this scenario, catalysis is essential to identify functional materials and develop new catalytic processes able to maximise the selective conversion of CO2 and CH4 feedstocks to value-added products.
Unsteady-state operation in catalysis is an option to overcome the thermodynamic constraints imposed by conventional steady-state operation. Integrated CO2 capture and conversion, sorption-enhanced reactions, and chemical looping combustion are examples of intrinsically unsteady-state catalytic processes that have demonstrated enhanced performance compared to their steady-state analogues. Moreover, the analysis of the transient catalytic behaviour developed in unsteady-state conditions leads to a deeper understanding of the catalytic processes in terms of the identification of specific reactant-catalyst interactions, the steps involved in product formation, and the mechanisms of catalyst deactivation.
This dissertation deals with the catalytic activation of CO2 and CH4 molecules, targeting their valorisation to important chemical commodities such as CO (syngas) and light hydrocarbons. Unsteady-state catalysis is explored as a means to overcome the thermodynamic constraints associated with the conventional CO2 and CH4 conversion routes....","","en","doctoral thesis","","978-94-6384-468-0","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:ef3e6834-988c-4d61-99f3-94133e005f1d","http://resolver.tudelft.nl/uuid:ef3e6834-988c-4d61-99f3-94133e005f1d","Exploring the potential of yeast mitochondria for synthetic cell research","Koster, C.C. (TU Delft BT/Industriele Microbiologie)","Daran-Lapujade, P.A.S. (promotor); Pronk, J.T. (promotor); Delft University of Technology (degree granting institution)","2023","Building synthetic cells is extremely interesting from a fundamental perspective, as the ability to rationally build viable, dividing, and self-maintaining cells provides knowledge on the minimal requirements to sustain life. This should enhance our understanding of which biological parts are minimally required for a cell to live, and how different cellular functions work and interact with each other. Besides a fundamental understanding of life, synthetic cells can also be applied as synthetic biology tools, for example for the synthesis and delivery of therapeutics, or the production of compounds that cannot be produced with currently available organisms used as cell factories. The overarching goal of the research described in this thesis was to devise a strategy for building genomes for synthetic cells, using the baker’s yeast Saccharomyces cerevisiae. Two methods were explored: the first was building a genome de novo using yeast in vivo assembly, and the second was investigating whether the preexisting minimal genome of S. cerevisiae mitochondria could be expanded. To this end, yeast mitochondrial DNA and RNA were characterized using novel methods, and various strategies for engineering the yeast mitochondrial genome were tested.","Mitochondria; YEAST; Synthetic Biology; synthetic genomics; synthetic cell; arginine","en","doctoral thesis","","978-94-6366-717-3","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:3f379e63-9a3e-4755-974d-4d2099db43ea","http://resolver.tudelft.nl/uuid:3f379e63-9a3e-4755-974d-4d2099db43ea","Tuning Magnetoelastic Transitions in Mn2Sb-based and Fe2Hf-based Magnetocaloric Materials","Shen, Q. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Brück, E.H. (promotor); van Dijk, N.H. (promotor); Delft University of Technology (degree granting institution)","2023","Magnetic refrigeration is based on the magnetocaloric effect (MCE) and has attracted considerable attention due to its potentially higher energy efficiency, environmental friendliness and quietness compared to conventional vapour compression refrigeration. Bringing giant MCE materials with a magnetoelastic transition into commercial applications requires not only insights into the coupling between their magnetism and lattice, but also into the correlation between macroscopic performance and microstructure. In this thesis, the fundamental physical properties, including crystal structure, microstructure, magnetic structure, negative thermal expansion behaviour and the magnetocaloric effect, are studied in Mn2Sb-based intermetallic compounds with an antiferromagnetic-to-ferrimagnetic transition and Fe2Hf-based Laves phase compounds with a ferromagnetic-to-antiferromagnetic transition...","Magnetocaloric effect; magnetoelastic transition; Laves phase; magnetic refrigeration","en","doctoral thesis","","","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:4be5a4e1-7c68-4a28-a1ba-4cff02a9f024","http://resolver.tudelft.nl/uuid:4be5a4e1-7c68-4a28-a1ba-4cff02a9f024","Synthesis of Slender Spatial Compliant Mechanisms with application to passive exoskeletons","Amoozandeh, A. (TU Delft Mechatronic Systems Design)","Herder, J.L. (promotor); van Ostayen, R.A.J. (promotor); Delft University of Technology (degree granting institution)","2023","Industrial passive exoskeletons have been developed for years as a tool to reduce the physical workload of their users. They accomplish this by compensating for the user’s body weight and decreasing fatigue caused by repetitive loads. Despite their advantages, current exoskeletons have drawbacks that make them less convenient for users and thus less common as a supporting tool. The most important of these issues are impeded and reduced movement, posture strain, and increased discomfort. This thesis proposes designs for spatial compliant mechanisms to address these issues in exoskeletons.","Spatial Compliant Mechanisms; Spatially Curved Beams; Passive Exoskeletons; Zero Torsional Stiffness; Compliant Transmission; Anisotropic Variable Stiffness","en","doctoral thesis","","978-94-6366-741-8","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:5c3cc4c4-f38a-43ce-851a-8f758769171a","http://resolver.tudelft.nl/uuid:5c3cc4c4-f38a-43ce-851a-8f758769171a","Probing nanoscale forces of nature in fluid: From pneumatics to biomechanics","Roslon, I.E. (TU Delft Dynamics of Micro and Nano Systems)","Steeneken, P.G. (promotor); Alijani, F. (promotor); Delft University of Technology (degree granting institution)","2023","Nanoscale forces from natural phenomena are hard to measure. It is not that we lack the ability to measure such small forces; on the contrary, nanotechnology offers a wide spectrum of techniques that allow us to sense at this scale. However, many natural systems are subject to a noisy environment or need to be surrounded by liquid to maintain their shape and function, which in itself drastically limits the achievable sensitivity of measurement methods.
Graphene, a single layer of carbon atoms, shows extreme strength and flexibility at the 2D limit of miniaturization. We have rationalized that graphene membranes are an ideal candidate to act as a flexible support for the detection of minute forces in nature that are often hidden behind the veil of environmental noise. Graphene owes its suitability to its ultimate thinness and to its low stiffness combined with a high tensile strength that prevents it from breaking under high tension. The limits of sensitivity can now be pushed further so that nanoscale forces can be measured in liquid - from pneumatic forces of attoliter volumes of gas, down to the level of single living bacteria.
In this thesis the motion of graphene membranes is studied under the influence of external forces. The motion is detected by a reflectometry setup devised for the study of optomechanical systems immersed in fluid. In Chapter 1 an introduction is given to the topic and the experimental methods are described. In Chapter 2, gases are pumped through milled nanometer-scale orifices in graphene membranes. The pneumatic interaction and the escape of the gases through the nanometer-scale pores are studied. In Chapter 3, we probe the nanomotion of single bacteria adhered to the surface of a graphene drum. The interplay between the processes occurring at the cellular level and the motion of the suspended graphene with bacteria deposited on top is investigated. In Chapter 4, we study the signals obtained when motile bacteria cross a focused laser beam. We also find that we can enhance the signal by patterning substrates to localise the bacteria close to the laser spot. Finally, in Chapter 5 we give prospects and outlooks, both on the application of graphene-drum-enabled nanomotion sensing for rapid drug susceptibility testing, and on further research that might offer new insights into the biological processes that may be responsible for bacterial nanomotion. Furthermore, we discuss developments, beyond bacterial sensing, that would allow for further improvement of the current measurement system.","Nanomechanics; 2D materials; Antibiotic resistance; Bacteria; Gas sensing","en","doctoral thesis","","978-94-6384-475-8","","","","","","2024-03-21","","","Dynamics of Micro and Nano Systems","","",""
"uuid:7283cfe5-4b1f-4cd7-9795-0f9abaabf530","http://resolver.tudelft.nl/uuid:7283cfe5-4b1f-4cd7-9795-0f9abaabf530","Tensions and opportunities at Shanghai’s waterfronts: Laboratories for Institutional Strategies toward Sustainable Urban Planning and Delta Design Transitions","den Hartog, Harry (TU Delft Spatial Planning and Strategy; TU Delft History, Form & Aesthetics)","Hein, C.M. (promotor); Hooimeijer, F.L. (copromotor); Delft University of Technology (degree granting institution)","2023","How can Sustainability Transitions theories, which are oriented towards the Global North and rooted in the welfare state, be enriched with the Chinese, communist-state-rooted thinking on Ecological Civilization that has been included in the Chinese constitution since 2007, so that they can be used to evaluate the making of the direct-controlled municipality of Shanghai into an institutional frontrunner of sustainable transitions in urban planning and design, with its prime waterfront as an exemplary ‘urban lab’? Around this central question, this dissertation examines how Shanghai's coastal and waterfront developments have changed over the past two decades under the influence of shifts in Chinese state capitalism towards what is called an Ecological Civilization. Two cases along the waterfronts of Shanghai – one on former docklands in Shanghai’s Central City, and one on peri-urban Chongming Island – have been examined to test how both lines of thinking can enrich each other, and whether a sustainable transition can be achieved more efficiently and convincingly in a centrally controlled society than in a non-autocratic (liberal) society. What lessons does the Chinese approach in Shanghai offer for elsewhere, and how can different approaches and practices reinforce each other in spatial planning and strategies for a sustainable transition?
This dissertation emphasizes that ecological civilization thinking can offer hopeful starting points for sustainable transitions, but can only work well if sufficient ‘checks and balances’ are included. It gives suggestions to improve the accessibility, inclusivity, and vibrancy of Shanghai’s waterfronts, and to mitigate ecological degradation in the context of an urban delta.","energy transition; socio-technical change; sustainability; urban delta; urban planning; Waterfront Regeneration; Sustainable transitions; Ecological civilization; China; National Demonstration Projects; Experiments","en","doctoral thesis","","978-94-6366-744-9","","","","","","","","","Spatial Planning and Strategy","","",""
"uuid:a7b16311-35f5-4819-9d95-5ff1f8cae84f","http://resolver.tudelft.nl/uuid:a7b16311-35f5-4819-9d95-5ff1f8cae84f","Physics of broadband noise reduction by serrated trailing edges","Lima Pereira, L.T. (TU Delft Wind Energy)","Scarano, F. (promotor); Ragni, D. (promotor); Avallone, F. (copromotor); Delft University of Technology (degree granting institution)","2023","Wind-turbine noise can restrict the growing implementation of renewable energy sources and their application close to urban environments. The largest contributor to the noise of modern turbines is the scattering of the turbulent fluctuations at the blade trailing edge. This source of noise is directly correlated with the turbine’s extracted power. Therefore, operating in noise-restricted environments and at night entails lower energy production. An extensively applied solution for reducing the noise of wind turbines is the use of trailing-edge serrations, i.e. imposing periodic variations in the geometry of the blade trailing edge. Serrations reduce the effectiveness of the scattering at the trailing edge, as the turbulent fluctuations reach the trailing edge at different times along the blade span, consequently reducing the wind-turbine noise. Although extensive literature and knowledge exist on serrations, their measured performance does not match the predicted one. Even more problematic, the trends predicted for geometric alterations of the serrations are not observed in reality. Two observations are notable: first, geometries shown as optimal by theory perform worse than other concepts, and second, the noise from serrations is affected by the angle between the insert and the flow. As a result, the design of trailing-edge serrations still requires dedicated experiments and numerical simulations, hampering the assessment of the many geometries necessary for a complete optimization of the serration design.
This work seeks a physical interpretation of the noise generation mechanisms of trailing edges with serrated add-ons, focused on understanding the underlying physical principles of the flow surrounding a serrated trailing edge. This is carried out through three studies, respectively on the observation, modelling, and control of the flow and acoustic properties of serrated trailing edges...","trailing-edge noise; trailing-edge serrations; aeroacoustics; experimental aeroacoustics","en","doctoral thesis","","","","","","","","","","","Wind Energy","","",""
"uuid:81a0f1cc-aba6-4cd5-8d8e-d6732c29ea2a","http://resolver.tudelft.nl/uuid:81a0f1cc-aba6-4cd5-8d8e-d6732c29ea2a","Shaping Nonlinearity in Reset Control Systems to Realize Complex-Order Controllers: Application in Precision Motion Control","Karbasizadeh, Nima (TU Delft Mechatronic Systems Design)","Herder, J.L. (promotor); Hassan HosseinNia, S. (copromotor); Delft University of Technology (degree granting institution)","2023","This dissertation addresses the demand for faster, more precise, and robust controllers in the precision motion industry. Traditional linear controllers have limitations due to the waterbed effect and Bode’s phase-gain relationship. To overcome these limitations, complex-order controllers are explored in this study. The dissertation focuses on shaping nonlinearities in reset controllers to realize complex-order behavior. Various methods and approaches are investigated, each contributing to the understanding and improvement of reset control systems for linear time-invariant systems. The dissertation demonstrates that reset controllers exhibit first-order harmonic behavior, which can be advantageous in achieving complex-order behavior and enhancing controller performance compared to linear controllers. However, higher-order harmonics resulting from nonlinearities play a significant role and should not be neglected. The study explores methods to shape and manipulate these higher-order harmonics for improved performance. The approaches are grouped into two main categories: methods based on shaping the reset phase (ψ) and continuous reset (CR) methods. In the first category, ψ-shaping methods focus on manipulating ψ to achieve desirable non-linear behaviors. This includes the introduction of elements such as fractional-order lag elements and filters to shape ψ and suppress higher-order harmonics. The dissertation presents practical frameworks for analyzing and utilizing ψ-shaping concepts.
The second category, continuous reset methods, addresses both transient and steady-state performance. By introducing lead and lag elements as pre- and post-filters for reset elements, improvements in transient response are achieved. Additionally, CR methods preserve the first-order harmonic behavior while reducing higher-order harmonics across the entire frequency range. The dissertation highlights the advantages and trade-offs between ψ-shaping and CR methods, providing insights for selecting the appropriate approach based on application requirements. Practical implementation aspects are also considered throughout the dissertation. Challenges such as noise amplification caused by lead elements in CR architectures are addressed, offering solutions through increased filter orders or observer-based filtering techniques. The dissertation demonstrates the effectiveness of the proposed approaches through implementation in industrial precision motion stages, showcasing the superiority of complex-order reset controllers over their linear counterparts. Overall, this dissertation contributes to the understanding and practical implementation of reset controllers for realizing complex-order behavior in precision motion control. It provides insights into shaping nonlinearities, optimizing steady-state and transient performance, and selecting suitable architectures based on specific application needs. The findings and guidelines presented in this study offer valuable contributions to the precision motion industry and pave the way for further advancements in controller design and performance.","Reset Control; Complex-Order Control; Shaping Nonlinearity; Precision Motion Control","en","doctoral thesis","","978-94-6384-484-0","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:ae1a632b-91ab-466c-b40c-c111ac2ffe2d","http://resolver.tudelft.nl/uuid:ae1a632b-91ab-466c-b40c-c111ac2ffe2d","Exploring biocatalytic alternatives for challenging chemical reactions","Wu, Y. (TU Delft BT/Biocatalysis)","Hollmann, F. (promotor); Paul, C.E. (copromotor); Delft University of Technology (degree granting institution)","2023","Catalysts are often used in challenging chemical reactions to accelerate the reaction rate, increase reaction efficiency, reduce energy consumption, and minimise waste production. In biocatalysis, enzymes or whole cells are used as catalysts, with the advantages of reactivity, selectivity and mild reaction conditions over chemocatalysis. Nowadays, with the increasing variety of enzymes, biocatalysis shows increasing potential as an alternative tool for chemical reactions.
This thesis focuses on two categories of challenging chemical reactions: oxyfunctionalisation and decarboxylation reactions, where two enzyme families have been investigated. Unspecific peroxygenases (UPOs) exhibit remarkable catalytic activity by facilitating the specific incorporation of oxygen atoms into both C-H and C=C bonds through hydroxylation and epoxidation reactions, respectively. This biocatalytic ability occurs under mild reaction conditions, rendering UPOs highly versatile and attractive for various synthetic applications. Fatty acid photodecarboxylases (FAPs) demonstrate the capacity to effectively catalyse the cleavage of carboxylic groups from substrates, leading to the formation of the corresponding alka(e)nes when subjected to illumination. This photoenzymatic reaction offers a sustainable and environmentally friendly pathway for the conversion of fatty acids into valuable hydrocarbon products by harnessing light as an energy source. In chapter 1, we show a critical and quantitative comparison between chemocatalysis and biocatalysis in oxyfunctionalisation reactions and an overview of decarboxylation reactions.
For oxyfunctionalisation reactions, this thesis focuses on both classic hydroxylation and epoxidation reactions. For instance, further derivatisation of fatty acids generally relies on pre-existing functional groups such as the carboxylate group or C=C double bonds. However, the enzymatic conversion of saturated, non-activated fatty acids remains relatively underdeveloped, primarily owing to the inherent difficulty of C-H activation. In chapter 2, we demonstrate the application of a peroxygenase mutant AaeUPO-Fett for selective fatty acid hydroxylation. The primary products (i.e. hydroxy fatty acids) are interesting building blocks for lactone and polyester synthesis. Moreover, when the produced ω-1 hydroxy fatty acids (esters) are transformed, further synthetic possibilities arise, as demonstrated by the fatty acid decarboxylation, Baeyer-Villiger oxidation and reductive amination reactions. Thereby, peroxygenase-promoted enzymatic cascades have emerged as a versatile toolbox for the conversion of recalcitrant saturated fatty acids into valuable products and essential building blocks...
The standard methods for dyke slope stability assessment cannot model large deformations. This thesis therefore develops and applies the Material Point Method (MPM), a large deformation variant of the Finite Element Method, to investigate the residual (remaining) resistance of a dyke against flooding after an initial slope instability. The residual dyke resistance has been assessed within a risk-based framework using the Random MPM (RMPM), which accounts for the effects of soil heterogeneity on the failure process by combining random fields with MPM. From the realisations of an RMPM analysis, both the probability of initial failure as well as the probability of flooding may be determined. Moreover, with RMPM, the likelihood of failure processes can be evaluated such that the process between initial failure and flooding can be understood.
To model the external water level in the RMPM analysis, the application of boundary conditions in MPM has first been investigated. The thesis shows that the boundary conditions should systematically match the MPM discretisation. Improvements of MPM, such as the Generalized Interpolation Material Point Method (GIMP), often change the discretisation; the accurate application of a boundary condition can therefore depend on the version of MPM being used. Consistent boundary conditions are described in this work for MPM and GIMP. For standard MPM, a consistent boundary condition is proposed for simple 1D problems. However, it is shown that this solution is not generally applicable to dyke slope failures or other higher-dimensional problems. For GIMP, two generally applicable algorithms for (almost) consistent boundary conditions are proposed: one algorithm constructs the exact material boundary, while the other merges the support domains of all material points. The algorithms are shown to outperform other boundary condition methods presented in the literature.
The residual (dyke) resistance has been investigated by modelling both a 2D dyke failure and a 3D slope instability using RMPM. It is shown that secondary failures (required to trigger flooding) often do not occur or may not be large enough to trigger flooding. Due to the residual dyke resistance, the probability of flooding can therefore be significantly lower than the probability of an initial failure. In the best-case scenario for the problem analysed, a reduction of the probability of flooding, compared to the probability of initial failure, of more than 90% has been observed, while in the worst case only a 10% reduction was found. The reduction was high (90%) for a material without layering of the spatial variability of the strength properties and decreased when the spatial variability was more layered. Note, however, that to reduce computational costs the probability of initial failure was unrealistically high in these examples, i.e. the dyke was relatively weak. In stronger slopes, secondary failures are less likely and more residual dyke resistance is therefore expected. Additionally, secondary slope failures are less likely in 3D simulations than in 2D simulations, generally due to the additional resistance of the sides of the failure surfaces (the so-called 3D-effect). A 2D simulation can therefore be seen as a conservative estimate of the residual dyke resistance. In 3D, the failure process more often spreads sideways rather than backwards. This is also beneficial for dyke slope stability assessments, where backward failures are required to trigger flooding.
The degree of anisotropy of the soil heterogeneity changes the expected failure process. For smaller horizontal scales of fluctuation, i.e. less layering of the soil, secondary failures are less likely to occur, since the initial and secondary failures are mostly uncorrelated. Additionally, in the 3D simulation, smaller horizontal scales of fluctuation triggered small failure blocks, again likely to reduce the risk of flooding. For larger horizontal scales of fluctuation, initial failure in a weaker layer can more easily trigger secondary failures through the same layer, thereby decreasing the residual dyke resistance. A depth trend in the mean resistance of the material, i.e. a linear increase with depth, typically caused by compaction processes, also impacts the failure process. For a material without a depth trend, progressive failure occurs along approximately circular failure surfaces, whereas for a material with a depth trend, a steady flow-like behaviour along a gentle ‘straight’ slope occurs. Moreover, for a material with a depth trend, retrogressive failure can flow in any direction while avoiding locally strong zones.
This thesis highlights that RMPM can provide estimates of the residual dyke resistance, thereby more accurately estimating the probability of flooding due to dyke slope instability in many situations. This leads to more targeted and cost-effective dyke reinforcements. RMPM also provides insight into the size and shape of the initial and subsequent failures. RMPM can therefore be used in future research to develop guidelines for practice to approximate the probability of flooding, for example based on the probability and the shape of the initial failure computed with a small-deformation model.","Random Material Point Method; MPM; Residual dyke resistance; Slope failure; Soil heterogeneity; Random fields; Probability of flooding","en","doctoral thesis","","978-94-6384-469-7","","","","","","2023-09-16","","","Geo-engineering","","",""
"uuid:ba9ccec6-589e-4bff-8555-17bdf48c4712","http://resolver.tudelft.nl/uuid:ba9ccec6-589e-4bff-8555-17bdf48c4712","Countermeasures against Fault Injection Attacks in Neural Networks and Processors","Köylü, T.C.","Hamdioui, S. (promotor); Taouil, M. (copromotor); Delft University of Technology (degree granting institution)","2023","Machine learning has gained a lot of recognition recently and is now being used in many important applications. However, this recognition has been limited in the hardware security area; in particular, very few approaches rely on this powerful tool to detect attacks during operation. This thesis reduces this gap in the field of fault injection attack detection and prevention in neural networks and processors.
This thesis presents our methods of machine learning-based fault attack detection and prevention in different chapters, after providing the background information. Our first idea is to detect fault attacks from the processor’s instruction flow. The essence of the idea is that machine learning algorithms can learn the machine instruction sequences generated by a security-sensitive application. Any fault in the instructions can thereafter be detected. The thesis demonstrates this idea using RNNs, CAMs, and BFs. Additionally, it demonstrates how to correct faulty instructions using Hopfield networks.
The second idea is to use smart sensors to detect fault attacks. The first type of smart sensor is sensitive to multiple changes, such as in the clock signal and supply voltage. The thesis demonstrates how to design such a sensor using RO PUFs. The second type of smart sensor is based on the operation of the device. The thesis demonstrates a design for ANNs, where the smart sensor detects fault attacks from discrepancies in neuron activation rates.
The thesis finally presents the idea of preventing fault attacks using smart verification. The first approach is achieved via a memory verification module, which verifies data from the external memory before processor execution. The second approach protects ANNs via redundancy; the thesis presents a more efficient way to do this, using smart and selective redundancy.","fault injection attack; countermeasure; machine learning; neural networks; processor; hardware security; artificial intelligence","en","doctoral thesis","","978-94-6384-472-7","","","","","","","","Quantum & Computer Engineering","","","",""
"uuid:06ccff49-3558-4f70-a58a-547e1514af52","http://resolver.tudelft.nl/uuid:06ccff49-3558-4f70-a58a-547e1514af52","Novel Covalent Organic Frameworks for the Energy Transition","Veldhuizen, H.V. (TU Delft Novel Aerospace Materials)","van der Zwaag, S. (promotor); van der Veen, M.A. (promotor); Delft University of Technology (degree granting institution)","2023","Covalent organic frameworks (COFs) are a subclass of hyper-crosslinked polymers that contain ordered nanoporosity within their polymer network. Control over pore size and shape, as well as structure regularity, is obtained through careful selection of the monomeric building blocks and the synthesis reaction conditions. This has led to a vast library of available COF structures that are each designed for a specific application, typically in the direction of controlled capture and release. The work presented in this thesis addresses challenges in the development of COFs as active materials in energy transition applications. The goal of this thesis was to derive structure-property relationships of novel COFs in order to establish COF design rules for applications such as CO2 capture, separation and conversion, as well as electrochemical energy storage...","Covalent organic frameworks; Synthesis; structure-property relationships; micro- and mesoporosity","en","doctoral thesis","","978-94-6473-201-6","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:71e0ed4c-e8ea-404c-9846-b95bce6d17ed","http://resolver.tudelft.nl/uuid:71e0ed4c-e8ea-404c-9846-b95bce6d17ed","Complex networks: topology, spectrum and linear processes","Jokic, I. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); De Schutter, B.H.K. (copromotor); Delft University of Technology (degree granting institution)","2023","The concept of a network, defined as a collection of interconnected nodes or entities, has become a foundation for a new field of inquiry, namely network science. Despite the apparent simplicity of the concept, the pairwise representation of interconnecting nodes has enabled a plethora of insights into the structure of networks and the effects of interactions on dynamic processes. This generality of the network concept has paved the way for novel approaches with the aim of understanding complex systems, from social networks to biological pathways. It has opened up new avenues for research into the fundamental mechanisms underlying these systems. As such, network science has become a highly active and dynamic field, driving the development of new theoretical frameworks, computational tools, and empirical methods that continuously push the boundaries of knowledge and understanding in numerous science and engineering domains.
The first part of this thesis centres on the structural properties of complex networks and their practical applications. We demonstrate that the orthogonal eigenvectors of the adjacency matrix of a simple, unweighted, and undirected graph are sufficient to recover that graph, albeit potentially not in a unique manner (Chapter 2). This observation led us to uncover co-eigenvector graphs, which are graphs that share the same eigenvectors while having distinct eigenvalues. Co-eigenvector graphs are the dual counterparts of cospectral graphs, which share identical eigenvalues but possess distinct eigenvectors.
In an unweighted graph, the number of walks between node pairs of a particular length can be expressed in terms of the corresponding power of the adjacency matrix. However, deriving a similar solution for the number of paths is significantly more intricate (Chapter 3). We present three distinct analytical solutions in matrix form for computing the number of paths of any length between node pairs, utilising different types of walks and leveraging principles from the mathematical field of combinatorics. The computational complexity of these solutions varies depending on the sparsity of the graph. The effective resistance metric, which characterises the entire network as perceived from the vantage point of two given nodes, represents a powerful tool for addressing a wide range of challenges in network theory. In Chapter 4, we leverage the information contained in effective resistance to solve the inverse all shortest path problem, wherein a weighted graph satisfying given upper bounds on the shortest path weights between node pairs is sought, with sparsity being a critical consideration. Additionally, we propose a novel graph sparsification algorithm that selectively removes links from an unweighted graph in a stepwise manner, with the goal of either minimising or maximising the effective resistance of the resultant graph.
The second part of this thesis pertains to linear processes on complex networks, exploring their properties and applications. Our research reveals that a simple process of attraction and repulsion between adjacent nodes on a one-dimensional line, based on the similarity of their neighbourhoods, can effectively group together nodes from the same community (Chapter 5). Our linear clustering process generally produces more accurate partitions than the most prevalent modularity-based clustering methods in the literature, at a comparable computational complexity. An empirical part of our research on processes in complex networks became possible thanks to our network construction based on a unique data set containing each municipality’s area, population and geographically adjacent neighbouring municipalities. This construction enabled research on a dynamic network of connected municipal nodes at a national level over the period from 1830 to 2019 (Chapter 6). By connecting the population, area and municipal merger data of all Dutch municipalities, we discovered that the logarithm of the municipal area and population size yields an almost linear difference equation over time. Research into the municipal merger process over the period 1830-2019 has shown that 873 of the 1228 Dutch municipalities have merged into adjacent municipalities with a larger population.
Our simulation of municipality mergers, based on network effects caused by population growth per municipality, resulted in a county-level predictive accuracy of 91.7% over a 200-year period. If every node within a network exhibits linear internal dynamics of a specific order, and the dynamic interactions between these nodes are also linear, then the entire network conforms to a collection of linear differential equations (Chapter 7). Our study offers an analytical solution for the comprehensive network dynamics in state-space form, achieved by merging the fundamental topology and internal linear dynamics of every individual node.","paths; networked systems; graph spectra; effective resistance; inverse shortest path; graph sparsification; clustering; linear process","en","doctoral thesis","","978-94-6473-200-9","","","","","","2023-09-14","","","Network Architectures and Services","","",""
"uuid:d2d035ff-40ff-4867-9f4a-3d11fe9c8c62","http://resolver.tudelft.nl/uuid:d2d035ff-40ff-4867-9f4a-3d11fe9c8c62","Reverse bias degradation of CIGS solar cells","Bakker, N.J. (TU Delft Photovoltaic Materials and Devices)","Weeber, A.W. (promotor); Zeman, M. (promotor); Theelen, M.J. (copromotor); Delft University of Technology (degree granting institution)","2023","","","en","doctoral thesis","","978-94-6483-307-2","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:be007bb6-fcee-4814-87b4-d7a14b691547","http://resolver.tudelft.nl/uuid:be007bb6-fcee-4814-87b4-d7a14b691547","A Framework for Optimal Reservoir Operation to Improve Downstream Aquatic Environment: Application to Nakdong River Basin in South Korea","Kim, J. (TU Delft Water Resources)","Solomatine, D.P. (promotor); Jonoski, Andreja (promotor); Delft University of Technology (degree granting institution)","2023","In the water sector, issues concerning the aquatic environment have been extensively discussed due to climate change. In particular, water quality problems such as harmful cyanobacterial blooms (CyanoHABs) in rivers have arisen in South Korea since 2012. The Korean government constructed 16 weirs in the rivers during the Four Major Rivers Restoration Project. These weirs were built to more effectively use water resources in the rivers. Many environmental activists, however, have claimed that the weirs have caused water quality problems of CyanoHABs in the rivers. These CyanoHABs can be threats to the water environment while harming human health and aquatic ecosystems since CyanoHABs produce toxic substances such as microcystins.
To address the problems of these CyanoHABs, many researchers have conducted studies on predictive models for CyanoHABs. A predictive model using a data-driven approach can be useful in exploring the main factors affecting CyanoHABs at a specific location. However, these studies have not focused on preventing the occurrence of CyanoHABs but only on predicting their occurrence. If these studies are designed to link with a practical method for reducing the frequency of CyanoHABs, viable strategies can be proposed to effectively control CyanoHABs. Therefore, detailed considerations are required concerning the prevention or mitigation of CyanoHABs.
Reservoir operation can be a solution for reducing the problem of CyanoHABs in a downstream river. For example, discharging more water from upstream reservoirs can flush CyanoHABs downstream. However, the risk of water shortage can be increased in a reservoir if it is operated for improving water quality downstream. This is because reservoirs were typically designed for management of water quantity such as water supply. To use limited water resources in a reservoir to reduce the frequency of CyanoHABs downstream, optimal reservoir operations are necessary that simultaneously consider both the quantity and the quality of water.
This study focused on establishing a practical framework for the optimal operation of upstream reservoirs to address the problem of CyanoHABs in a downstream river. Furthermore, the applicability of this framework was demonstrated using observational data related to the quantity and quality of the upstream reservoirs in the study area, the Nakdong River basin of South Korea. The framework was established by incorporating three models: a machine learning model, a river water quality model, and an optimization model for reservoir operation.
Principles of strength of materials and fracture mechanics can be adopted to apply specific delamination initiation and propagation prediction methods to composite laminates. It is known that fracture mechanics methods have advantages in addressing delamination growth problems. On the other hand, it is also shown in the literature that strength of materials methods are generally best suited for quasi-static delamination growth. This implies that the growth of low-velocity impact or quasi-static indentation delaminations in composite laminates could also be predicted with an appropriate strength of materials approach. It is therefore necessary to comprehensively assess the ability of strength of materials approaches to predict delamination of a composite laminate under out-of-plane concentrated loading. Since a stable stress field is the basic condition for the application of the strength of materials methods, the out-of-plane quasi-static indentation loading condition is first considered in this thesis.","Polymer-matrix composites; Composite laminates; Delamination initiation; Delamination growth","en","doctoral thesis","","978-94-6384-476-5","","","","","","2027-09-11","","","Structural Integrity & Composites","","",""
"uuid:f74561e6-c8ba-4bbb-8ac2-19a6a18ed41d","http://resolver.tudelft.nl/uuid:f74561e6-c8ba-4bbb-8ac2-19a6a18ed41d","Remote river rating in resource constricted river basins: Exploring opportunities for ungauged basins through low-cost technological advancements","Samboko, H.T. (TU Delft Water Resources)","Winsemius, H.C. (copromotor); Savenije, Hubert (promotor); Delft University of Technology (degree granting institution)","2023","The unavailability of consistent accurate river flow data is a significant impediment to understanding water resources availability, and hydrological extremes. This is particularly true for remote, difficult to access, morphologically active and therefore rapidly changing rivers. The state of global river discharge monitoring with respect to water infrastructure and frequency of data collection has been on the decline over the past few decades. This is despite the significant importance of these data for river flow predictions. Fortunately, rapid advancements in technologies open up possibilities for water resource authorities to increase their ability to accurately, safely and efficiently establish river flow observation through remote and non-intrusive observation methods. Low-cost Unmanned Aerial Vehicles (UAVs) in combination with Global Navigation Satellite Systems (GNSS) can be used to collect geometrical information of the riverbed and floodplain. Such information, in combination with hydraulic modelling tools, can be used to establish physically based relationships between river flows and permanent proxies. This study attempts to monitor flow in volatile, dangerous and difficult to access rivers using only affordable and easy to maintain new technologies. 
This thesis consists of three main components: i) generating a workable framework for monitoring rivers using low-cost technologies; ii) establishing river geometry using a combination of airborne photogrammetry and low-cost GNSS equipment; and iii) developing physically based rating curves through hydraulic modelling of surveyed river sections.
The first three chapters of this thesis provide an introduction in the form of a literature review, a justification for the study and a description of the study area. In chapter 4, a framework is developed through an intensive review of traditional river monitoring processes. Uniquely effective and low-cost individual components are selected and placed within a framework. The ideal outcome is an interconnected framework which clearly presents the steps which are necessary for river monitoring in remote locations. The manner in which each critical step is related to the others is explained. Furthermore, the way in which modern technologies are assimilated into the framework is described. Within the framework, critical thresholds are set up in order to signal to the water manager whether the proposed model in its current state continues to perform as required.
Chapter 5 investigates how low-cost technologies such as UAVs in combination with low-cost GNSS devices can be used to generate river geometry for application in a hydraulic model. Furthermore, the performance of the open-source photogrammetry software substantiated the claim that freely available open-source packages are capable of producing results as good as proprietary alternatives, as shown by the RMSE analyses. A novel approach to generate a seamless bathymetry through merging and volumization was successfully tested. Results presented in this chapter encourage future studies to investigate the impact of variations in the number of Ground Control Points (GCPs) on discharge estimations in a hydraulic model with different hydrodynamic boundary conditions. This follow-up is carried out in Chapter 6.
In this sixth chapter, we acknowledge that uncertainties in the data acquisition may propagate into uncertainties in the relationships found between discharge and state variables. This uncertainty prompts the need to understand the impact of varying geometries on hydraulic models. Specific attention is placed on variations caused by differing GCP numbers, since the task of GCP placement is time consuming, potentially dangerous and resource intensive in certain locations and instances. We successfully determine the minimum number of control points required to reproduce the geometry. Overall, we successfully develop and test a workable method for water resources authorities to estimate river flows accurately through the application of advanced, low-cost technologies with minimal contact with measured variables.
The development and application of low-cost technologies for river flow monitoring has led to the following important conclusions:
• For the purpose of flow estimation, there is no need to use more than seven GCPs to establish accurate UAV-based geometry. Rather, it is more crucial to distribute the available markers to be maximally representative of the terrain elevations. Furthermore, it may be necessary to place more markers in close proximity to locations where one may expect the largest challenge for photogrammetry software (e.g., water, thick forest/vegetation).
• In order to limit the impact of the “doming” effect on terrain geometry measurements, one of the most effective, yet easily implementable mechanisms is to measure a river line using Real Time Kinematic (RTK) Global Navigation Satellite Systems (GNSS) equipment. These data can then be used to correct the terrain after photogrammetry processing.
°C and pursue efforts to limit it even further to 1.5 °C. Photovoltaic energy is the key to achieving this target.
This dissertation focuses on improving the efficiency and sustainability of interdigitated back contact (IBC) solar cells. A special emphasis is also placed on cost and reliability. IBC cells and modules utilized in this study are based on ZEBRA technologies, which were developed at ISC Konstanz and implemented using processes and equipment that are comparable to those employed in conventional solar cells, such as Al-BSF and PERC. A detailed discussion of the process and history can be found in Chapter 2...","photovoltaic; silicon solar cells; back contact; cut losses; edge recombination; copper metallization","en","doctoral thesis","","978-94-6384-462-8","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:25f6f514-a07d-4b78-8cc3-2769555a5c20","http://resolver.tudelft.nl/uuid:25f6f514-a07d-4b78-8cc3-2769555a5c20","Strategic Language Workbench Improvements","Smits, J. (TU Delft Programming Languages)","van Deursen, A. (promotor); Cockx, J.G.H. (copromotor); Visser, Eelco (promotor); Delft University of Technology (degree granting institution)","2023","Computers execute software to do the tasks we expect from them. This software is written by human beings; we call this programming. The most common way to program is by writing text in a programming language. A programming language is very structured so we can be precise, but ultimately these languages are still for humans to read and write. In order to execute the written program, we need to translate it to a list of tiny instruction steps that the hardware of the computer can execute. This translation is also automated with software. The most common forms this software takes are (1) interpreters that execute a program live as they read it, or (2) compilers that translate the entire program for later execution.
Interpreters and compilers are tools of the domain of Programming Languages (PL). Apart from interpreters and compilers, there is more support software available around programming languages. This includes smart text editors, program analysis, running-program observers, etc. The requirements for PL tools are high: they should not get in the way when used to create software. In particular, they should support useful features, be fast enough in interaction, and not make mistakes.
Given these requirements, it is not a simple task to make PL tools. In an effort to make it easier to create PL tools, Language Workbenches (LWBs) were created: a suite of tools specifically for creating PL tools.
In this dissertation, you can find several improvements I made to a particular language workbench. I have—in multiple ways—sped up the language development cycle in this workbench: in terms of improved development, feedback, and execution speed.
Throughout my research, I have worked on and in the Spoofax language workbench, a research language workbench used for programming language research at TU Delft. Spoofax splits up the specification of programming languages into different domains, and captures each of those domains in a meta-language. For example, to describe the structure of the text of a programming language, Spoofax uses a formalism based on context-free grammars, extended with different useful features, which is called the Syntax Definition Formalism 3 or SDF3 for short. Similarly, there are meta-languages for the description of names, references and types; for what it means to execute a program; for defining assumptions and behaviour by example for testing purposes; and for transforming programs, which is a catch-all, but still a fairly high-level language. This language for transforming programs, called Stratego, is particularly relevant to this dissertation.
Contributions. Firstly, we introduce a new meta-language specialised in control- and data-flow analysis: FlowSpec. FlowSpec improves the development speed of programming languages in Spoofax, and the feedback in Spoofax and in the PL tools generated by Spoofax. Secondly, we improve the compilation speed of Stratego on successive compilations with an incremental compiler. This compiler improves the speed at which you receive feedback inside Spoofax on changes to a Stratego program, and the speed at which you can see the results of tests and other short program executions after a change.
Thirdly, we add a gradual type system to Stratego to improve the feedback that can be given without executing Stratego programs. A gradual type system does not require a user of Stratego to add types to their program, but if they choose to, the gradual type system will be able to reason about the parts of the program that are typed, and give certain errors at compilation time instead of run time.
Finally, we develop an optimisation for Stratego’s pattern matching. This improves the execution speed of Stratego programs. Since all PL tools created in Spoofax include at least some of those Stratego programs, this also speeds up the execution of the Spoofax meta-languages themselves.","","en","doctoral thesis","","978-94-6384-474-1","","","","Prof.dr. E. Visser (Delft University of Technology) was the original promotor and supervisor of this research until his untimely passing on April 5th, 2022.","","","","","Programming Languages","","",""
"uuid:edfbbf98-2530-463b-94d0-43dee5435786","http://resolver.tudelft.nl/uuid:edfbbf98-2530-463b-94d0-43dee5435786","Receptivity of Swept Wing Boundary Layers to Surface Roughness: Diagnostics and extension to flow control","Zoppini, G. (TU Delft Aerodynamics)","Kotsonis, M. (promotor); Ragni, D. (promotor); Delft University of Technology (degree granting institution)","2023","The research presented in this thesis focuses on the receptivity to surface roughness of swept wing boundary layers dominated by crossflow instabilities (CFI), providing insights into how surface roughness can be used to passively control the developing instabilities. Discrete roughness elements (DRE) arrays and distributed randomized roughness patches (DRP) are employed to investigate the physical phenomena governing receptivity and their impact on CFI onset. The supporting data combine numerical solutions of linear and non-linear stability theory with advanced experimental flow diagnostics.
This booklet is divided into three main parts. The first part investigates the flow mechanisms dominating the receptivity of stationary CFI to the amplitude and location of DRE arrays. The relation between the external forcing configuration and the initial instability amplitude is investigated, along with scaling principles allowing for the up-scaled reproduction of the swept wing leading-edge configurations, which provide experimentally observable configurations.
The second part of this research explores the stationary CFI receptivity to specific up-scaled roughness configurations, including both isolated discrete roughness elements and DRE arrays. These roughness elements are applied at relatively downstream chord locations to enhance the experimental resolution of the near-roughness flow field.
The isolated discrete roughness elements ensure strong boundary layer forcing, which helps to outline the relation between the near-element instability onset and the rapid transitional process. In contrast, the applied DRE array configurations provide boundary layers dominated by the development of CFI. In such scenarios, high-magnification tomographic particle tracking velocimetry identifies the dominant near-element stationary instabilities that act as precursors to CFI. Specifically, the presence of transient growth and decay mechanisms in the near-roughness flow region is outlined, exploring their role in the receptivity process and in the CFI onset. This investigation results in the first conceptual map describing the receptivity of swept-wing boundary layers to a wide range of DRE array amplitudes.
Lastly, the acquired knowledge of the near-element flow topology is employed in the final part of this work to develop a passive laminar flow control technique for stationary CFI cancellation. This technique is based on the destructive interference of the velocity disturbances introduced by a streamwise series of optimally arranged DRE arrays. The performed measurements confirm a reduction in the developing CFI amplitude accompanied by a delay of the boundary layer transition. The compatibility of the proposed technique with the control of CFI developing in a realistic free-flight scenario is also investigated.","Swept wing boundary layer; receptivity; transition; crossflow instability; laminar flow; surface roughness","en","doctoral thesis","","978-94-6366-719-7","","","","","","","","","Aerodynamics","","",""
"uuid:54dd845a-93eb-4fa8-9797-83e468ecbce5","http://resolver.tudelft.nl/uuid:54dd845a-93eb-4fa8-9797-83e468ecbce5","General-purpose Inverse Modeling Framework for Energy Transition Applications Based on Adjoint Method and Operator-Based Linearization","Tian, X. (TU Delft Reservoir Engineering)","Voskov, D.V. (promotor); Bruhn, D.F. (promotor); Delft University of Technology (degree granting institution)","2023","This study investigates the application of inverse modeling in numerical geo-energy scenarios such as petroleum, geothermal, and CCS projects. The study aims to enhance model accuracy and predictive capabilities for real-world applications. The focus lies on the implementation of the inverse modeling framework within the open-source simulator called Delft Advanced Research Terra Simulator (DARTS), developed using the adjoint method and Operator-Based Linearization (OBL) to assemble derivatives efficiently. The adjoint method's efficiency and analytical gradient solution make it a preferred choice for gradient evaluation in inverse modeling. The work transitions from forward simulation to inverse modeling, elucidating the objective function definition, optimization theory, and the adjoint method's process. Prototype development in MATLAB and its translation to C++ are presented, showcasing the method's superiority. Application examples, like the data-driven proxy model and energy transition projects, demonstrate the framework's versatility and effectiveness in handling diverse observations and solving complex energy transition challenges.","Inverse modeling; Energy transition; Adjoint method; DARTS; History matching; Geo-energy","en","doctoral thesis","","978-94-6366-727-2","","","","","","","","","Reservoir Engineering","","",""
"uuid:a5998475-a7be-46e5-9bd9-6fbd9b81c15c","http://resolver.tudelft.nl/uuid:a5998475-a7be-46e5-9bd9-6fbd9b81c15c","Efficient Visual Ego-Motion Estimation for Agile Flying Robots","Xu, Y. (TU Delft Control & Simulation)","de Croon, G.C.H.E. (promotor); de Wagter, C. (copromotor); Delft University of Technology (degree granting institution)","2023","Micro air vehicles (MAVs) have shown significant potential in modern society. The development in robotics and automation is changing the roles of MAVs from remotely controlled machines requiring human pilots to autonomous and intelligent robots. There is an increasing number of autonomous MAVs involved in outdoor operations. In contrast, the deployment of MAVs in GPS-denied environments is relatively less practiced. The speed when flying indoors is often slow. One reason is that MAVs are surrounded by obstacles. But it should also be noted that it becomes more difficult for ego-motion estimation to remain reliable during faster flight. The reason for this is that fast motion brings challenges to the robustness and computational efficiency of ego-motion estimation solutions based on the limited onboard sensing and processing capacities. The challenge to robustness is that the motion blur induced by agile maneuvers reduces the amount of available visual information needed by the current mainstream ego-motion estimation solutions, given the fact that frame-based cameras are the primary sensor for most lightweight MAVs. The challenge of computational efficiency comes from the strong desire for smaller and smaller MAVs to better fit cluttered environments. Moreover, to compensate for the decrease in robustness, additional computational power is required to detect known landmarks or visual processing that better copes with motion blur. This dissertation responds to the challenges by investigating novel ego-motion estimation approaches that combine robustness and efficiency.
First, the goal of higher efficiency in the context of traditional visual feature points is pursued, albeit at the cost of reduced accuracy. The targeted scenarios are where known landmarks exist, such as gates in autonomous drone racing. The proposed velocity estimator’s mission is to navigate the MAV until the next landmark appears in the field of view and corrects the accumulated drift in the position estimation. To prevent drift over time, a simple linear drag force model is used for estimating the pitch and roll angles of the MAV with respect to the gravity vector and its velocity within the horizontal plane of the propellers. The translational motion direction and the relative yaw angle are efficiently calculated from the correspondences of feature points using a RANSAC-based linear algorithm...","Micro Air Vehicles; Ego-Motion Estimation; Deep Neural Networks; Self-Supervised Learning; Network Prediction Uncertainty; Monocular Visual-Inertial Odometry; Monocular Depth Prediction","en","doctoral thesis","","978-94-6384-477-2","","","","","","","","","Control & Simulation","","",""
"uuid:32f02090-19d2-4c6f-a8bf-9cbfb7cffd45","http://resolver.tudelft.nl/uuid:32f02090-19d2-4c6f-a8bf-9cbfb7cffd45","Optimization-based Approaches for Fault Detection and Estimation: with applications to health-monitoring of energy systems","Dong, J. (TU Delft Team Peyman Mohajerin Esfahani)","Keviczky, T. (promotor); Mohajerin Esfahani, P. (promotor); Delft University of Technology (degree granting institution)","2023","Advancements in technology and societal demands have led to increasing complexity, size, and automation in modern industrial systems. This trend makes these systems more safety-critical, as the occurrence of faults in system components or subsystems may cause the entire system to fail, resulting in significant economic losses and casualties. Consequently, developing an effective fault diagnosis method is crucial for ensuring the reliability, safety, and performance of industrial systems, especially energy systems, which are so relevant to our lives. However, most model-based fault diagnosis systems developed based on observers and parity space relations have the same order as that of the system. This can cause a significant computational burden when dealing with large-scale and high-dimensional systems. This thesis is dedicated to the design of fault diagnosis filters in the framework of differential-algebraic equations, which produce scalable residual generators with design flexibility. Meanwhile, we consider the impact of disturbances and stochastic noise on diagnosis results, as well as the fault diagnosis problem within the finite frequency domain.
In order to design filters capable of handling these issues, we solve filter parameters through optimization problems that are constructed based on specific diagnosis requirements.","Robust fault detection and estimation; Probabilistic certificates; Filter design; Optimization methods; Energy systems","en","doctoral thesis","","978-94-6483-378-2","","","","","","2024-09-25","","","Team Peyman Mohajerin Esfahani","","",""
"uuid:7eacc7fc-523e-4172-9c62-ab28916398ef","http://resolver.tudelft.nl/uuid:7eacc7fc-523e-4172-9c62-ab28916398ef","Expensive Optimization with Model-Based Evolutionary Algorithms Applied to Medical Image Segmentation Using Deep Learning","Dushatskiy, A. (TU Delft Algorithmics)","Bosman, P.A.N. (promotor); Alderliesten, T. (copromotor); Delft University of Technology (degree granting institution)","2023","Recently great achievements have been obtained with Artificial Intelligence (AI) methods including human-level performance in such challenging areas as image processing, natural language processing, computational biology, and game playing. Arguably, one of the most societally important application fields of such methods is healthcare.
AI is a broad term, which in general refers to systems and methods (components of systems), capable of solving complex tasks and ultimately doing it autonomously, i.e., without human participation, or, if necessary (e.g., in healthcare) with some human supervision. Machine Learning (ML) is a subfield of AI that consists of diverse methods which utilize available data to extract meaningful and actionable knowledge. Three key factors have contributed to the recent success of ML methods: 1) Novel algorithms; 2) Highly efficient hardware, the computational capabilities of which are perfectly aligned with the currently most popular component of AI systems - deep neural networks (a computational abstraction that vaguely resembles a brain and can be efficient in solving different ML problems); 3) Huge amounts of digitally available data which can be used to train ML models. In this thesis, we mainly focus on the combination of algorithm development and data-related aspects....","evolutionary algorithms; expensive optimization; deep learning; medical image segmentation; neural architecture search","en","doctoral thesis","","978-94-6473-182-8","","","","","","","","","Algorithmics","","",""
"uuid:693f3222-1b86-4658-b11f-6549f78b641e","http://resolver.tudelft.nl/uuid:693f3222-1b86-4658-b11f-6549f78b641e","Time in the Work of Frank Lloyd Wright: Geology, Geography and Geometry of Architecture.","Sturkenboom, F.J.J.M. (TU Delft Situated Architecture)","van Gameren, D.E. (promotor); Havik, K.M. (promotor); Delft University of Technology (degree granting institution)","2023","For a long time Wright’s architecture has been theorized in terms of space. Although space was certainly a key-word in Wright’s discourse, we can neither see it as an objective, three-dimensional space, nor as a more subjective, intimate space. In Wright’s architecture, the third dimension implies time, an axis mundi, a story about the earth as being built. Architecture faces the task to explicate this geological dimension. Geology here not only pertains to the crust of the earth and its materials. It also refers to flora and fauna, all the life having co-built the earth. Designing means the digging up of this natural history of a place that should come to resonate in structure, texture, type, pattern, colour and form. Architecture finds its reason in this geological time, it memorizes that time. Every Wright House is a monument of the American landscape. A new space appears: no longer Cartesian three-dimensional space, not human-centered place-space, but the shallow space of the building as a bas-relief of the earth, “growing out of the ground into the light.”
Wright saw it as a personal assignment to free American architecture from European Eclecticism in order to finally come to “a truly American architecture.” He sought inspiration in the landscape, the earth as being built and as still building itself. Wright’s oeuvre might be read as a journey of discovery of the American landscape. The light, horizontal parts of his buildings refer to an ‘on the way,’ they remind us of vehicles and tents. The stone parts refer to a local earth. The ‘fleet’ of his buildings move over the earth to sample it. In its images we find the archetype of a scientific expedition comparable with the great geographic expeditions of the 19th century. The expedition discovers the styles of American nature as the possible ingredients of a “natural architecture.” The geographic expedition mirrors the adventures of the wanderer and the settler, according to Wright the two characters united in the American soul. It mirrors the adventure of a people of colonists trying to get situated on a terra incognita, trying to root in the American earth while dressing up in American nature.
If nature must become the soul of architecture, geometry is the powerful instrument to analyze nature. It is an instrument teaching us the intellect of creative nature. Wright used a polyphony of geometric styles, from basic geometric forms to proto-fractals, inventing a style reconciling form with formation. If “an organic building should grow out of the ground into the light, holding that ground as a basic part of itself,” the intelligence of the ground—of nature building the earth—reflects itself in the geometrical patterns of architecture.
Using composite materials in aircraft structures can reduce weight compared to conventional metals. However, utilizing more of the material's load-carrying capabilities can further reduce weight.","Skin-stringer separation; Building block approach; Stiffened panels; Postbuckling; Thermoset composites; Fracture toughness; Component testing","en","doctoral thesis","","978-94-6366-703-6","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:ff3fb812-182e-4d09-803a-fe4ed30ce603","http://resolver.tudelft.nl/uuid:ff3fb812-182e-4d09-803a-fe4ed30ce603","Building a platform for magnetic imaging of spin waves","Simon, B.G. (TU Delft QN/vanderSarlab)","Kuipers, L. (promotor); van der Sar, T. (promotor); Delft University of Technology (degree granting institution)","2023","Spin waves are the elementary excitations of magnetic materials. They are interesting because of their rich physics and potential role in low-dissipation information technology. To better understand spin-wave transport and explore new ways to control it, this thesis focuses on developing magnetic-imaging techniques based on the single spin of the nitrogen-vacancy (NV) defect in diamond that detects spin waves via their magnetic stray fields. These fields decay evanescently on the scale of the spin wavelength. By using NV centres embedded in an atomic force microscope probe that provides nanometre NV-sample proximity, we achieve sensitivity to nanoscale spin waves.","nitrogen-vacancy (NV) centre; scanning NV magnetometry; spin waves; magnetism; diamond nanofabrication","en","doctoral thesis","","978-90-8593-564-3","","","","","","","","","QN/vanderSarlab","","",""
"uuid:e5e2515f-3a21-4831-b948-abbaa75de13f","http://resolver.tudelft.nl/uuid:e5e2515f-3a21-4831-b948-abbaa75de13f","Local Activism in Urban Neighborhood Governance: The case of Cairo, Egypt","Elwageeh, Aya (TU Delft Urban Studies)","van Ham, M. (promotor); Kleinhans, R.J. (promotor); Delft University of Technology (degree granting institution)","2023","This study investigates local activism in politically challenging contexts, focusing on Cairo. In such contexts, active resident groups strive for urban improvement, while governance arrangements often disregard citizen involvement in urban and public affairs. Cairo presents an exemplary case of local activism in a politically challenging and under-researched context. The study explores the characteristics, roles, and interrelations of active resident groups with local governance arrangements and their deviations from existing literature. It employs a qualitative methodology with observations and semi-structured interviews with local officials and active residents from nine different districts. The study uses Facebook to select, observe, and analyze the activities of multiple active resident groups and contributes to theoretical frameworks for analyzing local activism in complex contexts. It reveals the dominant and absent roles and the governance dimensions (un)attainable by active residents. It also traces the sources of limited local activism in the existing governance arrangements in Cairo, highlighting the importance and difficulty of changing governance arrangements in Egypt. The study broadens our understanding of local activism in the Global South beyond dominant forms of activism.","","en","doctoral thesis","","978-94-6366-709-8","","","","","","","","","Urban Studies","","",""
"uuid:c9cb4a2a-abeb-4de5-b0b0-5bebd3416407","http://resolver.tudelft.nl/uuid:c9cb4a2a-abeb-4de5-b0b0-5bebd3416407","A unified modelling framework for vibratory pile driving methods","Tsetas, A. (TU Delft Dynamics of Structures)","Metrikine, A. (promotor); Tsouvalas, A. (copromotor); Delft University of Technology (degree granting institution)","2023","The ambitious goals towards the decarbonization of the global energy sector have amplified the demand for renewable energy resources. Amongst the renewables, offshore wind possesses a pivotal role in this endeavour, showcasing remarkable growth in recent years. However, this rapid expansion has been accompanied by a series of technical challenges. Foundation installation comprises one of the most critical phases in the construction of an offshore wind farm and engineering advancements in this topic are vital to accommodate this developmental pace. Bottom-fixed foundations are primarily used to support offshore wind turbines and amongst the available concepts, the monopile is the foremost one. The installation of these substructures is most commonly performed via impact hammering. Notwithstanding the robustness and efficacy of this technique, major environmental concerns have been raised due to the significant levels of underwater noise pollution during driving. In view of this alarming issue, alternative and sustainable pile installation techniques have been progressively drawing attention during the last decade and an increasing number of research projects focus on their investigation and development.
At present, the offshore wind industry is increasingly adopting vibratory pile driving. This method has been successfully employed in onshore projects for decades, although its wider use in the offshore environment is hindered by the incompleteness of available field observations. To advance vibratory installation methods, a new technology, namely the Gentle Driving of Piles (GDP), has recently been proposed by the Delft University of Technology. The GDP method aims to enhance the installation performance of vibratory driving for tubular (mono)piles and to reduce the associated noise emissions, via the simultaneous application of low-frequency/axial and high-frequency/torsional vibrations. Naturally, the shift to these technologies is accompanied by emerging research questions pertaining to pile installation, vibro-acoustic, and post-installation performance. In this thesis, the development of an engineering-oriented modelling framework for axial vibratory driving and GDP is the primary objective, thereby focusing on the topic of sustainable monopile installation.","Pile driving; Monopile installation; Vibratory driving; Gentle Driving of Piles; Vibrations of shells; Soil dynamics; Thin-Layer Method; Green’s functions; Harmonic Balance Method; Friction fatigue; Friction redirection","en","doctoral thesis","","978-94-6366-716-6","","","","","","","","","Dynamics of Structures","","",""
"uuid:e6f2307f-0b46-402a-92d3-aa98a63754f6","http://resolver.tudelft.nl/uuid:e6f2307f-0b46-402a-92d3-aa98a63754f6","Exploring the potential of Safety Management Systems to support New Approaches based on Safety Fractals","Accou, B.O.R. (TU Delft Safety and Security Science)","Reniers, G.L.L.M.E. (promotor); Groeneweg, J. (copromotor); Delft University of Technology (degree granting institution)","2023","The concept of a safety management system (SMS) to control the risks of operational activities was introduced in high-risk industries some decades ago. Nevertheless, such an SMS is often criticized as burdensome and complex. Through its requirement to formalise all main activities, the SMS is perceived as bureaucratic and as a vehicle for pure compliance, exemplary of the old view on safety management. Furthermore, the SMS is often perceived as detached from an organisation’s core and operational activities, and as incompatible with local practice. It is questioned whether it can deliver the expected safe performance....","","en","doctoral thesis","","","","","","","","","","","Safety and Security Science","","",""
"uuid:bd895c0f-043b-43f0-a2a3-6e2d3df18121","http://resolver.tudelft.nl/uuid:bd895c0f-043b-43f0-a2a3-6e2d3df18121","Towards Closed-loop Maintenance Logistics for Offshore Wind Farms: Approaches for Strategic and Tactical Decision-making","Li, M. (TU Delft Transport Engineering and Logistics)","Negenborn, R.R. (promotor); Jiang, X. (promotor); Delft University of Technology (degree granting institution)","2023","Europe’s offshore wind capacity is expected to reach 450 GW by 2050, meeting 30% of Europe’s electricity demand. With the increase in installed capacity, operation and maintenance (O&M) costs will also increase significantly, as O&M cost is one of the biggest contributors to life cycle costs. The improvement of O&M management for offshore wind farms, especially maintenance logistics, represents a significant cost-reduction opportunity and will continue to be a primary factor in shaping the future development of the offshore wind sector. Recent research provides clear insights into maintenance logistics management, categorizing decisions into three levels: strategic, tactical, and operational. Maintenance strategies and resource organization are strategic and tactical decisions, respectively, with a long-lasting influence on offshore wind farms. With sensors and communication technologies, wind farm owners/operators and service providers can use the health information of wind farms to design maintenance strategies and organize maintenance resources, and can utilize new data to update decisions in a closed-loop manner. Thus, the research question of this thesis is: how can the effectiveness of maintenance strategies and resource organization for offshore wind farms be improved while moving towards a closed-loop decision-making approach? In this thesis, an open-loop predictive opportunistic maintenance strategy utilizing predicted component failures and maintenance opportunities is developed first.
Then, the influence of inaccuracy or uncertainty in model parameters on maintenance performance and strategies is quantified. The significance of different uncertainties is ranked, and suggestions are provided to cope with the uncertain decision-making environment. Next, approaches are proposed to organize the primary maintenance resources, i.e., spare parts and service vessels, to support the implementation of the open-loop maintenance strategy in a cost-effective manner. Finally, the open-loop maintenance strategy is developed into a closed-loop maintenance strategy that is able to capture dynamic wind farm states and mitigate the influence of model parameter uncertainties, reducing revenue losses more than open-loop approaches. Overall, this thesis provides a series of approaches for offshore wind farm owners, operators, and maintenance service providers to guide the strategic and tactical maintenance logistics for offshore wind farms, showing the potential for improving effectiveness and moving towards a closed-loop manner.","Offshore wind energy; Operation and maintenance; Maintenance optimization; Spare parts inventory; Fleet size and mix","en","doctoral thesis","","978-90-5584-329-9","","","","","","2023-07-05","","","Transport Engineering and Logistics","","",""
"uuid:71338a17-78d9-44a9-96c5-15b4b841f6b8","http://resolver.tudelft.nl/uuid:71338a17-78d9-44a9-96c5-15b4b841f6b8","Extending the Thick Level Set Approach: Plasticity, Parallel Computing and Cohesive Cracks","Taumaturgo Mororo, L.A. (TU Delft Applied Mechanics)","van der Meer, F.P. (promotor); Sluys, Lambertus J. (promotor); Delft University of Technology (degree granting institution)","2023","Developing accurate and robust numerical approaches that are capable of modeling fracture in solids has been a challenging undertaking in the computational mechanics community for decades. Models based on a continuous formulation or on a discontinuous one have been proposed by numerous authors, each with its own capabilities and disadvantages. However, models attempting to bridge these two approaches have been less often encountered in the literature.
Over the last ten years, a new approach for modeling fracture in solids has been developed, coined the Thick Level Set (TLS) method, in which the damage evolution is linked to the movement of a damage front described with the level set method. This model offers an automatic transition from damage to fracture and deals with merging and branching cracks as well as crack initiation in an easy and robust manner. Furthermore, the TLS in its new (second) version, coined the TLSV2, is able to explicitly model the displacement discontinuity at the position of a crack.
These TLS features are very beneficial for the modeling of cusp crack patterns in resin-rich regions of fiber reinforced polymer composites under mode II loading. In this process, plasticity might occur prior to fracture, which begins with a series of inclined cracks that eventually merge to form what is, at a higher scale of observation, understood as a single crack. When the crack reaches one of the boundaries of these resin-rich regions, the localized deformation in these parts is a sliding one, which is expected to be traction-free.
This in situ process has been reported as one of the reasons for the differences in fracture energy between mode I and mode II crack growth, since forming cusps requires the emergence of more crack surface than forming a single straight crack. Therefore, in order to simulate this fracture process under realistic boundary conditions, a model based on the TLS can be embedded in a ‘macroscopic’ setup, such as three-point bend end-notched flexure. A monolithic scheme with extreme refinement in a zone of interest is the most straightforward approach; however, for a specimen with realistic dimensions, this can be computationally unfeasible due to the computational resources needed to solve the large systems of equations involved in such a problem. An ability to comprehensively model this microscopic process could help to achieve a better understanding of the mechanism behind the observed dependence of the fracture energy on the mode of fracture, which may in turn improve macroscale simulations.
This work focuses on extending the TLS method in order to profit from its full capabilities to deal with simulations of failure in solids under quasi-static loading conditions. For this purpose, several original numerical and theoretical components are proposed for reaching qualitative agreement with experimental observations of cusp formation in a polymer matrix. In this context, the primary application of this thesis relies on the experimental observations of this process at the microscopic level. However, it is worth mentioning that the numerical tools developed in this thesis are not limited to the problem of cusps; in fact, they can be either used or easily extended to simulate other problems, for instance crack growth through the microstructure of cementitious materials with different aggregates.
First, the TLS is combined with plasticity in order to deal with ductile fracture, since polymers may behave plastically prior to failure, particularly when loaded in shear. To accommodate plasticity, several changes to the TLS framework are introduced. A strength-based criterion for initiation of damage based on the ultimate yield surface of the plasticity model is proposed. A mapping operator for transferring plastic history is included if the integration scheme in a finite element changes due to the evolution of the level set field. Furthermore, a new loading scheme is devised in order to take permanent strain into account.
Next, a generalized framework for the TLSV2 is introduced. The TLSV2 couples continuous and discontinuous approaches within a single framework, where the continuum part allows for handling crack initiation, branching and merging, whereas the discontinuous part brings the capability to handle discrete cracks with large crack opening or sliding without heavily distorted elements, as well as the possibility to model stiffness recovery upon contact.
Two major issues with the TLSV2 method that have not been dealt with since its inception are addressed in this thesis, and solutions are proposed. Firstly, the method depends on identifying the location of the skeleton curve of the level set field, on which the discontinuity in the displacement field is evaluated. The problem of locating the skeleton curve can be a complicated task, even more so because topological events may emerge as the analysis progresses, such as crack branching. The skeleton curve is determined through a combination of ball-shrinking and graph-based algorithms and then mapped onto the finite element mesh. Secondly, the cohesive forces and displacement discontinuity of the TLSV2 are modeled using the phantom node method. Furthermore, a new approach to compute the non-local crack driving force is introduced, and model calibration is discussed. The degree of stiffness recovery under compression that is still needed for the continuum part is investigated.
The TLS can be a computationally demanding approach. Therefore, a domain decomposition strategy is introduced in order to obtain a parallel implementation of the TLS method. To handle the numerical components specific to the TLS analysis steps involving level set update, equilibrium solution, and damage front advance, a parallel strategy is introduced for each of them. The most demanding task in terms of computational cost, i.e., solving the linearized system of equations from the equilibrium problem, is performed with a parallel iterative method profiting from the adopted domain decomposition method. A communication strategy is provided to deal with enriched nodes and new nodes necessary for the phantom node method belonging to shared regions of subdomains. Collective communication strategies are also proposed to deal with operations related to the level set update, damage front advance, and skeleton curve.
Numerical experiments demonstrate the accuracy and efficiency of the proposed framework in handling simulations of failure analysis with complex crack patterns in a sequential and parallel context.","Thick Level Set; Plasticity; Parallel computing; Skeleton curve; Fracture mechanics","en","doctoral thesis","","978-94-6366-710-4","","","","","","","","","Applied Mechanics","","",""
"uuid:738394ab-c9a3-4c19-9a86-89ed0bed1b32","http://resolver.tudelft.nl/uuid:738394ab-c9a3-4c19-9a86-89ed0bed1b32","Cooperation between Vessel Service Providers for Port Call Performance Improvement","Nikghadam, S. (TU Delft Transport and Logistics)","Tavasszy, Lorant (promotor); Rezaei, J. (copromotor); Delft University of Technology (degree granting institution)","2023","Ports are vital for maritime logistics. With the growth of maritime traffic, ports and their actor organizations have faced rising pressure. Improving port call performance, to accommodate more vessels in shorter times, is now at the top of the agenda for many ports. The performance of ports in serving vessels can be improved by developing cooperative relationships between the vessel service providers. Service providers can engage in cooperative relationships, share information regarding the availability of their resources, and adjust their initial plans. Such synchronization can create a seamless sequence of services, shorten vessels’ waiting times, and eventually improve port call performance. Despite the strong aspiration for this improvement, progress is still slow worldwide.
This thesis argues that a crucial missing piece for the advancement of cooperation in ports is the perspective of service providers. The existing literature generally points out the benefits of cooperation for the port as a whole, assuming that the port service providers would cooperate if it benefits the whole port, regardless of the benefits for the cooperating parties. However, in major ports today, port services are offered by self-governed organizations, each of which has its own goals. As these organizations run their own businesses and have their own resources and characteristics, they are likely to avoid actions and decisions that are not in line with their business, even if collective benefits exist. Therefore, considering the service providers’ perspectives when designing mutually beneficial cooperation strategies is crucial. To this end, this thesis aims to improve port call performance through cooperation among service providers, considering the perspectives of both vessels and service providers....","","en","doctoral thesis","","978-90-5584-331-2","","","","","","","","","Transport and Logistics","","",""
"uuid:64e15692-06d7-4e3a-9d51-97f4a07b403f","http://resolver.tudelft.nl/uuid:64e15692-06d7-4e3a-9d51-97f4a07b403f","One thing after another: The role of users, manufacturers, and intermediaries in iot security","Turcios Rodriguez, E.R. (TU Delft Organisation & Governance)","van Eeten, M.J.G. (promotor); Hernandez Ganan, C. (promotor); Delft University of Technology (degree granting institution)","2023","In recent years the number of Internet-connected devices (also known as the Internet of Things, or IoT) has increased dramatically. IoT manufacturers have launched a variety of IoT products onto the market to make a profit, while users buy them for the convenience of the technology. Despite IoT technology’s benefits to society, IoT devices infected with malicious software (malware) are a serious security concern. For instance, in 2016, we witnessed one of the largest Distributed Denial of Service (DDoS) attacks facilitated by IoT devices. This attack disrupted major well-known websites, including Twitter, Spotify, Github, and others.
Infected IoT devices cause negative externalities. A negative externality is the cost that third parties, who are neither the seller nor the buyer of IoT devices, must incur to protect themselves against DDoS attacks.
In the traditional personal computer world, compromised machines can be remedied with self-service solutions like antivirus software. However, for the wide variety of IoT devices, such tools to help users remove malicious software once it has taken hold are lacking. This, in turn, creates usability issues for users in the IoT space. To remediate infected IoT devices, users may need to take different actions, depending on the device type, its manufacturer, the patches or software updates available, and the available settings of the device.
Some Internet Service Providers (ISPs) (referred to interchangeably as intermediaries in this dissertation) have undertaken the task of notifying users about infected IoT devices in their home networks. Such notifications can serve users as a threat detection mechanism for infected IoT devices.
Considering that IoT technology has certain limitations, that users will have to deal with infected IoT devices, and that the aforementioned actors are involved, we set out to answer the following research question: How can users mitigate infected IoT devices? And what role can manufacturers and intermediaries play in supporting them? In short, users require information and actionable advice to take appropriate actions. Manufacturers need to improve security practices, such as removing default credentials from the setup process of IoT devices. ISPs can facilitate threat detection through notifications and DNS-based prevention. The results of this dissertation suggest that governments should incentivize intermediaries and manufacturers to address these issues, and that collaboration among stakeholders is essential, since users alone cannot mitigate infected IoT devices even though they are motivated.","Internet of Things; cleanup IoT malware; IoT malware remediation; User experience with IoT malware","en","doctoral thesis","","978-94-6419-829-4","","","","","","","","","Organisation & Governance","","",""
"uuid:e00f2539-c0b9-49a4-a20f-8a4d0e68cba7","http://resolver.tudelft.nl/uuid:e00f2539-c0b9-49a4-a20f-8a4d0e68cba7","SiC-deposited ceramic membranes for treatment of oil-in-water emulsions","Chen, M. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); Heijman, Sebastiaan (promotor); Delft University of Technology (degree granting institution)","2023","Water scarcity, population growth, and climate change are causing a shortage of water resources globally. Industries are turning to the reclamation and reuse of wastewater, including oily wastewater, which is a major byproduct of oil and gas extraction. The small droplet size of oil-in-water emulsions, however, makes them difficult to remove using traditional methods like coagulation and flocculation, gravitational settling, dissolved air flotation, hydrocyclone, and adsorption.
Membrane separation has emerged as one of the most promising techniques to deal with oil-in-water emulsions due to its high removal efficiency and small footprint. The main challenge for the wider adoption of membrane technology for oily wastewater treatment is membrane fouling. Membrane fouling is a pervasive problem in water purification membranes. It could cause serious negative effects, such as a decline in water production, higher operational pressure and associated higher energy consumption.
Ceramic membranes, particularly SiC membranes, are a promising method for removing small oil droplets from water. They are physically and chemically stable and have high fouling resistance to oil droplets. SiC membranes have better permeability and lower fouling tendency compared to other ceramic membranes. However, their high cost limits their widespread application in the market.
In this research, extensive literature reviews were first performed (Chapters 2 and 3) and then we proposed a new method, low-pressure chemical vapor deposition (LPCVD), to prepare SiC-deposited ceramic membranes for oily wastewater treatment. With LPCVD, a layer of SiC was deposited on alumina supports at a lower temperature (750 °C), compared to 2000 °C for commercial SiC preparations. Due to the low water contact angle (< 5°) and negatively charged surface, these SiC-deposited alumina ceramic membranes are expected to be more fouling resistant to oil emulsions than the pristine alumina membranes. The performance of deposited membranes is influenced not only by the coated SiC layer but also by the filtration modes used for evaluation. As a result, we used constant-pressure and constant-flux filtration, respectively, to assess the fouling of ceramic membranes with and without SiC deposition. Additionally, the emulsion chemistry, such as surfactant concentration, pH, salinity, and Ca²⁺, plays a crucial role in the interactions between oil droplets and the membrane surface, which can cause membrane fouling. Understanding these mechanisms can be a crucial step towards the feasibility of using LPCVD to prepare SiC membranes for treating oily wastewater with lower fouling.
First, novel SiC-deposited ceramic membranes were developed by LPCVD at a relatively low temperature (750 °C) (Chapter 4). Different deposition times varying from 0 to 150 min were used to tune membrane pore size. The pure water permeance of the membranes only decreased from 350 L m⁻² h⁻¹ bar⁻¹ to 157 L m⁻² h⁻¹ bar⁻¹ when the deposition time was increased from 0 to 120 min. Correspondingly, the membrane pore size was narrowed down from 71 to 47 nm. Increasing the deposition time from 120 to 150 min mainly resulted in the formation of a thin, dense layer on top of the support instead of in the pores. Notably, the SiC layer rendered the pristine membrane surface more hydrophilic and negatively charged, effectively reducing membrane fouling during oil emulsion filtration.
Next, the fouling of the SiC-deposited ceramic membranes and the pristine alumina membrane was compared under constant-pressure and constant-flux filtration conditions (Chapter 5). The threshold flux of the membranes was first determined by flux-stepping experiments. Afterwards, membrane filtration was conducted both below and above the threshold flux. In the single-cycle constant-flux filtration experiments, the fouling tendency of the membranes was consistent with the results of the threshold flux experiments. However, the inclusion of backwash in constant-flux experiments led to a change in the fouling tendency, which was also dependent on the permeate flux. The improved surface hydrophilicity and charge made backwash more efficient for the modified membranes, while extensive modification had a negative effect on membrane fouling resistance due to the large loss in membrane permeance. In contrast, constant transmembrane pressure experiments showed that the order of membrane fouling was only related to membrane permeance, and no effect of surface properties was observed. Therefore, constant-flux filtration experiments with backwash are recommended for evaluating the performance of the membranes with and without modification.
Finally, the impact of emulsion chemistry and operational parameters on the fouling of alumina membranes with and without a SiC deposition was systematically studied under constant-flux filtration mode with backwash (Chapter 6). The results showed that the SiC-deposited membrane had lower reversible and irreversible fouling when the permeate flux was below 110 L m⁻² h⁻¹. In addition, a higher permeance recovery after physical and chemical cleaning was observed, as compared to the alumina membranes. The fouling of both membranes decreased with increasing sodium dodecyl sulphate (SDS) concentration in the feed, but to a higher extent for the alumina membranes. Increasing the pH of the emulsion could reduce the fouling of both membranes due to the enhanced electrostatic repulsion between oil droplets and the membrane surface. Under high salinity conditions (100 mM NaCl), the screening of surface charge resulted in only a small difference in irreversible fouling between the alumina and SiC-deposited membranes. The presence of Ca²⁺ in the emulsion led to high irreversible fouling of both membranes, because of the compression of the diffuse double layer and the interactions between Ca²⁺ and SDS. The low fouling tendency and/or high cleaning efficiency of the SiC-deposited membranes indicated their potential for oily wastewater treatment.
Overall, this dissertation shows that the fouling of SiC-deposited ceramic membranes is lower than that of the pristine alumina membranes towards oil-in-water emulsion treatment. Although there are still limitations, these SiC-deposited membranes show the potential for further development.
Despite the inconvenience of intermittent operation, the benefit of using intermittently-powered devices instead of ‘classical’ battery-based ones is threefold: the removal of batteries creates a more environmentally friendly device; harvesting energy from ambient sources is sustainable; and removing the battery can potentially lead towards perpetual operation—as long as there is an ambient energy source, battery-free devices will continue operating.
Challenges of battery-free devices, however, still include basic features that are foundational to IoT devices. Interaction with battery-free devices has so far remained largely unexplored, although reactive and screen-oriented systems are a significant part of today’s and future Internet of Things. Common tools used during development, such as debuggers and testing frameworks, are practically non-existent for intermittent devices. Even basic concepts such as keeping track of time need to be carefully considered on intermittently-powered devices. Finally, wireless networking of intermittently-powered devices is severely limited to backscatter or one-directional communication.
This dissertation addresses the challenges mentioned above by developing and deploying mechanisms that enable connected and fully interactive applications on battery-free devices. These mechanisms alleviate key challenges that hinder actual adoption and infrastructure-less deployment of these battery-free devices.","Battery-Free; Intermittent Computing; Wireless Networking; Internet of Things; Embedded Systems","en","doctoral thesis","","978-94-6384-453-6","","","","","","","","","Embedded Systems","","",""
"uuid:fb1e3e10-0495-43e8-a53c-299062dbe58f","http://resolver.tudelft.nl/uuid:fb1e3e10-0495-43e8-a53c-299062dbe58f","Assessment of bolted connections for supporting structures of offshore wind turbine towers: Mechanical performance and structural health monitoring","Cheng, L. (TU Delft Steel & Composite Structures)","Veljkovic, M. (promotor); Groves, R.M. (promotor); Delft University of Technology (degree granting institution)","2023","In the past two decades, offshore wind has emerged as a new source of renewable energy. This has driven the deployment of larger and more efficient offshore wind turbines (OWTs). The connections used in support structures of OWTs are critical to ensuring the structural performance of offshore wind farms (OWFs). The C1 wedge connection (C1-WC) is an alternative option for joining virtually all wind turbine generator (WTG) towers to their foundations. This connection shows promising potential in reducing construction, installation, and maintenance costs by eliminating the ring flange and using smaller-diameter bolts.
To date, the C1-WC has undergone three generations of development. A more comprehensive research program is required to explore its implementation in a wind farm. The load transfer mechanism and critical components of C1 wedge connections are different from those of conventional bolted ring flange (RF) connections. It is important to understand the mechanical behaviour of this connection. Meanwhile, support structures are exposed to a harsh environment during the service life of the OWTs. Material degradation and local cracks in the connection inevitably affect the serviceability of OWTs. A need for a reliable and rigorous structural health monitoring (SHM) system for the connection is evident. As a non-destructive technique (NDT), acoustic emission (AE) has been extensively used in early damage detection and real-time assessment of steel structures. Despite its successful applications, challenges still exist in using the AE technique for monitoring applications, especially in analysing the recorded data. Therefore, this research aims to assist in understanding the mechanical behaviour and evaluating the health status of this innovative connection.
An extensive experimental program was conducted to evaluate the static and cyclic behaviour of the C1-WCs. Additionally, a detailed 3D non-linear finite element (FE) model of the C1-WCs has been developed. The incorporation of material non-linearity and ductile damage allows the FE model to capture the post-necking behaviour and final fracture of the connection. The FE model replicates the experimental tensile static and cyclic tests up to final damage with good agreement and reproduces the joint behaviour correctly. Parametric studies investigate the influence of bolt grade, the friction coefficient between contact surfaces, and the preloading force level on the mechanical behaviour. Moreover, a quantitative comparison between the C1-WC and two types of connections (the RF connection and the RF connection with defined contacts) is performed to provide practical insights into the selection, application, and further optimization of such connections. FE-assisted analyses were performed to examine the effect of the applied boundary conditions, bolt pretension level, and steel grade on the behaviour of the connections.
In addition to the mechanical behaviour analysis, this research also focuses on developing data processing methods to address the challenges of AE monitoring for the C1-WCs. A hybrid model is proposed to identify the deformation stage of metallic materials. This method combines a self-adaptive denoising technique and an artificial neural network (ANN). To reduce noise in the AE signals, a decomposition-based denoising method is proposed based on singular spectrum analysis (SSA) and variational mode decomposition (VMD), referred to as SSA-VMD. After denoising, an ANN is constructed to identify the deformation stage of steel materials using features extracted from the filtered AE signals as input.
Fatigue damage of the C1-WCs could result in catastrophic failure of OWTs. Due to space constraints, it can be challenging to detect surface cracks in the lower segment holes of the C1-WC using commercial sensors. Thin PZT sensors are lightweight and small, making them suitable for use in restricted-access areas. However, their poor signal-to-noise ratio can limit their effectiveness in AE monitoring. A criterion for selecting the optimal thin PZT sensors is proposed, and a configuration is designed for multiple sensors. Two signal processing methods are then proposed to address this issue. First, a data fusion-based method is proposed to enhance the functionality of thin PZT sensors in AE applications. Convolutional neural networks (CNNs) combined with principal component analysis (PCA) are employed for signal processing and data fusion. Second, a baseline-based method is proposed to provide early warning of fatigue damage of the C1-WCs using thin PZT sensors. A benchmark model correlated to the damage state is created by breaking pencil leads. Multivariate feature vectors are extracted and then mapped to the Mahalanobis distance for identification.
Based on this research work, an efficient FE method has been developed to further improve the design of the C1-WC. By providing an in-depth guideline for evaluating the mechanical performance of connections used in OWTs, this research has the potential to contribute to the development of more robust and reliable wind turbine structures. Moreover, the proposed signal processing methods for identifying the deformation stage and early fatigue damage can be further explored in structures with similar damage mechanisms. This can lead to the development of more accurate and effective methods for monitoring and assessing the health of offshore wind turbines, ultimately contributing to improved safety and reliability in the renewable energy industry.","C1 wedge connection; ring flange connection; tensile behavior; fatigue performance; Finite Element Modelling; acoustic emission; thin PZT sensors","en","doctoral thesis","","978-94-6366-707-4","","","","","","2024-07-03","","","Steel & Composite Structures","","",""
"uuid:e98ab258-5ec4-4536-a2a7-532e5666a0bb","http://resolver.tudelft.nl/uuid:e98ab258-5ec4-4536-a2a7-532e5666a0bb","Urban Food Production: Exploring the potential of urban agriculture for the decarbonisation of cities","ten Caat, P.N. (TU Delft Environmental & Climate Design)","van den Dobbelsteen, A.A.J.F. (promotor); Tenpierik, M.J. (promotor); Tillie, Nico (copromotor); Delft University of Technology (degree granting institution)","2023","The anthropogenic demand for food, energy and water (FEW) resources is growing, changing and increasingly concentrating in cities due to fast urbanisation worldwide. The carbon dioxide emissions associated with the FEW supply infrastructure make cities one of the main drivers of global greenhouse gas emissions. Urban food production (UFP) could potentially mitigate cities’ carbon emissions by means of direct and indirect emission cutbacks, through proximity-based advantages and through recirculation benefits from integration with the urban resource infrastructure, respectively. The inherent complexity and comprehensiveness of food production make it challenging to explore this method during the urban design process and to provide holistic evaluations at an early stage.
This research investigates how urbanising the production of food can mitigate the carbon emissions of urban communities. Following the principles of the FEW nexus approach to resource management, a method and platform have been developed that support professionals such as urban planners and designers in exploring urban food production during the design process. The aim of this work is to transform cities into more sustainable and resilient places to live. This work hypothesises that urbanising the production of food resources and making urban food production an integral part of the urban resources infrastructure can help the decarbonisation of cities. The objective of this work is to develop a protocol and platform for a non-expert, multi-disciplinary urban design team that can guide the implementation and evaluation of a food production system. The platform, which has been coined the FEWprint, should guide the agro-urban designer during the exploration phase of the design process by providing quantitative feedback on various relevant indicators. The following main research question has been formulated based on the problem statement, hypothesis, research aim and objective: How could the urban food production design process be harmonised with the FEW nexus principles in order to lower the carbon footprint of the city?...
The research is built on open source data (in-situ and satellite measured as well as numerically modelled) from the Copernicus Marine Environment Monitoring Service, the Dutch Directorate-General for Public Works and Water Management (Rijkswaterstaat), the Royal Netherlands Meteorological Institute, and the Euro-CORDEX regional climate modelling experiment. It also uses the open source numerical modelling software Delft3D from Deltares. All other statistical models and algorithms developed during the research are published and available open source.
The thesis starts by demonstrating the value of probabilistic predictions and uncertainty quantification for coastal ecosystems. This is done by constructing an ensemble modelling framework in which selected numerical model inputs and model process parameters, to which the simulated coastal chlorophyll-a concentration is sensitive, are perturbed. The model perturbation was implemented using Latin Hypercube Sampling with Dependence (LHSD), and more than 150 ensemble members were produced using the Delft3D model. This ensemble prediction system is then compared to the deterministic model setup. A range of verification metrics describing the goodness-of-fit, accuracy, reliability, and discrimination properties of both modelling experiments were computed. Beyond the verification metrics, the value of probabilistic predictions was also showcased by evaluating the benefit of having temporal and spatial estimates of uncertainty, by producing ensemble bands, predictive uncertainty intervals, and standard deviation maps.
In Chapter 3 of the thesis, we work towards the quantification of climate change induced uncertainties in coastal phytoplankton response. The first necessary step is a comprehensive data exploration and dimension reduction, which also provides a statistical underpinning of atmospheric variable selection for the climate impact studies conducted later in the thesis. Here a range of existing dimension reduction techniques are described and applied to seven atmospheric variables (air temperature, solar radiation, eastward wind, northward wind, air pressure, relative humidity, and total cloud cover) and the chlorophyll-a data at hand. These techniques are applied in a structured way to include spatial and temporal correlation, as well as functional features in the multi-dimensional data. The applied methods include Principal Component Analysis (PCA), Principal Component Regression (PCR), Partial Least Squares (PLS) Regression, multi-way models (PARAFAC, Tucker and N-PLS), Dynamic Factor Analysis (DFA), and Functional PCA. Room for dimension reduction in the atmospheric data was identified, underlying temporal patterns in the chlorophyll-a signal at different locations were revealed, structural similarities (characterized by a mean function and functional variation) in the Euro-CORDEX climate projections were found, and the most influential atmospheric variables (solar radiation and air temperature) were chosen.
Building on these findings, we propose a way to quantify uncertainties in the climate scenarios that are used for the climate impact studies. The basis of this research step is the development of a stochastic climate generator, which is first tested on the solar radiation variable. This climate generator takes the existing Euro-CORDEX scenarios (a combination of Representative Concentration Pathways and General Circulation Model forcings) and enriches them by generating numerous new synthetic scenarios around them. These generated scenarios are representative of the original ones due to the way the stochastic climate generator is constructed. The basis of the climate generator is a Bayesian multi-layered (hierarchical) model, in which model parameters represent variation in the long-term trend, seasonal amplitude, time shift, and additive residual. The generator estimates the distribution of each model parameter with Bayesian inference, using data from all scenarios. Then, by sampling from the parameter distributions, numerous climate trajectories can be constructed. The climate generator is successfully tested on the solar radiation variable, and the generated synthetic radiation projections are used in a demonstration study where uncertainties are further propagated to chlorophyll-a concentrations using the Delft3D numerical model.
In the final research step of the thesis, this Bayesian stochastic generator is extended to air temperature. This way we have numerous (>100) radiation and temperature projections available to propagate climate induced uncertainties to coastal chlorophyll-a response once again, this time covering the entire 21st century. In order to translate the climate signal into chlorophyll-a response, we make use of a Bayesian structural time series model. This model follows a piecewise linear trend and continues to repeat its multi-seasonal behavior, learnt from the past data, and most importantly also includes linear effects of the two climate variables. For the training of this time series model, we construct a historical chlorophyll-a signal by fusing in-situ and satellite measurements. This fused signal helps us to take advantage of the more frequent satellite measurements while correcting them with the more accurate in-situ measurements that are also available for a longer historical period. The Bayesian structural time series model is then trained on the fused chlorophyll-a signal and used for long term projection, taking the generated radiation and temperature scenarios as regressors. Since our main interest is the phytoplankton spring bloom dynamics, as a last step we extract yearly spring bloom cardinal dates (beginning, peak, end) from the long-term chlorophyll-a projections using a non-parametric shape constrained method (log-concave regression). The final result is therefore the estimation of climate change induced uncertainty in the coastal phytoplankton spring bloom dynamics.","climate change; uncertainty quantification; coastal phytoplankton phenology; Bayesian models; data fusion; multivariate analysis","en","doctoral thesis","","978-94-6366-700-5","","","","","","","","","Statistics","","",""
"uuid:366ceaef-39cf-4d87-9883-cf942b4971c4","http://resolver.tudelft.nl/uuid:366ceaef-39cf-4d87-9883-cf942b4971c4","Experimental aeroelastic characterization based on integrated optical measurements","Mertens, C. (TU Delft Aerodynamics)","van Oudheusden, B.W. (promotor); Sciacchitano, A. (copromotor); Sodja, J. (copromotor); Delft University of Technology (degree granting institution)","2023","This thesis presents a novel measurement approach for aeroelastic wind tunnel testing. The key novelty of this approach is the integrated measurement of aerodynamic and structural quantities using an optical technique. The considered approach consists of combined measurements of flow tracer particles and structural markers using a Lagrangian particle tracking system. Based on these measurements, the quantities of interest for the characterization of an aeroelastic interaction, which are the three forces in Collar’s triangle (aerodynamic, elastic, and inertial), are determined. Currently, measurements in aeroelastic wind tunnel tests are typically performed with individual sensors for each quantity of interest (pressure transducers, strain gauges, or accelerometers) that are installed inside the experimental model and/or with a force balance that measures the total loads acting on the model. The integrated optical measurement approach is an advancement over this existing measurement technology because it provides field measurements of the aeroelastic structural response and the unsteady flow field around the experimental model, based on which the aerodynamic and structural load distributions can be determined, without requiring an instrumentation of the model with sensors. This measurement approach is therefore an effective way to produce experimental reference data to support the development of novel aeroelastic prediction methods with a potential to accelerate the technological development process for innovations in aeronautics in the future. 
The development and applications of the integrated optical measurement in this thesis are based on measurements performed in three experimental campaigns in the wind tunnel. Each of the three experiments corresponds to one of the three main chapters of this thesis. All three experiments are performed on a large model scale, with dimensions on the order of 1 m, which is a scale of high practical relevance for aeroelastic wind tunnel testing. The complexity of the three experiments, in terms of the aeroelastic phenomena that are observed, is increased incrementally, from rigid-body motion, through a linear aeroelastic test case, to a nonlinear aeroelastic test case. Based on the observations and findings of the previous experiments, the data analysis methods for the subsequent experiments are selected and applied. The first measurements with the integrated approach...","experimental aeroelasticity; wind tunnel testing; flexible wing; gust response; unsteady aerodynamics; Lagrangian particle tracking; PIV","en","doctoral thesis","","978-94-6366-706-7","","","","","","","","","Aerodynamics","","",""
"uuid:76ff65e4-cf07-4ff4-b3b5-937860e0f675","http://resolver.tudelft.nl/uuid:76ff65e4-cf07-4ff4-b3b5-937860e0f675","Accelerating Programmer-Friendly Intermittent Computing","Kortbeek, V. (TU Delft Embedded Systems)","Langendoen, K.G. (promotor); Pawełczak, Przemysław (promotor); Delft University of Technology (degree granting institution)","2023","The Internet of Things (IoT) is taking the world by storm, from smart lights to smart plant monitoring. This revolution is not only present in consumers’ homes, but companies are also looking for more and more ways to monitor every aspect of their production process. This transition to ubiquitous monitoring is made possible by extremely low power embedded devices, mostly powered by batteries. However, with the projected number of IoT devices reaching tens of billions within the next few years, this growth will directly contribute to a massive increase in battery waste, negatively impacting the environment. This increase in battery waste alone is already a well-founded reason to explore alternative energy sources. However, batteries come with more downsides. Many of these IoT devices will operate in hard-to-reach places (e.g., embedded into walls), and the sheer quantity in which these devices will be deployed will make it nearly impossible to replace batteries periodically without employing a costly dedicated workforce...","intermittent computing; battery-free; compiler; interpretation; embedded; low-power; energy harvesting; non-volatile memory","en","doctoral thesis","","978-94-6473-147-7","","","","","","2024-06-29","","","Embedded Systems","","",""
"uuid:2c4855b0-96f8-4a0f-bbca-22e1dad25874","http://resolver.tudelft.nl/uuid:2c4855b0-96f8-4a0f-bbca-22e1dad25874","Transforming urban heating systems: Integrating perspectives on water use, committed emissions and energy justice in the city of Amsterdam","Kaandorp, C. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); Abraham, E. (copromotor); Delft University of Technology (degree granting institution)","2023","","urban heating systems; water-energy Nexus; climate change mitigation; urban sustainability transitions","en","doctoral thesis","","978-94-93315-82-2","","","","","","","","","Water Resources","","",""
"uuid:abdccd21-e390-45f5-b7a3-985f9a8a682e","http://resolver.tudelft.nl/uuid:abdccd21-e390-45f5-b7a3-985f9a8a682e","Route Choice Behaviour under Uncertainty in Public Transport Networks: Stated and Revealed Preference Analyses","Shelat, S. (TU Delft Transport and Planning)","van Lint, J.W.C. (promotor); Cats, O. (promotor); van Oort, N. (copromotor); Delft University of Technology (degree granting institution)","2023","Arguably, nearly all real-world decisions, including travel choices, are inherently associated with subjective uncertainty where decision-makers’ personal evaluations play a significant role. In public transport networks, uncertainty due to waiting time and, recently, the COVID-19 pandemic possibly induce the most frustration and anxiety. Therefore, with the overarching aim of making public transport a viable and satisfying option, this thesis is dedicated to modelling and analysing the impact of such pervasive uncertainty on public transport travellers’ route choice behaviour.","","en","doctoral thesis","","978-90-5584-327-5","","","","","","","","","Transport and Planning","","",""
"uuid:a8225d35-bb57-4d76-a288-8f96d215f246","http://resolver.tudelft.nl/uuid:a8225d35-bb57-4d76-a288-8f96d215f246","Improving Environmental Sustainability of Regional Railway Services","Kapetanović, M. (TU Delft Transport and Planning)","Goverde, R.M.P. (promotor); van Oort, N. (copromotor); Delft University of Technology (degree granting institution)","2023","Regional non-electrified railways in Europe are facing significant challenges to improve energy efficiency and reduce greenhouse gas (GHG) emissions. In addition to GHG emission regulations, companies are also imposing voluntary emission reduction targets, not only because of corporate responsibility, but also in an attempt to improve their market share, company image, and value. Because regional lines feature low transport demand compared to the main corridors, complete electrification is often not economically viable. Solutions are therefore sought in alternative energy carriers and catenary-free propulsion systems. The transition from conventional diesel traction is a complex and context-specific dynamic decision-making process that requires involvement of multiple stakeholders and consideration of numerous aspects. It requires in-depth analyses that include identification of available technology, design, modelling, and assessment of potential alternatives, with respect to the particular case-related constraints imposed by infrastructure, technical and operational characteristics (e.g., track geometry, speed, and axle load limitations, maintaining existing timetables, noise-free and emission-free operation in stations, etc.). Hence, the overarching aim of this thesis is to identify and assess potential solutions for reducing overall (Well-to-Wheel) energy use and GHG emissions from the operation of regional trains, focussing primarily on synergetic adoption of alternative propulsion systems and energy carriers.
We use the case study of the Dutch Northern lines with rolling stock and train services of Arriva to undertake this research, providing several scientific and practical contributions.","","en","doctoral thesis","","978-90-5584-325-1","","","","","","2023-06-28","","","Transport and Planning","","",""
"uuid:904477fe-5ac7-41a9-806a-f6178f8ba11c","http://resolver.tudelft.nl/uuid:904477fe-5ac7-41a9-806a-f6178f8ba11c","High-frame-rate volumetric ultrasound imaging using dedicated arrays and deep learning","Ossenkoppele, B.W. (TU Delft ImPhys/Medical Imaging; TU Delft ImPhys/Verweij group)","de Jong, N. (promotor); Verweij, M.D. (promotor); van Sloun, R. J. G. (promotor); Delft University of Technology (degree granting institution)","2023","High-frame-rate volumetric ultrasound imaging is highly desired to enable novel clinical ultrasound applications. However, realizing high-quality volumetric ultrasound imaging at high frame rates (>500 Hz) is challenging. Keeping the cable count and data rate of the transducer device at a realistic level without sacrificing image quality to an undesirable extent requires a dedicated design with carefully chosen trade-offs, as well as powerful processing of the received signals. This thesis describes the development of a high-frame-rate 3D ultrasound transducer through dedicated transducer design and explores the use of deep learning-based beamforming to achieve high-quality 3D imaging. Specifically, the first part of this thesis focuses on the development of an imaging scheme and the realization and testing of two prototype transducers for high-frame-rate 3D intracardiac echography (3D-ICE). The second part of the thesis implements deep learning in the image reconstruction process to improve the image quality of volumetric ultrasound. Deep learning-based beamforming is implemented and evaluated first for a miniature matrix array, which, similar to the 3D-ICE design, applies micro-beamforming to achieve cable count reduction, and finally for a spiral array which uses a sparse distribution of transducer channels.","ultrasound; 3D; ICE; high frame rate; matrix transducer array; deep learning; beamforming","en","doctoral thesis","","978-94-6366-701-2","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:5978034d-f9e5-4094-99dc-dd8591828125","http://resolver.tudelft.nl/uuid:5978034d-f9e5-4094-99dc-dd8591828125","Sandy beaches in low-energy, non-tidal environments: Unraveling and predicting morphodynamics","Ton, A.M. (TU Delft Coastal Engineering)","Aarninkhof, S.G.J. (promotor); Vuik, V. (copromotor); Delft University of Technology (degree granting institution)","2023","Sandy foreshores, beaches and dunes play an eminent role in flood risk reduction in coastal areas, reducing the impact of wind waves and storm surges on the hinterland. In some areas, sandy protection is naturally present. In other coastal areas, engineering solutions are needed to provide safety. “Soft” sediment-based solutions often serve multiple objectives, including flood safety, but also provide other ecosystem services. Knowledge of morphodynamics of these “soft” solutions (i.e. beaches) is crucial for protecting and managing coastal areas prone to flood risk. The aim of this thesis is to understand and quantify how hydrodynamic processes drive morphological development of low-energy, non-tidal, sandy beaches.","Sandy beaches; morphodynamics; low-energy; LakeSIDE","en","doctoral thesis","","978-94-6469-414-7","","","","","","","","","Coastal Engineering","","",""
"uuid:3fc78df4-ddf4-4dad-8ea7-016813d4debc","http://resolver.tudelft.nl/uuid:3fc78df4-ddf4-4dad-8ea7-016813d4debc","Understanding degradation mechanisms at railway transition zones using phenomenological models","Faragau, Andrei B. (TU Delft Dynamics of Structures)","Metrikine, A. (promotor); van Dalen, K.N. (promotor); Delft University of Technology (degree granting institution)","2023","Due to the current climate crisis, railway transport is receiving increased attention owing to its capability of running fully on electricity, which can be generated from renewable sources. High-speed railway networks and new concepts, such as Hyperloop, are already competing with road and aviation transport. However, the increased demand on railway transport causes an acceleration in infrastructure degradation, leading to an increased frequency of maintenance and repair operations. Consequently, what was previously considered normal ""wear and tear"" of the infrastructure is quickly turning into serious challenges causing disruptions to the normal operation of traffic.
When it comes to track degradation, the so-called transition zones require significantly more frequent maintenance than the regular parts of the railway track. Transition zones in railway tracks are areas with substantial variation of track properties (e.g., foundation stiffness) encountered near rigid structures such as bridges, tunnels, culverts, or rail-crossings. The occurrence of differential settlements at transition zones has been known for a long time, and a multitude of mitigation measures have been designed to cope with this problem. Nonetheless, the mitigation measures have had only limited success and in some cases have even exacerbated the problem. Although the failure of some mitigation measures stems from inadequate design and poor implementation, overall, the lack of efficiency of mitigation measures can be attributed to the lack of understanding of the main mechanism(s) that drive(s) the differential settlement. Therefore, to design efficient mitigation measures, one needs to advance the understanding of the physical processes leading to differential settlements at transition zones. This constitutes the first objective of this dissertation.
The settlement mechanisms are studied in this dissertation through models rather than in-situ measurements or lab experiments. The majority of previous studies have used models to (i) understand and (ii) predict the response of railway tracks at transition zones. Researchers aiming at (i) have usually used simplified phenomenological models in which system characteristics that are not of interest are excluded. More recently, the models' complexity has increased tremendously by incorporating many system characteristics, making these models ideal for (ii), but less ideal for (i) due to the many mechanisms simultaneously at play. This led to the second objective of this dissertation, which is to investigate the effect of specific characteristics of the railway system on the degradation at transition zones. In other words, the second objective entails improving the simplified models by incorporating additional characteristics and determining which of these characteristics is of importance and which can be neglected.
Naturally, this dissertation can only focus on a few of the many aspects involved in this complex problem, and the two main constraints are presented in the following. Improving the maintenance operations themselves by employing new technologies could lead to a reduction in the maintenance frequency. However, to develop a long-term solution, one should aim at eliminating the root cause. Therefore, this dissertation investigated the initiation phase of the settlement, and not the accumulation phase. Furthermore, this dissertation focused on the differential settlement stemming solely from the amplification of stresses and strains that occur at transition zones, which is significant at relatively large train velocities. Consequently, this dissertation has not treated other sources of differential settlements, such as the different rates at which autonomous settlement develops in the open-track and at the man-made structure.
Using a simple phenomenological model representative of the railway track, Chapter 2 demonstrates that the response amplification at transition zones is caused by the interference between the steady-state field and the free field generated by the transition process. Consequently, the more pronounced the free field, the larger the resulting amplification. It also shows that the soft-to-stiff and stiff-to-soft transitions have significantly different behaviour, strongly suggesting the need for different mitigation measure designs for the two types of transition. Finally, the transition radiation energy is shown to be invariant between the soft-to-stiff and stiff-to-soft scenarios, a finding that was unexpected considering the above-mentioned difference in behaviour.
Investigating the vehicle-structure interaction, Chapter 4 demonstrates that the amplification of the wheel-rail contact force caused purely by a change in foundation stiffness and damping (i.e., a track without initial imperfections) can be significant. Previous literature studies concluded the opposite; however, these studies considered only quasi-static velocities and small effective changes in foundation properties. The findings presented in this chapter thus supplement earlier findings to offer a more complete picture. Nonetheless, even though the vehicle-structure interaction leads to a stronger transition radiation, it leads to a reduction of the response amplification at the critical locations in transition zones where settlement is usually observed.
Chapter 5 identifies three response amplification mechanisms at transition zones in systems that have a periodic nature. The amplification arises only in a system that is both periodic and locally inhomogeneous; if either of these characteristics is absent, the amplification does not occur. While these mechanisms can be influential for railway overhead wires and for the emerging Hyperloop transportation system, they have a negligible influence on the conventional railway track. Consequently, for investigations focused on transition zones and response amplification at low frequencies, the periodic railway track can be successfully approximated by an equivalent continuously supported one without neglecting influential amplification mechanisms.
Chapter 6 introduces the ballast settlement and investigates its influence on the transition process. It shows that the development of the initial settlement leads to a redistribution of the transition radiation energy during the transition not only between frequencies, but also between the soft and stiff media. This redistribution is mainly attributed to the separation between the beam and foundation at the settlement location. Consequently, if the developed settlement is not large enough to allow for this separation, the influence of the nonlinear foundation on transition radiation is negligible.
Chapter 8 investigates the influence of the foundation nonlocality on transition radiation. It shows that the nonlocality of the soil layer has an increasingly pronounced effect on the steady-state response as its shear stiffness decreases. Consequently, modelling the nonlocality of the supporting structure can be important for railway tracks founded on soft soils. Furthermore, for ballasted tracks founded on soft soils, the response amplification at transition zones can be more pronounced in the soil layer than in the ballast layer, depending on the transition type. This is because the vertical stiffness of the ballast layer can be significantly larger than that of the soil. This finding suggests that soil settlement should be accounted for if the long-term behaviour is to be correctly represented.
The investigation of several mechanisms of response amplification at transition zones performed in this study has led to a deeper understanding of the mechanisms leading to differential settlement at transition zones in railway tracks. This knowledge can serve future researchers and engineers in designing more efficient mitigation measures.
Quantum networking has been studied for a number of years already. Nevertheless, the current state of the art of quantum networks is somewhat comparable to that of the classical internet at the end of the 1960s: lots of interesting ideas, some experimental demonstrations, and very few reliable testbeds. Scaling up to larger networks of quantum computers requires joint efforts of physics, mathematics, electronics and computer science, at the very least. Bringing these disciplines together is a very bumpy road, given that we do not yet have standard quantum physical platforms to work with, nor universal frameworks and testbeds to validate our hypotheses against. One of the missing links between the highly complex physical platforms and networks and the high-level descriptions of quantum networking applications is a framework that bridges the gap between the two, providing platform-independent abstractions of the underlying physics to programmers and users of a quantum network.
The goal of this thesis is threefold: discuss the requirements for such a framework of abstractions — which we refer to as an operating system — for quantum networks, propose a design for such an operating system, and implement and validate this design on a physical quantum network. Whilst we are interested in measuring the performance of the operating system, we consider our design to be best-effort, and thus we are primarily aiming at establishing a baseline for future research in this field. Nevertheless, we are after a fully-functional product that we hope can be used to push the boundaries of quantum networking demonstrations, and to better understand the challenges of designing and implementing efficient operating systems for quantum network nodes.","Quantum networks; operating systems","en","doctoral thesis","","978-94-6384-457-4","","","","","","","","","QID/Wehner Group","","",""
"uuid:aabdc210-7b2a-4e28-808b-9dce5c3b9a43","http://resolver.tudelft.nl/uuid:aabdc210-7b2a-4e28-808b-9dce5c3b9a43","Catalytic ceramic nanofiltration for direct surface water treatment and fenton cleaning","Lin, B. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); Heijman, Sebastiaan (promotor); Delft University of Technology (degree granting institution)","2023","Over the past decades, direct nanofiltration (NF) without pre-treatment has been widely recognized as an alternative to conventional membrane technologies in both drinking water and wastewater treatment, owing to its advantages in energy saving, low chemical usage and high permeate purity. As an alternative, ceramic NF has received growing attention in recent years, given its good robustness and stable separation capabilities as compared to polymeric NF membranes. Organic fouling of ceramic NF membranes remains the key problem affecting the performance of the membranes in water treatment. However, conventional forward flush with pure water is not effective for removing the organic fouling due to its sticky nature. Backwash with pure water has to be applied at high pressures, thereby risking damage to the structure of the membranes. Therefore, chemical forward flush with strong acids, bases or chlorine is frequently required as a substitute for backwash and conventional forward flush, leading to higher chemical consumption. To apply innovative ceramic NF in direct surface water treatment, an eco-friendly cleaning strategy using Fenton-based oxidation was studied in relation to the fouling characteristics of the membrane.
A literature study was done to review the present knowledge on using oxidation methods for fouling mitigation of ceramic membranes. It was found that existing studies were predominantly focused on direct oxidation of organic substances in the feed water of ceramic membrane filtration. These oxidation strategies could mainly reduce cake layer fouling of the membranes, while in many cases aggravating pore clogging due to an oxidation-induced conversion of large organic molecules into smaller ones. Additionally, there is a risk of secondary pollution when using oxidation in the feed water, since the oxidants and any oxidation by-products produced can penetrate the membranes into the permeate. However, little knowledge is available on using oxidation for cleaning fouling layers, in particular on a ceramic NF membrane, in terms of the impact of its fouling characteristics on the efficacy of oxidative cleaning. It was thus recommended that investigating the efficacy and mechanisms of an oxidative cleaning method for ceramic NF membranes should be based on an in-depth understanding of fouling of the membranes...
The dissertation first explores the requirements for conceptualizing applied game engagement, identified through an analysis of three applied gaming projects and an empirical study. It then uses these requirements to develop the Applied Games Engagement Model (AGEM). The AGEM posits that engagement is the process of focusing attention on a task and that attention can be purposefully directed through design.
The practical use of the AGEM is then explored by analyzing applied games. The theory is extended with relevant game design knowledge and applied to game design practice. This results in the Lens of Engagement for Applied Games, a unique way to view the design of an applied game.
Overall, this dissertation provides a comprehensive perspective on applied game engagement, emphasizing the role of attention and its relation to game design. It offers a practical and workable method of considering and discussing game engagement, which can be used by anyone creating or studying applied games.","applied games; game design; engagement","en","doctoral thesis","","","","","","","","","","","System Engineering","","",""
"uuid:a05ddfd2-b82e-453d-a2c7-a7a9e4ce3082","http://resolver.tudelft.nl/uuid:a05ddfd2-b82e-453d-a2c7-a7a9e4ce3082","Development of metal contacts with screen printing for n+ polysilicon/SiO𝑥 passivated silicon solar cells","Chaudhary, A. (TU Delft Photovoltaic Materials and Devices)","Zeman, M. (promotor); van Swaaij, R.A.C.M.M. (promotor); Delft University of Technology (degree granting institution)","2023","The continued reliance on fossil fuels to satisfy the world energy demand is driving climate change, accelerating the melting of the polar ice shelves, and dealing irreversible damage to the flora and fauna of the earth, to name a few of the adverse effects of fossil fuel utilisation. In addition to not being renewable, fossil-fuel resources are also limited and therefore cannot meet the energy demand at some point in the future. The most plausible way forward is to utilise renewable sources of energy to meet the increasing demand for energy. The Sun, our closest star, is the answer to this demand: utilising the abundant solar radiation arriving at the earth to generate electricity. A photovoltaic (PV) solar cell can achieve this by converting the incident sunlight directly into electricity.....
In order to investigate the potential of flexibility, this thesis presents a mathematical model and a heuristic algorithm (Adaptive Large Neighborhood Search, ALNS) for the simultaneous routing of shipments and vehicles. The proposed approach enables flexible routing and scheduling of vehicles, improving the overall efficiency of the transport system in a static setting as a proof of concept. The results of numerical experiments demonstrate that implementing the proposed approach with flexible services can result in a 14% reduction in costs compared to existing methods that do not consider flexibility.
In dynamic planning, this thesis tackles the issue of service time uncertainty in synchromodal transport by using an online Reinforcement Learning (RL) approach, assisted by the ALNS algorithm. The proposed model-assisted RL integrates RL and ALNS to leverage the data-driven strengths of RL and the domain knowledge of ALNS. In this way, the model-assisted RL addresses the ""curse of dimensionality"" caused by the large state space and complex actions in synchromodal transport. The RL approach dynamically adapts to unexpected events that cause uncertainty by learning from real-time data collected from transport operators, terminal operators, and sensors, without requiring any prior information. The proposed approach was tested in various scenarios that included disturbances, disruptions, and a combination of different types of events, and was found to perform better than traditional waiting and average duration strategies in reducing delay, waiting time, cost, and emissions.
When it comes to preference-based planning, this thesis addresses the challenge of incorporating the heterogeneous and vague preferences of shippers and carriers. To account for carriers' preferences, a multi-objective optimization model that incorporates weight intervals is proposed to handle vague preferences. The model generates a Pareto frontier of solutions that best reflects the carriers' preferences, allowing them to make informed decisions. For shippers' preferences, the thesis employs multiple attribute decision-making and fuzzy set theory to address the heterogeneity and vagueness of preferences, respectively. The results demonstrate that incorporating preferences results in improved satisfaction among shippers by providing solutions with preferred attributes on cost, time, emissions, risk, and delay. By improving shipper satisfaction, carriers can benefit from increased customer loyalty and retention, leading to a competitive advantage in the market. Moreover, by considering various attributes, such as cost, time, emissions, risk, and delay, the model can help carriers make more informed and sustainable decisions, leading to improved environmental performance and compliance with regulations. Overall, incorporating preferences in planning can result in a win-win situation for both shippers and carriers, leading to improved operational performance and a sustainable competitive advantage.
In collaborative planning, this thesis examines the benefits of horizontal collaboration among carriers through the sharing of requests and the consideration of eco-labels. The thesis presents an auction-based mechanism to facilitate collaboration and enable distributed planning. Results indicate that this approach leads to increased request fulfillment, improved sustainability, and reduced costs compared to centralized and non-collaborative planning approaches. On the tested instances, the collaboration between carriers can result in significant increases in the proportion of served requests, with gains of 48% and 11% for synchromodal and unimodal carriers, respectively. Additionally, by taking into account eco-label preferences, the use of the highest or mixed eco-labels can lead to emissions reductions of up to 70% and 15%, respectively, compared to ignoring preferences. Compared to synchromodal carriers, unimodal carriers, especially truck carriers, need to share more requests in collaborative planning to reduce the overall cost. From a policy-making perspective, policymakers can take steps to promote the development of synchromodal transport by implementing incentives for collaborative planning and utilizing eco-labels to achieve sustainable synchromodal transport solutions.
In summary, this thesis provides solutions to address the gaps in synchromodal transport planning by proposing innovative mathematical models and algorithms. These methodologies aim to increase the flexibility, reliability, and sustainability of transport services while also reducing cost, time, emissions, and delay. Additionally, the proposed methodologies consider the preferences of both shippers and carriers, promoting a collaborative and eco-friendly approach to transport planning. The numerical experiments and case studies demonstrate the effectiveness and superiority of the proposed approaches compared to existing methodologies.","","en","doctoral thesis","","978-90-5584-326-8","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:0071a2f9-c56f-4f75-b6eb-63ebadadc918","http://resolver.tudelft.nl/uuid:0071a2f9-c56f-4f75-b6eb-63ebadadc918","Abstraction-Guided Modular Reinforcement Learning","Ponnambalam, C.T. (TU Delft Algorithmics)","Spaan, M.T.J. (promotor); Oliehoek, F.A. (promotor); Delft University of Technology (degree granting institution)","2023","Reinforcement learning (RL) models the learning process of humans, but as exciting advances are made that use increasingly deep neural networks, some of the fundamental strengths of human learning are still underutilized by RL agents. One of the most exciting properties of RL is that it appears to be incredibly flexible, requiring no model or knowledge of the task to be solved. However, this thesis argues that RL is inherently inflexible for two main reasons: (1) if there is existing knowledge, incorporating it without compromising the optimality of the solution is highly non-trivial, and (2) RL solutions cannot be easily transferred between tasks, and generally require complete retraining to guarantee that a solution will work in a new task.
Humans, on the other hand, are very flexible learners. We easily transfer knowledge from one task to another, and can build on knowledge that we learned in other tasks or that other people share with us. Humans are exceptionally good at abstraction, that is, developing conceptual understandings that allow us to extend knowledge to never-before-seen experiences. No artificial agent or neural network has displayed the abstraction and generalization capabilities of humans across such varied tasks and environments. Despite this, utilizing the human as a tool for abstraction is commonly done only at the stage of defining the model. In general, this means making choices about what to include in the state space so that the problem becomes solvable without adding unnecessary complexity. While necessary, this step is not explicitly referred to as abstraction, and it is generally not considered relevant to how RL is applied. Much of the research in RL is less focused on how the problem is modelled, and instead centers on the development and application of computational advances that allow for solving bigger and bigger problems.
Applying abstraction explicitly is highly non-trivial: confirming that an abstract problem preserves the necessary information of the true problem can generally only be done once a full solution has been found, which may defeat the purpose of finding an abstraction in the first place. Even when such a confirmation can be made, the abstraction may be the result of a very complex function that would be difficult for a human to define. In this work, human-defined abstractions are used in a way that goes beyond the initial definition of the problem.
The first approach, presented in Chapter 3, breaks a problem into several abstract problems and uses the same experience to solve all of them simultaneously. A meta-agent learns how to compose the learned policies to find the optimal policy. In Chapter 4, a method is introduced that uses supervised learning to train a model on partially observable experience that is labelled in hindsight. The agent then learns a policy on predicted states, trading off information gathering against reward maximization. The last method, presented in Chapter 5, is a modular approach to offline RL, which can become ineffective, even with expert data, if the given data does not cover the entire problem space. This method introduces a second problem of recovering the agent to a state where it can safely follow the expert’s action. The method applies abstraction to multiply the given data and to safely plan recovery policies. Combining the recovery policies with the imitation policy maintains high performance even when the expert data provided is limited.
In the methods developed in this research, a learning-to-learn component enables the agent to relax the usually strict requirements of abstraction, the parallel processing allows the agent to learn more from fewer samples, and the modularity means that the agent can transfer its knowledge to other related tasks.
This thesis presents a modelling tool to optimize the energy yield and the environmental impact of installing turbines in flood defences by altering the turbine placement. Mapping out the effects of turbines on the flow is the central question. To answer this question, this research consists of three parts: (1) measuring the field situation, (2) testing a turbine in the laboratory and (3) setting up an analytical model that is coupled to a regional flow model.
In the first part of this study (1), unique, high-resolution data of the flow through the Eastern Scheldt storm surge barrier and around the turbines were investigated. In particular, for the first time in the literature, commercial-scale turbines were used to determine the effect of tidal turbines on the water flow. The power output of the turbines was also quantified. The data were used to derive an analytical model of the flow around a turbine in a barrier. This model can calculate the power of tidal turbines and the resistance of the barrier and turbine for different installation configurations and variable strengths of the external flow.
In the second part of this study (2), these insights were refined in laboratory tests, in which the configuration of the turbine and barrier was varied. This method is more representative of real turbines because it has a larger scale factor (1:9) than is usual in the literature. The tests show that the generated power strongly depends on the position of the turbine relative to the barrier. The data also show that the combined resistance of a barrier and turbine is lower than the sum of the individual resistances. These outcomes are used to successfully validate the previously developed analytical model.
In the last part of this study (3), the developed analytical model was implemented in a larger-scale numerical flow model. In this larger-scale model, the small-scale flow around a barrier with turbines is linked in an efficient way to the large-scale water movement in a tidal basin. This makes it possible to optimize existing or new tidal power stations, both at the level of the entire barrier and at that of a single flow opening. The impact on the environment can therefore be determined with the model, even more accurately than was previously possible.
The research in this thesis shows that the effect of the turbines on the flow at a larger distance is smaller than previously thought. This offers the possibility, for example, to install more turbines and harvest more energy without exceeding the acceptable environmental impact (e.g. ecological effects). This study has contributed to confidence in the technical and economic feasibility of turbine installations that can be built in hydraulic engineering works in the Dutch Delta. The developed calculation tool is freely available to investigate energy yield and environmental effects of tidal energy projects worldwide.","tidal energy; hydrodynamics; modelling","en","doctoral thesis","","978-94-6483-182-5","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:7b9395d3-a654-4a6e-8c98-35eb5901194e","http://resolver.tudelft.nl/uuid:7b9395d3-a654-4a6e-8c98-35eb5901194e","Towards a convergent approach to the use of data in digital health design","Pannunzio, V. (TU Delft Methodologie en Organisatie van Design)","Kleinsmann, M.S. (promotor); Snelders, H.M.J.J. (promotor); Delft University of Technology (degree granting institution)","2023","Digital health is a vibrant and dynamic field, encompassing subsets such as mobile health, health information technology, wearable devices, telehealth and telemedicine, and personalised medicine. While digital health adoption has been markedly accelerated by the covid-19 pandemic (Inkster et al., 2020), an evolving body of research has focused on describing and addressing specific challenges related to the design and evaluation of digital health technologies (Pagliari, 2007; Murray et al., 2016; Blandford et al., 2018; Marvel et al., 2018). This research articulates a need for novel, interdisciplinary design approaches to digital health innovation, integrating disparate sets of requirements such as clinical soundness, user-centeredness, technical interoperability, and cost-effectiveness (Cornet et al., 2019). In this complex domain, design and health disciplines are called not only to collaborate with each other, but also to learn to work with digital data as the raw material fueling digital technologies. This dissertation explores such challenges through a series of exploratory research efforts at the intersection of design, healthcare and digital data. These explorations are conducted within the context of the Cardiolab, a Delft Design Lab born out of a partnership between Philips Experience Design and Delft University of Technology. Throughout the dissertation, knowledge in this domain is gained through a mix of literature reviews and project-based action research (Somekh, 2005). 
In this way, the relevant scientific literature is connected and put in dialogue with real-life digital health design practice.","Design for health; Design for Healthcare; e-Health; Design Approaches; Design Methodologies; Data-enabled design; Convergence","en","doctoral thesis","","978-94-6384-454-3","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:2297978b-30e2-48e4-9e6a-e2fb61dcab94","http://resolver.tudelft.nl/uuid:2297978b-30e2-48e4-9e6a-e2fb61dcab94","From loose grains to resilient dunes","van IJzendoorn, Christa (TU Delft Coastal Engineering)","de Vries, S. (promotor); Reniers, A.J.H.M. (promotor); Hallin, E.C. (copromotor); Delft University of Technology (degree granting institution)","2023","Coastal dune systems provide valuable functions that are threatened by human activity and climate change. Preserving and strengthening coastal dunes through coastal management and the implementation of interventions require accurate predictions of coastal dune development. The development of coastal dunes is driven by complex interactions between aeolian and marine processes. The aim of this thesis is to determine how marine and aeolian processes influence coastal dune development on yearly to decadal scale. Specifically, the effect of sea level rise and aeolian processes related to grain size were investigated.","coastal dunes; sea level rise; aeolian sediment transport; grain size","en","doctoral thesis","","978-94-6366-695-4","","","","","","2023-05-30","","","Coastal Engineering","","",""
"uuid:98afe3ba-fa0d-4834-b802-60c29196ac35","http://resolver.tudelft.nl/uuid:98afe3ba-fa0d-4834-b802-60c29196ac35","Big slopes, little data: data-driven nowcasting of deep-seated landslide deformation","van Natijne, A.L. (TU Delft Optical and Laser Remote Sensing)","Lindenbergh, R.C. (promotor); Bogaard, T.A. (promotor); Delft University of Technology (degree granting institution)","2023","Landslides are a major geohazard in hilly and mountainous environments. We focus on slow-moving, deep-seated landslides that are characterized by gradual, non-catastrophic deformations of millimeters to decimeters per year and cause extensive economic damage. To assess their potential impact and for the design of mitigation solutions, a detailed understanding of the slope processes is desired. Moreover, where landslide hazard mitigation is impossible, early warning systems are a valuable alternative to reduce landslide risk.
Recent studies have demonstrated the effective application of machine learning for deformation forecasting in specific cases of slow-moving, non-catastrophic, deep-seated landslides. Machine learning, combined with satellite remote sensing products, offers new opportunities for both local and regional monitoring of areas with unstable slopes and associated processes, without costly and logistically challenging inspection of the landslide. To test to what extent data-driven machine learning techniques and remote sensing observations can be used for landslide deformation forecasting, we developed a machine-learning-based nowcasting model for the multi-sensor-monitored, deep-seated Vögelsberg landslide, near Innsbruck, Tyrol, Austria. Our goal was to link the landslide deformation pattern to the conditions on the slope, and to produce a four-day, short-term forecast, a nowcast, of deformation accelerations.
Changes in hillslope hydrology shift the balance between the shear strength of the soil and the shear (sliding) force exerted by gravity on the landmass. Therefore, precipitation, snowmelt, soil moisture, evaporation, and air temperature were identified as hydro-meteorological variables with high potential for forecasting deformation dynamics. Time series of those variables were obtained from remote sensing sources where possible, and otherwise from reanalysis sources as a surrogate for data that is likely to become available in the near future. Deformation, the result of slope instability, was monitored daily by a local, automated total station.
Interferometric Synthetic Aperture Radar (InSAR) has been shown to be a valuable source of deformation information from space. However, due to the complex interaction with topography in mountainous environments, its potential is often questioned. We showed that 91% of the world’s slopes are observable by InSAR, given the presence of a coherent scatterer, i.e. a natural or man-made object that exhibits consistent radar reflection over time. A global map is provided to indicate the sensitivity of InSAR for assessing downslope deformation on any particular slope. To quickly assess the presence of coherent scatterers before further investigation, we developed an application in Google Earth Engine to estimate the presence and location of coherent scatterers on a slope. However, the current accuracy and temporal resolution of Sentinel-1 SAR acquisitions proved insufficient to identify the acceleration phases at Vögelsberg.
The five years of daily deformation and hydro-meteorological observations at the Vögelsberg landslide are quite limited for a machine learning model. Therefore, a nowcasting model of low complexity was required. To limit the number of parameters to be optimized, the model was designed to mimic a bucket model, a simple hydrological model. A shallow neural network based on long short-term memory was implemented in TensorFlow as a custom sequence of existing building blocks. Furthermore, a traditional neural network and a recurrent neural network were tested for comparison. Thanks to the limited complexity of the model, the major contributors could be determined by trial and error across nearly 150 000 model variations.
Models including soil moisture information are more likely to generate high-quality nowcasts, followed by models based solely on precipitation or snowmelt. Although none of the shallow neural network configurations produced a convincing deformation nowcast, they provide important context for future attempts. The machine learning model was poorly constrained, as only five years of observations were available, containing just four acceleration events. Furthermore, standard error metrics, like mean squared error, are unsuitable for model optimization in landslide nowcasting.
We showed that landslide deformation nowcasting is not a straightforward application of machine learning. The complexity of the machine learning model formulation at the Vögelsberg illustrates the necessity of expert judgement in the design and evaluation of a data-driven nowcast of slowly deforming slopes. Furthermore, to prepare for unexpected modelling developments, a high level of project-level data organisation is recommended. There is a long road ahead for the large-scale implementation of machine learning in landslide nowcasting and Early Warning Systems. However, a future, successful nowcasting system will require a simple, robust model and frequent, high-quality and event-rich data to train on.","Deep-seated landslide; Machine learning; Remote sensing; Early warning systems; InSAR","en","doctoral thesis","","978-94-6384-442-0","","","","","","","","","Optical and Laser Remote Sensing","","",""
"uuid:315bdb50-a76b-4ce6-aa2b-fab05aa679b3","http://resolver.tudelft.nl/uuid:315bdb50-a76b-4ce6-aa2b-fab05aa679b3","The Bogeyman Unveiled: Safety and effectiveness within the Royal Netherlands Air Force","Boskeljon-Horst, L. (TU Delft Safety and Security Science)","van Gelder, P.H.A.J.M. (promotor); Dekker, S.W.A. (promotor); Delft University of Technology (degree granting institution)","2023","In my nearly 24 years as an aviation psychologist in the Royal Netherlands Air Force (RNLAF), I have seen first-hand the dynamics and complexity of the daily work situations of pilots and crew. I have also seen an organisation trying to enhance safety, prevent negative incidents and advocate the importance of learning. And I have seen this same organisation failing at all three. I became convinced that giving operators more discretionary space to use their expertise, developing a better understanding of successes and focussing on restorative instead of retributive justice are the keys to enhancing the safety and the effectiveness of the RNLAF.
The RNLAF is currently transitioning to a Fifth Generation Air Force (5GAF). In order to stay relevant and gain the competitive advantage it needs, the RNLAF not only wants new weaponry but also a different management style that will foster different behaviour in employees. Specifically, the 5GAF focusses on trust, accountability, more freedom and space for employees, and more room for self-organisation rather than top-down control. In my opinion, more discretionary space, a better understanding of successes and a focus on restorative justice would provide more competitive advantage than any weapons system we could acquire.
Within the RNLAF context, I studied the (in)ability to make sense of retrieved safety information and observations, how safety and effectiveness are achieved, and how this is hampered by safety beliefs and retributive responses to undesired outcomes. My central research question is: How can we describe and enhance the safety and effectiveness of the RNLAF? The sub-questions all focus on understanding aspects of safety with the intention of enhancing it:
Three key concepts are relevant to my research: safety culture, just culture and compliance vs. adaptation. These three concepts take up a significant part of the literature focused on enhancing safety; put together, they might therefore provide a solid explanation not only for the safety an organisation achieves but also for the stalemate and plateauing results an organisation meets when trying to further enhance safety. Safety culture, just culture and compliance are interrelated. Rules and procedures are regarded as essential elements of a safety culture. The response to violations of rules and procedures reveals the just culture of an organisation.
Safety culture, just culture and compliance provide a common thread in the safety documents of the Defence organisation appearing in the past six years. Documents show there is outside pressure to enhance the safety culture in the Defence organisation, an underdeveloped restorative just culture and a recognition that both compliance and proactive intervention (adaptation) are needed…
One of the most used and advanced qubits is the transmon, an LC oscillator with a capacitor in parallel with a non-linear inductive element called a Josephson junction. Conventionally, the Josephson junction is formed with an Al-AlOx-Al tunnel barrier. In contrast, here we use an InAs nanowire covered with a thin layer of Al, forming an S-N-S Josephson junction. Crucially, this junction is magnetic-field compatible, allowing us to perform cQED experiments in a magnetic field. Additionally, this junction is voltage-tunable, opening the path towards lower-distortion voltage gates. This thesis focusses on measuring the flux noise in a magnetic field using the nanowire Josephson junction. To that end, the chapters address the necessary conditions to achieve this goal....","","en","doctoral thesis","","978-94-6419-836-2","","","","","","","","","QCD/DiCarlo Lab","","",""
"uuid:a5c27498-55c7-4edb-b7b8-3f3ccf7b77c7","http://resolver.tudelft.nl/uuid:a5c27498-55c7-4edb-b7b8-3f3ccf7b77c7","Mass transfer and flooding phenomena in carbon dioxide electrolyzers","Baumgartner, L.M. (TU Delft ChemE/Transport Phenomena)","Vermaas, D.A. (promotor); Kleijn, C.R. (promotor); Delft University of Technology (degree granting institution)","2023","Electrochemical carbon dioxide reduction is a potential pathway to the sustainable production of hydrocarbon fuels and chemicals. This thesis explores the material science and reactor engineering of carbon dioxide electrolyzers.","CO2 Reduction; Electrochemical Engineering; Electrochemistry; Gas diffusion electrode; Bipolar membrane; pH imaging","en","doctoral thesis","","978-94-93330-16-0","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:06ebcb2d-864e-4e5f-848f-fcb2a062bd16","http://resolver.tudelft.nl/uuid:06ebcb2d-864e-4e5f-848f-fcb2a062bd16","Facades-as-a-Service: A cross-disciplinary model for the (re)development of circular building envelopes","Azcarate Aguerre, J.F. (TU Delft Architectural Technology)","Klein, T. (promotor); den Heijer, A.C. (promotor); Konstantinou, T. (copromotor); Delft University of Technology (degree granting institution)","2023","Accelerating strategic investment in an energy- and material resource-efficient built environment
The de-carbonisation of the built environment hinges on the use of clean, renewable energy and the conservation of materials and components within circular reprocessing loops. The Façades-as-a-Service research concept aims to accelerate the rate and depth of building energy renovations – while safeguarding long-term responsibility over material resources – by creating a new value-chain based on the provision of integrated building envelopes under a performance contract.
The built environment is a major contributor to the resource management and sustainable development challenges we currently face on a global scale. The rate at which the building stock is improving, in terms of resource efficiency and greenhouse gas (GHG) emissions, is far below what is needed to meet even the most conservative climate change and environmental impact mitigation goals (European Commission, Directorate-General for Energy 2020). The strategic investment of limited resources – energetic, material, and financial – which dictates the development of the built environment, is largely driven by individual decision-makers with particular fields of knowledge and specific interests, acting within diverse time-scales.
Improving the resource-efficiency of the built environment, in terms of the quality of new constructions and the rate and depth of technical building retrofits, is not only a question of technological readiness, but rather of business and economic incentives. Emerging theoretical frameworks, such as the Circular Economy (CE) and Product-Service Systems (PSS), aim to realign or create these incentives by operationalising the value of better individual decision-making processes, internalising soft values and costs, and developing long-term collaborative project execution mechanisms.
In line with these frameworks, the research elaborates a multi-perspective analysis for a new performance-based investment model to promote the energy transition through the accelerated implementation of high-performance building envelope technologies. Boundaries for the research scope are established, in both technological and managerial ranges, to enhance the applicability of the model and the scientific relevance of the results. Reference is made to specific case-studies, organisations, and regional characteristics, followed by discussions on the implications of such focus groups for the extrapolation of universally applicable conclusions. Finally, the model is evaluated to determine its rate of success at addressing the resource management and environmental impact challenges previously identified.
Results show that, while the implementation of potentially Circular Business Models such as Product-Service Systems is technically possible within the current economic, legal, and managerial landscape, it is by no means a simple or standardised process. Significant systemic changes must take place in order to enable and incentivise the mainstream implementation of performance-based models capable of aligning stakeholder incentives towards more energy-efficient and resource-regenerative building procurement practices. The main bottlenecks towards such innovation are highlighted, and cross-disciplinary recommendations are made regarding the validity, up-scalability, and future development of the proposed methodology.","Circular Economy (CE); Product-Service Systems (PSS); building economics; real estate management; Life-Cycle Cost Analysis (LCCA); Total Value of Ownership (TVO)","en","doctoral thesis","","978-94-6366-708-1","","","","","","","","","Architectural Technology","","",""
"uuid:3ec90e7c-c2e1-40f3-84b8-7b4a423a43b0","http://resolver.tudelft.nl/uuid:3ec90e7c-c2e1-40f3-84b8-7b4a423a43b0","Light and Spectra in the Wild - Spectral Structures of Light Fields: Measurement, Simulation and Visualisation","Yu, C. (TU Delft Human Information Communication Design)","Pont, S.C. (promotor); Eisemann, E. (promotor); Wijntjes, M.W.A. (copromotor); Delft University of Technology (degree granting institution)","2023","The study of the light field has become a valuable framework for capturing and analysing the complex distribution of light in natural environments. The directional, spatial, temporal and spectral structures of light collectively influence the optical information available to an observer and thus impact our perception of the surrounding world. The extended definition of the light field, which is equivalent to the plenoptic function in perceptual studies, incorporates radiance as a function of spectral energy, position, direction, and time in space, quantifying all the optical information available to an observer. However, there is a considerable gap in measuring, describing, and visualizing the properties of the light field in the chromatic domain, which this thesis aimed to address. The thesis focuses on the research question of how to effectively describe, measure, simulate, and visualize the spatiotemporal dynamics of the spectral structure of light fields. To address this research question, we outlined four main objectives in the thesis, which are addressed in separate chapters. The first objective was to investigate the interplay between the colours of surfaces and light sources in 3D indoor scenes, and its effects on the spatial and angular distribution of light. The second objective was to quantify the directional and spatial variations of chromatic light field effects on correlated colour temperature and colour rendering. 
The third objective was to explore the objective measurement, description, and visualization of the 7D light-field properties of outdoor illumination. Finally, the fourth objective was to examine the relationship between image statistics and perceived time of day in Western European paintings from the 17th to 20th centuries to determine if the representation of lighting in paintings serves as a contextual cue for the time of day.","light field; art perception; colour science; lighting design; photometry","en","doctoral thesis","","9789493315754","","","","","","","","","Human Information Communication Design","","",""
"uuid:662b7fa8-5154-464f-bf7e-6ae5bf15c828","http://resolver.tudelft.nl/uuid:662b7fa8-5154-464f-bf7e-6ae5bf15c828","Machine learning and randomness in mechanical metamaterials","Pahlavani, H. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Zhou, J. (promotor); Mirzaali, Mohammad J. (copromotor); Delft University of Technology (degree granting institution)","2023","","Randomness; Mechanical metamaterials; Machine Learning; Additive Manufacturing","en","doctoral thesis","","978-94-6419-838-6","","","","","","2023-06-20","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:c565e266-94b9-4a80-afc4-4aa4fb5ce471","http://resolver.tudelft.nl/uuid:c565e266-94b9-4a80-afc4-4aa4fb5ce471","The Bits of Nature: Bioinspired bitmap composites","Cruz Saldivar, M. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Mirzaali, Mohammad J. (copromotor); Delft University of Technology (degree granting institution)","2023","In the vast domain of biomedical engineering, the challenge of developing synthetic materials that can replace damaged tissues has proved a daunting task. Millions of years of adaptation have provided natural tissues with multiple strategies that yield highly efficient mechanical properties that are not found in human-made materials. The first of these strategies relates to the material composition of living tissues. Natural materials tune their functionality thanks to the presence of multiple constituting phases with highly different properties (e.g., soft collagen and the hard mineral phase in the bone). The second strategy is manifested in the arrangement of these phases, where these constituents take a wealth of intricate geometries to strengthen and toughen their structures. For example, the hierarchical arrangement of different constituents at multiple length scales enables them to work in synergy to distribute the deformation energy within tissues, thereby delaying their critical failure. Yet another strategy is the use of functional gradients, where the volume fraction of one material changes across a relatively short interface, thereby attenuating the stress concentrations caused by the mismatch between the mechanical properties of the different constituents.
In recent years, replicating these exceedingly complex yet harmonious natural design paradigms has been a significant drive in the scientific community. Mainly achieved using multi-material additive manufacturing techniques, architected materials have been developed that implement some of the design strategies found in natural materials to achieve seemingly contradictory design objectives, such as simultaneously high strength and toughness. However, limitations in computational resources and standard processing methods hinder the complexity and multi-scale rationality of the design features that one can introduce within a construct. Bitmap multi-material 3D-printing techniques, however, offer the possibility to experiment with different design strategies at the level of individual microscale voxels, leading to the emergence of voxel-by-voxel design approaches. In such approaches, the constituting material of each voxel can be individually selected, yielding unprecedented freedom to generate microarchitectures that seamlessly mimic the morphologies observed in natural tissues.
poverty. Chapter 5 addresses the discrepancy between the registered data-based measurements of neighbourhood characteristics, specifically the share of neighbours with foreign background and low income, and the individual perceptions of those characteristics by the inhabitants of the neighbourhood. The findings of the thesis confirm the validity of treating the neighbourhood as a social setting that interacts with the micro and macro contexts, rather than simply as an aggregated characteristic which can be controlled for.","","en","doctoral thesis","","978-94-6366-699-2","","","","","","","","","Urban Studies","","",""
"uuid:bc7256fb-9f21-4241-baf3-1f7aeaea5ea4","http://resolver.tudelft.nl/uuid:bc7256fb-9f21-4241-baf3-1f7aeaea5ea4","Advanced Bits-In RF-Out Transmitters","Beikmirza, M.R. (TU Delft Electronics)","de Vreede, L.C.N. (promotor); Alavi, S.M. (copromotor); Delft University of Technology (degree granting institution)","2023","The demand for faster mobile access and higher data throughput drives the evolution of wireless cellular communication, requiring larger modulation bandwidths and higher-order modulations and necessitating more efficient and flexible transmitter systems.
Simultaneously, the advancements in nano-scale CMOS technologies have made transistors smaller and better suited for digital signal processing, with improved high-frequency performance for RF mixed-signal circuits.
These advancements impact wireless RF transceivers, creating the need to explore transmitter architectures beyond the most established ones, which to date are exclusively analog, by pushing them towards incorporating more digital circuitry. Consequently, the primary research question addressed in this thesis is: “What are the potential performance advantages when the strength of (high-speed) digital CMOS is utilized within an RF front-end?”
To answer this research question, this thesis proposes new architectures for digital-intensive transmitter line-ups. These architectures aim to enhance linearity, bandwidth, and power efficiency, and enable the full utilization of CMOS technology in digital operations within the RF front-end....","digital-intensive transmitters (DTXs); digital power amplifier (DPA); RF digital-to-analog converter (RF-DAC); in-phase/quadrature (I/Q); multi-phase; balun; wideband; efficiency enhancement; digital predistortion (DPD)","en","doctoral thesis","","978-94-6384-459-8","","","","","","2024-06-19","","","Electronics","","",""
"uuid:f1e3f0a8-9270-4cf6-82af-592e13be7144","http://resolver.tudelft.nl/uuid:f1e3f0a8-9270-4cf6-82af-592e13be7144","Mitigating the autogenous shrinkage of ultra-high performance concrete by using rice husk ash","Huang, H. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Ye, G. (promotor); Delft University of Technology (degree granting institution)","2023","Concrete made with Portland cement has good behaviour under compressive stress, but it is weak in tension. Cracking caused by tension is a typical problem in concrete practice. High autogenous shrinkage at early ages is one of the causes of cracking, especially in high/ultra-high performance concrete. In that stage shrinkage is strongly related to the decrease of relative humidity (RH) inside the concrete. To minimize the probability of cracking induced by autogenous shrinkage at early age, internal curing, which can provide moisture inside the concrete, has been proposed. Several internal curing agents, such as Super Absorbing Polymers (SAP) and saturated lightweight aggregate (LWA), are utilized in high performance concrete and have shown good results for mitigating autogenous shrinkage. The use of these agents, however, also has some drawbacks. Searching for an alternative internal curing method for ultra-high performance concrete remains a challenge....","autogenous shrinkage; self-desiccation; pozzolanic reaction; rice husk ash; ultra-high performance concrete; internal curing","en","doctoral thesis","","978-94-6473-143-9","","","","","","","","","Materials and Environment","","",""
"uuid:e23dfeda-dcb1-4b4a-9cbe-45923435f0b4","http://resolver.tudelft.nl/uuid:e23dfeda-dcb1-4b4a-9cbe-45923435f0b4","ShoreScape: A landscape approach to the natural adaptation of urbanized sandy shores","van Bergen, J. (TU Delft Landscape Architecture)","Nijhuis, S. (promotor); Luiten, E.A.J. (promotor); Delft University of Technology (degree granting institution)","2023","Sandy shores around the world suffer from coastal erosion due to land subsidence, a lack of sediment input and sea level rise. This often leads to the construction of hard structures, such as sea walls and breakwaters, that consolidate the coastal zone but disrupt the dynamic system of coastal deltas. To compensate for coastal erosion in a more natural and systemic way, sand nourishments are now increasingly executed. This so-called ‘Building with Nature’ (BwN) technique uses natural resources and dynamics to restore sediment balance within coastal zones and promote coastal regeneration and dune formation. These dynamic nourishment techniques are still in development, placing new demands on coastal spatial planning. How can we position and tune these nourishment dynamics for land formation; not only to optimize coastal safety but also to integrate these dynamics with the ecological and urban functions of the coastal landscape? An integrated design approach is necessary to guide both land-shaping processes and adaptive urban and ecological configurations to support BwN-based dune-formation following nourishment and boost the buffer capacity of coastal zones.
This research aims to develop design principles for integral coastal landscapes that connect geomorphological processes, ecology and adaptive urban design to exploit their potential for the spatial development of multi-functional coastal landscapes: shore-scapes. It focuses on coastal configurations featuring pro-active sediment management through aeolian BwN techniques to build up the coastal buffer in a natural and multifunctional way.
The first step was to reframe BwN nourishment design as a landscape approach, employing natural onshore dynamics to sustain the coastal buffer and increase the multiplicity of the coastal landscape. The coastal landscape can be regarded as the result of the interaction between the geomorphological, ecological and urban systems, in response to sea level rise. The mapping of their interactions (via literature review, fieldwork, GIS and CFD-modelling) identified three potential spatial mechanisms to support nature-based dune formation following nourishment: natural succession, dune farming and urban harvesting. To activate these processes for coastal reinforcement and landscaping, and bridge the spatial and time scales involved, three subsequent tools for dynamic design were defined: morphogenesis, dynamic profiling and aeolian design principles.
In the second half of the research, the BwN landscape approach and principles were contextualized and tested across four case studies, which revealed how the coastal system’s characteristics and nourishment strategy affect dune formation. Responding to various nourishment and urban conditions, spatial arrangements were composed that enhance the aeolian build-up of coastal profiles and landscapes over time, supporting dune reinforcement, multifunctionality and landscape differentiation.
The outcome of this research is threefold. First, BwN was redefined as a landscape approach that employs intersystemic land-shaping processes to support coastal safety, multifunctionality and spatial quality. Second, a set of validated design principles was developed for natural aeolian coastal adaptation following nourishment. Third, spatial arrangements were composed to illustrate how BwN processes ashore can be guided in space and time across various nourishment and urban contexts.
Global mean sea level has been rising at a rate of about 3.4 millimetres per year over the last 30 years. Regionally, however, sea level can be changing at a much higher or lower rate. That is because local processes, such as ocean dynamics and gravitational effects associated with continental ice mass changes, cause regional deviations from the global average. But what is causing sea level to change at a specific location? Is sea level changing because the oceans are warming, and thus expanding? Or because ice from glaciers and ice sheets is melting? The attribution of sea-level change to these and other drivers can be done using a sea-level budget approach. Sea-level budget studies can be used to constrain missing or poorly known contributions and to validate climate models. While the global mean sea-level budget is considered closed within uncertainties, closing the budget on a regional to local scale is still challenging.
In this thesis, I focused on the question: Can we close the regional sea-level budget in the satellite altimetry era on a sub-basin scale consistently for the entire world? For this, we need high-quality observations not only of sea-level change and each of its components, but also of the uncertainties within each process. Therefore, in Chapters 2 and 3, I explored the main drivers of regional sea-level change, focusing on the uncertainty characterization of each component. I then looked at which spatial scale is optimal for analysing the regional sea-level budget, and compared the sum of the drivers with the total observed change in these regions in Chapter 4.","sea-level change; sea-level variability; sea-level budget; observations","en","doctoral thesis","","978-94-6419-821-8","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:d5548689-2e68-4960-ae7c-b475d82c7cd7","http://resolver.tudelft.nl/uuid:d5548689-2e68-4960-ae7c-b475d82c7cd7","Shortcuts towards fiber-based quantum networks","Avis, G. (TU Delft QID/Wehner Group)","Wehner, S.D.C. (promotor); Hanson, R. (promotor); Delft University of Technology (degree granting institution)","2023","The future quantum internet promises to create shared quantum entanglement between any two points on Earth, enabling applications such as provably-secure communication and connecting quantum computers. A popular method for distributing entanglement is by sending entangled photons through optical fiber. However, the probability of successful transmission decreases exponentially with the fiber length. This makes it challenging to realize large fiber-based quantum networks that create shared entanglement, let alone the construction of a quantum internet. Quantum repeaters have been proposed as a solution to mitigate losses by acting as intermediary nodes that divide long optical fibers into smaller segments. The required technology, however, is still under development. In this thesis we aim to expedite the realization of fiber-based quantum networks by identifying shortcuts towards that end.
One way in which we look for shortcuts is by identifying the technological advances that are required to build such networks. To achieve this we translate performance demands on the network to requirements on individual components, such as quantum repeaters. This way we are not only able to indicate how much development current-day technology still requires before functional quantum networks can be built, but also what specific set of improvements could be applied to state-of-the-art hardware to get there as soon as possible.
A specific promising shortcut that we investigate in this thesis is the construction of quantum networks using existing fiber infrastructure. As deploying optical fiber is costly, an economical method for building quantum networks would be to incorporate fiber that has already been placed in the field. Existing infrastructure, however, imposes restrictions on quantum networks, in particular on the possible locations where quantum hardware could be installed. An important question to answer is then how severe the effects of these restrictions are. We address this question by investigating the performance degradation caused by displacing nodes from their optimal location, and the increase in required technological advances when restrictions are taken into account. Additionally, we provide tools for choosing where to deploy quantum repeaters when subject to placement restrictions.
Finally, we also address the fact that quantum networks may need to provide entanglement to more than just two parties. When a network has many end nodes that require bipartite entanglement between different pairs of them, it is important that the network is designed such that every end node is sufficiently connected to every other end node. We provide conditions to judge whether this is the case and a method to ensure the conditions are met. Alternatively, end nodes could require multipartite entangled states shared by more than two of them, in which case specialized nodes may need to be included in the network. We investigate what such a node could look like and perform a thorough performance analysis.","quantum network; quantum repeaters; quantum repeater chains; entanglement; entanglement distribution; quantum information","en","doctoral thesis","","978-94-6483-200-6","","","","","","","","","QID/Wehner Group","","",""
"uuid:e3c4bcaa-e7fa-499c-85b0-8cb5d4f473d8","http://resolver.tudelft.nl/uuid:e3c4bcaa-e7fa-499c-85b0-8cb5d4f473d8","On Acoustic Emission Condition Monitoring of Highly-loaded Low-speed Roller Bearings","Scheeren, B. (TU Delft Ship and Offshore Structures)","Kaminski, M.L. (promotor); Pahlavan, Lotfollah (copromotor); Delft University of Technology (degree granting institution)","2023","Highly-loaded low-speed roller bearings form crucial connections in offshore structures, such as heavy-lifting vessels, single-point mooring systems, and wind turbines. In order to safeguard the integrity and reliability of these assets and their operations, a quantitative methodology for condition monitoring of the bearings can be of substantial value. To date, a number of assessment methods have been proposed for this purpose, e.g. based on strain, vibration, lubrication, and acoustic emission (AE) monitoring. Despite their demonstrated potential for medium- and high-speed bearings (>600 rpm), no notable success has yet been reported in the assessment of low-speed bearings subjected to naturally-developing degradation. In this dissertation, a novel methodology for the analysis of damage-induced AE and the inference of bearing condition has been proposed. Acoustic emissions in this context are ultrasonic signals generated by the release of elastic energy in a material. In solid media, these signals propagate as stress waves and can be recorded by dedicated transducers.
A mathematical framework to describe the generation, propagation, transmission, and detection of transient ultrasonic waves in complex geometries has been presented. An assessment of inter-component stress-wave transmission has been performed utilising this framework. For a representative sheave bearing, results indicate that a transmission loss in the order of 15 dB is to be expected in the amplitude of the AE waves for a single rolling contact arrangement. In conjunction with a preliminary field trial regarding the ultrasonic background noise in representative operational conditions, this evaluation has shown that it is feasible to detect damage-initiated AE signals from each of the rolling elements upon field implementation.
A waveform-similarity-based clustering algorithm has been proposed for the identification of damage-induced AE source mechanisms. Consistency in the source mechanism is theorised to indicate gradual progressive failure, such as crack growth. Through the descriptive framework, it has been shown that high similarity of the recorded signals must be the result of high similarity in the emitted source. Additional numerical verification of this assumption on transfer-path similarity has been performed, confirming the equivalence derived from the descriptive framework.
A low-speed run-to-failure test was performed with a purpose-built linear bearing segment, representative of the main bearing of a mooring turret, to assess the performance of the clustering algorithm. Intermediate and final visual inspections reported the development of wear comprising erosion, surface roughening, pitting and surface-initiated fatigue. In an independent analysis of the recorded AE signals, several highly consistent structures of clusters were identified over multiple measurement channels. The nose raceway could be identified as the source of these structures of clusters, which matched the observed evolution of localised damage during the inspections.
Based on the source-identified AE activity, a novel quantitative indicator has been proposed to infer bearing condition. The bearing condition index (BCI) adopts a value of 1 when the bearing is in good condition. The BCI drops in value as the bearing degrades, as represented by a more significant detection of clusters of similar AE signals within the normalised period of a load cycle over a multitude of measurement frequencies.
Run-to-failure experiments have been conducted to assess the proposed BCI. Intermediate and final inspections reported progressive erosion and surface roughening. Additional lubrication samples collected during these inspections contained high levels of particle contamination. A direct correlation between the AE hit-rate and the particle contamination of the lubricant was observed. Utilising progressive scaling based on cluster size, the excessive influence of lubrication contamination-induced AE signals on the BCI could be reduced, while still providing a timely warning.
In review, it is concluded that the proposed methodology can effectively describe the complex generation and propagation of AE due to damage evolution in highly-loaded low-speed roller bearings. The developed clustering method has been shown to effectively identify patterns and trends in the AE signals at different stages of degradation, and to provide the basis for filtering out noise-related signals. The formulated BCI can subsequently provide an intuitive indication of the condition of a low-speed roller bearing in an in-situ, non-intrusive manner. As such, the methodology is believed to offer promising potential to contribute to the safe and continued operation of the offshore energy infrastructure.","Structural Health Monitoring; Acoustic Emission; Roller Bearing; Offshore","en","doctoral thesis","","978-94-6384-449-9","","","application","","","","","","Ship and Offshore Structures","","",""
"uuid:54431c82-65e6-4f82-b598-c5b7a27c1f93","http://resolver.tudelft.nl/uuid:54431c82-65e6-4f82-b598-c5b7a27c1f93","Utopia as Critical Method: A Comparative Analysis of Six Architectural and Literary Utopias","Čulek, J. (TU Delft Situated Architecture)","Kaan, C.H.C.F. (promotor); Havik, K.M. (promotor); Sioli, A. (copromotor); Delft University of Technology (degree granting institution)","2023","Utopia as a Critical Method is a comparative analysis performed through drawing and text, in which six architectural and literary utopias were examined together with the three historical contexts in which they were created. Looking at utopian works created roughly within the last century, the research examined the different worlds which the utopian authors imagined as a critical response to the issues and topics arising within their own historical contexts. The study addressed not only the works as a whole, but also focused on their parts – namely the numerous social and spatial forms the authors have imagined and depicted. In this way, the dissertation was able to identify both the common and the discipline-specific forms which the utopian authors used, the various tools and techniques through which the critical aspect of their utopian works was developed (use of dichotomous relationships, mereological approaches, world reduction, and contextual verticals), as well as some of the most common topics which the utopian works addressed or revolved around (such as those of housing, work and production, technology, and governance). By distributing the social and spatial forms identified both within the utopian works as well as their respective contexts into three predominant scales – the small, medium and large – the research was also able to address and identify (in)commensurabilities between the different foci of utopian works across the fields of architecture and literature, as well as in relation to their three different historical periods. 
And while the use of utopia as a critical method through which we can simultaneously reflect on our own present while speculating on potential futures has significantly declined (if not disappeared) in the architectural field, one of the goals of this research was to reignite the creative architectural interest in producing these imaginative and critical projects as a response to the numerous and multifaceted crises of our time.","","en","doctoral thesis","","","","","","","","","","","Situated Architecture","","",""
"uuid:62479ffc-c389-4d3d-8320-f7b3ad8d0829","http://resolver.tudelft.nl/uuid:62479ffc-c389-4d3d-8320-f7b3ad8d0829","Multiphase Flow Modelling of Electrochemical Systems: an analytical approach","Rajora, A. (TU Delft Energy Technology)","Padding, J.T. (promotor); Haverkort, J.W. (copromotor); Delft University of Technology (degree granting institution)","2023","The primary objective of this work is to provide new analytical models to support the theoretical understanding of multiphase flows in various electrochemical systems. Most of the previous works in this field use either experiments or numerical simulations to understand the hydrodynamics of the multiphase flows. In this thesis, various new analytical models are derived for different cell configurations such as PEM diffusion layer, flow-through electrolysers, parallel plate electrolyzers and zero-gap electrolysers. New design equations are provided that can be readily used as a first estimate for a new electrochemical cell. The novelty of this work lies in providing new analytical approaches to develop a theoretical understanding of electrochemical cells.","Multiphase flow; Electrolysis; Analytical modeling; Bubbles; Mathematical modeling; Hydrogen evolution","en","doctoral thesis","","978-94-6469-410-9","","","","","","","","","Energy Technology","","",""
"uuid:5fc6009a-d2dd-47d9-91d9-ca522d69f91d","http://resolver.tudelft.nl/uuid:5fc6009a-d2dd-47d9-91d9-ca522d69f91d","Advancements in Large-Scale Volumetric PIV and PTV","Saredi, E. (TU Delft Aerodynamics)","Sciacchitano, A. (promotor); Sciacchitano, A. (copromotor); Delft University of Technology (degree granting institution)","2023","Particle Image Velocimetry (PIV) is nowadays considered the state of the art for non-intrusive and quantitative 3D velocity measurements. Its ability to measure the velocity field around complex geometries is a valuable tool that engineers can exploit for aerodynamic design optimization in various domains, such as aerospace, wind turbines and automotive, among others. Despite recent advancements, performing a PIV measurement in the industrial environment remains challenging for several reasons: the need for large-scale measurements, complex geometries and high Reynolds numbers. The introduction of helium-filled soap bubbles, new Lagrangian Particle Tracking (LPT) algorithms and Robotic Volumetric PIV has allowed for the measurement of large-scale volumes around complex geometries. However, despite the described advancements, large-scale PIV and LPT measurements for industrial aerodynamics require further development to accelerate their applications. The first bottleneck considered is the maximum measurable velocity. For aerodynamic flows in the transport sector, the velocity is often larger than 50 m/s when considering aircraft and race cars. To apply the mentioned techniques, acquisition frequencies higher than those commonly available are needed. The double-frame timing strategy, characterized by image pairs with a small time separation, is detrimental to the measurement accuracy, especially when low-aperture systems, such as Robotic Volumetric PIV, are considered.
This research has led to the development of novel acquisition strategies (chapters 3 and 4) that improve the accuracy of double-frame velocity measurements suited for high-speed applications (U∞ > 50 m/s). Another current topic of research concerns the detection of data outliers in PIV measurements, which affect their reliability and trustworthiness. In this thesis (chapter 5) a novel approach to outlier detection from time-averaged three-dimensional PIV data is introduced. The principle invokes the physical mechanism of turbulence transport and is based on the agreement of the measured data with the turbulent kinetic energy (TKE) transport equation. The application of this new criterion to several experimental databases shows that spurious data can be detected more easily and unambiguously as outliers, with a low fraction of false positives. This research also attempts to narrow the gap between aerodynamic data from Computational Fluid Dynamics (CFD) and experiments. In chapter 6, the application of PIV data for data assimilation is discussed. Data assimilation is a discipline in which observations and numerical or theoretical models are combined. This can be performed with two possible aims: improving the observations with physics-based models or increasing the capability of the model to represent reality. In this thesis, the latter is considered. A novel state observer technique is investigated for the assimilation of three-dimensional velocity measurements into computational fluid dynamics simulations based on Reynolds-averaged Navier–Stokes (RANS) equations.
The state observer approach locally forces the solution to comply with the reference value, with increasing benefits when the density of forced points, or forcing density, is increased.","Quantitative flow visualization; Particle Image Velocimetry; lowspeed aerodynamics; outlier detection; data assimilation","en","doctoral thesis","","978-94-6384-456-7","","","","","","2024-04-30","","","Aerodynamics","","",""
"uuid:1cc9228c-514c-42ee-9edb-6d7596d66f11","http://resolver.tudelft.nl/uuid:1cc9228c-514c-42ee-9edb-6d7596d66f11","Modeling of blood flow in aorta: MRI-based computational fluid dynamics of aortic hemodynamics","Perinajová, R. (TU Delft ChemE/Transport Phenomena)","Kenjeres, S. (promotor); Lamb, H.J. (promotor); Delft University of Technology (degree granting institution)","2023","Aortic aneurysm, a balloon-like enlargement of a healthy artery, is a cardiovascular disease with one of the highest mortality rates. This high mortality is largely due to the typically late diagnosis of the aneurysm, which is often asymptomatic until a fatal event occurs. The disease can progress to aortic dissection or rupture. Due to the urgent nature of a rupture, we require early detection of asymptomatic aneurysms and proper evaluation of the rupture risk. The clinical guidelines suggest a close follow-up of the luminal size evolution, with advice for surgery based on threshold values. The thresholds are based on the annual maximum arterial diameter or growth rate. However, these guidelines fall short in many cases.","Computational Fluid Dynamics; 4D-flow MRI; Aorta; Simulations","en","doctoral thesis","","978-94-6483-167-2","","","","","","2023-12-13","","","ChemE/Transport Phenomena","","",""
"uuid:c53da6a5-948a-490d-9061-1f650f7a6125","http://resolver.tudelft.nl/uuid:c53da6a5-948a-490d-9061-1f650f7a6125","Approximations and transformations of piecewise deterministic Monte Carlo algorithms","Bertazzi, A. (TU Delft Statistics)","Jongbloed, G. (promotor); Bierkens, G.N.J.C. (copromotor); Delft University of Technology (degree granting institution)","2023","This thesis studies methods to improve the applicability and the performance of Markov Chain Monte Carlo (MCMC) algorithms based on Piecewise Deterministic Markov processes (PDMPs). First, we discuss the key ideas that lay the foundations of the field of MCMC, spanning from the Metropolis-Hastings algorithm to PDMC methods, emphasising a common structure underlying most non-reversible MCMC algorithms studied in the literature. The rest of the thesis is divided into two parts, respectively treating approximations and transformations of PDMC algorithms.
In the first part we introduce several discretisation schemes that approximate a given PDMP and study the properties of the proposed algorithms in detail. This area is of fundamental importance to make PDMPs widely applicable, since the PDMPs considered in the MCMC literature typically cannot be simulated exactly, either because of complicated deterministic dynamics or because the random event times are distributed according to an exponential distribution with non-homogeneous rate. In the latter case, existing approaches to simulate the random event times are applicable exclusively when the rate is of simple form, a requirement that covers only toy models from the MCMC literature. In this thesis we introduce and study a wide variety of time discretisations of PDMPs of any order of accuracy, which can now be used as a basis for MCMC algorithms. We study two types of discretisations: the first kind is obtained by generalising the principle behind classical Euler schemes, while the second is based on splitting schemes.
In both settings, we establish the dependence of the error on the step size of the discretisation. For suitable Euler schemes we prove uniform-in-time estimates on the weak error, a particularly challenging result which shows that the error is fully controlled by the step size and does not depend on the time horizon. Moreover, for approximations of PDMPs obtained with Euler-based schemes we obtain error bounds in Wasserstein and total variation distance using the coupling approach.
For our approximations based on splitting schemes we mainly focus on the Zig-Zag sampler (ZZS) and Bouncy Particle Sampler (BPS) and study the best splitting scheme in terms of bias in the invariant measure. For both samplers we obtain conditions ensuring existence and uniqueness of a stationary distribution for the approximation process, as well as exponential convergence to such a distribution. Importantly, we show that symmetric splitting schemes are of second order, although they only require one computation of the gradient of the negative log-likelihood per iteration. Another important novelty we introduce is the possibility to correct the introduced bias via a skew-reversible Metropolis-Hastings acceptance-rejection step. This allows us to design the first unbiased, PDMP-based MCMC algorithms that can be applied effortlessly to sample from any target probability distribution. Our numerical experiments show that the remarkable properties of PDMPs give their approximations excellent convergence properties improving over benchmark methods such as Hamiltonian Monte Carlo and the unadjusted Langevin algorithm.
The second part of the thesis concerns transformations of PDMPs. First, we discuss space transformations of PDMPs, in which case the main goal is to improve the performance of PDMC algorithms when the target distribution $\pi$ is anisotropic. Our proposal is to design PDMC algorithms that learn adaptively the covariance structure of $\pi$ and use this information to tune the velocity of the underlying PDMP, i.e. the directions that the PDMP is more likely to explore. Finding a good set of directions requires knowledge of the target $\pi$, and hence information on previous positions of the process needs to be used. In a similar fashion, we introduce adaptive PDMC algorithms which automatically tune the refreshment rate of the process, i.e. the frequency at which the current velocity vector is replaced with an independent draw from a suitable distribution. For these algorithms we carefully study the convergence to the target distribution by establishing ergodicity, which is challenging for such non-homogeneous Markov processes. Moreover, we test our algorithms on some benchmark examples, on which we observe relevant improvements over the standard, non-adaptive samplers.
In the last chapter of the thesis we consider time transformations of (piecewise deterministic) Markov processes, with an emphasis on improving the convergence of MCMC algorithms. In particular, we study the effect on the properties of a Markov process of a change of the speed of time, where importantly changes in speed depend on the state of the process. This notion can prove helpful in the context of multimodal target distributions, in which case we argue that communication between different modes can be improved by increasing the speed of time when the process is located in low density regions. We connect various properties of a process to those of a related time-changed process, such as a connection between the stationary distributions, the generators, non-explosivity, ergodicity and rate of convergence to the limiting distribution. For PDMPs we show that suitable time transformations can make a geometrically ergodic Markov process uniformly ergodic, a remarkable property which means that the initialisation of the process does not affect the speed of convergence. We apply our theorem to time transformations of the Zig-Zag process, demonstrating the applicability of our conditions. By applying this framework to PDMPs we define several novel processes which have dynamics depending on a user-chosen, interpretable speed function.","MCMC algorithms; non-reversibility; Piecewise deterministic Markov processes; Bayesian statistics; computational statistics","en","doctoral thesis","","","","","","","","","","","Statistics","","",""
"uuid:46e2e85d-3a66-47cc-acc0-ab541a8fa8a9","http://resolver.tudelft.nl/uuid:46e2e85d-3a66-47cc-acc0-ab541a8fa8a9","Commuting behaviour and subjective wellbeing: A longitudinal perspective","Tao, Y. (TU Delft Urban Studies)","van Ham, M. (promotor); Petrović, A. (copromotor); Delft University of Technology (degree granting institution)","2023","This thesis has investigated the relationship between daily commuting behaviours and long-term subjective wellbeing from a longitudinal perspective. The underlying problem that motivated the thesis is the inconsistent research evidence on the commuting-wellbeing relationship, and more importantly, the insufficient theoretical conceptualisation of this relationship. As a response to the gap between theoretical understandings and empirical research, this thesis used a processual approach to frame the commuting-wellbeing relationship as an interdependent process over time. To operationalise this processual approach, two ways forward were proposed for longitudinal research, namely retrieving the upstream process that leads to changes in commuting behaviours and enriching the contextual understanding of commuting-wellbeing relationships. The upstream process of commuting changes pertains to the reason for people to (not) change their commuting behaviours, while the contextual understanding relates to the commuting-wellbeing relationship as time- and place-specific. Following these two ways forward, the empirical analysis of this thesis drew upon the nationwide panel data from China, the Netherlands and the United Kingdom to longitudinally investigate the relationships between commuting behaviours and subjective wellbeing over time. 
The aim of this thesis is not to identify a unidirectional commuting-wellbeing causality uniform to the general population and across research areas, but to acknowledge, operationalise and better understand the interdependent commuting-wellbeing relationships situated in the life courses of people and the socio-spatial contexts of places.","","en","doctoral thesis","","978-94-6366-697-8","","","","","","","","","Urban Studies","","",""
"uuid:5cf48a49-596c-475a-bb0a-de916915f4b7","http://resolver.tudelft.nl/uuid:5cf48a49-596c-475a-bb0a-de916915f4b7","Supplementary Power Controllers for Modern VSC-HVDC transmission links: Control design and advanced modelling methods for point-to-point and multi-terminal VSC-HVDC networks","Perilla Guerra, A.D. (TU Delft Intelligent Electrical Power Grids)","van der Meijden, M.A.M.M. (promotor); Rueda, José L. (promotor); Delft University of Technology (degree granting institution)","2023","","Voltage source converters; multi-terminal HVDC networks; power systems; RMS simulations","en","doctoral thesis","","978-94-6384-451-2","","","","","","2025-06-12","","","Intelligent Electrical Power Grids","","",""
"uuid:53e46ea5-5802-42cf-814f-4bb2bf76aadc","http://resolver.tudelft.nl/uuid:53e46ea5-5802-42cf-814f-4bb2bf76aadc","Improving iron oxide-based adsorbents for phosphate recovery from surface water using Mössbauer spectroscopy as main analytical tool","Belloni, C. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Brück, E.H. (promotor); Witkamp, G.J. (promotor); Dugulan, A.I. (copromotor); Delft University of Technology (degree granting institution)","2023","This thesis focuses on recycling resources while preserving water quality and availability. This concept is at the basis of a healthy and sustainable society, yet work remains to be done. Water scarcity will be a growing challenge that humanity will have to face in the coming years, due to poor resource management and the climate change crisis. Water covers 70 % of our planet, but only 3 % of it is freshwater, and only 1 % is easily accessible. Already more than 2 billion people live in water-stressed countries.
Moreover, in some ways, this thesis will show how there is a thin line between resources and waste, nutrients and pollutants, impurity and added value. This thin line is defined by our everyday life choices, the names we give to things, and their related connotations....","Phosphate recovery; Iron oxide nanoparticles; Mössbauer Spectroscopy; Water Technologies; Adsorption; Doping","en","doctoral thesis","","978-90-8593-559-9","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:eb5c1ae4-f580-4fa0-9faa-3cbb94f04ee8","http://resolver.tudelft.nl/uuid:eb5c1ae4-f580-4fa0-9faa-3cbb94f04ee8","Behaviour and Stability of Interconnected Systems: From Biological Applications to Opinion Dynamics","Devia Pinzon, C.A. (TU Delft Team Tamas Keviczky)","Keviczky, T. (promotor); Giordano, G. (promotor); Delft University of Technology (degree granting institution)","2023","An interconnected system is composed of multiple well-defined self-contained subsystems that interact with one another and together create collective behaviours. We can find many examples of interconnected systems in real life, ranging from biological systems, such as the growth and interaction of populations in diverse and spatially distributed environments, to electric grids connecting power-generating sources, buildings and infrastructures in a country. When studying interconnected systems, a fundamental and natural question is how the properties and characteristics of the individual subsystems and the way they are connected relate to the collective behaviour of the complete system. That is the driving question of the present dissertation. Given that interconnected systems can be found in a wide variety of contexts, their representation and specific research interests can be equally varied. Because of this, it is impossible to answer the aforementioned question uniquely for all interconnected systems, and specific cases must be considered. In this dissertation, we consider two types of interconnected systems: a general class of uncertain multiple-input-multiple-output (MIMO) systems, and agent-based opinion formation models. The investigation of uncertain MIMO interconnected systems is focused on providing topology-independent conditions for robust stability. The primary motivation for this approach is that, in real systems, it is costly or even impossible to have complete and accurate information on the network topology and subsystem parameters and dynamics.
However, it is of critical interest to guarantee the system’s stability. Therefore, we need stability conditions that require only partial information about the network and the subsystems to ensure the system’s stability. By studying these systems in both the time and frequency domains, we are able to provide conditions that meet these requirements. As for agent-based opinion formation models, we assume that each individual (or agent) in a population has an opinion about a statement. By exchanging opinions among themselves, the agents update their own internal opinion, resulting in a collective dynamic of opinion evolution. When studying these systems, the interest shifts from stability conditions to a characterisation of the relation between the agents’ individual traits and qualitative properties of the opinion distribution in the population. Several techniques and approaches to analyse opinion formation models are proposed and applied to multiple models, one of which is new to this dissertation. The collective study of the previously mentioned interconnected systems requires the use of multiple and diverse analysis techniques and approaches, from analytical methods based on the Nyquist criterion, Bauer-Fike theorem, and Lyapunov functions to qualitative and numerical analysis techniques like histograms and binomial proportion confidence intervals. It is our hope that some of the presented results, methods, or ideas may advance the knowledge frontier in this scientific field, spark new research directions, and either directly or indirectly prove of some value to society.","Interconnected systems; Robust stability; Agent based opinion formation models; Classification-based opinion formation; Linear systems; Nonlinear Systems; Network dynamics; Dynamical networks; Opinion dynamics; Social systems","en","doctoral thesis","","978-94-6384-440-6","","","","","","","","","Team Tamas Keviczky","","",""
"uuid:64c9ded0-950e-4e9f-8032-c691ae6c8deb","http://resolver.tudelft.nl/uuid:64c9ded0-950e-4e9f-8032-c691ae6c8deb","Safety as Airline Business Aspect: From Data to Action by A Value Model for Big Data and Feedback Method for Small FlightStories","Dijkstra, A. (TU Delft Control & Simulation)","Dekker, S.W.A. (promotor); Stoop, J.A.A.M. (promotor); Delft University of Technology (degree granting institution)","2023","This thesis proposes the integration of safety, economics, and passenger experience, and draws on the author's research and experience in aviation to develop a more comprehensive approach to airline business management. The research closes the gap between business and Safety Management Systems in airlines by introducing two novel and complementary concepts: the Airline Value Production Management Model and FlightStory as a tool to enable pilots as intelligent feedback providers.
The thesis discusses two interconnected research projects aimed at improving safety and efficiency in airline operations. The first project, AVPMM, focuses on network performance management, while the second project, FlightStory, aims to empower flight crew as intelligent feedback providers.
This thesis focuses on airline safety and value production management, with an emphasis on using big data and feedback methods to improve safety and value production. The author presents a specific solution called FlightStory, which empowers flight crew as intelligent feedback providers. The thesis includes a literature review, research questions and methods, and an evaluation of FlightStory's effectiveness. The AVPMM is proposed as a logical extension that aggregates lower, more specific levels into network, region, route, and flight levels. The thesis also discusses the challenges of conducting research in a company context and provides recommendations for further research.
The author shares three war stories from their experience in the aviation industry, highlighting common problems in safety management during flight operations and how they relate to network business decisions. The author aims to develop a new solution for integrating safety and business management in international commercial aviation.
The thesis discusses the challenges of current feedback systems in aviation safety reporting, including the reductionist approach, the lack of qualitative data, and the bias towards Safety-I events. The review suggests the need for a reporting system that collects organizational factors and takes safety out of its silo and into the context of other key performance indicators. The author developed an app designed to collect FlightStory data from the crew, inspired by sensemaking and storytelling concepts.
The thesis proposes an Airline Value Production Model for managing safety in the business context of value production. The review also discusses the lack of an explicit value production model and the need to manage safety as a business aspect, integrated with other essential variables such as economy and customer experience.
The research projects provide innovative feedback methods and an integrated approach to value production management, which can be viewed as disruptive innovations. The thesis concludes with a vision of a Value Production Centre (VPC) that integrates business and safety management using the AVPMM model. The VPC aims to provide a holistic approach to value production management, where safety is not just a compliance issue and operational constraint but an integral part of the business strategy. The thesis provides a valuable contribution to the aviation industry by proposing a new approach to safety management that integrates with business management and value production. The FlightStory app and AVPMM model offer practical solutions for improving safety and efficiency in airline operations, and the research provides a foundation for further development and implementation of these solutions.","Safety; Safety Management; Storytelling; Airline Management; Cybernetics; Value production; Integrated Management","en","doctoral thesis","","978-94-93315-71-6","","","","","","","","","Control & Simulation","","",""
"uuid:9a773783-9771-4537-85d3-341fd00ac376","http://resolver.tudelft.nl/uuid:9a773783-9771-4537-85d3-341fd00ac376","Controlled nano-patterning using focused electron beam induced deposition","Mahgoub, A.M.I.M. (TU Delft ImPhys/Hagen group)","Hagen, C.W. (promotor); Kruit, P. (promotor); Delft University of Technology (degree granting institution)","2023","Focused electron beam induced processing (FEBIP), comprising FEBID (deposition) and FEBIE (etching), is a direct-write or direct-etch, single-step technique for high-resolution nano-patterning. The whole process takes place inside a single tool, the scanning electron microscope (SEM). A focused electron beam hits the sample in the presence of a precursor gas which contains the element to be deposited. The precursor molecules adsorb to the surface of the substrate. The adsorbed precursor molecules are dissociated with a certain probability (given by the dissociation cross section) by the primary, secondary and backscattered electrons into a deposited fragment and volatile byproducts. FEBID provides great potential for 3D nano-printing due to its flexibility and the absence of resists and subsequent processing steps.
The work described in this thesis was part of a Marie Skłodowska-Curie Training Network on ‘Low energy ELEctron driven chemistry for the advantage of emerging NAno-fabrication methods’ (ELENA). In particular, three challenges to the FEBID process were addressed to achieve control over the process for nanofabrication: i) the purity of the deposits, ii) the speed of the process and iii) control over the 3D-shape of deposits....","","en","doctoral thesis","","978-94-6366-685-5","","","","","","","","","ImPhys/Hagen group","","",""
"uuid:0f61f871-7c9c-47fc-a542-10883fb2d4de","http://resolver.tudelft.nl/uuid:0f61f871-7c9c-47fc-a542-10883fb2d4de","Miniature sensorized platform for engineered heart tissues","Dostanic, M. (TU Delft Microelectronics; TU Delft Electronic Components, Technology and Materials)","Sarro, Pasqualina M (promotor); Mastrangeli, Massimo (copromotor); Delft University of Technology (degree granting institution)","2023","The high death toll of cardiovascular diseases worldwide and the lack of effective treatments for them are the main motivation for developing alternative and more efficient models for cardiac drug development and disease research. The missing link between current laboratory research on static in vitro and animal models and the clinical stage research on human patients could be created using the rapidly emerging Organ-on-Chip (OoC) technology. The microphysiological models developed within OoC research combine devices made of biocompatible, soft materials and human-origin organ-specific cell types, which are then exposed to flow, chemical, electrical or biomechanical stimuli.
Modeling a human cardiac in vivo environment in an artificial model represents quite a challenge in several respects. First, cardiac tissue in vivo is exposed to a strong coupling between different biomechanical and electrical stimuli that need to be faithfully captured by an in vitro model. Furthermore, such an in vitro model should recapitulate the complexity of cell-cell and cell-extracellular matrix (ECM) interactions between different cardiac cell types, while obtaining physiologically relevant responses. This thesis addresses the first challenge, in an attempt to engineer a dynamic, artificial microenvironment, suitable for the growth, monitoring, and stimulation of hiPSC-based engineered heart tissues (EHTs).....","Engineered heart tissue; Heart-on-chip; Organ-on-chip; Microfabrication; Polymer processing","en","doctoral thesis","","","","","","","","","","Microelectronics","Electronic Components, Technology and Materials","","",""
"uuid:92342a96-7343-4d5e-adad-7b5095cc0666","http://resolver.tudelft.nl/uuid:92342a96-7343-4d5e-adad-7b5095cc0666","When is subjective objective enough?: Frequentist analysis of Bayesian methods","Franssen, S.E.M.P. (TU Delft Statistics)","van der Vaart, A.W. (promotor); Szabó, B.T. (promotor); Delft University of Technology (degree granting institution)","2023","In this thesis, we investigate the properties of Bayesian methods. In particular, we want to give frequentist guarantees for Bayesian methods. A Bayesian starts with specifying their a priori belief as a probability distribution, the prior distribution. The prior represents their inherently subjective beliefs. After a Bayesian has specified their prior, they collect data and compute the posterior distribution. For a Bayesian, this posterior distribution encodes their new beliefs about the world. However, this prior was subjective. Thus the posterior is also subjective. So we may wonder: will this posterior distribution give a better representation of reality? Will it be more accurate? The posterior distribution quantifies a subjective belief of uncertainty. How reliable is this quantification of uncertainty?
These questions lie at the foundation of this thesis. They have been answered for certain classes of prior distributions. However, they have not been fully answered for all distributions in use. In the introduction of this thesis, we explain the foundational statistical theory needed to study these questions. In particular, we show how to apply Schwartz's theorem and the Bernstein-von Mises theorems to study posterior distributions. We then turn to novel research.....","","en","doctoral thesis","","978-94-6384-455-0","","","","","","","","","Statistics","","",""
"uuid:8fa320aa-a1b8-4ca2-9e9d-4428911aa02d","http://resolver.tudelft.nl/uuid:8fa320aa-a1b8-4ca2-9e9d-4428911aa02d","Tracking organoid cell fate dynamics in space and time","Zheng, X.Z. (TU Delft BN/Sander Tans Lab)","Tans, S.J. (promotor); van Zon, J.S. (promotor); Delft University of Technology (degree granting institution)","2023","Throughout the lifetime of living systems, tissue homeostasis and renewal constantly take place to confront challenging conditions, both internal, such as cell aging, and external, such as infections, so that health can be maintained. Such processes require a tight balance between cell proliferation and differentiation. When homeostasis is disturbed, diseases like cancer can develop. Therefore, understanding the regulation of tissue homeostasis is a key question in biology. However, directly monitoring the dynamics of proliferation and differentiation in live animals remains extremely challenging. Common methods, such as immunostaining and single-cell RNA sequencing, require killing the animal and fixing the cells. Therefore, they can merely provide information from a single time frame. As a result, lineage tracing techniques have been introduced, where cells are labeled with a heritable marker that can be detected in progeny after a certain period by fluorescence microscopy or sequencing. Nevertheless, they only reconstruct lineage dynamics indirectly.","","en","doctoral thesis","","","","","","","","","","","BN/Sander Tans Lab","","",""
"uuid:4b5044b3-3718-42e6-ba31-f27c9b984c15","http://resolver.tudelft.nl/uuid:4b5044b3-3718-42e6-ba31-f27c9b984c15","Are the Moons of Jupiter Unique?: Thermochemical Disk Modeling of Moon Formation","Oberg, N.O. (TU Delft Astrodynamics & Space Missions)","Vermeersen, L.L.A. (promotor); Kamp, I.E.E. (promotor); Cazaux, S.M. (copromotor); Delft University of Technology (degree granting institution)","2023","The practice of astronomy is in many ways an intrinsically introspective endeavour. A significant fraction of astronomical motivation is derived from the desire to understand whether a habitable planet such as the Earth is a unique object, and, by extension, whether the inhabitants of the Earth themselves collectively represent a unique phenomenon. By definition, a world is considered potentially habitable if it is theoretically capable of supporting liquid water at its surface. But by focusing solely on strictly Earth-like planets, we risk overlooking other potentially habitable options. In fact, the majority of the worlds known to host liquid water oceans in the solar system are not Earth-like at all. These other worlds do however share a singular defining characteristic: they are the icy moons that orbit the gas giant planets. Their oceans are concealed below kilometers of frozen crust. In the solar system at least three moons are known to host an ocean with a high degree of confidence (Europa, Enceladus, and Titan), and another four are suspected (Ganymede, Callisto, Mimas, and Dione). Hence, any hope of answering the question as to how unique the phenomenon of life on Earth really is may hinge predominantly on answering a seemingly unrelated question: how unique are the icy moons?
The formation of gas giants appears to be accompanied by the formation of moons, as, at least in the solar system, the two appear inseparable. The gaseous planets Jupiter, Saturn, and Uranus are each attended by a retinue of moons, both regular and irregular. The regular satellites tend to orbit in a single plane, in the same direction, and on nearly circular paths. These peculiar properties are also exhibited by the planets, and hence it is considered likely that some similar process has been at play to form them both. That process is formation within a swirling disk of gas and dust. Planets form within disks that surround a star (a circumstellar disk), while the moons would have formed within a disk surrounding their planet (a circumplanetary disk, or CPD).
The last decade of strides made in the observation of circumstellar disks has revolutionized our understanding of the planet formation process. To what extent might the moon and planet formation process be similar? To what extent might we be able to extrapolate our understanding of circumstellar disks down to the scales characteristic of circumplanetary disks? Is there a smooth continuum in physical processes, connecting the formation of large moons, with the formation of the smallest planets? In this work we have extended the theoretical tools used to explore planet formation down into this new regime. On the observational side, as the scale of the astrophysical object shrinks, the capabilities of the instrument must rise commensurately to observe it. We are now at the earliest possible stage of directly observing CPDs to gain insights beyond the theoretical into how giant planet moon formation actually proceeds…
This thesis studies cognitively healthy centenarians as extreme controls in the context of aging and AD. Based on a large cohort of data, this thesis indeed shows that some centenarians escaped the buildup of some neuropathologies, indicating resistance to these neuropathologies. Conversely, this thesis also shows that average levels of AD-associated neuropathologies increase with age in non-demented individuals, whereas these neuropathologies decrease with age in AD cases. Most intriguingly, this thesis shows that some centenarians with the highest cognitive performance did accumulate the highest levels of some neuropathologies, yet remained cognitively healthy. This thesis then speculates that these observations point towards a resilience to these neuropathologies in these centenarians.
To better understand the resilience and resistance mechanisms in centenarian brains, this thesis then continues with investigating brain proteomics in the context of the degree of AD pathology (Braak stages) as well as age. As a first characterization, clusters of Braak stage-related and age-related proteins are identified that are separately associated with specific biological processes. Some Braak stage-related proteins demonstrate a deviating abundance in centenarians compared to AD cases (at Braak stage IV), indicating that these proteins may contribute to the resilience mechanisms against tau accumulation in centenarian brains. A remarkable finding regarding the age-related proteins is that centenarian brains are, at the median, 18 years “younger” in their protein expression when compared with non-demented controls, again hinting towards a resilience to age-related diseases.
To further explore the possible role of aging behind AD, this thesis studies the extent and locations of brain somatic mutations. We show that the number of excitatory-neuron-specific somatic mutations increases with age, but find no significant difference between AD and non-demented individuals. Interestingly, certain somatic mutations occurred more frequently in the brains of AD patients.
In conclusion, this thesis demonstrates the value of cognitively healthy centenarians in studying brain aging and neurodegenerative diseases. In doing so, it reveals that the relationship between brain aging and neurodegeneration is extremely complex and deeply entangled. Nevertheless, basic processes that are altered during brain aging are identified, bringing closer the identification of targets to counteract the molecular disorder that leads to neurodegeneration, including AD.","Alzheimer’s disease; Aging; Centenarian; Neuropathology; Neuropsychology; Proteomics; Somatic Mutation","en","doctoral thesis","","978-94-6469-390-4","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:cfe84a26-e30f-430d-8ad1-3d503e780e36","http://resolver.tudelft.nl/uuid:cfe84a26-e30f-430d-8ad1-3d503e780e36","Structured Kinetic Modeling for Rational Scale-down and Design Optimization of Industrial Fermentations","Tang, W. (TU Delft BT/Bioprocess Engineering)","Noorman, H.J. (promotor); van Gulik, W.M. (copromotor); Delft University of Technology (degree granting institution)","2023","","","en","doctoral thesis","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:82404585-8ab8-4427-af32-b3e56406c825","http://resolver.tudelft.nl/uuid:82404585-8ab8-4427-af32-b3e56406c825","Bias and debiasing in data-driven crisis decision-making","Paulus, D. (TU Delft Organisation & Governance)","van de Walle, B.A. (promotor); Janssen, M.F.W.H.A. (promotor); de Vries, G. (copromotor); Delft University of Technology (degree granting institution)","2023","The United Nations estimates that hundreds of millions of people worldwide are affected by complex crises. Examples are the protracted conflict in Yemen, climate change-induced displacement, and the COVID-19 pandemic. These crises have severe implications for societies. To mitigate crises’ effects, crisis response organizations strive to make data-driven decisions. However, these crises are complex: they involve many actors with different mandates and objectives that face uncertain information as well as decision urgencies. These issues can lead to systematic errors within collected crisis data, i.e., data bias, and challenge decision-makers cognitive information processing capacities by inducing cognitive bias....","","en","doctoral thesis","","978-94-6384-444-4","","","","","","","","","Organisation & Governance","","",""
"uuid:1ebb628d-ecbe-49d9-b132-3a8440119f69","http://resolver.tudelft.nl/uuid:1ebb628d-ecbe-49d9-b132-3a8440119f69","The effect of uncertainties on the performance of real-time control of urban drainage systems","van der Werf, Job (TU Delft Sanitary Engineering)","Langeveld, J.G. (promotor); Kapelan, Z. (promotor); Delft University of Technology (degree granting institution)","2023","REAL-time control (RTC) is a technique used to dynamically control urban drainage systems to utilise the existing infrastructuremore optimally. It can be used to achieve a number of objectives aiming to improve the functioning of the urban drainage system, typically through the reduction of pollution. When heavy rainfall occurs, combined sewer systems (CSSs) can cause combined sewer overflows (CSOs) to discharge diluted, yet untreated, wastewater into receiving water bodies. These discharges can lead to ecological damage and pose a public health hazard, resulting in more stringent legislation necessitating the reduction of CSO discharges. To achieve this, expensive upgrades to the urban drainage system (UDS) have traditionally been used. RTC can reduce or negate the need for these expensive upgrades by fully utilising the existing infrastructure. RTC increasingly relies on more complex algorithms and data streams due to the rise of cheaper computing power and sensors, leading to a better understanding of the systems and more potential for dynamic optimisation. Uncertainties (inherent to modelling and monitoring exercises) can affect the functioning of these RTC procedures, but the influence of uncertainty on the performance of RTC procedures is poorly understood. This is an often quoted reason against the implementation of RTC strategies as a whole. The aim of this thesis is therefore to increase the understanding of how uncertainties can affect the performance of RTC procedures. 
Using three case studies (the urban drainage systems of WWTP Eindhoven, Hoogvliet and Dokhaven) and both heuristic and real-time optimisation procedures, this aim was assessed.","Combined Sewer Overflows; Real-time Control; Uncertainty Analysis; Urban Drainage Systems","en","doctoral thesis","","978-94-6384-443-7","","","","","","","","","Sanitary Engineering","","",""
"uuid:32c437c1-b0eb-48a5-8a42-cdf9fb396a7f","http://resolver.tudelft.nl/uuid:32c437c1-b0eb-48a5-8a42-cdf9fb396a7f","Conversion of Polymeric Substrates by Aerobic Granular Sludge","Toja Ortega, S. (TU Delft Sanitary Engineering)","de Kreuk, M.K. (promotor); Pronk, M. (copromotor); Delft University of Technology (degree granting institution)","2023","Domestic wastewater is treated prior to its return to natural water bodies, to minimize its polluting effect. Biological wastewater treatment removes organic matter and nutrients from the wastewater, by employing the activity of microorganisms, which consume polluting compounds present in wastewater to grow. One of such technologies is aerobic granular sludge (AGS), which consists of self-immobilized microorganisms growing in spherical biofilms. The granular structure facilitates the separation between treated water and the biomass due to its excellent settling properties. This way, energy and space are saved in comparison to flocculent sludge-based treatment.
Despite its many advantages, the granular structure can pose some challenges too, particularly regarding the degradation of polymeric substrates. The higher mass-transfer resistance in granules compared to flocs challenges the degradation of these substrates, which have a size spanning from a few kDa to several micrometres. Polymeric substrates, furthermore, need to undergo hydrolysis before microorganisms can take them up, which is generally a slow process. Most AGS applications rely on microbial selection driven by the application of a sequencing batch reactor (SBR) cycle. The cycle consists of an anaerobic substrate feeding and a subsequent aerobic starvation period, which selects for intracellular polymer-storing organisms, such as polyphosphate accumulating organisms (PAO) and glycogen accumulating organisms (GAO). Substrates that experience high mass-transfer limitation and low degradation rates may interfere with the microbial selection strategy applied to AGS, especially when they are not (fully) taken up in the anaerobic feeding period and continue degrading aerobically in the next cycle phase. Some lab-scale studies have reported detrimental effects of polymeric substrates on AGS structure and activity, while others have managed to maintain a stable granule bed and suggest that the microbial utilization of polymeric substrates can contribute to good nutrient removal. The degradation of polymeric substrates by full-scale aerobic granules is still poorly understood.
First, we contribute resources that we created and that are used throughout the thesis. Namely, we introduce MANtIS, a novel dataset of information-seeking dialogues, as well as transformer-rankers, a library to train and evaluate models for the task of conversation response ranking.
Considering a two-stage pipeline for conversational search, we propose approaches both for retrieving and for re-ranking responses. We start by empirically comparing sparse and dense approaches for the first-stage retrieval of responses for dialogues. Next, we turn to the second stage of the pipeline and use notions of difficulty to improve response re-rankers. We first apply a curriculum learning approach that begins with easy dialogues and progresses to harder ones during training. We also investigate how difficult a dialogue is when predicting the relevance of responses, by proposing models that allow for estimating their uncertainty.
Finally, we move on to evaluating the behavior and limitations of retrieval and ranking models for conversational search. We start by evaluating the effect of categories of query language variations on retrieval pipelines. Additionally, we evaluate the capabilities of heavily pre-trained language models for different conversational recommendation tasks.
With this thesis, we make scientific contributions to the field by providing resources, improving retrieval and re-rankers, and enabling a better understanding of models. We hope our contributions can be used as a foundation for future work in conversational search, enabling agents that can improve information-seeking interactions.","conversational search; ranking models; model understanding","en","doctoral thesis","","","","","","","","","","","Web Information Systems","","",""
"uuid:6733411b-b3e8-4027-b935-16ffd6262e8a","http://resolver.tudelft.nl/uuid:6733411b-b3e8-4027-b935-16ffd6262e8a","Can I touch you online?: Embodied, Empathic Intimate Experience of Shared Social Touch in Hybrid Connections","Lancel, K.A. (TU Delft System Engineering)","Brazier, F.M. (promotor); Delft University of Technology (degree granting institution)","2023","Experience of touching and feeling touched is fundamental to human well-being, of safety and trust. Being in touch with others can be emotional and spiritual, it enables space for movement and transformation: to touch, kiss, play, dance, make love, tune and breath together.
Until recently, research into Human Computer Interaction has focussed on the performative potential of technology and physiological aspects of social touch; and less on human experience. However, recent research shows that ethical aspects of vulnerability, inclusiveness, agency, autonomy, responsibility and response ability, and trust are core to human experience of technically mediated social touch. Recent neuroscience research focuses on mirror neuron activity in empathic processes through touch; on synaesthetic mirror-touch perception; and on body ownership perception in visuo-haptic motor data interaction.
Media Performance Art has started to explore digital systems for shared experience of sensory, intercorporeal connections and empathic spectatorship with human and non-human others, in various hybrid social and spatial configurations.
This thesis expands these emergent and fragmented foci into a new, interdisciplinary Art, HCI, Design and Neuroscience perspective, for distributed, hybrid, XR, online, human-agent and robot interaction.
The thesis shows the importance of new performance scripts for orchestrating ‘Shared Social Touch’: shared embodied intimate experience of technically mediated social touch, for multiple participants.
A first interaction model for orchestrating social touch, ‘Can I Touch You Online?’ (CITYO), is presented for this purpose. This novel interaction model has been tested internationally in six participatory case studies, Artistic Social Labs (ASL). These ASLs have made use of innovative A.I. Facial Technologies, Streaming platforms and Multi-Brain Computer Interfaces (BCI) in multi-actor networks. They have been designed to facilitate a new sense of bodily togetherness between familiar and unfamiliar others, lovers, friends, family, and strangers.
The literature, and testing insights, show that performance scripts for Shared Social Touch experience rely on the design of a) Sensory Disruption (of physical touching and being touched, in reciprocal influence, and shared empathic vulnerable interplay) combined with b) Shared Reflection on the experience, through hosted dialogue.
The research method has been based on combined Artist Research and Research through Design.
The CITYO interaction model supports these characteristics and presents new perspectives for Art, Design, HCI and Science, and Education: on emotional well-being, including social connection, disconnection, and isolation (e.g. through trauma, dementia, depression) as well as neurodiversity and autism; on the design of e-learning and presence; and in hybrid, A.I., XR, mixed and merging realities.
In genomic data analysis, analysts often compare and contrast new genomic data to an established reference to reduce costs. However, this approach biases comparisons in favor of population-specific genetics since such references encode only a fraction of the genetics of a given population. To address this bias, I propose a method that accounts for population variability in a way that integrates it directly into the comparison process. This integration ensures that the contrast between sample and reference becomes smaller and closer to personalized, so they are treated the same way regardless of the underlying population. The method improves genome characterization and simplifies downstream analyses that rely on these comparisons. As a result, a more accurate portrayal of the genetics of a given population as a whole is obtained.
In non-invasive sequencing-based prenatal testing, we rely on circulating cell-free DNA from maternal plasma to detect pathogenic variants that may affect the fetus. A healthy baseline, which describes the normative state, is generally required to determine the presence of such variants. However, because this DNA is a mixture of maternal and much lower fetal proportions, it remains difficult to disentangle the two, primarily because of biological and technical biases. While this bias can partially be mitigated by changing the baseline and thus contrasting within the individual DNA mixture rather than to a divergent population of mixtures, further improvements are still needed. I present a generalized framework in which the signal-to-noise ratio can be further improved by fully exploiting the information in sequencing data, allowing for more robust predictions at even earlier stages of pregnancy.
The composition of the gut ecosystem can have short- and long-term effects on our health. It is therefore important to understand how it is formed and how a healthy balance can be maintained for as long as possible to preserve our health. To do this, ecosystems must be stratified and compared based on health indices. I show in extremely contrasting Dutch subpopulations that we can obtain valuable characteristics of divergent health states by comparing the gut ecosystems of centenarians with those of Alzheimer's patients. However, significant efforts are required to enable these comparisons due to the many organisms present and the technological limitations in measuring them, introducing bias at all levels.","bioinformatics; population genetics; population graphs; prenatal testing; metagenomics","en","doctoral thesis","","978-94-6366-687-9","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:610c0657-9706-4fde-90ce-6bc9ecd14620","http://resolver.tudelft.nl/uuid:610c0657-9706-4fde-90ce-6bc9ecd14620","Automated and high-throughput reactivity analysis in homogeneous catalysis: The deactivation complexity of Mn(I) hydrogenation catalysts","Hashemi, A. (TU Delft ChemE/Inorganic Systems Engineering)","Pidko, E.A. (promotor); Gaigeot, M.P. (promotor); Delft University of Technology (degree granting institution)","2023","In this thesis, I highlighted a number of projects aimed at developing and testing new simulation methods for studying complex reactive systems, with a particular emphasis on simulation strategies based on the concept of bonding graphs. These mathematical structures provide useful tools for a variety of algorithms developed over the last few decades. Through automated analysis of exhaustive exploration trajectories, I have been able to capture serendipities that could escape the expert heuristics or otherwise needed expertise in different disciplines to be interpreted correctly. Such discoveries could range from very obvious one-step reactions that were just not “normally” considered to multistep complex reactions that were not imaginable to the expert. With automated reactivity screenings on in silico catalyst libraries, I have taken a big step towards “rational”catalyst design.","Automation; High-throughput analysis; Homogeneous catalysis; Deactivation","en","doctoral thesis","","978-94-6366-688-6","","","","","","","","","ChemE/Inorganic Systems Engineering","","",""
"uuid:70db51fd-8a61-4074-8c84-3520854e51f8","http://resolver.tudelft.nl/uuid:70db51fd-8a61-4074-8c84-3520854e51f8","Exploiting the potential of 3D borehole seismic data for high-resolution imaging and velocity estimation, a full wavefield approach","El Marhfoul, B. (TU Delft ImPhys/Medical Imaging; TU Delft ImPhys/Verschuur group)","Verschuur, D.J. (promotor); Slob, E.C. (promotor); Delft University of Technology (degree granting institution)","2023","Geophysics is a branch of physics that is mainly concerned about under- standing and describing the physical behaviour and activities of the earth’s geological system. Usually, seismic data is acquired at the surface and the corresponding signals go through a sequence of preprocessing steps to filter out the noise and enhance the quality of the measurements. These mea- surements are then transformed into a so-called reflectivity image (snapshot in time) of the subsurface via the deployment of the so called migration al- gorithms. Extensive studies and great effort is usually made to determine a suitable acquisition geometry design for optimal illumination of every imaging grid point in the subsurface in the studied area. However, within the geo- energy industry it has always been an undisputed believe that improved and better results can be obtained only if more data is acquired with denser sam- pling at both the source’s and receiver’s side. This ”linear” way of thinking is also consistent with the conceptual assumptions of most current migration algorithms.
The challenge is, of course, to achieve the same, or maybe even better, results with less data. This is in accordance with the currently ongoing energy transition, which is forcing the geophysical scientific community to shift their focus from the acquisition and processing of large and expensive surface seismic surveys toward optimized management, in terms of data acquisition, processing and production, of the existing hydrocarbon-based reservoirs. Especially datasets with sparse acquisition geometry, like 3D Ocean Bottom Node (OBN) and 3D Borehole Seismic Data (BSD) surveys, where we have measurements at a limited number of sensors along the ocean bottom or in the borehole but usually with dense source sampling at the surface, can greatly benefit from such a development.
3D borehole geophysics, which is the main subject of this thesis, has for a long time been an underdeveloped and, therefore, an unappreciated component within most geophysical organisations. This is mainly because accurate results are usually obtained only in the immediate vicinity of the borehole and
their quality decays rapidly in the lateral extent. However, and especially in the marine case, 3D BSD surveys are rich in higher-order scatterings that can have significant added value when combined with unconventional and non-linear inversion-imaging algorithms like Full Wavefield Migration (FWM) and Joint Migration Inversion (JMI). Furthermore, in combination with modern measurement techniques (like Distributed Acoustic Sensing (DAS) technology), continuous and permanent monitoring of existing and new reservoirs – whether hydrocarbon-based or geothermal – can easily be realised.
In this thesis the recently developed inversion-imaging algorithms FWM and JMI are extended to the 3D case and further engineered to properly handle the special acquisition geometry of 3D BSD surveys and exploit the full potential of the total wavefield available in 3D borehole seismic data. First, a more complete and comprehensive derivation of the involved gradients, for the reflectivity image and velocity model update, is presented. This makes the combination of one-way tomography of the direct wavefield with reflection tomography of the other energy modes (primary reflections, higher-order scatterings of the up- and down-going wavefield) a straightforward process. Then, an effective strategy for the application of the 3D JMI algorithm to 3D BSD is developed and, with the presented examples, it will be demonstrated that, for instance, the standard and conventional separation of the up- and down-going wavefield of 3D BSD becomes an obsolete process. Along the same lines, we will show that integration of surface seismic data and 3D BSD, or even multiple 3D BSD surveys, in one inversion process produces more accurate and geologically consistent solutions. Next, the capability of the 3D FWM algorithm together with 3D BSD surveys for solid reservoir monitoring will be demonstrated. After that, the challenges of the current acoustic implementation of the FWM and JMI algorithms will be discussed, especially the effect of the mode-converted waves on the velocity model gradient. Finally, some suggestions are made for further enhancement of the JMI algorithm, particularly at the side of the migration velocity update. This can be achieved by the combination of complementary and effective objective functions, which makes the JMI algorithm more robust, especially in the case of geological environments with high velocity contrasts.","","en","doctoral thesis","","978-94-6384-447-5","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:bd310c9f-7169-40df-82b0-3be1e22df4c8","http://resolver.tudelft.nl/uuid:bd310c9f-7169-40df-82b0-3be1e22df4c8","More with Less: Exploring sustainable design through application and development of an integrated multi-level design approach","Scheepens, A.E. (TU Delft Design for Sustainability)","van Engelen, J.M.L. (promotor); Diehl, J.C. (promotor); Delft University of Technology (degree granting institution)","2023","This research project explores the application of the design approach of Eco-efficient Value Creation and the model of the Eco-cost/Value Ratio to practical cases, in order to advance the potential contribution of designers to accelerating the transition towards an environmentally sustainable society.
The Eco-efficient Value Creation approach is aimed at enabling designers to effectively create design solutions that combine a low environmental impact with a high customer-perceived value, so that sustainable design solutions increasingly capture market share from the unsustainable design solutions currently on the market.","","en","doctoral thesis","","978-94-6384-439-0","","","","","","","","","Design for Sustainability","","",""
"uuid:fb05b2ac-b0e0-4f21-9b13-59f8994ff670","http://resolver.tudelft.nl/uuid:fb05b2ac-b0e0-4f21-9b13-59f8994ff670","Border Formation: The Becoming Multiple of Space","Tona, G. (TU Delft Theory, Territories & Transitions)","Schoonderbeek, M.G.H. (promotor); Sohn, H. (copromotor); Delft University of Technology (degree granting institution)","2023","This doctoral thesis examines the militarization of the Southern border of Hungary as a process of spatial formation, expanding the debate on borders from the political to the architectural arena. Combining spatial theory with empirical research on the case study, the thesis rethinks the border as a complex spatial system, with an agency of its own. From this perspective, it contests the enforcement of spatial boundaries from the above and related ideas of fixity. It brings attention to the agency of space in the advancement of a material becoming; the role of migration in the redefinition of meanings and functions of space; and the action of technologies in the strategic manipulation of measures and scales. While conceptualizing the border as a space in formation, this thesis builds a diagrammatic method of study and moves the research in an onto-epistemological direction. With the aim of fostering a change in those structures that control the partition and governance of space, this doctoral study calls the discipline of architecture to review its questions, methods, and practices. It invites to use architectural knowledge to engage with borders’ complexity and challenge their established meanings and makings.","formation of space; border spatiality; spatial becoming; Hungarian-Serbian border; border militarization","en","doctoral thesis","","978-94-6366-678-7","","","","","","2023-05-17","","","Theory, Territories & Transitions","","",""
"uuid:a5ad83c4-f85b-4cc3-b775-7959236b37f1","http://resolver.tudelft.nl/uuid:a5ad83c4-f85b-4cc3-b775-7959236b37f1","Halide solid electrolytes: From structure to properties","van der Maas, E.L. (TU Delft RST/Storage of Electrochemical Energy)","Wagemaker, M. (promotor); Ganapathy, S. (copromotor); Delft University of Technology (degree granting institution)","2023","Batteries are an important aspect of sustainable energy technologies, as they can be used either for the storage of electric energy for the grid or for the electrification of the transport fleet, making these sectors less reliant on fossil fuels (chapter 1). The Li-ion battery has revolutionized the world in many ways, enabling portable electric devices as honored by the Nobel prize in 2019 to John B. Goodenough, M. Stanley Whittingham and Akira Yoshino. As the Li-ion battery is quite a mature technology by now, large gains in performance parameters (especially energy density) will need alternative battery concepts and new chemistries. There are many possibilities, and one of them is a switch from liquid to solid electrolytes (chapter 2). The work presented in this thesis investigates the structure-to-property relationship of halide solid-electrolytes Li₃M(III)X₆. For solid electrolytes to replace liquid electrolytes, the material needs a combination of properties. An important property is the ionic conductivity, which should be high enough for room-temperature operation of the battery and determines, among other design parameters, the rate-capability (or power density) of the battery. Another property that is important is the electrochemical stability window, which determines the electrochemical stability of the electrolyte in contact with the electrodes of the material. Both of these properties are strongly related to the crystal structure and chemistry of the solid electrolyte (chapter 2). 
Therefore, both the structure and properties are investigated using a variety of techniques, mostly x-ray and neutron diffraction, AC-impedance and solid-state NMR relaxometry (chapter 3). The work is presented in four data-containing chapters: • Chapter 4: The materials investigated show very complex behavior relating to diffusion on short time scales, as investigated by NMR T₁-relaxometry. The first chapter therefore provides an in-depth introduction to solid-state NMR relaxometry and spectral density fitting. Using two examples, namely Li₆PS₅X, a sulfide solid-electrolyte class previously studied in the research group, and the halide Li₃YCl₃Br₃, it is illustrated how multiple jump processes can manifest in the curve of the relaxation rates vs. inverse temperature. • Chapter 5: In this chapter, aliovalent substitution in Li₃InCl₆ with Zr(IV) is explored. The Zr(IV) replaces the In(III) and introduces an additional Li-vacancy. The substitution can also affect the crystal structure of the material, affecting ionic diffusion in other ways than changing the charge carrier concentration. Using combined x-ray and neutron diffraction, it is found that the ordering of the In(III) and Zr(IV) is affected by the substitution. This also affects the diffusion on short timescales, as can be observed with NMR relaxometry as well as from the solid-state NMR lineshape. The combination of the structure solution and the puzzle pieces provided by solid-state NMR suggests that the structural change induced by the substituent leads to more three-dimensional conduction. • Chapter 6: While chlorides have higher electrochemical stability, bromide anions are more polarizable and may have a lower association energy with Li, which can lead to higher Li-ion conductivity. This chapter investigates the trade-off between ionic conductivity and electrochemical stability in the materials Li₃YBrₓCl₆₋ₓ.
It is found that 75% Br is most beneficial for ionic conductivity, rendering a very conductive material (~5 mS/cm at room temperature); however, higher concentrations of bromine indeed lower the electrochemical stability window. The introduction of 25% Br, on the other hand, also leads to an increase in ionic conductivity while preserving the electrochemical stability. This suggests that Br-substitution can be a viable method to increase the ionic conductivity of Li-ion conducting chlorides while preserving the electrochemical stability. • Chapter 7: The Li₃M(III)Cl₆ materials (M(III) = Ho, Y, Dy, Tm) are usually reported to crystallize in a trigonal crystal structure. This chapter shows that synthesizing these materials by co-melting with some LiCl deficiency stabilizes an orthorhombic phase of the material. Both of these structures are based on quasi-hexagonally close-packed Cl atoms, with the M(III) and lithium on octahedral sites. The different crystal symmetry is caused by a change in the arrangement of the cations. The orthorhombic phase has ~8 times higher ionic conductivity compared to the trigonal phase. Ab initio molecular dynamics simulations revealed that this is due to a fast conduction pathway along the c-direction of the crystal structure. This path corresponds to jumps between face-sharing octahedra. Therefore, it is likely that the cation arrangement in the orthorhombic structure is favorable for that diffusion path, leading to an increase in ionic conductivity. It is interesting to compare the effect of the different material design strategies, aliovalent substitution (Chapter 5), halogen alloying (Chapter 6) and tuning of the crystal structure (Chapter 7), on the properties of interest for Li₃M(III)X₆ solid electrolytes. The electrochemical stability window is indeed higher for chlorides than for bromides, but it is found that 25% Br substitution preserves the stability of the chloride in Li₃YCl₆. 
For ionic conductivity, the largest increase is observed for halogen alloying (a factor ~40 increase in ionic conductivity when substituting 25% of the chlorine with bromine atoms), followed by the trigonal to orthorhombic phase transition (a factor ~8 improvement) and, lastly, aliovalent substitution (a factor ~1.6 improvement). Regarding the measurement methods, two findings are notable. Firstly, this thesis showed that x-ray diffraction data is important in this system to reach reliable occupancies in the crystal structure solution (chapter 5), as neutrons scattered on lithium and most of the M(III) have a 180° phase shift and therefore cancel their signal when occupying the same site. Secondly, it is shown that the complex shapes of the NMR T₁ relaxation rates can be explained using a superposition of individual, BPP-type jump processes. Fitting such a model is complex, and data measured at multiple Larmor frequencies should be used to increase the reliability of the fit. To perform such a fit, a program was developed in the scope of this thesis to simultaneously fit such measurements and analyze the errors associated with the parameters by sampling their posterior probability distribution using a Markov chain Monte Carlo sampler.","halide solid electrolytes; Solid-state batteries","en","doctoral thesis","","978-9464-693-836","","","","","","","","","RST/Storage of Electrochemical Energy","","",""
"uuid:a3fb56dd-0f74-449f-8e85-2ffaeb63a65c","http://resolver.tudelft.nl/uuid:a3fb56dd-0f74-449f-8e85-2ffaeb63a65c","Bridging intermittency: with iron electrodes","Weninger, B. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Mulder, F.M. (promotor); van Ommen, J.R. (promotor); Delft University of Technology (degree granting institution)","2023","We have to overcome the intermittent nature of renewables to master the energy transition. Harvesting renewable electricity is only part of the solution. Energy storage is another part and the main challenge of our time. Only with efficient storage solutions can big industries switch to renewables.
Electricity storage is most efficient with batteries while industrial sites and synthetic fuel production require a sustained hydrogen input to drive the processes. The aim of this thesis was the research and development of storage solutions mainly based on earth-abundant iron to bridge intermittency. The following scientific questions were at the basis of the conducted research:
1. Combined battery and electrolyser: is it possible and reasonable to develop a device that serves two purposes? Do the materials endure and support this double functionality?
2. Multiple electrodes: nickel is required for electricity and oxygen storage; iron is required for electricity and hydrogen storage. Is it possible to store electricity, oxygen and hydrogen in one electrochemical cell? Is it possible to decouple the electricity input from the oxygen and hydrogen output? Can a single electrode be used for two purposes simultaneously? Are configurations with multiple electrodes scalable to larger arrays?
3. Fundamentals of iron electrodes: Which phases occur for the first and second iron discharge plateau? And why are iron electrodes less responsive to higher discharge rates?
4. Sustained hydrogen from intermittent sources: Decoupling of the electricity input and the hydrogen output is possible with an electrochemical cell consisting of at least three electrodes. Is more sustained hydrogen from intermittent sources also feasible with a standard electrochemical cell with two electrodes?
5. Doped iron electrodes: Iron electrodes can have a limited rechargeability and can show gas accumulation inside the electrode. Does the addition of dopants enhance the ability of the iron electrode to recharge? Do these dopants enhance performance and the material utilisation?
Battolyser
NiFe batteries are known to be practically indestructible. However, these NiFe batteries have the disadvantages of hydrogen and oxygen production and self-discharge, which make them inefficient as a battery. In the battolyser we promote this “hydrogen-side-effect”, and obtain an energy-efficient device that produces hydrogen with excess energy, with the potential to reduce undesirable renewable electricity curtailments. Such a device can be operational around the clock: either the surplus of electricity is used to charge the battery and to produce hydrogen, or electricity is provided to consumers.
The battolyser will supply hydrogen when overcharged, following an intermittent pattern of renewables availability. Therefore, downstream infrastructure needs the capability to handle an intermittent hydrogen input or requires a hydrogen storage infrastructure. Under these conditions the battolyser has the potential to become an essential single-combined tool for the energy transition since renewable electricity can be stored and excess electricity can be converted efficiently into hydrogen.
Multi-Controlled (MC-)electrodes
We then demonstrated that we could supply a sustained hydrogen output from an intermittent energy input and that time shifting the hydrogen output comes at low energy costs. We accomplished that by creating electrochemical systems consisting of more than two electrodes within a single electrochemical cell. Here the storage electrodes can be charged/discharged while gas production of hydrogen and oxygen can occur simultaneously and at independent rates. We also demonstrated that storage electrodes can serve two different processes at the same time. The proposed concept of MC-electrodes allows for controlling and scaling up multi-electrode configurations to larger arrays. Most importantly, we used it for decoupling the electricity input from the hydrogen output by the combination of an iron storage electrode with two gas evolution electrodes, one for hydrogen evolution and one for oxygen evolution, with two independent circuits. The position of the electrode phases in the Pourbaix diagram indicates that charging the iron electrode together with oxygen production requires most of the energy, while little energy is required to generate hydrogen from previously charged iron electrodes. Independent operation of both circuits enables decoupling of the electricity input and the hydrogen output, and the iron storage electrode serves as an electrochemical storage reservoir.
Time-shifting 50% of the hydrogen production requires only 5% of the energy while 95% of the required energy can be fed through a main controller when electricity is cheap and abundant. Moreover, hydrogen can later be provided from reduced iron electrodes with a substantial reduction of backup power. Compared to electrolysers, the electricity storage requirement is reduced by 85% to provide the same amount of hydrogen, using the previously reduced iron oxidation. In other words, seven times more hydrogen can now be provided from existing backup power, which can serve as a booster for delayed hydrogen generation.
Half-cell used as Hydrogen Storage and Production cell (HSP-cell)
We reduced the complexity of the system by combining the iron storage electrode with a bifunctional electrode for oxygen and hydrogen production which led to the concept of the HSP-cell. The HSP-cell is a simple half-cell, consisting of two electrodes which makes it easily scalable to larger bi-polar configurations. The HSP-cell can utilize the entire capacity of the iron electrode, comparable to the iron-air battery or battolyser, but delivers hydrogen instead of electricity. Both configurations can operate as a low-cost sink to store energy in reduced iron and both systems can use excess electricity for direct hydrogen generation to reduce undesirable curtailment of renewable power.
The replacement of the nickel hydroxide battery electrode by a thin bifunctional nickel metal electrode provides space and allows the storage density to be increased. Considering only the iron electrode (and omitting counter electrode, electrolyte, casing, valves or other parts), a storage density of 0.78 Ah/cm3 is currently feasible, equivalent to 29 kg H2/m3 or to the density of compressed hydrogen storage at 500 bar. The stored hydrogen can be released easily and in a controlled manner by applying a current. This reduces the safety risk associated with the storage of compressed hydrogen gas. During electrochemical hydrogen release, only hydrogen is generated inside the cell, which offers an oxygen-free hydrogen gas output even at low discharge rates.
The HSP-cells can be configured in a self-sustaining manner and in a way to provide a sustained hydrogen output from an intermittent input by simultaneous and phase-shifted operation of several units. The concept can provide sustained hydrogen to industrial processes or synthetic fuel production with an overall efficiency including storage and production which exceeds 80% when operated at 40 °C. Therefore, the HSP-cell has the potential to become an essential device to boost the energy transition.
Doped Iron electrodes
The iron electrode is the common part of all previously discussed configurations. Having an optimal iron electrode is essential since the iron electrode determines the rate capabilities and the efficiencies. In the battolyser thin iron electrodes suffice because the nickel electrode is capacity limiting. Thicker iron electrodes can be used in the iron-air battery/battolyser, in the MC-cell and in the HSP-cell.
We developed a strategy to produce sintered iron electrodes to study the phase behaviour of the electrode in operando by means of neutron diffraction. The study revealed that substantial amounts of iron hydroxide were present inside the bulk of the sample, which could not be reduced back to metallic iron upon charging. We concluded that the electrochemical circuit within the electrode must be interrupted, and it is our hypothesis that gas accumulation within the cell negatively affected the ionic conductivity. We assume that gas accumulation within the electrode replaces electrolyte, which increases the ionic resistance for the phase transition. As a consequence, the inserted charge shifts from battery charging with phase transition to hydrogen evolution.
We wanted to improve the material utilization of the sintered iron electrodes and therefore needed to improve the ability of these electrodes to recharge. For this purpose, we added either zirconium oxide or aluminium oxide to the electrodes. By adding metal oxides to the electrode composition, we enhance the processability of the materials and the electrode performance.
With the new synthesis strategy, we produced thick sintered iron electrodes which show a volumetric storage density of up to 0.8 Ah/cm3 and reach areal storage densities of up to 160 mAh/cm2. These values are among the best reported in the literature for sintered iron electrodes. In the process we may create a sulphur-free system, which potentially reduces corrosion issues and the deterioration of air electrodes.
Bridging intermittency with iron electrodes
Summing up, the creation of an energy system based on renewables confronts us with the intermittent nature of renewable power generation. To bridge this intermittency, we need storage solutions for electricity and hydrogen. With a sustained hydrogen output, synthetic fuels based on renewables could be produced on a large scale. With the nickel-iron battolyser and the iron-air battolyser we can store electricity and we can convert excess electricity into hydrogen to overcome the curtailment problem. With the concepts of MC-electrodes and of the HSP-cell we can efficiently control, store, and postpone the hydrogen output, to provide a more sustained hydrogen output. The iron electrode is present in all configurations and recharging was the main challenge. We addressed the issue of rechargeability with a modified synthesis strategy for sintered iron electrodes doped with Zr and Al instead of sulphur. Electrodes produced with this strategy may have the potential to perform as effective sintered iron electrodes.
With these new simple concepts and cost-efficient iron electrodes we offer new tools to support and accelerate the storage and conversion of renewable power, which is necessary for the energy transition and to overcome intermittency. We have to speed up the energy transition to limit the impact of climate change.","battolyser; battolyzer; hydrogen storage; hydrogen production; multi-controlled electrodes; hsp-cell; iron electrode","en","doctoral thesis","","978-94-6483-023-1","","","","","","2023-11-16","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:1c1796bd-9a12-4c82-b717-561acdeafea0","http://resolver.tudelft.nl/uuid:1c1796bd-9a12-4c82-b717-561acdeafea0","Another Hit On The Wall: Confined Wave Impacts on Hydraulic Structures","de Almeida Sousa, E. (TU Delft Hydraulic Structures and Flood Risk)","Hofland, Bas (promotor); Jonkman, Sebastiaan N. (promotor); Antonini, A. (copromotor); Delft University of Technology (degree granting institution)","2023","Hydraulic structures are crucial for navigation, water management and flood protection in low-lying coastal and delta regions. Their importance is expected to continue growing in the coming years, because of the consequences of climate change and the continuous development and urbanization of these regions combined with more strict safety requirements. These factors will lead to the construction of a series of new hydraulic structures around the world. In addition, existing hydraulic structures will be renovated after reaching the end of the envisaged design lifetime, and/or due to the previously described modification of load conditions and/or safety standards. Wave loads play a significant role in the stability of these hydraulic structures and a knowledge gap was identified regarding the characterization of confined wave impacts acting on vertical hydraulic structures with overhangs. For this type of wave impact, no validated load prediction method or design approach was previously available. This research addresses this knowledge gap, providing an experimentally calibrated load prediction model and a design approach for characterizing confined wave impact loads acting on vertical hydraulic structures with overhangs.
In a GEDM, a local field is enhanced into a nonlocal field and the nonlocal field is the output of the enhancement. Displacement-based GEDM enhances local displacement fields into nonlocal displacement fields instead of enhancing local equivalent strain fields into nonlocal equivalent strain fields, as in strain- and stress-based GEDMs. The key ingredient of the proposed extension is a transient internal length scale that tends to zero as the damage parameter tends to one. Various expressions for this transient internal length scale are proposed, formulated, and discussed. Also, the need for correction on the gradient activity operator in mode-II failure is demonstrated. To this end, an anisotropic formulation of the displacement-based GEDM is formulated and used to control the material failure mechanism in mode-II failure. Examples of the new model regularization capabilities are compared to the original/classical displacement-based GEDM with a constant internal length scale.
Despite the existence of spurious damage growth in mode-I failure for two-dimensional problems (4-point bending beam example) for both the transient isotropic and the transient anisotropic versions, spurious damage growth is eliminated for mode-I failure in one-dimensional problems. Also, the proposed transient isotropic model eliminates spurious damage growth for mode-II failure in two-dimensional problems. However, the damage migration issue is not solved. This issue is addressed by the implementation of the transient anisotropic model. The transient anisotropic model has no damage spreading and damage migration issues in mode-II failure, and realistic damage initiation and propagation are guaranteed. These features enable the representation of failure patterns, i.e., thin crack-like shear bands. In practical terms this leads to a non-broadening shear fracture process zone in the wake of the crack tip, addressing one of the main criticisms of existing gradient damage models. Applicability of the proposed models is demonstrated by representative one- and two-dimensional examples.","Gradient-enhanced damage model; transient; anisotropic; displacement smoothing","en","doctoral thesis","","","","","","","","","","","Applied Mechanics","","",""
"uuid:caa3cf7f-d438-40ea-9c31-0033f8c38b1f","http://resolver.tudelft.nl/uuid:caa3cf7f-d438-40ea-9c31-0033f8c38b1f","Spectral Modelling of Coastal Waves over Spatial Inhomogeneity","Akrish, G. (TU Delft Environmental Fluid Mechanics)","Reniers, A.J.H.M. (promotor); Zijlema, Marcel (copromotor); Smit, Pieter (copromotor); Delft University of Technology (degree granting institution)","2023","Spectral wave models are widely used for wave prediction over large spatio-temporal scales. Over global scales, spectral models (e.g. WAM and WAVEWATCH III) are used regularly by environmental modelling centers, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) and the American National Center for Environmental Prediction (NCEP), in order to support human activity at sea. Along the coasts, practitioners rely on spectral models which are designated to the coastal environment (e.g. SWAN and TOMAWAC) for applications such as coastal hazard assessment, future coastal development, planning of defense strategies for coastal safety, evacuation planning of coastal communities and so forth.
An important property that characterizes the spectral approach and enables its applicability for large scales is efficiency. This property is achieved owing to the simple wave description that underlies its formulation. Specifically, the spectral approach represents ocean wave fields as quasi-Gaussian, quasi-homogeneous and quasi-stationary processes. These convenient statistical properties provide a full statistical description of wave fields based on the energy spectrum alone, and therefore make it possible to describe the waves in the ocean in a complete statistical sense through the solution of a single transformation equation - the energy balance equation.
The validity of this statistical modelling framework is based on the weak (in the mean) wave forcing and the dispersion effects. These two agents provide reasonable justifications that the deviation from the assumed statistical properties (i.e. Gaussianity, homogeneity and stationarity) is kept negligible in the course of wave evolution. While these arguments are reasonable in the open ocean (where dispersive effects are strong and wave processes are characterized by large scales), they become somewhat loose for the coastal environment (where wave dispersion weakens and wave processes develop rapidly). Evidently, processes like medium-induced wave interferences and energy exchanges due to shallow water nonlinearity are not properly represented under this statistical framework.
This study sets out with the aim of advancing the spectral modelling capabilities in coastal waters by allowing the development of inhomogeneous and non-Gaussian statistics. To this end, the effort of this work is directed to three different parts, concerning three principal issues. The first part considers the formal connection between the classical deterministic formulation (e.g. the Euler equations) and the statistical formulation given by the so-called Wigner-Weyl formulation (a statistical framework that includes the information of wave interferences and reduces to the energy balance equation when interference effects are negligible). The second part aims to generalize the Wigner-Weyl formulation (which presently accounts for wave-bottom interactions) to allow for the interaction of waves and ambient currents. Finally, the third part is devoted to the investigation of the quadratic modelling approach which defines the starting point for the present phase-averaged formulation of shallow water nonlinearity.
The objective of the first part of this study is achieved by showing the equivalence between a formal definition of the Dirichlet-to-Neumann operator of waves over variable bathymetry and the Weyl operator of the dispersion relation. This equivalence opens the door to a formal use of Weyl calculus, based on which the Wigner-Weyl formulation is formally derived. This result establishes the desired formal link between the deterministic formulation for water waves and the statistical formulation given by the Wigner-Weyl formulation, which includes the energy balance equation as a statistically well-defined limiting case. In the second part of this study, the Wigner-Weyl formulation for water waves is extended to account for wave-current interactions. The outcome is a generalized action balance model that is able to predict the evolution of the wave statistics over variable media, while preserving statistical contributions due to wave interferences. Comparisons with results of the SWAN model and the REF/DIF 1 model through several examples verify model performance and demonstrate that retention of interference contributions is essential for accurate prediction of wave statistics in shear-current-induced focal zones. Finally, the third part of this study explores the predictive capabilities of the quadratic approach. This is performed by analyzing the nonlinear properties of six different quadratic formulations, three of which are of the Boussinesq type and the other three are referred to as fully dispersive formulations. It is found that while the Boussinesq formulations predict reliably the nonlinear development of coastal waves, the predictions by the fully dispersive formulations tend to be affected by false developments of modulational instability. As a result, the predicted fields by the fully dispersive formulations are characterized by unexpectedly strong modulations of the sea-swell part and associated unexpected infragravity response. 
Additionally, this part of the study also presents an attempt to push the limits of the predictive capabilities of the quadratic approach. The outcome is the model QuadWave1D: a fully dispersive quadratic model for coastal wave prediction in one dimension. Based on a wide set of examples (including monochromatic, bichromatic and irregular wave conditions), it is found that the new formulation presents superior forecasting capabilities of both the sea-swell components and the infragravity field.
In summary, the overall effort of this study provides an additional step toward the broader goal of efficient and accurate spectral modelling capabilities of coastal waves. This step includes strengthening the theoretical foundations of the spectral approach, improving the spectral description of wave transformation over spatial inhomogeneity and helping to minimize the errors associated with the spectral formulation of shallow water nonlinearity. Ultimately, this study also points out and prepares the ground for additional required model developments.","Spectral modelling; Statistical wave modelling; Quadratic modelling; Coastal waves; Wave interference; Wave nonlinearity; Wigner distribution; Weyl rule of association","en","doctoral thesis","","978-94-6366-691-6","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:dd3bff55-b684-47cc-96f3-6ff109d345a0","http://resolver.tudelft.nl/uuid:dd3bff55-b684-47cc-96f3-6ff109d345a0","Analysis of thermoplastic composites and conduction welded joints","Tijs, B.H.A.H. (TU Delft Aerospace Structures & Computational Mechanics)","Bisagni, C. (promotor); Turon Travesa, A. (promotor); Delft University of Technology (degree granting institution)","2023","Thermoplastic composites enable new manufacturing techniques such as conduction welding to make the aviation industry more sustainable, while at the same time, provide great benefits to cost-efficient high-volume production. One of the benefits of welding is that it reduces the amount of mechanical fasteners required. Fastener-free joining also poses new challenges, because the performance of these highly loaded structural joints relies heavily on the performance of the thermoplastic polymer matrix. Furthermore, there is currently not much understanding of the mechanisms involved in thermoplastic welded joint failure, and the numerical and experimental methodologies, originally developed and validated on thermoset composites, have not yet been fully assessed for thermoplastic composites. On top of that, the process conditions to manufacture these new structures may have a significant influence on the mechanical performance of the material and can thus play an important role in the design of thermoplastic composite structures.
The objective of this research is to analyse matrix dominated failure of thermoplastic composites and conduction welded joints and to develop both experimental and numerical methodologies to support the design of thermoplastic composites structures. The research addresses important linkages between the three main pillars of Manufacturing, Experimental and Numerical analysis....","Thermoplastic composites; conduction welding; virtual testing; continuum damage model; cohesive zone model; interlaminar; fracture toughness; fiber-bridging; characterization","en","doctoral thesis","","978-94-6473-101-9","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:b2d29368-cf59-4c2e-be32-60e794212d0a","http://resolver.tudelft.nl/uuid:b2d29368-cf59-4c2e-be32-60e794212d0a","Highly Efficient Inductive Power Transfer: Variable Compensation for Misalignment Tolerance and Voltage/Current Doubler for Battery Interoperability","Grazian, F. (TU Delft DC systems, Energy conversion & Storage)","Bauer, P. (promotor); Dong, J. (copromotor); Delft University of Technology (degree granting institution)","2023","Wireless charging has the potential to speed up the transition to electric vehicles (EVs) because it is intrinsically a user-friendly technology. Furthermore, it is essential when charging completely autonomous EVs, and it enables the charging of EVs in motion without using overhead cables. The most common technology used in EV wireless charging is inductive power transfer (IPT) with magnetic resonance coupling. This is based on the magnetic field exchange between coupled coils connected to compensation networks to minimize the circulating reactive power. IPT systems have two main variables influencing their operation: the coupling factor between the coils depending on their alignment, and the equivalent load based on the battery charging profile.
The coils' alignment and load operating conditions might vary when considering different applications. Nevertheless, all IPT systems share the same challenges: ensuring a highly efficient power transfer, guaranteeing that the intentionally radiated electromagnetic field (EMF) is both safe for the living beings in the surroundings and lower than the recommended electromagnetic compatibility (EMC) limits, and providing interoperability between IPT charging stations and EVs produced by different manufacturers. This thesis explores these matters. Accordingly, the content is divided into three main parts: conventional inductive power transfer systems, variable compensation, and voltage/current doubler (V/I-D) converter.","Power Electronics; Wireless Power Transfer; Wireless charging; Resonant converter; Inductive power transfer; Electric vehicle (EV)","en","doctoral thesis","","978-94-6483-055-2","","","","","","2024-01-01","","","DC systems, Energy conversion & Storage","","",""
"uuid:7e7749e5-f5bc-4fba-be46-c896cad7c4c8","http://resolver.tudelft.nl/uuid:7e7749e5-f5bc-4fba-be46-c896cad7c4c8","Assessing yeast proteome dynamics using high-resolution mass spectrometry","den Ridder, M.J. (TU Delft BT/Industriele Microbiologie)","Daran-Lapujade, P.A.S. (promotor); Pabst, Martin (copromotor); Delft University of Technology (degree granting institution)","2023","Mass spectrometry-based cellular proteomics has taken a prominent role in many fields of research, including life sciences, biotechnology and microbial ecology. Still, advanced mass spectrometry-based proteomics methods are not routinely applied to microbes. The cell factory and model organism Saccharomyces cerevisiae, for example, is very well characterized, however, many research questions surrounding proteome dynamics under different growth conditions, and the regulation of its complex metabolic network via post-translational modifications, remain to be answered. Therefore, the aim of this thesis was to establish and apply optimized protocols to enable the large-scale quantitative analysis of yeast proteome dynamics under highly controlled conditions. In addition, a novel mass spectrometric approach was established to quantify the degree of modification of metabolic enzymes, that allows for a better understanding of their role in metabolic regulation.","Proteomics; Yeast; Mass spectrometry","en","doctoral thesis","","","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:8c0d814c-8cdd-49d8-ae82-2042fd31e324","http://resolver.tudelft.nl/uuid:8c0d814c-8cdd-49d8-ae82-2042fd31e324","Adsorption of organic micropollutants by zeolite granules and subsequent ozone-based regeneration","Fu, Mingyan (TU Delft Sanitary Engineering)","van der Hoek, J.P. (promotor); Heijman, Sebastiaan (promotor); Delft University of Technology (degree granting institution)","2023","Organic micropollutants (OMPs) that occur in the aquatic environment at trace levels are an emerging concern for society. Domestic wastewater is an important source. OMPs end up in surface water and groundwater via conventional municipal wastewater treatment plants (WWTPs), penetrating into drinking water. Comprising pharmaceuticals, personal care products, pesticides, industrial chemicals, and other compounds, OMPs are persistent in water and can lead to adverse effects on human health under long-term exposure. As WWTPs are not designed to remove OMPs, various post-treatment technologies have been developed to remove OMPs from wastewater effluents over the last decades, including activated carbon adsorption, ozonation, and membrane filtration. However, the performance of these technologies is significantly influenced by natural organic matter (NOM). In the combined application of ozonation and activated carbon adsorption, the operational costs and the environmental impact are relatively high because of the off-site thermal treatment of the exhausted carbon. The AdOx technology aims to establish a new barrier by applying sequential adsorption and oxidation to remove OMPs from municipal wastewater effectively. As an alternative adsorbent to activated carbon, zeolite possesses uniform pores (0.6-1.0 nm) that appropriately match the molecules of OMPs. This uniform framework can potentially exclude the large molecules of most NOM fractions in wastewater. 
This innovative technology, selective adsorption of OMPs on zeolite granules followed by on-site ozone-based regeneration of the granules loaded with OMPs, can lead to the next generation of OMPs removal, characterized by high removal efficiencies, low costs, and low environmental impacts....","adsorption; organic micropollutants; ozone; regeneration; transformation products; wastewater; zeolite granules","en","doctoral thesis","","978-94-6366-689-3","","","","","","","","","Sanitary Engineering","","",""
"uuid:28c38358-a1ca-423c-97fb-841471138e56","http://resolver.tudelft.nl/uuid:28c38358-a1ca-423c-97fb-841471138e56","Developing Data-enabled Design in the Field of Digital Health","Jung, Jiwon (TU Delft Methodologie en Organisatie van Design)","Snelders, H.M.J.J. (promotor); Kleinsmann, M.S. (promotor); Delft University of Technology (degree granting institution)","2023","The research question of this doctoral thesis is: What can be the future impact of design (activities) in digital health, given the rise of data collection and analysis technologies? I answered this question on three knowledge levels: design vision, knowledge-generating approach, and design tool. In Chapter 1, I envision design activities for the collective computing era (an upcoming modern computing era with complex systems of massive social interaction through various connected computing devices) that data collection and analysis technologies are a part of. Based on the literature review and informants’ interviews, I developed a design vision that demonstrates the changes posed in design activities (design tasks, processes, and the designer’s role) due to the upcoming collective computing era, and provides guidance for adopting the changes. Consequently, the vision proposes that design tasks in the collective computing era move towards designing ‘complex system(s)’ and testing these within ‘society as a lab’. The vision’s guidance states that designers can approach these tasks by addressing communities and engaging with their data. In terms of the design process, the vision claims the ‘coexploration’ of the design problem and solution spaces. To tackle such change, the guidance suggests designers: the flexible combination and analysis of mixed data, working on social forces at a system level, and developing through multiple soft launches with modular designs. 
Finally, the designer’s role becomes conducting an ‘accountable implementation.’ The vision recommends approaching accountable implementation by incorporating a transdisciplinary vision of the value and control of the design output....","Design vision; Knowledge-generating approach; Design tool; Design for health; Digital health; E-health; Data-enabled design; Design methodologies; Design method; Machine learning for design; Design for Healthcare","en","doctoral thesis","","978-94-641-9760-0","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:0d49cb3e-6dd8-4a9e-abc6-b847de938aea","http://resolver.tudelft.nl/uuid:0d49cb3e-6dd8-4a9e-abc6-b847de938aea","How a changing climate is changing behavior: household adaptation to floods","Noll, B.L. (TU Delft Policy Analysis)","Filatova, T. (promotor); Need, Ariana (promotor); Delft University of Technology (degree granting institution)","2023","Floods appear in many of the world's oldest stories (i.e. Noah and the Arc in the Abrahamic religions, Manu in Hinduism, and the Gun-Yu myth in Chinese mythology). When observed historically, they often have an element of mysticism about them, symbolizing eradication and rebirth. In the present, however, there is little that is mystical about the devastation brought on by floods as they cause more destruction annually than any other hazard. With much of the modern development taking place along the coast or near riverways, assets and livelihoods are increasingly concentrated in exposed areas. By-products of climate change such as sea level rise and extreme precipitation events increasingly devastate these regions; with the projection that the risk of floods will continue to increase in the future.
Top-down, government-led adaptation to floods on its own cannot contend with growing risk, rendering household participation essential. Governments, risk modelers, scientists, and other interest groups (i.e. NGOs) need a solid understanding of household behavior in order to formulate strategies and engage stakeholders across scales to address climate-induced risks. This dissertation devotes its attention to better understanding households' perceptions, intentions, and behavioral drivers and their dynamics, concerning floods in various social, geographical, cultural, and environmental contexts. More concretely, the principal research objective of this dissertation is:
To progress toward an understanding of how households perceive, are affected by, and adapt to floods in various contexts over time.
Following a comprehensive review and analysis of prior empirical research on household flood adaptation, this dissertation presents the analysis of a panel survey carried out between 2020 and 2021 aimed at collecting data to tackle the aforementioned objective. Focusing on large urban centers in the United States, China, Indonesia, and the Netherlands, I use various statistical techniques and methods to analyze the survey data and study a range of aspects, from household perceptions of floods and climate change to reported adaptation behavior. The survey solicits information on 18 adaptation measures that range from inexpensive actions that do not require considerable effort (i.e. having an emergency preparedness kit, emergency coordination with one's neighbor, etc.) to costly measures that require a substantial time investment (i.e. elevating one's home, waterproofing one's windows, etc.).
In analyzing how household adaptation decisions are influenced, depending on the type of measure and the context in which the household resides, this dissertation offers insight into which socio-behavioral drivers and barriers of household adaptation are generic and which may vary depending on the institutional and environmental conditions. A household's perceived ability to cope, and the emotion 'worry', play a substantial role in driving household adaptation intention. In contrast, the financially calculated risk-based drivers - the perceived probability of a flood happening and the perceived damage should a flood occur - generally have a more subdued effect on household adaptation intentions. This is related to the fact that not all households have sufficient capacity or awareness to subjectively assess the probability and damage of a potential flood. Individual risk-uncertainty - a trait more frequently found in populations historically more vulnerable to floods (i.e. women and the lower educated) - has a large detrimental effect on households' intention to pursue flood adaptation measures.
While internal perceptions are critical to consider, external factors can have an equally potent role in affecting household adaptation behavior. I examine the effect of context at multiple scales in this dissertation, assessing the role of social expectations, perceptions of government measures, and national culture on household adaptation decisions. Households use their observations of what others (i.e. their social network, the government) are doing with respect to flood adaptation to inform their decisions. The degree to which both external and internal factors influence household adaptation decisions can differ based on the cultural and geographical context. Various factors have a weaker or stronger influence and at times even the opposite effect on adaptation behavior, depending on where the household resides.
While internal and external perceptions are requisite considerations in understanding household behavior, it is likewise crucial to account for experiences and the co-benefits of various household adaptive actions. Prior flood experiences and the benefits of taking adaptation measures together are additional key considerations when studying household flood adaptation, due to the economic benefits that can arise from undertaking measures jointly. Furthermore, prior experience with floods can motivate adaptation behavior, but substantial financial damage from a flood impedes a household's adaptation intention, as its focus is on recovery, not adaptation.
The findings in this dissertation are of use to scientists, modelers, risk specialists, and policymakers, whether they are designing models, a communication strategy, or a policy aimed at encouraging household action. With the effects of climate change increasingly affecting communities across the globe, households are having to contend with hazards that are more extreme and frequent than in living memory. Unless immediate action is taken across scales, the harrowing effects of climate change are expected to increasingly threaten extensive populations globally. This dissertation provides insights into how households think, perceive, behave, and learn over time concerning one of the most deadly and damaging hazards: floods.
However, in the field of satellite remote sensing of agriculture, waterlogging has so far received little attention. Previous related research focused on remote sensing of inundation or monitoring surface water, but waterlogging in agriculture is an overlooked subject. Little is known about how waterlogging manifests in (irrigated) agriculture and how well different remote sensing techniques can detect and monitor it. Therefore, this thesis aimed to extend knowledge on how waterlogging influences agricultural monitoring with satellite remote sensing and, ultimately, to take the first steps towards monitoring waterlogging with satellite remote sensing.
The results presented revolve around a sugarcane plantation in Xinavane, Mozambique. The plantation served as a case study to demonstrate different satellite remote sensing observations in the context of waterlogging. First, the case study is presented and a description is provided of the ground data collected. Remote sensing evaporation estimates illustrate the high demand for irrigation water: vast quantities of water are needed to sustain sugarcane crop growth in the semi-arid environment of the plantation.
Next, a thorough literature review and the case study demonstrate that waterlogging is a major issue burdening crop productivity. An assessment of different remote sensing evaporation algorithms showed that currently available evaporation estimates interpret waterlogging stress as a need to irrigate. This implies that, before evaporation estimates from satellite data can play a role in optimizing field-scale water use in irrigated areas, evaporation algorithms must be able to identify water stress only in the case of a water deficit in the root zone. Throughout the chapter, the presence of waterlogging or the crop response to waterlogging is illustrated in different satellite remote sensing observations. In sum, the results imply a need to integrate observations from multiple sensors and potentially ancillary data (e.g. DEMs) to unravel how to monitor waterlogging with satellite remote sensing.
In search of the influence of waterlogging on agricultural monitoring, the research continued by comparing optical vegetation indices, radar vegetation indices, and sugarcane yield over the growing season in the plantation. The analysis gave an interesting and unexpected result. Contrary to expectation, the results showed a negative correlation between the Cross Ratio (CR) and sugarcane yield over the growing season. A modeling study proved that the negative correlation results from a change in the sugarcane's internal composition, which affects the dielectric constant of the observed sugarcane canopies. The chemical composition of plant water in sugarcane changes over the growing season: as a consequence of sucrose accumulation in the stalk, water is increasingly bound to sucrose, and this process lowers the dielectric constant.
To follow up, active and passive microwave observations, optical vegetation indices, and production data are evaluated in different seasons. In addition to a temporal change in sucrose and moisture, the results showed that the vertical sucrose-moisture distribution also changes over the growing season. Therefore, the vertical distribution of sucrose and plant moisture influences the dielectric constant and, hence, the backscattered signal. The results highlight that the VV backscatter responds to the stalk biomass, which is also the reservoir of sucrose in the sugarcane crop.
Finally, the influence of waterlogging on Sentinel-1 backscatter was detected through benchmarking with passive microwave observations, optical vegetation indices, and production data in a period when waterlogging was reported. Despite a thick sugarcane canopy, an increase in the VH and VV polarizations was observed as a result of waterlogging. The increase was present at all stages of the growing season and was highest in the VH backscatter. Also, the effect of waterlogging is translated through to the CR, which proves that CR can play a role in the discrimination of waterlogging.
The results presented in this thesis help to further understand the influence of waterlogging on agricultural monitoring. This work also shows that, to correctly interpret irrigation estimates and crop development, waterlogging and sucrose development need to be flagged or otherwise considered during the growing season. Radar observations from Sentinel-1 in particular appeared to be useful in monitoring waterlogging and sucrose development.","Waterlogging; Agricultural monitoring; Sugarcane; Irrigated agriculture; Sentinel-1 backscatter","en","doctoral thesis","","","","","","","","","","Water Resources","","",""
"uuid:6d68b9c6-4c0f-427a-9566-78ad9362d338","http://resolver.tudelft.nl/uuid:6d68b9c6-4c0f-427a-9566-78ad9362d338","Platform Ecosystems: Exploring Participation and Performance","Sobota, V.C.M. (TU Delft Economics of Technology and Innovation)","van Beers, Cees (promotor); Ortt, J.R. (promotor); van de Kaa, G. (promotor); Delft University of Technology (degree granting institution)","2023","Platforms are often seen as the most influential organizational form of our time. Harnessing the strengths of external parties allows for unprecedented innovation (e.g., Facebook, iOS). Platforms aggregate and match participants in fragmented markets (e.g., Craigslist, Marktplaats, Airbnb). As such, platforms often become the epicenters of industries and have often replaced incumbents. What leads to market power and growth of platforms? Understanding this is important if we want to create platforms where they are beneficial to the economy and society and counteract or regulate them where they are harmful. This dissertation investigates how platform participation and platform performance are related to each other. Participation refers to installing and using a technology. From the economics perspective, performance includes mostly financial indicators such as revenues or profit. However, it can also concern other indicators, for instance, the participation of complementors or users. Under network effects, current participation increases the platform’s value to future users, which is closely linked to performance. This dissertation consists of four chapters that together address the main research question. It draws on evolutionary economics, platform economics, and strategic management. It consists of conceptual (Chapters 2, 5, and parts of Chapter 3) and empirical studies (Chapters 3 and 4).","","en","doctoral thesis","","","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:8cabdf81-f503-463f-89d7-7a2874d3f876","http://resolver.tudelft.nl/uuid:8cabdf81-f503-463f-89d7-7a2874d3f876","Moral Values, Behaviour, and the Self: An empirical and conceptual analysis","van den Berg, T.G.C. (TU Delft Transport and Logistics)","Chorus, C.G. (promotor); Kroesen, M. (promotor); Corrias, L.D.A. (copromotor); Delft University of Technology (degree granting institution)","2023","","Moral Values; Moral Behaviour; Moral Self; Narrative Identity; Moral Foundations Theory; Moral Psychology; Phenomenology; Ricoeur; International Crimes","en","doctoral thesis","","978-94-6483-127-6","","","","","","","","","Transport and Logistics","","",""
"uuid:55b7c07d-9b1b-4e0f-b935-cbfa434e3f9a","http://resolver.tudelft.nl/uuid:55b7c07d-9b1b-4e0f-b935-cbfa434e3f9a","Evaporation of the miombo woodland of southern Africa: A phenophase-based comparison of field observations to satellite-based evaporation estimates","Zimba, H.M. (TU Delft Water Resources)","Savenije, Hubert (promotor); Coenders-Gerrits, Miriam (copromotor); Delft University of Technology (degree granting institution)","2023","Through precipitation retention and evaporation (by both interception and transpiration), woodlands play a significant role in the global moisture cycle. Evaporation is the largest, but at the same time, the most difficult flux to observe in a woodland. Accounting for woodland evaporation is important for hydrological modelling for the efficient development and management of water resources. Assessing evaporation is a challenging undertaking that involves the use of a wide range of equipment and requires skilled personnel. Much work has been conducted on assessing evaporation in agricultural crops. Even satellite data-based models are largely structured to assess evaporation in agricultural crops to the exclusion of understanding evaporation dynamics in natural woodlands, especially in African ecosystems. However, evaporation in woodland surfaces accounts for a significant portion of the water cycle over the terrestrial land mass. Understanding the characteristics of woodland ecosystem evaporation like interception and transpiration, is key to monitoring climate impact on woodland ecosystems, which is important for hydrological modelling and the management of water resources at various scales. One of the key aspects to enable this understanding is the knowledge of woodland phenological interaction with climate variables and the seasonal environmental regimes. 
“Vegetation phenology” refers to the periodic biological life cycle events of plants, such as leaf flushing and senescence, and the corresponding temporal changes in vegetation canopy cover. Solar radiation, temperature and water availability (i.e., rainfall and soil moisture) are some of the key environmental variables that influence plant phenology. The attributes of woodland phenology, solar radiation, temperature and water availability differ across the diverse ecosystems globally and therefore require better understanding at a more local or regional level. Yet, evaporation of natural woodlands, especially in African ecosystems, with respect to phenological phases, is poorly characterised. This is largely because phenological studies have mainly focused on northern mid-latitude regions to the exclusion of other regions like the miombo of southern Africa. To increase the predictive power of hydrological models, it is important to account for the interaction of woodland phenology with climate variables over the seasons and to characterise evaporation. This thesis aims at understanding miombo woodland evaporation as a consequence of the vegetation phenological interaction with environmental and hydrological variables across seasons. Based on information in the public domain, this study is the first independent field observation data-based characterisation of actual evaporation of the miombo woodland. The miombo is a heterogeneous woodland of the genus Brachystegia, with the dominant species in the study location being Bauhinia petersiana, Brachystegia longifolia, Brachystegia boehmii, Brachystegia spiciformis, Julbernardia paniculata, Pericopsis angolensis, Uapaca kirkiana and Uapaca sansibarica. Unique phenological attributes are the simultaneous leaf fall, leaf flush and leaf colour changes that normally occur in the dry season between May and October. 
Most miombo woodland species are broad leaved and have developed dry season coping mechanisms such as deep rooting (the capacity to access deep soil moisture and ground water) and vegetation water storage. The canopy closure varies across the miombo woodland strata and is influenced by several factors including rainfall, soil type, soil moisture and nutrients, species diversity and temperature. These phenological attributes are species dependent, with varied responses to phenological stimuli. This study sought to answer the question of the role of the phenology of the miombo woodland in its evaporation dynamics. The thesis also endeavoured to show how phenology potentially affects satellite-based evaporation estimates of the miombo woodland. The Luangwa Basin in southern Africa, a basin largely covered by miombo woodland, was used as the study area. This basin was chosen because it contains both dry miombo woodland and wet miombo woodland within the Zambezian miombo woodland, which is the largest stratum of the miombo woodland. Furthermore, the Luangwa Basin is located in Zambia, which is described as possibly the country with the highest diversity of trees and is said to be the centre of endemism for Brachystegia, with 17 species.
To answer the questions on the importance of the phenology of the miombo woodland for its evaporation dynamics, the study used a coupled approach applying both satellite data and field observations. Phenological changes of the miombo woodland across seasons were assessed using satellite-based data: the normalised difference vegetation index (NDVI) and leaf area index (LAI). Satellite-based land surface temperature (LST) and the normalised difference infrared index (NDII) were used as proxies for the climate variables canopy temperature and canopy vegetation water content. Point scale field estimates of evaporation across three different phenophases of the miombo woodland were obtained using the Bowen ratio distributed temperature sensing (BR-DTS) system. By measuring profiles of air temperature and wet bulb temperature, the evaporation could be estimated via the Bowen ratio method (BR-DTS). Six satellite-based evaporation estimates were compared across different phenophases of the miombo woodland. This was meant to observe the phenophases in which significant differences in the trend and magnitude of satellite-based evaporation estimates occurred. The general water balance approach was used to assess annual actual evaporation at basin scale. Consequently, satellite-based evaporation estimates were compared to the BR-DTS-based evaporation estimates at point scale and the water balance-based evaporation at basin scale. Results, based on satellite data, show that the phenology of the miombo woodland, i.e., changes in woodland canopy cover and photosynthetic activity, has a season-dependent correlation with climate variables. Woodland canopy cover, across phenophases and seasons, appears to be influenced more by water than by temperature. This may explain the particular species-dependent buffering mechanisms during water-limited conditions, i.e., leaf shedding, deep rooting systems with access to ground water, and vegetation water storage mechanisms. 
In agreement with available literature in the public domain, it appears there is little variation in canopy cover/closure (i.e., proxied by LAI) in wet miombo woodland in the dry season. At the wet miombo woodland site in Mpika, Zambia, the BR-DTS observations showed that, across the different phenophases, the actual evaporation trend and magnitude appeared to be more associated with the available energy than with changes in the woodland canopy cover. Further analysis showed that net radiation has a greater influence on actual evaporation, as it accounted for more of the variation in actual evaporation than changes in the woodland canopy cover (i.e., NDVI). The energy partitioning showed that available energy expenditure varied with phenological season. In the green-down phenophase during the cool dry season, the available energy was largely partitioned as sensible heat flux. As the temperature and net radiation began to increase in the early dormant phenophase during the late cool dry season (July-August), the available energy appeared to be equally partitioned between sensible and latent heat flux. In the late dormant phenophase during the early warm pre-rainy season (i.e., September), available energy was largely partitioned as latent heat flux. In the green-up phenophase during the late warm pre-rainy season (i.e., October) and early rainy season (i.e., November to December), the available energy was largely partitioned as latent heat flux. During rain days the available energy appeared to be equally partitioned between latent and sensible heat flux. It appears that as the net radiation and canopy cover increased, the available energy was increasingly partitioned as latent heat flux during the dry season. A remarkable observation was the continued rising trend of actual evaporation even during the lowest woodland canopy cover period in August and September. 
The rising trend in actual evaporation during the dry season may be due to developed dry season water stress buffering mechanisms such as deep rooting with access to moisture in deep soils and possibly access to ground water. The trend of the BR-DTS-based actual evaporation of the miombo woodland in the dry season points to the interaction between hydro-climate variables (i.e., precipitation-linked soil moisture and net radiation) and the plant phenology. When compared to field observations at point scale, all satellite-based evaporation estimates underestimated actual evaporation of a wet miombo woodland in the dry season and part of the early rainy season. Substantial underestimations were in the dormant and the green-up phenophases. Additionally, except for WaPOR, the trends of all other satellite-based evaporation estimates differed from that of the field observations. Plausible explanations for the behaviour (trend and magnitude) of satellite-based evaporation estimates in the dry season include the non-integration of soil moisture directly into the modelling of transpiration and the lack of optimisation of the rooting depth. For instance, the use of proxies such as NDVI and LST for soil moisture in surface energy balance models, such as SSEBop, results in uncertainties, as the proxies are unable to take into account other factors that influence the sensible heat flux. In MOD16, the use of relative humidity and vapour pressure deficit as proxies for soil moisture may be a source of uncertainty in estimating transpiration. On the other hand, it has been observed that direct integration of soil moisture in the MOD16 algorithm appeared to improve the accuracy of actual evaporation estimates. This may explain why WaPOR, which integrates soil moisture stress in its algorithm, appeared to have a similar trend to the field observations and also had higher estimates of actual evaporation compared to the other satellite-based evaporation estimates. 
It has also been shown that optimising the rooting depth improves the accuracy of transpiration estimates in vegetation with a dry season. Most miombo woodland species are deep rooting, with access to deep soil moisture and potentially groundwater. Therefore, direct integration of soil moisture into the algorithms of the satellite-based evaporation estimates and optimisation of the rooting depth are likely to improve the accuracy of actual evaporation estimates for the miombo woodland.
The phenophase-based comparison at pixel scale in dry miombo woodland and wet miombo woodland and at the Luangwa Basin miombo woodland scale showed similar results. In all three scenarios, substantially high coefficients of variation in actual evaporation estimates among satellite-based evaporation estimates were observed in the water-limited, high temperature and low woodland canopy cover conditions of the dormant phenophase. The coefficients of variation in actual evaporation estimates were also substantially high in the green-up phenophase at the boundary between the dry season and the rainy season. The lowest coefficients of variation in actual evaporation estimates were observed in the water-abundant, high temperature, high leaf chlorophyll content and high woodland canopy cover conditions during the maturity/peak phenophase. The high coefficients of variation in actual evaporation estimates, among satellite-based evaporation estimates, in the dormant and green-up phenophases point to the challenge of estimating the actual evaporation of the miombo woodland in the dry season and early rainy season. The same scenario emerged as was observed at point scale, with reference to field observations, in which satellite-based evaporation estimates that directly integrate soil moisture in their algorithms appeared to have higher estimates of actual evaporation in the dormant phenophase in the dry season. For instance, FLEX-Topo and WaPOR integrate soil moisture in their algorithms. Compared with each other, FLEX-Topo and WaPOR appeared to have no statistically significant (p-value > 0.05) differences in their trends and mean estimates of actual evaporation in the dormant phenophase in the dry season. 
Compared to FLEX-Topo and WaPOR, the other four satellite-based evaporation estimates, GLEAM, MOD16, SSEBop and TerraClimate, showed statistically significant (p-value < 0.05) differences in the trend and mean estimates of actual evaporation in the dormant phenophase in the dry season. Considering the canopy phenology and the associated physiological adaptation of the miombo woodland plants in the dry season, it appears that the direct integration of soil moisture in the algorithms and optimising the rooting depth are likely to improve the accuracy of the satellite-based evaporation estimates. In the maturity/peak phenophase(s) during the mid-rainy season, compared to other satellite-based evaporation estimates, MOD16 appeared to have significantly (p-value < 0.05) higher estimates of actual evaporation. A plausible explanation for this observation could be that the interception module of MOD16 is more responsive to the miombo woodland phenology. The wet miombo woodland intercepts between 17 and 20 percent of rainfall annually.
Compared to the general annual water balance-based actual evaporation, all six satellite-based evaporation estimates underestimated actual evaporation of the Luangwa Basin. The implication of this observation is that satellite-based evaporation estimates likely underestimate evaporation even in non-miombo woodlands, such as the mopane woodland, that are also part of the larger Luangwa Basin vegetation landscape. However, for a comprehensive overview of the performance of the satellite-based evaporation estimates, there is a need for vegetation-type and land-cover-type based assessments of actual evaporation for the Luangwa Basin. At both point and basin scale-based assessments, there was a negative linear relationship between the spatial resolution of satellite-based evaporation estimates and the estimated actual evaporation: satellite-based evaporation estimates with fine spatial resolutions showed lower underestimates compared to those with coarser resolutions. However, at both assessment scales, the linear relationships between the spatial resolutions and the evaporation estimates were statistically insignificant (i.e., p-value > 0.05). The reason for this outcome is that some satellite-based evaporation estimates with relatively coarser spatial resolutions, i.e., SSEBop at both point and basin scale and TerraClimate at basin scale, underestimated less than MOD16, which had a finer spatial resolution. Furthermore, at basin scale a coarser spatial resolution estimate, FLEX-Topo, and a finer spatial resolution estimate, WaPOR, showed similar magnitudes of actual evaporation in the dormant phenophase in the dry season. 
The implication of this observation is that other factors (i.e., heterogeneity in the landscape, model structure, processes and inputs) influence the estimated actual evaporation more than the spatial resolution of the satellite-based evaporation estimates. Consequently, it appears that satellite-based estimates at finer spatial resolution, with a structure, processes and inputs that couple canopy transpiration with the root-zone storage, taking into account the vertical upward (beyond 2.5 m) and horizontal moisture fluxes as well as the canopy phenological changes, are likely to provide actual evaporation estimates that reflect the actual conditions of the miombo woodland. This is demonstrated by the WaPOR estimates, which appear to include these aspects in simulating actual evaporation. The field-based actual evaporation assessments were conducted in the wet miombo woodland. It is possible that the phenological responses to changes in hydrological and climate regimes in the drier miombo woodland differ from the observations at the Mpika site. Therefore, there is a need for similar observations in the drier miombo woodland, and for a comparison of the results. However, this thesis has demonstrated the importance of understanding and incorporating the canopy phenology and dry-season physiological adaptation (i.e., deep rooting) of the miombo woodland in modelling actual evaporation. Additionally, for basins with heterogeneous woodland types like the Luangwa, it is important to conduct actual evaporation assessments in the different vegetation types. This is likely to give a more representative understanding of basin-scale evaporation dynamics. Nevertheless, this study has provided a foundation on which other studies can build towards a more comprehensive understanding of the actual evaporation dynamics in this unique woodland.
The development process of an aircraft involves using low-fidelity aerodynamic models at the early stage of the design process to rapidly compute the loads acting on the airframe and to evaluate the efficiency of wing control surfaces. These models are, however, limited to linear flow conditions; transonic shocks and flow separation cannot be simulated with such methods. This requires large safety factors and leads to generally heavier designs. Computational Fluid Dynamics with Reynolds-Averaged Navier-Stokes (CFD-RANS) analysis is capable of better aerodynamic predictions, but the computational time required for such simulations is too long for them to be efficiently included in the sizing process of the airframe. The approach proposed in this thesis aims to combine the accuracy of CFD with fast linear loads estimation. This is achieved by deriving reduced-order models (ROMs) of the aircraft control surfaces and manoeuvre loads from rigid CFD analysis to improve the accuracy of faster but lower-fidelity results where needed. These fast aerodynamic models for the control surfaces also allow rapid control optimisation to evaluate their load alleviation potential.
The thesis starts by introducing and validating the unsteady and non-linear models with 2D examples. Then, it covers the application of these models to a flexible 3D wing. The models are validated against high-fidelity steady and dynamic Fluid-Structure Interaction simulations and show good agreement, with a 5% to 10% error margin in loads and deformations in most of the cases. Finally, a wingbox sizing optimisation is performed with active load alleviation. Choosing to use either the linear or the non-linear aileron model for the GLA alone leads to a 2.5% difference in the wingbox structural weight.
Ralstonia solanacearum and the Soft Rot Pectobacteriaceae, Dickeya solani and Pectobacterium carotovorum. They affect a broad variety of crops, with hosts ranging from potato to flower bulbs, both important cash crops worldwide and particularly in the Netherlands.
An ASTR pilot site located in North Holland was investigated, where tile drainage water (TDW) is collected from a 10 ha agricultural field and infiltrated into a sandy, anoxic, and originally brackish aquifer. The TDW can mix with surface water in which the selected pathogens are regularly detected. ASTR uses separate wells for infiltration and abstraction of the recharged water. This creates a soil passage and forces the water to flow through the porous medium (sand layers) of the aquifer. Water microcosms and column experiments were used to simulate the aquifer processes in the laboratory and to analyse pathogen removal during ASTR.
The results showed that the die-off in the water phase depends on the residence time and ranged from 1.3 to 2.7 log10 after 10 and 60 days, respectively, for R. solanacearum. A subpopulation of the bacteria persisted for a prolonged time at low concentrations, which may pose a risk if the water is recovered too early. However, the soil passage within the aquifer proved to be highly effective in removing the bacteria by attachment (18 log10 after 1 m). Together with the results of dose-response experiments, in which I studied the effect of contaminated irrigation water on potato plants, all results were ultimately combined in a quantitative microbial risk assessment (QMRA). QMRA is a useful (water) management tool to evaluate the treatment steps of water reclamation technologies and support decision-making processes. As a result of this PhD work, ASTR can be considered a natural treatment system to remove bacterial plant pathogens and provide safe irrigation water.","Managed Aquifer Recharge (MAR); Water quality; Irrigation water requirements; Pathogen removal; bacterial transport; Hydrus 1-D; Quantitative microbial risk assessment (QMRA); Dose-response; brown rot; Ralstonia solanacearum; Dickeya solani; Pectobacterium carotovorum; Column breakthrough analysis; Water scarcity","en","doctoral thesis","","978-94-6384-434-5","","","","","","","","","Sanitary Engineering","","",""
"uuid:3c490160-41c0-404e-8df3-88bf417eb9eb","http://resolver.tudelft.nl/uuid:3c490160-41c0-404e-8df3-88bf417eb9eb","Pretreatment of lignocellulosic biomass for acetic acid co-valorization","Jimenez Gutierrez, J.M. (TU Delft BT/Bioprocess Engineering)","Straathof, Adrie J.J. (promotor); van der Wielen, L.A.M. (promotor); Delft University of Technology (degree granting institution)","2023","The use of renewable resources is nowadays a well-established practice and a general policy to address fossil fuel depletion, as well as the continuous increase in greenhouse gas emissions. Several approaches have been adopted, with a growing trend toward developing new technologies that target efficiency, sustainability and feasibility. Because it closes the carbon cycle, biomass has significant potential as a renewable source, and not exclusively for the production of energy. Thus, similar to the traditional refinery, the fractionation and conversion of sources to generate, separate and purify different products are also applicable to biomass. Hence, the concept of biorefinery enables the use of renewable feedstocks to obtain bio-based fuels, chemicals and materials in a greener and eco-friendlier manner. Moreover, lignocellulosic biomass (LCB), used as a second-generation feedstock, offers plenty of opportunities due to features such as availability, price and versatility....","","en","doctoral thesis","","978-90-833109-8-5","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:a54d1bec-00f9-4dfe-ad3a-c8d6645d2b33","http://resolver.tudelft.nl/uuid:a54d1bec-00f9-4dfe-ad3a-c8d6645d2b33","Novel Methods for the Extraction of Galanthamine from Narcissus pseudonarcissus Bulbs","Rachmaniah, O. (TU Delft BT/Environmental Biotechnology)","Witkamp, G.J. (promotor); Verpoorte, Robert (promotor); Choi, Y.H. (copromotor); Delft University of Technology (degree granting institution)","2023","A large number of studies on Narcissus, a member of the Amaryllidaceae family, have been published. In particular, Narcissus species, their alkaloid content, their structures including MS-fragmentation patterns, their preparative extraction, and the analysis of these compounds covering GC-MS, LC-MS, HPLC-DAD, as well as 1H-NMR have been intensively reported. However, aspects of the pre-analytical steps and of extraction in bulk quantities using green alternative solvents have not been widely reported, hence leaving space for investigation. In this thesis, the sustainable production of galanthamine from Narcissus pseudonarcissus cv. Carlton bulbs, a relatively cheap biological left-over matrix from the agricultural flower industry, was investigated within a joint collaboration between the former Process Equipment Laboratory and the Biotechnology department of Delft University of Technology, and the Natural Product groups of the Institute of Biology of Leiden University. The aim of the project was to gain insight into the extraction of N. pseudonarcissus alkaloids, especially galanthamine, by means of green solvents instead of the volatile organic solvents (VOCs) used in the conventional process. Both supercritical fluids (SCFs) (c.q. supercritical carbon dioxide) and natural deep eutectic solvents (NADES), which have recently been considered green solvents, were applied. Classical alkaloid extraction methods by means of acid-base purification steps, as well as an exhaustive Soxhlet extraction as a benchmarking method, were also conducted for comparison.
It was investigated whether the proposed method provides a high yield and selectivity of the targeted compound. The described study must be considered a first step towards further studies on the commercial production of galanthamine from this biological matrix, addressing the challenges met in the bulk-quantity production of galanthamine, an N. pseudonarcissus alkaloid. Prior to the supercritical CO2 (scCO2) extraction, a literature study was carried out. According to previous studies on the extraction of secondary metabolites (SMs) using scCO2, many aspects are essential for the success of the extraction process. They are divided mainly into pre-extraction, extraction, and post-extraction steps, particularly when dealing with plant matrices. Grinding and impregnation of the ground material are important in the pre-extraction step, as well as drying the material to keep the water level around 5-10% of dry weight. The selectivity for alkaloids is largely affected by adjusting the CO2 density, which can be tuned by controlling the pressure and temperature of the scCO2 as well as by modifier addition. In the extraction step, particle size, porosity, contact surface area, and the solubility of target compounds, combined with the process system, i.e. batch or continuous, play a major role. An integrated process including scCO2 extraction as well as fractionation in the post-extraction step seems to be a promising strategy to enhance the yield and selectivity of targeted compounds....","","en","doctoral thesis","","978-94-6366-646-6","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:a908aba1-8210-4b75-b69f-77aea7871a56","http://resolver.tudelft.nl/uuid:a908aba1-8210-4b75-b69f-77aea7871a56","On Safety in Machine Learning","Viering, T.J. (TU Delft Pattern Recognition and Bioinformatics)","Eisemann, E. (promotor); Loog, M. (promotor); Delft University of Technology (degree granting institution)","2023","This dissertation focuses on safety in machine learning. Our adopted safety notion is related to robustness of learning algorithms. Related to this concept, we touch upon three topics: explainability, active learning and learning curves.
Complex models can often achieve better performance than simpler ones. Such larger models are more like black boxes, whose inner workings are much harder to understand. However, explanations for their decisions may be required by law when these models are used, and may help us further improve them. For image data and CNNs, Grad-CAM produces explanations in the form of a heatmap. We construct CNNs whose heatmaps are manipulated but whose predictions remain accurate, illustrating that Grad-CAM may not be robust enough for high-stakes tasks such as self-driving cars.
Machine learning often requires large amounts of data for learning. Data annotation is often expensive or difficult. Active learning aims to reduce labeling costs by selecting data in a smart way instead of the default, random sampling. Active learning algorithms aim to find the most useful samples. Surprisingly, we find that active learning algorithms with strictly better performance guarantees perform worse empirically. The cause: their worst-case analysis is unrealistic. A more optimistic average-case analysis does explain our empirical results. Thus, better guarantees do not always translate to better performance.
A learning curve visualizes the expected performance versus the sample size a learning algorithm is trained on. These curves are important for various applications, such as estimating the amount of data needed for learning. The conventional wisdom is that more data equals better performance. This means a learning curve strictly improves with more data, or in other words, is monotone. Deviations can surely be explained away by noise, chance, or a faulty experimental setup?
To many in our field this may come as a surprise, but this behavior cannot be explained away. We survey the literature and highlight various non-monotone behaviors, even in cases where the learner uses a correct model. Our survey finds that learning curves can have a variety of shapes, such as power laws or exponentials, but there is no consensus and a complete characterization remains an open problem. We also find simple learning problems in classification and regression that show new non-monotone behaviors. Our problems can be tuned so non-monotonicity occurs for any sample size.
Is there a universal solution to make learners monotone? We design a wrapper algorithm that only adopts a new model if its performance is significantly better on validation data. We prove that the learning curve of the wrapper is monotone with a certain probability. This provides a first step towards safe learners that are guaranteed to improve with more data. Many questions regarding safety remain; however, this thesis may provide inspiration to develop more robust learning algorithms.
The main take-aways are (TLDR):
• Strictly tighter generalization bounds do not imply better performance.
• Explanations provided by Grad-CAM can be misleading.
• Even in simple settings more data can lead to worse performance.
• We provide ideas to construct learners that always improve with more data.
This thesis focuses on analyzing human behaviors in complex conversational scenes. It proposes novel computational methods that incorporate the context, which is the conversation group and the interaction scene. Prominent behavioral cues in social interaction include head and body orientations, as they are proxy indicators for visual attention and conversation group membership. This thesis first covers methods for head and body orientation estimation (under data-scarce and data-rich settings) and conversation group detection. These methods emphasize learning from multimodal data and context modeling, and their efficacy is shown empirically. Then, the thesis addresses an open challenge in acquiring human social data in real life by proposing an accurate and scalable method for data synchronization. Lastly, this thesis introduces a new dataset collected with the aforementioned synchronization method, capturing real-life interaction in a conference setting. Therein, results of tasks such as keypoint detection, action recognition, and conversation group detection are reported, which also motivate future research in this area. Combining these contributions in both computational method development and data collection, this thesis takes a step forward in understanding human behaviors in conversation scenes.
Space is the next frontier for innovations in IoT. The main idea is to employ space technologies for IoT applications. Space Internet of Things (Space-IoT), as we call it, is a concept that involves a satellite, or a network of them, to address the main challenges in terrestrial IoT deployments – global coverage, scalability, and connectivity. Space-IoT is opening up a world of new possibilities for several applications.
Small satellites are the building blocks of Space-IoT. They represent a formidable mobile computing platform enabling large-scale space applications at a fraction of the cost of larger satellites. Space-IoT calls for hundreds or thousands of small satellites that can communicate directly with various IoT devices on Earth. However, access to space has been expensive due to the high satellite development and launch costs. Miniaturizing a satellite can reduce launch costs but presents a range of interdisciplinary challenges that must be tackled. Resources are severely constrained in terms of size, mass, and available power. Addressing these challenges requires different communities to push the envelope in the design and realization of miniaturized subsystems of a small satellite.
In this dissertation, we chart out a vision for Space-IoT and innovations in embedded and wireless systems for Space-IoT applications. We identify several important challenges that need to be addressed immediately to bring the vision of Space-IoT to reality. This thesis targets one of the most significant tradeoffs – miniaturization leading to constrained energy while not compromising the reliability of operation of the subsystems. We consider three subsystems of a satellite: communication, attitude determination, and health monitoring, to demonstrate the inter-dependencies and novel ways to tackle them. Further, we explain with examples what we envision for the next decade to facilitate Space-IoT.
In Space-IoT, the IoT nodes on Earth are expected to communicate with (small) satellites directly over hundreds of kilometres. Both the terrestrial nodes and the satellites in space are energy-constrained. Hence, the communications must not only be energy-efficient but also support long range. Moreover, the received signal strength and the Signal-to-Noise Ratio (SNR) at the receiver decrease as the communication distance increases. Further, Doppler shift is inevitable in Low Earth Orbit satellite communication. Boosting the transmission power and adopting large high-gain antennas are obvious solutions for reliable communication; however, they are not feasible with miniaturization and energy minimization as our objectives. One solution to support low-power, long-range communication is to improve the demodulation technique to decode signals with low SNR.
In this dissertation, we revisit the demodulation approach of a widely used modulation technique: Frequency Shift Keying (FSK). We propose a scheme to demodulate bandpass-sampled FSK signals that are influenced by Doppler shift and low SNR. Unlike the state-of-the-art techniques, our approach does not compensate for the Doppler shift but lives with it. To suppress the Doppler effect and improve the SNR of the received signal, we employ a matched filter and the Teager Energy Operator, respectively. With extensive evaluations using actual telemetry signals from two satellites, we demonstrate how our proposed technique outperforms the state-of-the-art FSK demodulation schemes.
Besides the communication subsystem, the Global Positioning System (GPS) receiver is one of the essential but significantly energy-guzzling subsystems in a satellite. While big satellites typically do not have any constraints on the energy consumption of the GPS subsystem, such is not the case in miniaturized satellites. Unlike terrestrial GPS systems, space-borne GPS receivers face several challenges in obtaining a position fix. The high orbital velocity of a satellite (up to 7.8 km/s) results in a significant Doppler shift in the received signals when compared to their terrestrial counterparts. Consequently, the receiver has to search for the GPS signals in a larger Doppler frequency range, thus increasing the signal acquisition duration. Further, the visibility of the GPS satellites to the receiver changes frequently due to the high orbital speed and short orbital period of the satellite on which the receiver is mounted. As a result, the receiver needs to search for GPS satellites more often to get a position fix. Likewise, the visibility of GPS satellites is affected adversely if the satellite is tumbling. Due to these constraints, energy conservation techniques such as duty-cycling are not efficient; the receiver is ON most of the time, searching for GPS satellites to obtain a position fix.
To this end, we design a low-power, space-qualified GPS receiver for small satellite applications. We propose an algorithm to significantly improve the ability of the receiver to acquire GPS signals as quickly as possible, thus reducing the ON time when it is duty-cycled. We perform long-duration simulations and real-time in-orbit tests on our GPS receiver to evaluate its performance. Further, we demonstrate that energy savings of up to 96% can be achieved on our GPS receiver compared to state-of-the-art receivers.
Space-IoT relies on a constellation of hundreds of satellites to accomplish global coverage. Disruption in services can occur if one of the satellites malfunctions or ceases to work. Certain applications may not endure such risks, especially where satellites are typically employed as secondary communication channels. Hence, it is crucial to monitor the health of satellites regularly.
Existing satellites are generally equipped with onboard health monitoring units as a part of the subsystems. However, they are tightly coupled in terms of hardware and software. Any fault in the subsystem may affect its onboard health monitoring modules as they are electrically connected. Hence, we propose a system called Chirper, which is an electrically isolated and independent module that monitors the health of critical subsystems. The Chirper is equipped with multiple sensors that measure several parameters of a satellite, such as temperature, bus voltage, current, and rotation rate, at specified intervals, and it transmits them to ground stations through an independent communication module.
The proposed system is not only energy-efficient but also measures the different health parameters of a satellite reliably. This work mainly addresses the resilience and energy issues of a satellite. In this dissertation, we present the overall design of the Chirper. We also provide a novel approach to measuring the DC voltage at different locations of a satellite in a completely isolated way. Further, we subject Chirper to different tests in state-of-the-art simulators and a helium balloon to evaluate its capabilities.
This thesis advocates that Space-IoT is an ideal complement to terrestrial IoT networks and deployments. Small satellites can bring the vision of Space-IoT into existence. However, several technical breakthroughs need to emerge in small satellites to realize Space-IoT. We tackled some of the primary challenges through theory, experimentation and demonstration on satellites in orbit. With the results obtained, we are convinced that revolutionary transformations can be brought about in small satellites to enable Space-IoT, which will significantly influence space-related activities in both research and development.
China is still in its immature stage, as evidenced by unstable rents and tenure, insufficient tenant rights, low levels of tenant satisfaction, minimal institutional landlord participation, and a lack of motivation among local governments to develop the PRS. This dissertation aims to gain an in-depth understanding of the PRS in metropolitan China and explore how to improve its functioning, using Shenzhen as a case study. Both qualitative and quantitative data were collected to examine the determinants of tenants’ intention to rent and residential satisfaction, the relationship between residential environment, social exclusion, and life satisfaction, the impact of landlords' management practices on tenants' housing experiences, and the main challenges and solutions for a well-developed PRS. The results suggest that the PRS in Shenzhen is highly heterogeneous and composed of several distinct sub-sectors. Housing policies should be tailored to each sub-sector's unique characteristics. The dissertation also reveals that the PRS is interconnected with other institutions such as the hukou system and the education system. Therefore, a well-functioning PRS depends on the simultaneous reform of other sectors and institutions.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-676-3","","","","","","","","","Real Estate Management","","",""
"uuid:14351363-b5d1-41ec-ac32-8efb4817481b","http://resolver.tudelft.nl/uuid:14351363-b5d1-41ec-ac32-8efb4817481b","Understanding the Fundament of Virus Inactivation via Modeling","Tan, C. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Delft University of Technology (degree granting institution)","2023","Historically, viruses have always been the causative agent of most human diseases. As one of the most devastating pandemics in human history, the COVID-19 pandemic, associated with SARS-CoV-2, has been responsible for tens of millions of casualties in the world since the end of 2019. Meanwhile, it has also destabilized the global economy. Therefore, in the absence of vaccines and specific drugs, exploring effective disinfection methods for lethal viruses is critical to prevent the spread of pandemics. At present, many scientific studies have demonstrated a variety of inactivation methods for bacteria and viruses, including conventional and advanced ones. Those methods show high antiviral activity against viruses such as human CoVs. However, most research focuses on the effectiveness and efficiency of viral inactivation. Besides, benefiting from the development of semiconductor technology, viral inactivation is possible by utilizing multiple UVC-LEDs (UVC irradiation) or microelectrodes (electric field). Most importantly, the molecular-level mechanisms of virus inactivation are still unclear and debated. Therefore, it is meaningful to uncover the molecular-level mechanisms of virus disinfection methods and explore more effective antiviral schemes for preventing viral diseases.","Molecular-level inactivation mechanism; Density functional theory; Molecular dynamics; Quantum chemical calculation; SARS-CoV-2; Heating inactivation; Chemical disinfectants; UVC irradiation; Electric field treatment","en","doctoral thesis","","978-94-6473-098-2","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:97127a09-d53b-4969-a1e0-ae09b5e92a68","http://resolver.tudelft.nl/uuid:97127a09-d53b-4969-a1e0-ae09b5e92a68","Efficient Control for Cooperation: Communication, Learning and Robustness in Multi-Agent Systems","Jarne Ornia, D. (TU Delft Learning & Autonomous Control)","Mazo, M. (promotor); Alonso Mora, J. (copromotor); Delft University of Technology (degree granting institution)","2023","Besides facing the same challenges as single-agent systems, the distributed nature of complex multi-agent systems sparks many questions and problems revolving around the constraints imposed by communication. The idea that multi-agent systems require communication to access information, to coordinate, or simply to sense the environment they are acting on is sometimes overlooked when thinking of (and solving) emerging theoretical challenges. However, research problems related to communication in Cyber-Physical Systems have been a prevalent target of network control research for decades. In particular, we take inspiration from Event Triggered Control to study how communication affects performance, safety and robustness in multi-agent systems.
The work in this dissertation begins by covering a communication-based form of swarm robotics systems where, taking inspiration from ants, agents learn to forage cooperatively by communicating through the environment. We study what form of convergence guarantees we can derive in such systems and how these depend on the communication logic, proposing mean-field formulations of such systems. We then draw an analogy between such learning-based swarms and distributed Reinforcement Learning (RL), and propose strategies to safely reduce the communication of information in a general form of distributed Q-Learning problems. We extend these ideas to cooperative Multi-Agent RL systems where agents communicate state measurements with each other, and define so-called robustness surrogate functions (value function robustness certificates). These certificates allow agents to distributedly estimate how robust the joint policies are against lack of information, and to determine when they need to update other agents with new measurements. Finally, we look into the general problem of robust control in RL systems, and propose a characterization of policy robustness against state measurement noise that allows us to cast robustness as a secondary objective in a lexicographic optimization scheme, applicable to policy gradient algorithms. This answers the following premise: if we need to learn controllers that are then deployed in possibly uncertain environments, we may want to make sure that “robustifying” the controller does not (excessively) decrease the capacity of the controller to successfully solve the original problem (without uncertainty).
The work presented through this dissertation covers different problems and jumps between overlapping fields, but the methods and techniques proposed share a common principle: as complex multi-agent systems become more applicable to engineering problems, the need for understanding (and simplifying) communication rules is increasingly motivated by safety. Therefore, the problems and solutions considered aim to advance towards a formal understanding and design of communication logic in complex, model-free multi-agent systems.","Multi-Agent Systems; Event-Triggered Control; Reinforcement Learning (RL); Swarm robotics","en","doctoral thesis","","978-94-6384-432-1","","","","","","","","","Learning & Autonomous Control","","",""
"uuid:c9aa7e4e-5425-49cc-93ff-e53382402549","http://resolver.tudelft.nl/uuid:c9aa7e4e-5425-49cc-93ff-e53382402549","Spatiotemporal Variability in Global Storm Surge and Tidal Water Levels from Satellite Radar Altimetry","Bij de Vaate, I. (TU Delft Physical and Space Geodesy)","Klees, R. (promotor); Verlaan, M. (promotor); Slobbe, D.C. (copromotor); Delft University of Technology (degree granting institution)","2023","Extreme (still) sea levels, and the possibly associated coastal floods, are generally linked to (high) tides and storm surges. The risk of coastal floods will likely intensify in the future. This is because, on the one hand, the population of coastal zones is expected to continue to grow, and, on the other hand, climate change may lead to an increase in the frequency and magnitude of extreme sea levels. Although observations suggest that on the global scale, sea level rise is the primary driver behind the increase in extreme sea levels, locally the increase in extreme sea levels may be amplified or even dominated by changes in storm surges and tidal dynamics...","","en","doctoral thesis","","978-94-6469-294-5","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:61711144-4dab-4dc4-b73d-bbbc71ce441a","http://resolver.tudelft.nl/uuid:61711144-4dab-4dc4-b73d-bbbc71ce441a","Developing circular building components: Between ideal and feasible","van Stijn, A. (TU Delft Real Estate Management)","Gruis, V.H. (promotor); Klein, T. (promotor); van Bortel, G.A. (copromotor); Delft University of Technology (degree granting institution)","2023","Creating a circular economy within the built environment plays a crucial role in society’s pursuit to become more sustainable. A building consists of building components, such as a kitchen, façade and roof. By replacing building components with more circular ones during new construction, maintenance and renovation, we can gradually make buildings circular. There are many design variants for circular building components. Knowledge is lacking on which variants are the most circular and which are feasible to implement. In this dissertation, we develop and test 8 circular building components for housing renovation together with Dutch social housing associations and industry partners. Combining Action Research and Research through Design approaches, we generate knowledge on 4 research goals. We present a design tool for circular building components. We develop a Life Cycle Assessment model to assess the environmental impacts of circular building components. We compare the environmental performance of multiple circular design options for multiple building components and derive environmental design guidelines. Finally, we identify which stakeholder choices throughout the development of 8 circular building components led to feasible, circular building components. We conclude that not all circular design options lead to desirable circular building components; not all desirable circular design options are yet feasible. This research makes scientific contributions to circular design theories, management models for the built environment, and research methodology.
We recommend 4 changes in practice to implement more circular building components.","Circular Economy; building components; housing; Life Cycle Assessment (LCA); design guidelines; feasibility","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-674-9","","","","","","2023-04-21","","","Real Estate Management","","",""
"uuid:ff29de25-2bc6-47a2-813c-102ccb663316","http://resolver.tudelft.nl/uuid:ff29de25-2bc6-47a2-813c-102ccb663316","Effect of biphasic system constituents on liquid-liquid extraction of 5-hydroxymethylfurfural","Altway, S. (TU Delft ChemE/Transport Phenomena)","de Haan, A.B. (promotor); Delft University of Technology (degree granting institution)","2023","HMF (5-hydroxymethylfurfural) is a biorenewable material that can be used as an important platform chemical to produce biofuels and various chemical products. The main application of HMF in the chemical industry is as a platform chemical for the production of plant-based polyethylene terephthalate (PET). HMF is produced through hexose dehydration, in which fructose or glucose serves as the feedstock. Liquid-liquid extraction can be applied in HMF production to enhance the selectivity and yield of HMF. HMF can be extracted from the aqueous solution into the organic phase, which prevents the degradation of HMF. Furthermore, it has been recognized that an ionic liquid (IL) or a deep eutectic solvent (DES) can be used as a stabilizing agent in HMF production by suppressing the formation of side-products, hence also increasing the HMF yield. However, systematic research on the thermodynamics of HMF extraction is quite limited and needs to be extended. Thermodynamic data, such as phase equilibrium data and the partitioning of HMF into the organic phase, are needed as a basis for a rational design and optimal separation of HMF from the aqueous solution.
The objective of this research is to systematically study the effect of biphasic system constituents on the liquid-liquid extraction of HMF at 313.15 K and atmospheric pressure (0.1 MPa). The extraction performance was evaluated based on the values of the separation factor and the HMF distribution coefficient, which were determined from liquid-liquid equilibrium (LLE) data. The experimental LLE data of the investigated systems were also correlated well using thermodynamic models. The NRTL and UNIQUAC models were used to correlate the ternary experimental LLE data, whilst the experimental LLE data containing salt, IL, DES, and sugar were correlated using the NRTL model. We used aqueous-organic biphasic systems, and also added the IL [EMIM][BF4] (1-ethyl-3-methylimidazolium tetrafluoroborate) or the DES ChCl-urea (choline chloride-urea) to the aqueous phase. The effects of the addition of sugar (fructose) and of salts with a variety of cations (Na+, K+) and anions (Cl-, SO4 2-) were also studied. Three different extraction solvents, methyl isobutyl ketone (MIBK), 2-pentanol, and tributyl phosphate (TBP), were used for the comparison.
The results of this study indicate that the HMF distribution coefficient for 2-pentanol is up to 1.4 times higher than that for MIBK. Conversely, MIBK has a 2-3 times higher separation factor than 2-pentanol. TBP is not only more selective as an extraction solvent than the other two solvents, but also superior in terms of HMF distribution coefficient. The salting-out strength of salts for organic solvent (MIBK or 2-pentanol)-HMF-water-salt systems is in the order NaCl > Na2SO4 > KCl > K2SO4. NaCl was found to be superior in both the separation factor and the distribution coefficient of HMF compared to the other salts studied. Furthermore, the separation factor and HMF distribution coefficient decreased with increasing IL [EMIM][BF4] and DES (ChCl-urea) concentrations. However, DES (ChCl-urea) decreased the extraction performance less than IL [EMIM][BF4]. The addition of salt (NaCl) enhanced the separation factor and the distribution coefficient of HMF, enabling compensation of the IL and DES effects. The presence of salt can enhance both extraction performance parameters up to 2-4 times for all the investigated systems, using three different organic solvents and also in the presence of IL or DES. In contrast, the presence of fructose in the solution had a limited effect on the extraction performance. In general, it can be inferred that, taking advantage of IL/DES as a stabilizing agent, aqueous IL/DES with NaCl is a good combination for the HMF extraction process to achieve good extraction performance.","Extraction performance; 5-Hydroxymethylfurfural; Liquid-liquid equilibria; Separation process; Thermodynamics model","en","doctoral thesis","","978-94-93330-03-0","","","","","","2023-04-14","","","ChemE/Transport Phenomena","","",""
"uuid:60f45315-7ad9-4188-b76e-cbe0b0af6c27","http://resolver.tudelft.nl/uuid:60f45315-7ad9-4188-b76e-cbe0b0af6c27","Focal deblending: Using the focal transform for simultaneous source separation","Kontakis, A. (TU Delft ImPhys/Verschuur group)","Slob, E.C. (promotor); Verschuur, D.J. (promotor); Delft University of Technology (degree granting institution)","2023","Nearly-simultaneous-source (blended) acquisition differs from conventional acquisition in that seismic wavefields originating from different sources are allowed to overlap in the recorded seismic traces. This allows more flexibility in deciding the number of shots, the shot density and the effective acquisition time of a survey, but it adds the complication of having to handle blended wavefields.
This thesis explores an inversion-based deblending method for wavefield separation in the marine setting. As deblending is usually an underdetermined problem, extra information in the form of additional constraints and regularization is needed to obtain a unique solution with minimal blending-noise leakage. To this end, the proposed method uses the focal transform in combination with sparsity-promoting regularization to discriminate against solutions to the blending equation that are valid, but contain excessive amounts of blending noise. The focusing operation provided by the focal transform will tend to focus the coherent signal to be extracted, but will not focus incoherent blending noise equally well. Sparse solutions will tend to retain the high-amplitude focused events but not the lower-amplitude blending noise. A key feature that makes sparse solutions possible is the ability to describe curved events in a subsurface-consistent manner, using few focal domain coefficients.
The focal transform can be defined in multiple ways, using one-way or two-way wavefield propagation operators. In the implementations described in this thesis, I use a crude velocity model, based on picked normal-moveout (NMO) stacking velocities, to construct focal operators that can focus surface data onto a set of depth levels where significant reflectors are found. This choice of velocity model is suboptimal for focusing purposes, but is a pragmatic compromise, given that a more detailed velocity model may not be available at the deblending phase of the processing workflow.
In principle the focusing and defocusing operations involve the entire dataset, which makes the focal transform computationally expensive to evaluate. An investigated remedy is to use acquisition-specific subsets of the input data to split the problem into smaller chunks, combined with a suitable flavor of the focal transform and focal grid. Another method extension that I discuss is that of using a focal-curvelet hybrid transform for deblending. The main advantage is that events with linear moveout tend to be more sparsely represented in a curvelet basis. However, this comes at the cost of extra computational effort and some difficulty in balancing the contribution of the two transforms to the final solution.
I test these approaches on both synthetic and field data, with examples on towed-streamer and ocean-bottom-node acquisitions. While in most cases a perceptible amount of blending-noise leakage remains present in the results, a significant amount of blending noise is suppressed. In some cases the deblending process is able to uncover weak events previously masked by strong blending noise. When the hybrid transform is used, the results show a better recovery of events that are filtered out when the focal transform is used alone. Curved near-offset events are in some cases also recovered with higher fidelity compared to using the curvelet transform alone.
A significant challenge is the sometimes limited focusing for field data and synthetics as a result of trying to approximate the kinematics of complex 3D velocity models with flat-layered models and stacking velocities. The computational cost of the method is also a challenge. While working with data and focal domain subsets helps, additional measures are needed before applying focal deblending on realistically-sized field data. I make several suggestions for modifications of the method and propose extensions for future research.","","en","doctoral thesis","","978-94-6366-677-0","","","","","","","","","ImPhys/Verschuur group","","",""
"uuid:efcc21a7-1d28-48bd-be94-be44fc8d5458","http://resolver.tudelft.nl/uuid:efcc21a7-1d28-48bd-be94-be44fc8d5458","Nanowire Josephson junctions in superconducting circuits","Bargerbos, A. (TU Delft QRD/Kouwenhoven Lab)","Wimmer, M.T. (promotor); Andersen, C.K. (copromotor); Delft University of Technology (degree granting institution)","2023","The Josephson effect is a quintessential topic of condensed matter physics. It has stimulated decades of fundamental research, leading to a plethora of applications from metrology to outer space. In addition, it is set to play a crucial role in the development of quantum computers, forming the dissipationless non-linear inductance that lies at the core of superconducting qubits.
While they are traditionally realized using oxide-based tunnel barriers, in this thesis we construct Josephson junctions from non-insulating materials such as semiconducting nanowires and quantum dots. We investigate how their highly nontrivial interplay with superconductivity can lead to new effects, both of fundamental interest and of relevance for quantum applications. To study these effects we make use of the extensive toolbox available for superconducting circuits, allowing us to probe the junction behavior beyond what is possible with conventional transport techniques.
The first experimental chapter of this thesis examines the behaviour of a transmon that hosts a highly transparent semiconducting weak-link as the Josephson junction. In this system we find spectroscopic evidence for the predicted vanishing of Coulomb effects in open superconducting islands, in accordance with theoretical predictions from 1999.
In the second experiment we deterministically place a quantum dot inside the junction of a transmon circuit. We then demonstrate that by using microwave spectroscopy we are able to accurately probe the energy-phase relationship of the Josephson junction over a vast regime of parameter space. This reveals the remnants of a quantum phase transition, and allows us to probe the time dynamics of the junction parity.
We subsequently use the same type of device to reveal the predicted spin-splitting of the Andreev bound states in a quantum dot with superconducting leads, as brought about by the spin-orbit interaction. When combined with a magnetic field, this is shown to result in the anomalous Josephson effect. Furthermore, we demonstrate that transitions between the spin-split quantum dot states can be directly driven with microwaves.
This motivated the investigation of a novel superconducting spin qubit, performed in the fourth experiment. Here we demonstrate rapid, all-electric qubit manipulation in addition to detailed coherence characterization. We ultimately show signatures of strong coherent coupling between the superconducting spin qubit and the transmon into which it is embedded, setting the stage for future research of this nascent qubit platform.
In the fifth and final experiment, we utilize a different approach compared to the preceding chapters. While we once more construct transmons based on semiconducting weak-links, we now do so to leverage the intrinsic magnetic-field resilience of semiconducting nanowires. This allows us to use a single device to study the mitigation of phonon-induced quasiparticle losses by trapping the phonons using both superconducting and normal-state conductors.
This thesis concludes by discussing several ideas and proposals that aim to leverage the alternative Josephson junctions studied in this thesis. Combined with the results of the preceding chapters, this shows that hybrid superconducting circuits can be used to obtain deep insights into the fundamental physics governing their constituent junctions, and opens avenues towards building better qubits.","","en","doctoral thesis","","978-90-8593-556-8","","","","","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:03e18765-d269-4637-a801-7a79bec023d0","http://resolver.tudelft.nl/uuid:03e18765-d269-4637-a801-7a79bec023d0","Shedding Light on Electrochemically Doped Semiconductors: Photochemical Stabilization of the Charge Density in Quantum Dots and Organic Semiconductors","Eren, H. (TU Delft ChemE/Opto-electronic Materials)","Houtepen, A.J. (promotor); Eelkema, R. (promotor); Delft University of Technology (degree granting institution)","2023","To utilize the full potential of semiconductor materials in device applications including solar cells, LEDs, and lasers, the ability to precisely and controllably tune the charge carrier concentration, and hence the doping density, is crucial. Conventional methods, such as impurity doping via thermal diffusion or ion implantation, have been successfully implemented for doping bulk semiconductors for decades. In spite of the maturity of these traditional doping methods, it has remained a long-standing challenge to introduce impurity doping successfully into organic and new generations of semiconductors, such as conducting polymers and quantum dots. Additionally, the prospect of new technologies and the shrinkage of device dimensions to the nanoscale have stimulated researchers to search for alternative methods for reliably doping such semiconductor materials.
Electrochemical doping is arguably the most powerful and versatile method for doping porous semiconductor materials, in which the charge carrier concentration can be precisely and controllably modulated as a function of the potential applied by an external voltage source. Unfortunately, when the doped semiconductor film is disconnected from the voltage source, the electrochemically injected charges leave the film spontaneously in a matter of seconds to a few minutes.
In that regard, the injected charges need to be stabilized and the external dopant ions immobilized to achieve stable electrochemical doping of such semiconductor films for use in device applications. The research carried out in this thesis aims to enhance the stability of the injected charges and the fixation of the dopant ions through a photopolymerization treatment at room temperature in electrochemically doped quantum dots and conducting polymers. This was attempted by understanding the underlying mechanism of electrochemical doping in such porous films and eliminating or minimizing possible causes of instability, with the final goal of producing stable doped semiconductor films.","","en","doctoral thesis","","","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:b86bec0b-0f27-4fac-90ae-7ad4bcac407a","http://resolver.tudelft.nl/uuid:b86bec0b-0f27-4fac-90ae-7ad4bcac407a","Quantum Dots Coupled to Superconductors","Wang, Guanzhong (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Goswami, S. (copromotor); Delft University of Technology (degree granting institution)","2023","The search for Majorana bound states has witnessed heated efforts in the past decade. This field of research lies at the intersection of scientific and commercial interests. The Majorana quasiparticle, being its own antiparticle and exhibiting non-abelian exchange statistics, is a unique member of the family of condensed-matter quasiparticles, distinct from most fermions or bosons. These properties are predicted to be instrumental in building a new type of qubit, with no energy splitting between qubit states and intrinsic protection from decoherence. In addition, the theory describing Majorana modes has a rich connection to the mathematical language of topology, making its study also of theoretical value. Thus, the prediction of the existence of Majorana zero modes in hybrid semiconducting-superconducting nanowires has been a strong driving force behind the recent technological progress in the making of these materials and devices.
In this thesis, the most recent advances in materials, specifically the making of clean interfaces between semiconductors and superconductors, are applied to the study of the physical properties of superconducting-proximitized electronic states in semiconductors. This technology is combined with quantum dot techniques to investigate electron transport between individual quantum states in proximitized nanowires. The findings include a better understanding of electron transport in these systems, as well as new potential applications to the field of Majoranas and beyond.
Following the introductory chapters, this thesis first demonstrates a high-efficiency Cooper-pair splitter, enabled by quantum dots with narrow linewidth and a superconductor with a hard gap. The techniques behind the improved efficiency can be used to make a generator of entangled pairs of electrons. We also demonstrate the use of quantum dots as spin detectors capable of revealing the spin structure of individual Cooper pairs. Next, we report on a Cooper-pair splitter's peculiar response to the tuning of electrical gates, in both experiment and theory. This includes the discovery of a new interference effect in electron co-tunneling processes through a superconductor. The key to observing this response is to ensure the hybrid nanowire is also a discrete quantum state instead of a superconducting bulk. The discovery above forms the foundation of fine-tuning the types of electron couplings between two quantum dots coupled via a superconductor. The power of this tunability can be seen in the successful making of a minimal artificial Kitaev chain, opening up new possibilities in the search for Majorana zero modes. This approach is less prone to difficulties encountered in other platforms, such as material disorder and the interpretability of data.
Moving from studying quantum dots under the influence of a superconducting hybrid, later chapters of this thesis focus on investigating electron properties in the hybrid nanowire using quantum dots as spin-, charge- and energy-selective probes.
We first use them to detect and quantify the spin polarization of Andreev bound states in the hybrid nanowire. Using quantum dots as charge and energy detectors instead, we observe how electrons traverse the bulk of a hybrid nanowire and reveal a thermoelectric conversion process in the conductance measurements of these devices. Finally, we report on the selective-area growth of InSb, the semiconductor used throughout this thesis, which can form the basis of future developments.","","en","doctoral thesis","","978-90-8593-554-4","","","","","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:485031ef-4fcf-4c2f-9a6b-41b037e88afe","http://resolver.tudelft.nl/uuid:485031ef-4fcf-4c2f-9a6b-41b037e88afe","Systematic search for new solutions in lens design","Hou, Z. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); Bociort, F. (copromotor); Delft University of Technology (degree granting institution)","2023","In this thesis, we explore how lens design and optimization techniques can adapt to the design (optimization) space in order to increase the lens design efficiency. We extensively discuss the Saddle Point Construction (SPC), a method that can systematically search for new solutions, as well as replace high-dimensional searches with a discrete number of one-dimensional searches to increase efficiency....","Optical Design; Lens Design Method; Optimization; Saddle Point","en","doctoral thesis","","978-94-6384-431-4","","","","","","","","","ImPhys/Optics","","",""
"uuid:f468a65d-e4ba-42a5-b27b-e667ee582f39","http://resolver.tudelft.nl/uuid:f468a65d-e4ba-42a5-b27b-e667ee582f39","Autonome architectuur en de stad: Ontwerp en onderzoek in het onderwijs van La Tendenza","Engel, H.J. (TU Delft History, Form & Aesthetics)","Riedijk, M. (promotor); Cavallo, R. (promotor); Delft University of Technology (degree granting institution)","2023","Autonome architectuur en de stad (Autonomous architecture and the city) is about the ‘revision of modern architecture’ after the Second World War. Striking is the complete reversal in the appreciation of modern architecture that took place in the 1970s. In a broader context, the introduction of the term ‘post-modernism’ testifies to this. The role of monumentality in architecture, the question of what to do with the historic city centres, the issue of regional traditions: in short, the relation of architecture to history moved to the centre of the debate and would ultimately lay the axe to the roots of the discourse of modern architecture.
The question is what role Italian neo-rationalism, which became known under the name La Tendenza, played in this process of dissolution. That is the main subject of this study. Central to it is ‘the scientific and didactic project’ that La Tendenza initially stood for. Only from this point of view, the author argues, can it be understood that La Tendenza simultaneously attacked modern architecture and presented itself as its true heir. Such a manoeuvre was not unique. La Tendenza shared it with the youngest generation of members of the Congrès Internationaux d’Architecture Moderne (CIAM), known as Team 10, who had taken charge of the spirit of avant-gardism after the dissolution of that organization in 1959.","","nl","doctoral thesis","","","","","","","","","","","History, Form & Aesthetics","","",""
"uuid:d43f7180-be04-4e5b-a090-993f52433513","http://resolver.tudelft.nl/uuid:d43f7180-be04-4e5b-a090-993f52433513","Cellular balancing under dynamic conditions: A systems biology-based discovery using experimental and modelling approaches","Verhagen, K.J.A. (TU Delft BT/Industriele Microbiologie)","Daran-Lapujade, P.A.S. (promotor); Wahl, S.A. (promotor); Delft University of Technology (degree granting institution)","2023","Saccharomyces cerevisiae, also known as baker’s yeast, is a robust microorganism frequently used in industrial biotechnology. The scale of its applications ranges from several millilitres for research and process development in the lab to hundreds of cubic meters for cultivation in industrial production processes. In large-scale reactors, mixing limitations inherently lead to physicochemical gradients in substrate and oxygen concentrations, pH or temperature. Such an inhomogeneous environment in production processes can cause a reduced yield or titer compared to the small-scale development processes. Such performance differences between scales can lead to significantly worse process economics and increased costs and development time.
The scope of this thesis is to study and understand the regulation of Saccharomyces cerevisiae metabolism under dynamic substrate conditions, using both experimental and modelling approaches.","","en","doctoral thesis","","","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:f3e42dee-c691-4dc8-879d-6b0f828c9d8f","http://resolver.tudelft.nl/uuid:f3e42dee-c691-4dc8-879d-6b0f828c9d8f","Improving satellite remote sensing methodologies for analyzing landscape dynamics in arid environments with focus on Egypt","Delgado Blasco, José Manuel (TU Delft Mathematical Geodesy and Positioning)","Hanssen, R.F. (promotor); Verstraeten, G. (promotor); Delft University of Technology (degree granting institution); Katholieke Universiteit Leuven (degree granting institution)","2023","","Earth Observation; SAR; Radar; Urbanization; Land Use Change; Dunes dynamics; Data Fusion; Machine Learning","en","doctoral thesis","","978-94-6361-830-4","","","","","","","","","Mathematical Geodesy and Positioning","","",""
"uuid:b31521e3-0d1b-4df0-a6d9-12f24f0a4a6e","http://resolver.tudelft.nl/uuid:b31521e3-0d1b-4df0-a6d9-12f24f0a4a6e","Liquid Territories: Configurations of geographic space in the cartographic projections of the Mekong River’s catchment areas","Romanos, C. (TU Delft Theory, Territories & Transitions)","Schoonderbeek, M.G.H. (promotor); van der Velde, J.R.T. (copromotor); Delft University of Technology (degree granting institution)","2023","The role played by the Mekong River in the organization of land and people is inextricably linked with a particular spatial category. The concept of the hydrological catchment extends the space of the river far beyond the limits of the river’s perennial waterbodies, to encompass vast areas inhabited by millions of people speaking different languages. Fundamental to the estimation of precipitation and water volume, areal denotations of the Mekong’s basin, delta and floodplain have been repeatedly drawn on maps by geographers, planners, engineers and cartographers. Mapped representations of the Mekong River however are not only the result of recording the flows of water, nor the domain of a single discourse. With diverging intentions, distinct and sometimes conflicting projections of the basin, delta and floodplain have prescribed the differentiation and unification of parts of mainland Southeast Asia, to articulate liquid territories that are outside a single state’s jurisdiction. As a result, the mapped articulation of surface water is reflected in the configuration of national boundaries and the arrangement of settlements. To understand how the Mekong’s catchments emerge as the geographic reference for human activities, the dissertation examines the technical and cultural notions that underpin the preparation of these maps. 
Drawing on the discourses of hydrology, geography and cartography, as well as infrastructure design, military science, colonial politics and regional planning, the research asks what territories are produced and maintained by evoking the geography of the river’s flows.","Mekong River; Mekong basin; Mekong delta; floodplain; regional planning; cartography; catchment hydrology; territory; water infrastructure planning; Settlement development; maps; geographic representation; geography; urbanization processes; hydrosocial territories; urban planning; territorial design","en","doctoral thesis","","","","","","","","","","","Theory, Territories & Transitions","","",""
"uuid:e9de48a5-e222-4b0d-b4c7-9bee9f61c73a","http://resolver.tudelft.nl/uuid:e9de48a5-e222-4b0d-b4c7-9bee9f61c73a","Exploring industrial community energy systems: A missing link in the industrial energy transition?","Eslamizadeh, S. (TU Delft Energie and Industrie)","Weijnen, M.P.C. (promotor); Ghorbani, Amineh (promotor); Delft University of Technology (degree granting institution)","2023","The transition to renewable energy sources affects all sectors of society, including the industrial sector. Besides climate policy ambitions and other concerns regarding the social and environmental acceptability of energy provision, the transition to renewables may also improve the availability and affordability of energy services. The latter holds especially in some developing countries, where the development of energy infrastructure often lags behind the needs of industry. For many industries, the energy transition challenge entails the future substitution of high-temperature, fossil-fired processes with lower-temperature, e.g. electrochemical, conversion routes, which will make them depend much more than they do now on the reliable and affordable provision of electricity. However, in many developing economies, even the current provision of electricity is far from reliable. Transitioning to power generation from renewable energy (RE) sources can contribute to a more diversified, resilient, and environmentally friendly power generation mix.
If the energy sector in developing economies does not sufficiently invest in a robust generation mix for the future, industry itself may consider taking the lead. For individual companies, however, especially small and medium-sized enterprises (SMEs), the high upfront investment costs of infrastructure for harvesting and transporting renewable energy present a significant hurdle. Inspired by the literature on community energy systems (CES) and industrial symbiosis (IS), this thesis set out to investigate if, and under which conditions, industrial companies may be willing to join forces in industrial community energy systems (InCES) in order to secure their supply of electricity from renewable energy sources.","Energy transition; Industrial community energy systems; Agent-based modelling; Collective action; Industrial collaboration; Institutional analysis; Renewable energy systems; Industrial energy transition","en","doctoral thesis","","978-94-6366-672-5","","","","","","","","","Energie and Industrie","","",""
"uuid:b980646c-b40f-48f1-977e-9ccb4a86bcab","http://resolver.tudelft.nl/uuid:b980646c-b40f-48f1-977e-9ccb4a86bcab","Towards upscaling the Battolyser- An Integrated Ni-Fe Alkaline Battery and Electrolyser: A combined modeling and experimental study","Mangel Raventos, A. (TU Delft Large Scale Energy Storage)","de Jong, W. (promotor); Mulder, F.M. (promotor); Kortlever, R. (copromotor); Delft University of Technology (degree granting institution)","2023","Electrochemical cells and systems have been around for a few centuries. Lately, these technologies have been attracting attention. Although the technology to generate electricity from renewable sources such as photovoltaic and wind energy is well developed and widely available, this electricity is not always available when needed. Because of this, it is necessary to store surplus electricity so that it can be used at moments when the sun is not shining or the wind is not blowing. Many different electrochemical technologies can be used to store electricity or to transform it into a useful energy carrier, such as hydrogen. However, the energy transition will also need to address the optimal usage of critical materials. Integrating functionalities and optimizing energy storage can help bridge the gap between electricity production and consumption using only a limited amount of critical materials. New, innovative technologies that use fewer critical materials will be key to a sustainable transition to a fossil-fuel-free future. It will be necessary to move forward and upscale technologies at a quick pace. A combined modeling and experimental approach can help move through the TRL development stages quickly, optimizing the use of resources and the experimental work required. The battolyser is a new integrated battery and electrolyser system that provides flexibility in energy storage. During periods of high availability of renewable energy it can be charged indefinitely, filling up the battery capacity first and producing hydrogen from there on out.
A battolyser system can be used to guarantee access to cheap electricity and green hydrogen, all in one device and using only the materials required for one device. Modeling the electrochemical reactions of the battolyser and optimizing the cell design parameters when moving towards an upscaled system is a tool that can be used for the continuous development of a better prototype and for scaling up. Chapter 3 describes the modeling studies performed on the battolyser system, including the relevant experimental validation. Here, a 1D COMSOL model was developed to study the cell parameters and understand the effect of electrode and gap thickness, electrode porosity, and electrolyte conductivity. Testing experimentally at larger scales is challenging and often not done. Highly alkaline KOH electrolytes are usually not tested under lab conditions, and therefore the effect of concentrations higher than 5 M KOH on new electrode material developments is unknown. To optimize an integrated device, the effects on both the electrolysis function and the battery function need to be reconciled and designed for the specific application. In Chapter 4, extensive lab-scale experiments on the electrolyte concentration are described, including different alkali metal cation concentrations. To optimize for different functionalities of the battolyser, different cations can be used at specific concentrations. A flow cell was designed and built, and different flow configurations were tested. 3D printing technology allows for quick iterations and modifications of the design; however, the proprietary resins are usually not tested under highly alkaline conditions, which could potentially cause degradation of the cell components. Working with concentrations higher than 5 M KOH results in practical difficulties that will only scale with plant capacity. In Chapter 5, the preliminary results of a flow cell configuration are included.
The results of this work can be applied directly to predict the optimal design and operating parameters of an up-scaled battolyser cell. This will allow for quicker iterations of up-scaled designs to further develop the prototype technology. For this, it is important to verify simulation results with experimental data. Using a combined approach including simulations and experimental work allows testing various setups and optimizing the energetic efficiency of the device. 3D printing manufacturing technology can also help speed up this iterative process to generate design modifications and quickly manufacture experimental setups to validate the simulation data.","","en","doctoral thesis","","","","","","","","","","","Large Scale Energy Storage","","",""
"uuid:43ed643d-0934-4f5c-a0e9-16d66b6c6c50","http://resolver.tudelft.nl/uuid:43ed643d-0934-4f5c-a0e9-16d66b6c6c50","Efficient Thermal Modelling and Topology Optimization for Additive Manufacturing","Ranjan, R. (TU Delft Precision and Microsystems Engineering)","van Keulen, A. (promotor); Langelaar, Matthijs (promotor); Ayas, C. (copromotor); Delft University of Technology (degree granting institution)","2023","With the advent of Additive Manufacturing (AM) techniques, the design principle of `form follows function' no longer remains a utopian proposition. The unprecedented design freedom offered by AM is making it possible to conceptualize highly performant designs by efficiently leveraging geometrical complexity. The increase in design freedom requires novel design tools which are tailored to capitalize on the form freedom offered by AM. Topology optimization (TO) is such a computational design tool which can find the optimal geometric layout of a part to achieve a pre-defined objective, while satisfying certain constraints. However, AM processes have inherent manufacturing constraints which should be considered at the design stage to ensure manufacturability. The suitability of TO as an ideal design tool is already widely recognized and there have been significant research efforts to integrate AM constraints within TO. In this regard, most AM-oriented TO methods utilize geometry-based constraint where a geometric AM design guideline is integrated within TO. The maturity of research in this direction is evident by the fact that most commercial CAD packages are already equipped with TO plugins including these geometry-based AM constraints. Although beneficial, such geometry-based TO constraint do not guarantee defect-free fabrication since manufacturability is not only a function of geometry, but depends on a range of complex physical interactions during the process. 
Therefore, a TO method that accounts for more of the physics of the AM process would enhance the likelihood of achieving better quality parts with reduced defects.
This thesis is focused on laser-based powder bed fusion (L-PBF) since it is the most widely utilized AM technique for metal parts. However, L-PBF suffers from certain constraints which critically compromise the part quality and inhibit its adoption as a mainstream manufacturing method. Among these constraints, the issue of local overheating remains a critical barrier as it leads to poor surface quality, inferior mechanical properties and/or build failures. Moreover, uneven heating/cooling thermal cycles due to overheating could lead to the development of undesirable residual stresses and distortions. Typically, overheating is associated with downfacing surfaces called overhangs, which has led to the development of geometry-based design guidelines, for example, avoidance of geometric features with overhangs more acute than a certain threshold. This guideline has been the most common AM constraint to integrate within TO. However, it is evident from a number of numerical and experimental studies in the literature that the avoidance of overhangs does not guarantee overheating-free designs. Therefore, the two aims of this thesis are (1) to thoroughly investigate local overheating during the L-PBF process using computational models and (2) to develop a novel TO method for generating overheating-free, AM-ready designs. In this regard, the extremely high computational cost of L-PBF models was identified as the biggest challenge for both objectives, i.e., quick assessment of overheating-prone features in AM parts and integration of an L-PBF thermal model with TO.
The first half of this thesis deals with a systematic investigation of the simplifications commonly used in the thermal modelling of the heat transfer phenomena during the L-PBF process. The simplifications have been classified based on the spatio-temporal resolution they assume for modelling the process. With the help of numerical experiments, the findings reveal the relationship between spatio-temporal simplifications and their ability to capture certain process attributes. For example, it is found that if peak process temperatures are to be predicted, then short laser exposure times should be specified in the computational domain. On the contrary, if temperatures far away from the topmost layer are analyzed, a simplified model assuming a longer exposure time can capture them. These findings serve as guidelines for making informed choices while setting up an L-PBF thermal model. In addition to this, numerical discretization requirements associated with different simplifications are also provided. Next, a deeper investigation of relevant simplifications for detecting local overheating is presented. Three novel simplifications based on the analytical solution of the heat equation are presented which drastically reduce the computational expense while retaining the ability to identify overheating-prone features. The most simplified model in this regard utilizes a localized steady-state analysis which provides a maximum computational gain of approximately 600-fold as compared to a high-fidelity transient simulation.
The second half of the thesis presents the integration of the aforementioned steady-state L-PBF thermal model with the density-based TO method. This is achieved by formulating a novel constraint which limits the peak temperature predicted by the simplified L-PBF model. This novel physics-based TO method is validated using in-situ optical tomography (OT) measurements. Comparing OT-based overheating data across geometry-based and physics-based TO designs, it is revealed that the latter have a lower tendency to overheat. Finally, the usability of the new TO method is demonstrated on an industrial injection mould. Another application of the novel TO method is demonstrated by designing support structures for optimal heat evacuation.
Based on the findings presented in this thesis, it can be concluded that a physics-based TO method offers significant advantages over a purely geometry-based approach. In particular, it is shown that overheating avoidance cannot be assured just by avoiding acute overhangs. While for overheating detection even a simplification to steady-state analysis was possible, it is expected that for other aspects the full thermal history must be evaluated, which presents a challenge for future work. Apart from the development of the novel TO approach, the second major contribution of this thesis is the set of insights developed regarding modelling simplifications, which assist in drastically reducing the computational expenses associated with L-PBF modelling. It is expected that outcomes from this thesis will positively contribute towards the development of efficient modelling techniques, which will also inherently benefit further advancement of physics-based TO methods.
The framework is developed from a multi-barrier theory with the particular intention to include the effect of plastic strain and deactivation of hard inclusions. In order to quantitatively determine the inclusion stress from the far-field stress on a matrix, analytical equations are first derived. The proposed framework is first validated with examples of specimens taken from an S690 QT steel plate fractured at -100°C. Centreline segregation bands (CLs) appear in the middle-section specimens, containing smaller grains and elongated inclusion clusters. Two modelling approaches are compared to discuss the effect of CLs in cleavage modelling. A sensitivity study is performed to explore the influence of volume fractions, yield strength, and spacing of CLs. Then, the modelling approach is applied to determine the cleavage parameters across different types of steels. Cleavage parameters are compared among three tempered bainitic (S690) steels, an as-quenched martensitic steel, and a ferritic steel. The variation of cleavage parameters is discussed considering the influence of the matrix types and the hard particle types. Finally, cleavage simulations of the high strength steel after rapid cyclic heating and of microstructures representing heat-affected zones are performed. The simulations are compared with experiments that feature parametric variations of grain size, second particle size, and second particle density. The effect of different types of microstructures generated by heat treatments is quantitatively established.","Fracture Mechanics; Steels; Microstructures; Statistical modelling","en","doctoral thesis","","978-94-6384-429-1","","","","","","2023-03-26","","","Team Vera Popovich","","",""
"uuid:bb2db244-e032-46bd-a9d7-a36b9ce0ce0e","http://resolver.tudelft.nl/uuid:bb2db244-e032-46bd-a9d7-a36b9ce0ce0e","Divisorial gonality of graphs, the slice rank polynomial method, and tensor products of convex cones","van Dobben de Bruyn, J. (TU Delft Discrete Mathematics and Optimization)","Gijswijt, Dion (promotor); van Gaans, O.W. (copromotor); Delft University of Technology (degree granting institution)","2023","","Finite graph; Metric graph; Gonality; Chip-firing game; Treewidth; Tree decomposition; Monotone search strategy; Slice rank; Finite field; System of balanced linear equations; Convex cone; Partially ordered vector space; Ordered tensor product; Face; Extremal ray; Order ideal","en","doctoral thesis","","978-94-6384-425-3","","","","","","","","","Discrete Mathematics and Optimization","","",""
"uuid:66f0c152-65a0-45bc-b542-ba9799d6a0c1","http://resolver.tudelft.nl/uuid:66f0c152-65a0-45bc-b542-ba9799d6a0c1","The Circle of DL-SCA: Improving Deep Learning-based Side-channel Analysis","Wu, L. (TU Delft Cyber Security)","Lagendijk, R.L. (promotor); Picek, S. (copromotor); Delft University of Technology (degree granting institution)","2023","For almost three decades, side-channel analysis has represented a realistic and severe threat to embedded devices' security. As a well-known and influential class of implementation attacks, side-channel analysis has been applied against cryptographic implementations, processors, communication systems, and, more recently, machine learning models. Two reasons make these attacks powerful. First, they take advantage of unintended information leakages that the security designer could easily forget. These leakages can be conveyed from various sources, such as power consumption, electromagnetic emanations, time, temperature, and acoustic and photonic emissions. Protection from such leakages can be challenging and costly. Second, such attacks do not require complicated and expensive equipment or frameworks. Commonly, an adversary uses an oscilloscope to monitor some of those side-channel leakages, then performs statistical analysis to find the relation between the leakages and the actual executed values, and finally uses these relations to recover secret information.
Fortunately, hardware and software developers are prepared for these attack methods. Several protection mechanisms, also called side-channel countermeasures, have been implemented to increase the security assurance of their devices. However, this cat-and-mouse game has now changed because of the rise of artificial intelligence in side-channel analysis. Some countermeasures, resilient to conventional methods, can be easily bypassed by machine learning. This thesis aims to improve the capability of side-channel analysis using deep learning techniques. Specifically, we propose approaches covering complete deep learning-based side-channel analysis procedures (we denote them as ""The Circle of DL-SCA""). Before applying the leakages to launch actual attacks, in chapter 2, we offer strategies for improving the ''quality'' of leakages from various aspects. Then, in chapter 3, the study focuses on critical deep learning hyperparameters and proposes two automated neural architecture search methods that relieve evaluators of the burden of tuning the neural network.
Besides developing new attack strategies, we also focus on the existing attack methods and investigate how to enhance their efficiency, robustness, and explainability. Chapter 4 introduces an efficient learning scheme that can reduce the number of required training traces. Then, we develop an attack evaluation metric that can reliably reflect the performance and robustness of the model. In chapter 5, we create a novel methodology to evaluate the influence of noise and countermeasures on deep-learning models, then apply the research outcomes to design low-cost deep-learning-resilient countermeasures. Our research outcomes will push designers to develop more secure devices. The feed-forward loop between us (researchers) and designers can eventually make the electronic world more secure.","Side-channel analysis; Deep learning (DL); Pre-processing; Hyperparameter tuning; Metric; Countermeasures","en","doctoral thesis","","9789464730678","","","","","","","","","Cyber Security","","",""
"uuid:ac984b23-c4e1-4bf4-9668-5e3d54aec3ff","http://resolver.tudelft.nl/uuid:ac984b23-c4e1-4bf4-9668-5e3d54aec3ff","Cleavage Fracture Micromechanisms of High Strength Steel and its Heat-Affected Zones","Morete Barbosa Bertolo, V. (TU Delft Team Vera Popovich)","Popovich, V. (promotor); Sietsma, J. (copromotor); Walters, C.L. (copromotor); Delft University of Technology (degree granting institution)","2023","The use of materials in increasingly severe service conditions raises concerns about structural safety with respect to cleavage fracture. There are three main material-related challenges that structures face under harsh environments: 1) the trade-off between strength and toughness; 2) the ductile-to-brittle transition behaviour of BCC high strength steels; 3) the inhomogeneous microstructures found in multiphase steels, thick-section steels, and welded structures. Therefore, the objective of this research is to systematically investigate the cleavage fracture micromechanisms in high strength steels considering diverse microstructures (e.g., as-received commercial steel, thermally simulated heat-affected zones, and grain refined microstructure) and experimental conditions (e.g., plastic constraint and temperature). Thereby, this study provides a thorough understanding of the effect of the microstructural details on cleavage fracture behaviour of high strength steel structures allowing for failure control and improvement of cleavage-resistant steel’s design...","Cleavage fracture toughness; High strength steels; Micromechanisms; Multiphase; Heat-affected zones; Grain refinement; Microstructural characterisation","en","doctoral thesis","","978-94-6469-265-5","","","","","","","","","Team Vera Popovich","","",""
"uuid:99f7be1f-af10-4f12-8331-f1d65c76bd8e","http://resolver.tudelft.nl/uuid:99f7be1f-af10-4f12-8331-f1d65c76bd8e","Towards labonachip optical trapping and Raman spectroscopy of extracellular vesicles using multiwaveguide traps","Loozen, G.B. (TU Delft ImPhys/Computational Imaging)","van Vliet, L.J. (promotor); Stallinga, S. (promotor); Delft University of Technology (degree granting institution)","2023","Optofluidic lab-on-a-chips (LOCs) employing a dual-waveguide trap for optical trapping and Raman spectroscopy have proven to be attractive and potent tools for high throughput chemical fingerprinting of bio-particles for disease diagnosis. Among the relevant bio-particles are extracellular vesicles (EVs) which a been proven through recent studies to be potential biomarkers for identification of diseases, such as cancer. However, EVs are small with diameters ranging between 30 and 1000 nm and present a challenge for both on-chip optical trapping and Raman Spectroscopy. The research presented in this thesis is aimed at the development of a multi-waveguide optical trap aimed at the combined on-chip optical trapping and Raman spectroscopy for biochemical characterisation of single EVs.
Firstly, the capabilities and limitations of a dual-waveguide trap for stable on-chip optical trapping of EVs are investigated through an in-depth simulation study. This ultimately yields a comprehensive overview of stable trapping conditions for EVs in terms of EV diameter and refractive index, and the injected optical power.
Then, novel multi-waveguide traps are designed and fabricated. These multi-waveguide traps lead to stronger light confinement in the channel, resulting in improved optical trapping and Raman signal generation. This is demonstrated experimentally by comparing the optical trap stiffness values and the recorded Raman signal strength of polystyrene beads between a 2-waveguide and a 16-waveguide trap.
Finally, the 16-waveguide trap is used to demonstrate optical trapping of B. subtilis spores, as an intermediate step towards EVs. Optical trapping of the spores is studied with both experiments and simulations. Special attention is paid to the effect of random phase differences between the beams exiting the waveguides on the optical trap quality.
In conclusion, the results show promising prospects for the realisation of multi-waveguide traps for on-chip biochemical fingerprinting of EVs with optical trapping and Raman spectroscopy.
In this thesis work, we, for the first time, successfully employed extrusion-based 3D printing techniques to fabricate biodegradable porous Mg and Mg-based scaffolds for application in orthopedics. We started with the optimization of the formulated binder system, the printing process, and the subsequent liquid-phase sintering process for the AM of Mg and Mg-based scaffolds. On this basis, a series of Mg and Mg-based porous scaffolds, including Mg alloy and Mg matrix composite scaffolds, were successfully fabricated. Then, we conducted comprehensive studies on the microstructure, geometrical characteristics, in vitro biodegradation behavior, and mechanical properties of the fabricated scaffolds, as well as the responses of preosteoblast MC3T3-E1 cells to them, to evaluate the ability of the fabricated scaffolds to satisfy the requirements of ideal bone-substituting biomaterials. By modifying the alloy composition and adding bioceramic components, the required properties of the Mg scaffolds were significantly improved as compared to those of the pure Mg specimens. The fabricated Mg-matrix composite scaffolds were shown to be the most promising materials to be further developed for bone substitution. Surface modification could also contribute to bringing the fabricated Mg scaffolds closer to meeting the requirements. Therefore, with proper material design and surface modification, the Mg-based scaffolds fabricated using the extrusion-based 3D printing technique constitute a new category of porous Mg-based biomaterials that hold great promise for application as bone substitutes.","additive manufacturing; scaffold; biodegradation; mechanical property; biocompatibility; magnesium","en","doctoral thesis","","978-94-6384-426-0","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:12708aca-dff2-4d59-aa0e-af5c275aa728","http://resolver.tudelft.nl/uuid:12708aca-dff2-4d59-aa0e-af5c275aa728","Anomaly Detection and Synthetic Data Generation for Power Systems Using Autoencoder Neural Networks","Wang, C. (TU Delft Intelligent Electrical Power Grids)","Palensky, P. (promotor); Tindemans, Simon H. (copromotor); Delft University of Technology (degree granting institution)","2023","The scale of the power system has been significantly expanded in recent decades. To gain real-time insights into the power system, an increasing number of sensors have been deployed tomonitor grid states, resulting in a rapidly growing number of measurement points. Simultaneously, there has also been a rise in the penetration of renewable energy generation, with energy production that is highly variable and exhibits strong interdependence between different production locations. Such interdependence also applies to electricity demand at various network positions. Furthermore, new demandside response strategies and policies enhance the flexibility of the power system, leading to changes in load profiles. These developments, combined with the structure of the network itself, mean that measurements in the power system generally exhibit strong dependencies. This dependency means that if you know one or more values, you can infer information about others. This applies to time series with measurements that follow each other chronologically as well as to snapshots that show different states of the system at a particular moment in time. A large collection of such time series and snapshots can be represented as a probability distribution in a multidimensional data space. 
While larger numbers of measurements enable smarter grid operations, high-dimensional stochastic variables with complex univariate and multivariate distributions could also complicate tasks in modeling power system data...","Anomaly Detection; Synthetic Data Generation; Autoencoder; Power System Operation and Planning; Machine Learning","en","doctoral thesis","","978-90-833109-4-7","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:e6002163-58f0-48e1-bce2-02ac111a8adf","http://resolver.tudelft.nl/uuid:e6002163-58f0-48e1-bce2-02ac111a8adf","Complexity of Electron Transport in Nanoscale Molecular Junctions","Ornago, L. (TU Delft QN/van der Zant Lab)","van der Zant, H.S.J. (promotor); Grozema, F.C. (promotor); Delft University of Technology (degree granting institution)","2023","In this dissertation, we analyse the charge transport of nanoscale molecular junctions in mechanically controllable break junction (MCBJ) experiments. In particular, we focus on the characterization of molecular features going beyond the ""single-peak"" picture, that is, considering features in the measurements in addition to themost prominent one. To achieve this goal, we use a combination of improved experimental techniques and data analysis...","Single-molecule; Molecular electronics; Break Junction; Nanoscale Charge Transport; Molecule-Metal Interface; Nanotechnology","en","doctoral thesis","","978-90-8593-552-0","","","","","","","","","QN/van der Zant Lab","","",""
"uuid:e9420781-7c76-453b-bacb-d806c8bf5fa8","http://resolver.tudelft.nl/uuid:e9420781-7c76-453b-bacb-d806c8bf5fa8","Efficient Earthquake Inversion using the Finite Element Method","van Zwieten, G.J. (TU Delft Mathematical Geodesy & Positioning)","Hanssen, R.F. (promotor); van Brummelen, E.H. (promotor); Delft University of Technology (degree granting institution)","2023","A vital component in the management of seismic hazard is the study of past seismic events. Classically, this has been the domain of seismology, which studies the dynamic manifestations of the event to infer properties such as epicenter and moment magnitude. More recently it has become possible to perform similar analyses on the basis of the static consequences of a seismic event, as satellite borne Synthetic Aperture Radar (SAR) data allows us to compare the local surface geometries before and aftera seismic event. The locality of the deformation data promises reconstructions with greater detail and subject to fewer model uncertainties.
With current technology, it is not possible to use SAR to its full potential. The non-linearity of the static dislocation problem that links faulting mechanisms to observed deformations causes any inverse method to require many evaluations of the forward model. This poses limits on the permissible cost of solving the dislocation problem, restricting most approaches to simplified model assumptions such as material homogeneity and absence of topography. In situations where more accurate information is available, this presents a clear opportunity for improvement by accelerating the computational methods instead.
This thesis presents the Weakly-enforced Slip Method (WSM), a modification of the Finite Element Method (FEM), as a fast approach for solving static dislocation problems. While the computational cost of the WSM is similar to that of the FEM for single dislocations, the WSM is significantly faster when many different dislocation geometries are considered, owing to the reuse of computationally expensive components such as matrix factors. This property makes the method ideally suited for inverse settings, opening the way to incorporating all available in situ data in a forward model that is simultaneously flexible and cheaply evaluable. Moreover, we prove that the WSM retains the essential convergence properties of the FEM.
A limitation of the WSM is that it produces continuous displacement fields, which implies a large error local to the dislocation. We show that this error decreases rapidly with distance, and that in a typical scenario the majority of deformation data has a discretization error that is smaller than observational noise, particularly when a fault is buried. In the case of shallow or rupturing faults, neighbouring data needs to be discarded from the analysis to avoid disruption. With this measure in place, we show via Bayesian inference of synthesized datasets that the discretization errors of the WSM do not significantly affect the inverse problem.","earthquake; surface deformation; elastic dislocation; synthetic aperture radar interferometry (InSAR); Finite Element Method (FEM); Weakly-enforced Slip Method (WSM); inverse problem","en","doctoral thesis","","978-94-6366-673-2","","","","","","","","","Mathematical Geodesy & Positioning","","",""
"uuid:4e7b7a31-aa89-41df-b5bc-ffe66cb0a0f9","http://resolver.tudelft.nl/uuid:4e7b7a31-aa89-41df-b5bc-ffe66cb0a0f9","Electrode/electrolyte interfaces in (photo-)electrochemical devices","Venugopal, A. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Smith, W.A. (promotor); Houtepen, A.J. (promotor); Delft University of Technology (degree granting institution)","2023","Climate change, due to the continued emissions of carbon dioxide into our atmosphere and subsequent warming of the planet, is an existential crisis facing humanity. The rising temperature is are resulting in the melting of our ice caps and glaciers, influencing the weather patterns, are making many parts of the world unliveable and are driving many species into extinction. If the current trend is continued, catastrophic and irreversible effects to our planet is almost guaranteed. To tackle this issue, we need to immediately find a viable replacement for the carbon emitting fossil fuels currently used in our energy, transportation and chemical infrastructures. Moving to renewable energy sources like wind and solar energy is already proving to be the answer to this problem. However, in-order to scale the use of these non-emitting sustainable energy sources rapidly, some of the issues associated with these sources like the intermittency and the heavy reliance of fossil fuels in the hard to electrify sectors need to be dealt with....","Interfaces; metal oxide semiconductors; (photo-) electrochemical systems; photocharging; infrared spectroscopy; polymer modified catalysts; water oxidation","en","doctoral thesis","","978-94-6419-756-3","","","","","","2023-07-01","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:cb101545-298b-47f4-9e5a-729283f5fdd7","http://resolver.tudelft.nl/uuid:cb101545-298b-47f4-9e5a-729283f5fdd7","Design and Evaluation of Dedicated Lanes for Connected and Automated Vehicles","Razmi Rad, S. (TU Delft Transport and Planning)","van Arem, B. (promotor); Hoogendoorn, S.P. (promotor); Farah, H. (promotor); Delft University of Technology (degree granting institution)","2023","Dedicated lanes have been proposed as a potential scenario for the deployment of connected and automated vehicles (CAVs) on the road network. However, knowledge on the design and operation of DLs and their impacts on the behaviour of drivers of CAVs and manual vehicles is lacking in the literature. This dissertation provides a research agenda on design and operation of dedicated lanes and investigates the impacts of such lanes on the behaviour of human drivers.","Connected and automated vehicles; Dedicated lanes; Driver behavior","en","doctoral thesis","","978-90-5584-323-7","","","","","","","","","Transport and Planning","","",""
"uuid:de97d58f-5d0e-47a6-b649-827062216fae","http://resolver.tudelft.nl/uuid:de97d58f-5d0e-47a6-b649-827062216fae","Customized 3D and 4D Design for Machine Knitting","Liu, Z. (TU Delft Emerging Materials)","Wang, C.C. (promotor); Geraedts, Jo M.P. (promotor); Doubrovski, E.L. (copromotor); Delft University of Technology (degree granting institution)","2023","Garments, one of the human basic needs, were customized and handmade before the Industrial Revolution. After the realization of mass production, the cost of a piece of clothing became lower, but some disadvantages arose. Garments were no longer made to measure and overproduction caused environmental problems. The new developments in digital garment design and digital customization target addressing these limitations.
The computational design of knitting has attracted increased attention in recent years. In this dissertation, we consider the customized design and fabrication of 3D and 4D garments as knitwear. The 3D knitwear fits the target human body, and the 4D knitwear also considers comfort during body movement. The main research question (RQ) is: How to design customized 3D and 4D knitwear and generate instructions for a digital knitting machine?
In this dissertation, we researched computational knitwear design methods. We considered not only 3D fitting but also comfort during motion (4D). Our research can be applied in garment production (especially mass customization) or other knitting applications. Garment designers and other industrial designers can use the proposed methods to generate knitting instructions for free-form 3D surfaces. Our 4D design method helps designers place elastic or other varied knitting structures while keeping the intended 3D shape. This dissertation presents new perspectives on computational approaches to existing manufacturing techniques. It also provides enough details to further develop such design systems to be applied in practice.","knitting; computational design; computational fabrication; 3D garment; 4D garment","en","doctoral thesis","","978-94-6384-423-9","","","","","","","","","Emerging Materials","","",""
"uuid:1a678f17-c9c5-46c7-aca2-0dea1f00d1fa","http://resolver.tudelft.nl/uuid:1a678f17-c9c5-46c7-aca2-0dea1f00d1fa","Phase-Coded FMCW for Automotive Radars","Kumbul, U. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Silveira Vaucher, C. (promotor); Petrov, N. (copromotor); Delft University of Technology (degree granting institution)","2023","Autonomous driving is a new emerging technology that will enhance traffic safety. Automotive radars are essential to attaining autonomous driving since they can function in adverse weather conditions and are used for detection, tracking, and classification in traffic settings. However, the dramatic growth in the number of radar sensors used for automotive radars has raised concerns about spectral congestion and the coexistence of radar sensors. The mutual interference between multiple radar sensors downgrades the sensing performance of automotive radar and needs to be mitigated. Moreover, automotive radars have limited processing power, preventing them from using computationally heavy techniques to countermeasure interference. This thesis aims at developing, evaluating and verifying a robust waveform with required processing steps suitable for automotive radars to boost the coexistence of multiple radar sensors. To achieve this task, phase-coded frequency modulated continuous wave (PC-FMCW) and necessary processing steps are studied.
The first step is taken by investigating the sensing properties of the PC-FMCW waveforms and possible receiver strategies in Chapter 2. It is demonstrated that the ambiguity function of the code is sheared after frequency modulation. Moreover, different binary phase codes are examined with the PC-FMCW waveforms, and their sensing performance is compared in terms of integrated sidelobe level. Subsequently, two receiver approaches based on the dechirping process to decrease the sampling demands of the PC-FMCW waveforms are examined. The sensing performance of the investigated receiver approaches is compared, and the trade-offs between the sensing performance and the code bandwidth are analyzed. Moreover, the PC-FMCW waveform is applied to a real scenario, and the sensing performance of the investigated receiver structures is validated experimentally.
Chapter 3 investigates the beat signal spectrum widening due to coding and explores the smoothed phase-coded frequency modulated continuous wave (SPC-FMCW) to improve the sensing performance within the limited receiver analogue bandwidth. The abrupt phase changes seen in binary phase-coded signals are analyzed, and a phase smoothing operation to reduce the spectral broadening of the coded beat signals is proposed. The introduced SPC-FMCW waveforms are analyzed in different domains and compared with binary phase coding. It is shown that the proposed smoothing operation decreases the spectral broadening of the coded beat signal and improves the sensing performance of the waveform.
In Chapter 4, the limitation in the group delay filter receiver approach is investigated, and the appropriate receiver strategy with low computational complexity is designed to process the PC-FMCW waveforms. The impact of the group delay filter on the coded beat signal is examined in detail, and a phase lag compensation is proposed to enhance decoding performance. It is demonstrated that performing phase lag compensation on the transmitted code eliminates the undesired effects of the group delay filter, and the beat signal is recovered properly after decoding. Then, the properties of the resulting waveforms are theoretically examined, and the sensing performance improvement over the existing approach is demonstrated. Moreover, both sensing and cross-isolation performance of the introduced waveforms with proposed processing steps are validated experimentally.
Chapter 5 studies the PC-FMCW waveforms for a coherent multiple-input-multiple-output (MIMO) radar. To this end, the MIMO ambiguity functions of the PC-FMCW waveform with different code families are investigated for their separation capability and compared with the PMCW waveform. It is illustrated that the PC-FMCW ambiguity function outperforms the PMCW one in terms of range resolution, Doppler tolerance, and sidelobe level for identical code types. Afterwards, the phase-lag-compensated waveform developed for the single transmitter-receiver approach is applied to a coherent MIMO radar, and a novel PC-FMCW MIMO structure is proposed. The introduced MIMO structure jointly utilizes phase coding in both fast-time and slow-time to achieve low sidelobe levels in the range-Doppler-azimuth domains while maintaining high range resolution, unambiguous velocity, good Doppler tolerance and low sampling requirements. The sensing performance of the introduced MIMO structure is evaluated and compared with state-of-the-art techniques. Moreover, the proposed MIMO structure's practical limitations are investigated and demonstrated. In addition, the sensing performance of the developed approach with simultaneous transmission is verified experimentally.
Finally, the interference resilience and communication capabilities of the developed PC-FMCW radar are studied in Chapter 6. First, the automotive radar interference problem between various types of continuous waveforms is examined. The interference analysis formulation is extended to PC-FMCW waveforms, and a generalised radar-to-radar interference equation is proposed. The introduced equation can be utilised to quickly and accurately derive the numerous interference scenarios discussed in the literature. In addition, the proposed equation's validity to characterise the victim radar's time-frequency distribution is demonstrated experimentally using commercially available off-the-shelf automotive radar transceivers. Afterwards, the robustness of the developed PC-FMCW radar against different types of FMCW interference is examined, and an improvement in the sensing performance over the conventional FMCW waveform is demonstrated. Moreover, the communication performance of the PC-FMCW with dechirping receivers is compared, and the trade-off between the bit error rate and the code bandwidth is investigated.
This thesis shows that the developed PC-FMCW radar structure can provide high mutual orthogonality to enhance the functioning of multiple radars within the same frequency bandwidth while sustaining the low sampling demand and good sensing performance. Consequently, the introduced approach can be effectively utilized by automotive radars to mitigate mutual interference between multiple radar sensors and improve the sensing performance of simultaneous MIMO transmission. Although the focus is on the application in an automotive radar context, the developed approach can also be used in other radar fields.","Automotive Radar; Phase-Coded Chirps; Interference Mitigation; MIMO Radar; Mutual Orthogonality; Radar Signal Processing","en","doctoral thesis","","978-94-6384-420-8","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:8890a4ca-c30a-42e3-9767-a0bf2e16a25a","http://resolver.tudelft.nl/uuid:8890a4ca-c30a-42e3-9767-a0bf2e16a25a","Estimation of multiple components and parameters for quantitative MRI","Nagtegaal, M.A. (TU Delft ImPhys/Computational Imaging; TU Delft ImPhys/Vos group)","Vos, F.M. (promotor); van Osch, Matthias (promotor); de Bresser, Jeroen (copromotor); Poot, D.H.J. (copromotor); Delft University of Technology (degree granting institution)","2023","Magnetic Resonance Imaging (MRI) is a flexible medical imaging technique that facilitates measurement of a wide range of contrasts, particularly in soft tissue (e.g. brain and heart). Conventionally, qualitative images are acquired in which certain physical tissue properties are emphasized, such as the transverse and longitudinal relaxation times. Such images are frequently referred to as ""weighted"", e.g. T1-weighted. Quantitative MRI (qMRI) aims at measuring the underlying tissue parameters governing the contrast instead of yielding mere weighted images. These quantitative parameter estimations were proven to be more reproducible than conventional MR images and more sensitive to certain disease processes, enabling enhanced longitudinal comparisons within subjects as well as comparisons between subjects.
MR Fingerprinting (MRF) is an example of such a quantitative technique. MRF uses a combination of transient state acquisitions with varying flip angle patterns, severe undersampling and advanced signal models to allow for fast qMRI acquisitions and accurate estimation of a wide range of parameters.
While most qMRI methods assume a single tissue type per voxel, this is almost never a valid assumption. The assumption especially breaks down at tissue boundaries or when tissues consist of multiple, mixed compartments, such as water contained between myelin sheaths in the brain (often called myelin water) surrounded by extra-cellular water.
The goal of this thesis is to develop enhanced methodology for quantitative MRI by extending traditional signal and image post-processing methods. Specifically, the focus is on MR Fingerprinting in combination with multi-component estimations, in which different compartments are included in a mixed estimation model. This is done to obtain more information from the acquired data and to improve quantification, thereby possibly yielding new clinical insights. Important steps towards clinical use are to enhance estimation accuracy and precision compared to previous methods and to reduce the scan time.
In this thesis the Sparsity Promoting Iterative Joint NNLS (SPIJN) algorithm is proposed for obtaining multi-component estimations from MRF data. This enabled sub-voxel, fractional estimation of signal components in a region of interest, without making a priori assumptions about the tissues expected to be present. The main novelty of this method is the combination of a non-negativity constraint with a joint-sparsity constraint that limits the total number of tissues identified in a region of interest. As a result, it became possible to obtain magnetization fraction maps of the white matter, gray matter, CSF and a component with shorter relaxation times related to myelin water.
The repeatability of the proposed method is studied in 5 subjects who were each scanned 8 times at one-week intervals. Comparison of the obtained white matter, gray matter and CSF maps with segmentations from conventional methods shows high repeatability of the estimated relaxation times and more fine structures in the CSF magnetization fraction maps.
Additionally, the proposed SPIJN algorithm was applied to data from a more conventional qMRI sequence, i.e. a multi-echo spin-echo sequence, to obtain estimations of the so-called myelin water fraction in the brain. The resulting images show significantly improved noise robustness compared to the standard multi-component analysis method, improving the usability.
MRF scans can be acquired in a relatively short acquisition time of less than 30 seconds per slice, but this will still result in 15 minutes of total scan-time when full brain coverage is needed. A further reduction in acquisition time is desirable for clinical usage, in which every minute counts. Therefore, improved reconstruction methods for MRF data are proposed, especially tailored to multi-component estimations. In in vivo scans we showed the improved image quality enabled by the proposed methods.
In another study, we applied the SPIJN algorithm to MRF brain scans of MS patients. In the obtained results, we observe that white matter changes are reflected in a component with prolonged transverse relaxation times, which is less pronounced in data of healthy controls. We hypothesize that the observed component reflects an increase in extra-cellular water and allows for early characterization of white matter damage.
In a related project, an adaptation on the SPIJN algorithm was introduced that is more sensitive to small local changes. The adjusted algorithm is applied to imaging data of MS patients and it is shown that it can help to identify small cerebral lesions.
MRF sequences can be chosen rather freely; to further reduce the scan time and the estimation error, these sequences can be optimized. A method is proposed in which parameter maps of the brain are used as a reference upon which the MRF flip-angle series is optimized, taking into account the used undersampling trajectories. As a result, undersampling errors, a major source of estimation errors, are effectively minimized.
Finally, we investigated an adjusted simulation method of MRF sequences that is able to accurately model the effects of through-plane motion, which is a major source of errors in MRF scans. Such a model may support the development of new retrospective correction methods for this type of motion as it enables proper simulation of its effects.
In summary, this thesis proposes new methods for multi-component reconstruction and analysis, sequence optimization and studying the effects of motion in MRF and further investigates the possibilities of multi-component MRF.","Quantitative MRI; Multi-component estimations; MR Fingerprinting; Myelin water; Sequence optimization","en","doctoral thesis","","978-94-6384-421-5","","","","","","","","","ImPhys/Computational Imaging","","",""
"uuid:14619578-e44f-45bb-a213-a9d179a54264","http://resolver.tudelft.nl/uuid:14619578-e44f-45bb-a213-a9d179a54264","Wake and wind farm aerodynamics of vertical axis wind turbines","Huang, M. (TU Delft Wind Energy)","Ferreira, Carlos (promotor); Sciacchitano, A. (copromotor); Delft University of Technology (degree granting institution)","2023","The development of offshore wind energy, especially the steps towards deep water and/or higher density wind farms, revives the prospects of vertical axis wind turbines (VAWTs). Because VAWTs may reduce the cost of floating structures, there is a potential to lower energy costs. However, VAWTs are often assumed to be less efficient and less reliable due to a lack of understanding of their complex aerodynamics. This research is motivated by the fact that the performance of isolated turbines is no longer the most important factor, but rather performance at the wind farm level. The objective is to comprehend the possible performance of VAWTs in a wind farm. This dissertation advances the knowledge of wind farm aerodynamics of VAWTs mainly in four aspects: a) It demonstrates the relationship between the rotor loading and wake deflection/deformation, indicating directions for simplified modelling of VAWT wake control; b) It identifies vital characteristics of a VAWT wake, confirming the positive effects of wake deflection on wake recovery and interaction; c) It presents high-fidelity experimental data on the wakes and wake interactions of VAWTs placed upwind and downwind, and validates some cutting-edge models with the data; d) It demonstrates the potential of increased power performance of VAWT arrays by controlling the VAWT flow fields. In pursuit of these advances, the dissertation identifies and tackles a series of research topics. The first is on simplified wake models. The state-of-the-art VAWT wake models are mostly transposed from those for HAWTs, based on the planar actuator disc model.
However, the effects of the actuator discs’ shape, specifically the aspect ratio of rectangular ones (corresponding to VAWTs with various height-to-width ratios), on the wake recovery are not considered. We propose the effective mixing diameter D∗ to normalise the shape effects on the wake velocity recovery based on momentum conservation. D∗ is validated through particle image velocimetry (PIV) experiments and Reynolds averaged Navier-Stokes (RANS) simulations, and it outperforms the existing scaling lengths in the literature. The dissertation further questions the validity of planar actuators as surrogates of VAWTs. It compares the three-dimensional wakes of an actuator disc and a lab-scale VAWT using robotic volumetric PIV. The comparison reveals substantial differences in the vortex systems, pointing out the limitations of planar actuators in reproducing VAWT wakes, especially when the wakes are deflected. The results indicate that surrogates for VAWTs should be three-dimensional, coinciding with the swept areas of the blades. Based on the three-dimensional actuator cylinder model and a simplified formulation of the vorticity transport equation, we demonstrate the underlying physics of the generation of the streamwise vortex system, highlighting the effect of different load distributions on the wake convection and mixing. We propose four idealised force distributions resulting in different vortex systems and wake topologies; the proposed model is validated qualitatively with stereoscopic PIV measurements on a lab-scale VAWT. We quantify the faster wake recovery resulting from the wake deflection using the experimental data. Furthermore, the wake interaction of two VAWTs placed upwind and downwind is investigated experimentally via PIV and load measurements. The upwind VAWT with positively pitched blades deflects the wake significantly, improving the inflow condition of the downwind VAWT and thus increasing the overall extraction of the streamwise momentum.
With the high-quality experimental data, we validate the state-of-the-art analytical wake models and simulations for VAWTs and identify their validity ranges. Two analytical wake models (the Jensen model and the Bastankhah-Porte-Agel model), five wake superposition models (four algebraic models and one momentum-conservation based model) and an unsteady Reynolds averaged Navier-Stokes (URANS) simulation with VAWTs represented by the actuator line model are compared in both isolated and interaction scenarios. Based on the validated URANS simulation, we explore the wake deflection effects on the enhancement of wind power extraction for two up-scaled VAWTs placed in tandem. The blades of these large H-type VAWTs operate at a high Reynolds number (chord-based, Rec ≈ 1×10⁷), which ensures a high stall angle; the tip speed ratio is set to a relatively high value (4.5) to avoid severe dynamic stall. Thus, the simulated VAWTs are optimised for engineering operations and perform better than the lab-scale model introduced earlier. Combinations where each turbine operates at three different fixed pitch angles (-10◦, 0◦, 10◦), resulting in different wake deflections, are compared. With wake deflections, the overall power coefficient is increased by up to 45% for a tested configuration, which also depends on the inter-turbine distances. Most interestingly, when the turbine blades are pitched in the same direction, the vorticity system in the wake is enhanced and thus yields a flying formation effect for a VAWT array. Furthermore, wakes of three inline VAWTs are scrutinised, focusing on the wake interactions, floor effects and momentum recovery. For all the cases, the three VAWTs’ blades are pitched in the same direction following the so-called flying formation scheme.
The vertical flux of momentum is notably enhanced by the VAWT array with positive blade pitches even with the floor present, which is vital to the overall increase of power extraction in a wind farm operating in the atmospheric boundary layer. The overall power extraction is increased by 35% compared to the array with zero blade pitches; more importantly, the downwind VAWTs increase their performance by 113%-154%. The latter indicates the tremendous potential of large wind farms consisting of VAWTs employing blade pitching.","Vertical axis wind turbines; actuator surfaces; wake; wake deflections; wake interactions; vortex system; particle image velocimetry","en","doctoral thesis","","978-94-6366-670-1","","","","","","","","","Wind Energy","","",""
"uuid:e88a9277-41a0-495e-8af2-561fbe1a543b","http://resolver.tudelft.nl/uuid:e88a9277-41a0-495e-8af2-561fbe1a543b","Insight into the multifunctionality of TiO2-based catalyst","Meeprasert, J. (TU Delft ChemE/Inorganic Systems Engineering)","Pidko, E.A. (promotor); Li, G. (copromotor); Delft University of Technology (degree granting institution)","2023","Computational chemistry provides powerful research tools for catalysis. It potentially allows us to study the structures of the catalytic sites and reaction mechanisms, which are difficult to observe only by experiment. This is particularly true for supported heterogeneous catalysts, of which reactivity and catalytic behavior are directly related to the presence of various functional groups and reactive ensembles on their surfaces. Such surface heterogeneities give rise to the formation of multifunctional reactive ensembles ready to convert substrate molecules to the desired products efficiently. At the same time, the presence of various reactive centers on the surface may contribute to undesirable conversion paths. Understanding the role of the multifunctional reaction environments established on the complex surfaces of supported heterogeneous catalysts is key to formulating design rules for achieving control over their activity and selectivity....","","en","doctoral thesis","","978-94-6384-413-0","","","","","","","","","ChemE/Inorganic Systems Engineering","","",""
"uuid:a4d2f3d3-74bc-4dcd-9e5a-71deb5f74a38","http://resolver.tudelft.nl/uuid:a4d2f3d3-74bc-4dcd-9e5a-71deb5f74a38","Extrusion-based 3D printing of biodegradable porous iron for bone substitution","Putra, N.E. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Zhou, J. (promotor); Apachitei, I. (copromotor); Delft University of Technology (degree granting institution)","2023","The treatment of large bone injuries continues to be challenging partially due to the limited quantity and quality of bone-replacing materials. Iron (Fe) and its alloys have been developed as a group of load-bearing biomaterials. Recent advances in additive manufacturing (AM) have enhanced the potential of Fe-based biomaterials as biodegradable bone substitutes. Firstly, AM Fe-based implants can now be personalized to exactly match the geometry of bony defects. Secondly, AM Fe-based implants with macro- and micro-scale porosities can mimic the mechanical properties of the native bony tissue. The mechanical properties can also be tuned to sustain over the biodegradation period until the new bone tissue takes over their biomechanical function. Finally, AM offers a pathway for in situ or ex situ alloying as well as for other types of multi-material printing to achieve multiple functionalities, such as paramagnetic properties, high rates of biodegradation, and, most importantly, bioactivity (e.g., to induce the osteogenic differentiation of stem cells or to ward off implant-associated infections). 
This thesis contributes to the design of biodegradable Fe-based scaffold material configurations and the development of the associated fabrication technology, with a focus on simultaneously achieving an appropriate biodegradation rate, paramagnetic behavior, trabecular-bone-mimicking mechanical properties, and osteogenic properties.","extrusion-based 3D printing; multi-material additive manufacturing; iron; iron-manganese alloy; iron-akermanite composite; iron-manganese-akermanite composite; Biodegradable; porous; biomaterial; scaffolds; bone tissue engineering","en","doctoral thesis","","978-94-6384-416-1","","","","","","2023-04-30","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:b57a9199-45ef-40d4-a4ed-a9a124fb39ed","http://resolver.tudelft.nl/uuid:b57a9199-45ef-40d4-a4ed-a9a124fb39ed","The accommodation of martensitic phase transformation strains by the ferritic matrix in dual-phase steels","Atreya, V. (TU Delft Team Maria Santofimia Navarro)","Santofimia, Maria Jesus (promotor); Bos, C. (copromotor); Delft University of Technology (degree granting institution)","2023","Dual-phase (DP) steels are an important class of advanced high-strength steels (AHSS) and constitute a major share of steels for the automotive industry. A microstructure consisting of hard martensite embedded in a soft ferritic matrix gives them a good combination of strength and ductility. The martensite formation in the microstructure from austenite involves a shape and volume change, which is accommodated by the deformation of the surrounding ferritic matrix. This accommodation is known to impart typical characteristics in DP steels such as the absence of a yield point, continuous yielding and high initial work hardening rate. This thesis is an attempt to understand and model the aforementioned accommodation process in the ferritic matrix of DP steels. Traditionally, in predictive modelling of DP steel mechanical behaviour, the region of ferrite which undergoes deformation to accommodate martensitic transformation is taken into consideration as a constant thin layer of strain-hardened ferrite at the ferrite/martensite interface. This approach is shown to be inadequate for capturing local variations in ferrite deformation. Hence, electron backscatter diffraction (EBSD) experiments were carried out to study in detail the influence of various microstructural features on local variations in the transformation-induced deformation of ferrite. It was found that the crystallographic orientation of ferrite grains, martensite variant and its prior austenite grain (PAG) play an important role in determining the extent of transformation-induced deformation of ferrite. 
Taking a cue from this, a novel methodology comprising sequential experimental and numerical research on DP steels is developed which combines the results of PAG reconstruction, phenomenological theory of martensite crystallography (PTMC) and EBSD orientation data to estimate ferrite deformation due to every martensitic variant formed, via full-field micromechanical calculations on a virtual DP steel microstructure. Furthermore, the influence of self-accommodation during martensite variant formation on transformation-induced deformation of ferrite was also investigated. It is shown that the higher the number of variants which form from a PAG, the less the deformation caused by that PAG in the surrounding ferritic matrix. This is because of a decrease in the effective magnitude of the shear component of martensitic transformation during multi-variant transformation. The scientific findings presented in this work can be used for developing predictive models for the mechanical behaviour of not only DP steels but any multiphase steels which exhibit plastic accommodation and residual stresses in their microstructure due to martensitic phase transformation.","Dual-Phase Steel; Martensitic phase transformation; Plastic deformation; Micromechanical modelling; Electron backscatter diffraction; Self accommodation; Martensite variants","en","doctoral thesis","","978-94-6384-424-6","","","","","","","","","Team Maria Santofimia Navarro","","",""
"uuid:bf83b94a-4438-47c7-bfca-7a93334d79e4","http://resolver.tudelft.nl/uuid:bf83b94a-4438-47c7-bfca-7a93334d79e4","Seismic-interferometric applications for near-surface and mineral exploration","Balestrini, F.I. (TU Delft Applied Geophysics and Petrophysics)","Draganov, D.S. (promotor); Ghose, R. (promotor); Delft University of Technology (degree granting institution)","2023","Seismic methods are widely used for the exploration of the Earth’s subsurface. While they allow higher resolution compared to other geophysical methods, their performance depends on site and geological characteristics, and the volume and type of recorded information. Additionally, data processing plays a critical role in the efficacy of the application of seismic methods.
A common challenge when utilising seismic methods arises as a result of field restrictions and cost constraints. As a consequence, seismic data often suffer from irregular or sparse spatial sampling, which can affect the application of advanced processing and imaging algorithms, for instance, surface-related multiple elimination and wave equation migration. These algorithms require dense and regular sampling to provide reliable results. Thus, seismic-data regularisation and interpolation are commonly utilised processing steps. Nevertheless, the interpolation of data for relatively large gaps is not trivial, in particular for land data acquired in complex geological settings where the seismic events exhibit pronounced curvature and lack of continuity....","Seismic interferometry; seismic data processing; data reconstruction","en","doctoral thesis","","978-94-6366-671-8","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:28daa621-5070-48e3-8c92-b19b300aec75","http://resolver.tudelft.nl/uuid:28daa621-5070-48e3-8c92-b19b300aec75","Coalitional games in energy and analytics markets","Raja, A.A. (TU Delft Team Sergio Grammatico)","Grammatico, S. (promotor); De Schutter, B.H.K. (promotor); Delft University of Technology (degree granting institution)","2023","The main themes of this thesis are the design and analysis of payoff distribution methods for situations where agents collaborate to generate a utility. For modeling such scenarios, we majorly focus on the coalitional game theoretic framework that provides mathematical formalism to study the behavior of rational agents when they cooperate for selfish interests [69]. We utilize the tools from coalitional game theory to develop mechanisms for demand-side energy management, namely, energy coalitions, peer-to-peer energy trading (P2P), and real-time local electricity markets, that can help accelerate the energy transition [106]. For the solution of resulting games, we design distributed algorithms that converge to a payoff distribution characterized by stability and fairness. The primary approach to convergence analysis of proposed algorithms relies on the operator theory and fixed-point iterations. Finally, we also propose payoff distribution criteria for a wagering-based forecasting market that can help energy generation sources to improve their forecast....","","en","doctoral thesis","","","","","","","","","","","Team Sergio Grammatico","","",""
"uuid:c4f4db33-0553-4d4c-8b9d-678a2ce09d9e","http://resolver.tudelft.nl/uuid:c4f4db33-0553-4d4c-8b9d-678a2ce09d9e","Free Energy Principle Based Precision Modulation for Robot Attention: Towards brain inspired robot intelligence","Anil Meera, A. (TU Delft Robot Dynamics)","Wisse, M. (promotor); Mohajerin Esfahani, P. (copromotor); Delft University of Technology (degree granting institution)","2023","The potential impact of a grand unified theory of the brain on the robotics community might be immense, as it might hold the key to general artificial intelligence. Such a theory might enable revolutionary leaps in robot intelligence, improving the quality of our lives. The last two decades have witnessed the rise of one such brain theory - the free energy principle (FEP) - that seems to be successful in explaining a large body of cognitive functions. The tremendous amount of research centered on FEP is a testament to its popularity within the neuroscience community. This raises two important questions: i) since biological systems are fundamentally different from robots, will FEP be useful in solving real robotics problems? ii) if so, will it outperform classical robot algorithms? To answer these questions, this thesis takes a step in the direction of applying FEP to three classes of robotics challenges, with a special focus on Unmanned Aerial Vehicles (UAVs): i) action, ii) perception and iii) active perception. This thesis demonstrates the usefulness of FEP in solving these challenges, and shows that FEP is particularly beneficial in dealing with colored (non-white) noise during estimation (perception) when compared to classical methods, marking the utility of FEP not only in neuroscience, but also in robotics. 
With these results, this thesis aims to contribute to the rise of FEP as a unified theory of robot intelligence.","Free Energy Principle; Robotics; Active Inference; System identification; Informative Path Planning; Formation Control; Filtering; Unmanned Aerial Vehicle","en","doctoral thesis","","978-94-6384-417-8","","","","","","","","","Robot Dynamics","","",""
"uuid:da2f850e-653a-4654-9e4a-4e3009bbc785","http://resolver.tudelft.nl/uuid:da2f850e-653a-4654-9e4a-4e3009bbc785","Urban form influence on microclimate and building cooling demand: An analytical framework and its application on the Rotterdam case","Maiullari, D. (TU Delft Landscape Architecture)","van Timmeren, A. (promotor); van Esch, M.M.E. (copromotor); Delft University of Technology (degree granting institution)","2023","Urban form plays a critical role when planning city transitions toward decarbonization. However, in urban climate conditions the complex relationship between urban form and cooling demand remains understudied. This thesis develops integrated approaches and knowledge in the transdisciplinary domain of urban morphology, urban climatology and energy-related fields while addressing the question: ‘How does urban form influence building cooling demand in urban microclimate conditions, and how can the magnitude of the relationship be assessed?’.
By answering this main research question, the thesis delivers a threefold contribution. First, it contributes to the conceptualization and understanding of both the intrinsic and the extrinsic role of urban form, by identifying urban form characteristics that directly influence building cooling demand and indirectly contribute to shaping urban microclimate conditions in buildings’ surroundings. Second, the thesis contributes to increasing the assessment accuracy of urban form-related climate and energy performance. It does so by developing a quantitative morphological method to identify Local Climate Types (LCTs) and by developing a modelling method that enhances the use of microclimate data as boundary conditions for energy demand assessments. Third, for the city of Rotterdam, the testing of these novel methods provides an understanding of how and to what extent the form of buildings and contexts influence building cooling demand.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-669-5","","","","","","2024-09-10","","","Landscape Architecture","","",""
"uuid:0e98bb62-5518-4c59-9f6a-b845b8a71997","http://resolver.tudelft.nl/uuid:0e98bb62-5518-4c59-9f6a-b845b8a71997","Thermoelastic Stability of Deployable Space Telescopes","Villalba, Víctor (TU Delft Space Systems Engineering)","Gill, E.K.A. (promotor); Kuiper, J.M. (copromotor); Delft University of Technology (degree granting institution)","2023","Imagery collected from space provides very useful information about our planet. Today there are many Earth Observation satellites in orbit which allow us to collect information which is used for environmental monitoring, response to catastrophes, surveillance and security, urban planning, economic analysis and many other applications. Thus, there is a drive to improve the quality of this imagery, such as its resolution and the frequency with which it can be collected. The quantity of pictures taken will be increased by launching more systems, while the quality of the pictures depends, amongst other factors, on the system’s physical size. To serve those needs, bigger telescopes have to be launched more often...","thermoelastics; deployable space telescopes; piezoelectric actuators; compliant mechanisms; mechanical design; systems engineering","en","doctoral thesis","","978-94-6458-915-3","","","","","","","","","Space Systems Engineering","","",""
"uuid:0793986f-b875-4693-a0f9-568978f2d632","http://resolver.tudelft.nl/uuid:0793986f-b875-4693-a0f9-568978f2d632","Utilization of MSWI bottom ash as a mineral resource for low-carbon construction materials: Quality-upgrade treatments, mix design method, and microstructure analysis","Chen, B. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Ye, G. (promotor); Delft University of Technology (degree granting institution)","2023","In recent years, considerable attention has been given to the utilization of municipal solid waste incineration (MSWI) bottom ash as a mineral resource for construction materials. MSWI bottom ash is the primary residue discharged after incinerating municipal solid waste. The generation of MSWI bottom ash is increasing dramatically with the wide application of waste incineration techniques. Different methods have been proposed to improve the quality of MSWI bottom ash and make it suitable as supplementary cementitious material (SCM) or precursor for alkali-activated materials (AAM). However, there is no systematic guidance on how to select quality-upgrade treatments for MSWI bottom ash. When using MSWI bottom ash to prepare blended cement pastes and alkali-activated pastes, the optimal mix design is usually found by trial and error. Very little information is available in the literature regarding the reaction of MSWI bottom ash as SCM and AAM precursor. The contribution of MSWI bottom ash to the microstructure formation and strength development of blended cement pastes and alkali-activated pastes is not very well understood.
The goal of this research is to develop knowledge that can be used to support the application of MSWI bottom ash as a mineral resource for construction materials. Based on this knowledge, a strategy for using MSWI bottom ash produced in the Netherlands (4-11 mm) as raw material to produce blended cement pastes and alkali-activated pastes is proposed. This research consists of the following parts:
1. Quality-upgrade treatments of as-received MSWI bottom ash
As-received MSWI bottom ash cannot be used directly as SCM and AAM precursor due to its large particle size and presence of metallic aluminum (Al). Mechanical treatments consisting of grinding and sieving were studied and selected to reduce the particle size and the metallic Al content of as-received MSWI bottom ash. The effectiveness of the mechanical treatments used to reduce the metallic Al content of MSWI bottom ash is strongly influenced by the distribution of metallic Al in bottom ash particles. Most metallic Al separated during mechanical treatment comes from the coarse particles. The metallic Al embedded in the particles smaller than 0.5 mm is difficult to remove via mechanical treatments (see Chapter 3).
2. Development and microstructure analysis of blended cement pastes and alkali-activated pastes
The reactivity and leaching potential of mechanically treated MSWI bottom ash (MBA) are studied. This information is used in the development of blended cement pastes and alkali-activated pastes. A dissolution test is proposed to assess the reactivity of MBA as AAM precursor. The reactivity of MBA as SCM and AAM precursor is similar to that of Class F coal fly ash (FA), but much lower than that of blast furnace slag (BFS). The leaching of antimony (Sb) and sulfate from MBA is above the threshold value prescribed in the Dutch Soil Quality Decree. The dosage of MBA in blended cement pastes and alkali-activated pastes should be controlled to prevent excessive leaching of contaminants into the environment (see Chapter 4).
The reactivity of MBA is determined by the content and the chemical composition of its amorphous phase. The amorphous phase of MBA has a chemical composition within the same range as that of the amorphous phase of FA. Given that the reactivity of MBA is close to that of FA, previous experience with the mix design of Class F coal fly ash-based pastes is used as a reference for the mix design of MBA-based AAM. Additionally, thermodynamic modeling is used to predict the assemblage of reaction products and the composition of pore solution in alkali-activated MBA paste when changing the Na2O content in the activator. The modeling results are also used to guide the mix design of MBA-based AAM (see Chapter 4).
When water treatment and NaOH solution treatment are part of the mixture preparation procedure, the compressive strength of the blended cement pastes and alkali-activated pastes made from MBA is close to that of the pastes prepared with the same amount of FA (Chapters 5 and 6). The metallic Al that cannot be removed during mechanical treatments can be oxidized by treating MBA in water or NaOH solution at room temperature. Apart from reducing metallic Al content, water treatment and NaOH solution treatment also slightly change the mineralogical composition of MBA.
Blending water-treated MBA (WMBA) with Portland cement paste leads to changes in the reaction products and microstructure. WMBA delays clinker hydration on the first day but enhances clinker hydration at later ages. The reaction products of WMBA contribute to the strength development of blended cement pastes (see Chapter 5).
NaOH solution-treated MBA (CMBA) is used together with BFS to prepare alkali-activated pastes. CMBA retards the reaction of BFS during the first seven days but promotes the reaction of BFS at later ages. Adding CMBA into alkali-activated pastes changes the reaction products and microstructure. The reaction products of CMBA contribute to the strength development of alkali-activated pastes (see Chapter 6).
3. Environmental impact assessment of blended cement pastes and alkali-activated pastes
Compared with Portland cement paste, blended cement pastes and alkali-activated pastes prepared using MSWI bottom ash as SCM and AAM precursor have lower environmental impacts, especially in the impact category of global warming (see Chapter 7).
This research deepens the understanding of the reaction of MSWI bottom ash as SCM and AAM precursor. This study also demonstrates how to use MSWI bottom ash to prepare blended cement pastes and alkali-activated pastes by considering the chemical and physical properties of MSWI bottom ash. Since the MSWI bottom ash used in this research has chemical and mineralogical compositions within the same range as the MSWI bottom ash reported in the literature, the knowledge developed in this work stimulates the utilization of MSWI bottom ash produced in other regions as SCM and AAM precursor for construction materials.
Part I of this dissertation is dedicated to gaining a deeper understanding of the Cu-ZnO synergistic structure as well as other Cu-based catalysts. In Chapter 2, we proposed a greener synthesis route for Cu/ZnO catalysts via urea hydrolysis of acetate precursors that can achieve comparable activity to commercial Cu/ZnO/Al2O3 catalysts without producing wastewater. Co-precipitated Cu-Zn hydroxycarbonate mineral-like precursors are crucial for a high inter-dispersion between CuO and ZnO after calcination and for providing Cu-ZnO interfacial sites for the reaction. In Chapter 3, the effects of key process conditions, namely temperature and pressure, on CO2 hydrogenation over a commercial Cu/ZnO/Al2O3 catalyst were investigated using a space-resolved study. The gradients of reactant/product concentrations and catalyst bed temperature within the catalytic reactor can reveal the significant effect of temperature on the dominant reaction pathways. CH3OH is formed through direct CO2 hydrogenation at low temperatures, while CH3OH formation is mediated via CO, which is formed by a reverse water–gas shift reaction, at high temperatures. Although pressure did not influence the reaction pathway, higher pressure helped suppress CH3OH decomposition to CO. In Chapter 4, the decisive roles of peripheral promoters to Cu nanoparticles in promoting CH3OH selectivity were elucidated. The model Cu-based catalysts (Cu-M/SiO2, M = Zn, Ga, and In) were prepared via surface organometallic chemistry (SOMC). The M+ sites played important roles in stabilizing formate species spilled over from Cu and determining the reactivity of formate hydrogenation. Improving the spillover and tuning the reactivity of formate help suppress formate decomposition to CO over Cu and ultimately boost CH3OH selectivity.
Part II is dedicated to exploring novel catalysts for low-temperature CO2 hydrogenation, as well as gaining a deeper understanding of the state-of-the-art Re/TiO2 catalyst. In Chapter 5, the bifunctionality of Re supported on TiO2 was deciphered, where metallic Re functions as the H2 activator and cationic Re as the CO2 activator. Re/TiO2 suffers from additional CH4 formation, and the active intermediates and reaction pathways for CH3OH and CH4 were identified. Understanding the nature of active sites and reaction mechanisms over Re/TiO2 led to approaches for CH4 selectivity mitigation in Chapter 6. Exploring various transition metals under low-temperature conditions provided insights into the formate stabilization of the coinage metals (Cu, Ag, and Au). Since the balance between metallic and cationic Re limited the CH3OH selectivity of Re/TiO2, the addition of Ag complemented the role of cationic Re. A synergistic interplay between Ag and Re not only improved CH3OH selectivity significantly by suppressing intermediates in the reaction pathways toward CH4 but also exhibited superior stability.
Finally, the dissertation conveys a message that obtaining the definitive synthesis of well-defined active sites, expansive structure-activity relationships, and comprehensive reaction mechanisms are the major prerequisites for the rational design of novel catalysts.
Numerous studies have reported on the autogenous healing of cracks in cement-based materials. However, active or rapid micro-crack healing does not always occur in the most critical parts of exposed structures. In this thesis, a new formulation of cement-based materials, integrating selected bacteria and suitable organic mineral precursor compounds, was used to investigate its potential for enabling multiple crack healing events on load-induced cracked and pre-cracked concrete samples. For this purpose, chloride ingress in concrete subjected to compressive loading was investigated through laboratory experiments. Furthermore, investigation was also carried out on cracked mortar under chloride and carbon dioxide environments for healing-potential evaluation....","bacterial concrete; self-healing concrete; microcracks; chloride; combined-load","en","doctoral thesis","","","","","","","","","","","Materials and Environment","","",""
"uuid:0ba71b5c-56c3-4830-bc5d-506538d045a3","http://resolver.tudelft.nl/uuid:0ba71b5c-56c3-4830-bc5d-506538d045a3","Accelerated Discovery of Electrocatalysts for Electrochemical Ammonia Synthesis","Kolen, M. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Mulder, F.M. (promotor); Smith, W.A. (copromotor); Delft University of Technology (degree granting institution)","2023","Ammonia synthesis via the direct nitrogen reduction reaction mechanism has the potential to be more flexible in production level and scale of operation and may enable cost reductions compared to alternative technologies for green NH3 synthesis. For the research field to advance to higher Technology Readiness Levels, selective electrocatalysts that promote the reaction over the competing hydrogen evolution reaction are needed. The aim of this work was to build tools that enable the development of selective electrocatalysts for NRR with high NH3 production rate. We have identified two limitations in the workflow typically used to test promising materials for NRR activity that have hindered the development of selective electrocatalysts thus far: 1) NRR activity measurements are found to be unreliable due to NH3 contaminations and 2) the experimental throughput of the workflow is too slow to enable rapid progress, due to single catalyst studies that require elaborate ammonia detection and calibration methods. In Chapter 2 and 3 we systematically analyzed the steps involved in an NRR activity measurement to develop alternative methods that overcome these limitations. For even more effective NRR catalyst development, we explored in Chapters 4 and 5 how to accelerate the experimental workflow even further by enabling combinatorial catalyst screenings and by carrying out experiments under thermodynamically more favourable conditions for NRR, respectively...
Refurbished products are collected after being used, tested, cleaned, and restored into an acceptable state, and subsequently, they are resold. Yet, lowering the environmental impact of consumption by using refurbished products requires that refurbished products are acquired instead of new ones. However, refurbished products are not as desirable to consumers as new products, which has the consequence that consumers have lower purchase intentions and are willing to pay less for them.
The aim of this thesis is to understand consumer acceptance of refurbished products and how designers can enhance their desirability. Thus far, marketing strategies aiming to improve consumer adoption of refurbished products have focused on minimizing the risks associated with refurbished products and underlining their benefits. Refurbished products are, for example, often offered at a lower price than new products and with a warranty. A central issue of these marketing strategies is that they are peripheral to the product, are not applicable to all product categories, and are not appropriate for all consumers. While they can improve the trade-off for refurbished products, they do not help to keep the product at its highest material and economic value.
In this dissertation, we, therefore, explore the main research question: how can designers enhance consumer acceptance of refurbished products by design?","","en","doctoral thesis","","978-94-6366-657-2","","","","","","","","","Marketing and Consumer Research","","",""
"uuid:0ef14921-08f4-4f5f-af73-38914755f47f","http://resolver.tudelft.nl/uuid:0ef14921-08f4-4f5f-af73-38914755f47f","Equilibrium seeking in games under partial-decision information","Bianchi, M. (TU Delft Team Sergio Grammatico)","Grammatico, S. (promotor); De Schutter, B.H.K. (copromotor); Delft University of Technology (degree granting institution)","2023","The topic of this dissertation is the distributed computation of Generalized Nash Equilibria (GNEs) in multi-agent games with network structure. In particular, we design and analyze algorithms in the partial-decision information scenario (also named fully-distributed algorithms), where each agent can only rely on the information received by some neighbors over a communication graph, although its cost function depends on the actions of possibly all the competitors. This setup is motivated by engineering applications with no central system coordinator, for instance multi-agent autonomous driving or coverage control. While the agents can estimate the unknown variables via local data exchange and consensus protocols, the estimation error introduces critical challenges in the development of algorithms. In fact, the existing schemes for GNE seeking under partial-decision information suffer important limitations, in terms of performance and the conditions required to guarantee convergence. In this perspective, this thesis advances the theoretical understanding of games in the partial-decision information scenario, and provides a broad toolkit for designing efficient algorithmic solutions, suitable to cope with complex network interaction and dynamic coupling.","","en","doctoral thesis","","978-94-6366-654-1","","","","","","","","","Team Sergio Grammatico","","",""
"uuid:ffa68da2-cd02-45a8-b4bc-99a289638571","http://resolver.tudelft.nl/uuid:ffa68da2-cd02-45a8-b4bc-99a289638571","Experimental Investigation and Lattice Modelling of 3D Printed Concrete Buildability quantification and early-age creep behaviour","Chang, Z. (TU Delft Materials and Environment)","Schlangen, E. (promotor); Šavija, B. (copromotor); Delft University of Technology (degree granting institution)","2023","For some decades now, additive manufacturing has been a revolutionary technology which generates enormous interest in both industrial and academic applications. 3D concrete printing (3DCP), an automated construction method, is able to manufacture the computer-designed model through material deposition. This innovative technique can considerably accelerate the construction process, and make it economically and technically feasible to implement complex structural elements in practice. Although this technique shows a promising future, full adoption in the construction sector is still far from possible due to the absence of fundamental knowledge about printable materials and structural analysis during or after the printing process....","3D concrete printing; lattice; early-age creep; buildability quantification","en","doctoral thesis","","978-94-6366-658-9","","","","","","","","","Materials and Environment","","",""
"uuid:a5637eae-904b-4a52-a5fd-c5b3762a62fe","http://resolver.tudelft.nl/uuid:a5637eae-904b-4a52-a5fd-c5b3762a62fe","Factors Influencing Business-to-Government Information-Sharing Arrangements: Understanding system architectures and governance structures in information-sharing","Praditya, D.","Janssen, M.F.W.H.A. (promotor); Bharosa, Nitesh (promotor); Delft University of Technology (degree granting institution)","2023","The urgent need to improve public services and increase the adoption of cutting-edge technology in public organizations has promoted and encouraged collaboration between private and public organizations, thus allowing for more information to be shared between both parties. However, many issues arise that hinder the implementation of information-sharing, ranging from a lack of information quality to organizational resistance to sharing information due to uncertainty of the benefits or a lack of top-level management support. To address these challenges and to realize the benefits of business-to-government (B2G) information-sharing, it is necessary to understand how to arrange B2G information-sharing.
This research contributes scientifically and practically to the B2G information-sharing domain by proposing the concept of information-sharing arrangements through system architecture and governance structure lenses and analyzing the factors that influence such arrangements. The discussions include when to use a centralized topology or in what situations decentralized information-sharing is preferred, why information-sharing may be mandatory or voluntary, and in which situations consensus-based or hierarchy-based decision-making is needed. In addition, the role of trust among sharing partners, technological requirements, organizational readiness, and other factors identified as potentially influencing information-sharing arrangements were also discussed.
By understanding the arrangements and factors influencing them, B2G information-sharing actors can select the most suitable arrangements and potentially increase the adoption of information-sharing initiatives.","information-sharing; system arrangements; system architecture; governance structure; Inter-organizational system","en","doctoral thesis","","978-94-6384-422-2","","","","","","","","","Information and Communication Technology","","",""
"uuid:b2156264-39f4-4a8d-a34d-35e5a21d38e9","http://resolver.tudelft.nl/uuid:b2156264-39f4-4a8d-a34d-35e5a21d38e9","Sailing through fluid mud: Verification and Validation of a CFD model for simulations of ships sailing in muddy areas","Lovato, S. (TU Delft Rivers, Ports, Waterways and Dredging Engineering)","van Rhee, C. (promotor); Keetels, G.H. (copromotor); Delft University of Technology (degree granting institution)","2023","The increasing size of today's ships is a major concern for navigation in confined waters. In order to ensure safe manoeuvres, port authorities prescribe, among other requirements, a minimum under-keel clearance that must be maintained by the ships during navigation. However, the seabed of ports situated at estuaries or along rivers is often covered by mud as a result of sedimentation. Hence, while the position of a solid bottom is clearly defined and can be easily detected by sonar techniques, the presence of deposited sediments makes the definition of ""bottom"" and ""depth"" less clear. This also poses some questions on the optimal dredging strategy to adopt to minimise maintenance costs while ensuring the required safety.
For practical reasons, port authorities define the (nautical) bottom as the level where the mud reaches either a critical density or a critical yield stress (i.e. the shear stress below which the fluid behaves as a solid-like material). However, an optimal choice that minimises dredging activities while preserving the required safety shall also take into account the behaviour of ships. As the understanding of the link between mud rheology and ships' controllability and manoeuvrability with muddy seabeds is rather limited, this research project was started. With the rapidly increasing power of today's computers, Computational Fluid Dynamics (CFD) has become a viable option to study this problem.
The CFD code selected for this research is a multi-phase viscous-flow solver developed, verified and validated exclusively for maritime applications. As such, it was originally developed for Newtonian fluids only. Since mud exhibits a non-Newtonian rheology, the `step zero' of this research was to implement the Herschel-Bulkley model, which makes it possible to numerically simulate two important flow features of mud, i.e. its shear-thinning and viscoplastic behaviour. Other rheological characteristics, such as thixotropy, were not considered in this study as they are deemed of minor importance at this stage.
The next step was concerned with ensuring that the modification of the flow solver to account for the non-Newtonian rheology of mud was correct. This was done by using the Method of Manufactured Solutions (MMS), which makes it possible to rigorously verify the code against user-defined exact solutions. The verification exercises showed that the code performs as intended for both single- and two-phase flows of Herschel--Bulkley fluids. The illustrated procedure can be readily adapted to verify the correct implementation of other rheological models that may be implemented in the future. In this case, it is recommended to examine not only the grid convergence of velocity and pressure but also that of the apparent viscosity, as the latter is particularly sensitive to coding mistakes related to the implementation of the new rheological model.
While code verification ensured that the Herschel--Bulkley model was correctly implemented, obtaining fully-converged solutions for realistic non-Newtonian problems may still be difficult. The non-Newtonian solver has thus been tested on the laminar flow of Herschel-Bulkley fluids around a sphere, as the latter is the simplest three-dimensional flow exhibiting features that are typical of the flow around ships, such as boundary layer development and flow separation. Although obtaining fully-converged solutions was indeed challenging, it was possible to replicate data from the literature with good accuracy. This provided confidence to employ the CFD code to simulate ships sailing through fluid mud.
The verification of the CFD code was followed by validation of the mathematical model. The problem of a ship sailing through fluid mud was reduced to a simpler one, i.e. a plate moving through homogeneous mud, so as to mimic a portion of the hull penetrating the mud layer. The objective was to investigate the accuracy of the (regularised) Bingham model (which is a special case of Herschel-Bulkley) in predicting the frictional forces on a plate moving through mud. The comparison between experimental and numerical data showed that the ideal Bingham model well captures the relative increase in the resistance due to the increase in the mud concentration but, at low speed, it tends to over-predict the resistance. On the other hand, choosing lower regularisation parameters seems more favourable, both from the numerical and physical perspective. In fact, this research showed that better predictions at low speed were achieved by using lower regularisation parameters that were determined from the first points in the mud flow curves. It should be noted, however, that the thixotropy of mud and possible deflections of the plate during the experiments may prevent drawing definitive conclusions.
Finally, one question arising when simulating a ship sailing through a non-Newtonian fluid is how accurate standard Reynolds-Averaged Navier-Stokes (RANS) models, which are developed for Newtonian fluids, are when applied to non-Newtonian flows. In the last step of this dissertation, the accuracy of three RANS models was assessed against published Direct Numerical Simulations (DNS) data for pipe flows. From this study it was concluded that, among the three tested Newtonian RANS models, the SST model produced the best predictions and is reasonably accurate for weakly non-Newtonian fluids and for high Reynolds numbers. In addition, a new RANS model, labelled SST-HB, has been developed. The new model showed good agreement with DNS of pipe flows in the mean velocity, average viscosity, mean shear stress budget and friction factors. However, the new RANS model was calibrated and tested for pipe flows only, a relatively simple internal-flow problem. Hence, the applicability of the new model to complex external flows, such as the flow around a ship, still requires further investigation. Furthermore, RANS simulations with some realistic mud conditions predicted laminar flow in the mud layer. In this case, the use of the standard SST model is recommended.
The developed and tested CFD code, together with other insights provided by this research, can be used in the future both to numerically investigate the effect of mud on ships and to obtain the hydrodynamic coefficients for manoeuvring models. These models could then be used in real- and fast-time simulators for research and commercial purposes, but also for pilot training.
This challenge is especially pronounced in so-called cyber-physical systems (CPSs), in which digital automation is used to coordinate the actions of one or more physical systems. Examples of CPSs are airplanes, robotic arms or the power grid. Such CPSs have the combined advantages of the physical and cyber world, but are also subject to threats to both safety and security. In fact, the integration of physical and cyber parts in a CPS means that security issues can cause safety issues and, although less commonly, safety issues can cause security issues.
Measures for safety and security of CPSs are categorized as prevention, resilience, and detection & accommodation. These different types of precautions can be used independently, but typically they need to be combined to provide adequate safety and security of a CPS. In this dissertation, three advances within safety and security of CPSs are presented which cover contributions on each of the different types of safety and security measures. Firstly, anomaly detection is addressed by extending existing sliding mode observer (SMO) based anomaly estimation methods with detection capability. To this end, two SMO based anomaly detectors are presented, which are applicable to a large class of SMOs. These detectors, by design, have no false alarms and allow for strong theoretical guarantees on detectability.
Secondly, a topology-switching coalitional control technique which integrates resilience, detection and accommodation is designed for safe control of a collaborative vehicle platoon (CVP) subjected to man-in-the-middle (MITM) cyber-attacks. Here resilience to undetected attacks is achieved by means of scenario-based model predictive control (MPC) and detected anomalies are accommodated by disabling the affected communication links. Lastly, a real-time implementation of encrypted control based on fully homomorphic encryption (FHE) is presented. FHE allows for manipulation of encrypted data, such that it can prevent confidentiality breaches during communication and computation.
Each contribution of this dissertation addresses a specific topic within safety and security of CPSs. By doing so, they demonstrate the potential of these methods to increase safety and security of CPSs while minimizing their impact on normal behaviour. This will promote the adoption of safety and security measures and allow for safety and security throughout the continued progress in automation.","Safety & Security; Sliding Mode Observer; Coalitional Control; Homomorphic Encryption; Collaborative Vehicle Platoon","en","doctoral thesis","","978-94-6384-411-6","","","","","","","","","Team Riccardo Ferrari","","",""
"uuid:0e03913c-898e-4392-8de5-072a7ead7fd6","http://resolver.tudelft.nl/uuid:0e03913c-898e-4392-8de5-072a7ead7fd6","Optimal Mixing Evolutionary Algorithms for Large-Scale Real-Valued Optimization: Including Real-World Medical Applications","Bouter, P.A. (TU Delft Algorithmics; Centrum Wiskunde & Informatica (CWI))","Bosman, P.A.N. (promotor); Alderliesten, T. (copromotor); Delft University of Technology (degree granting institution)","2023","In recent years, the use of Artificial Intelligence (AI) has become prevalent in a large number of societally relevant, real-world problems, e.g., in the domains of engineering and health care. The field of Evolutionary Computation (EC) can be considered to be a sub-field of AI, concerning optimization using Evolutionary Algorithms (EAs), which are population-based (meta-)heuristics that employ the Darwinian principles of evolution, i.e., variation and selection. Such EAs are historically mainly considered for the optimization of difficult, non-linear problems in a Black-Box Optimization (BBO) setting, because EAs can effectively optimize such problems even when very little is known about the optimization problem and its structure. This is in contrast to optimization methods that are specifically designed for certain problems of which the definition and structure are known, i.e., a White-Box Optimization (WBO) setting.","Evolutionary Algorithms; Gene-pool Optimal Mixing; Gray-box optimization; Large-scale optimization; Real-valued optimization; Multi-objective Optimisation; Graphics Processing Unit (GPU); CUDA; Brachytherapy; Treatment planning; Deformable image registration","en","doctoral thesis","","978-94-6366-648-0","","","","","","","","","Algorithmics","","",""
"uuid:9e4a11a1-e7cb-4c56-b69c-a1ee0a502f0f","http://resolver.tudelft.nl/uuid:9e4a11a1-e7cb-4c56-b69c-a1ee0a502f0f","Computational models for clinical drug response prediction: aligning transcriptomic data of patients and pre-clinical models","Mourragui, S.M.C. (TU Delft Pattern Recognition and Bioinformatics)","Wessels, L.F.A. (promotor); Reinders, M.J.T. (promotor); Loog, M. (promotor); Delft University of Technology (degree granting institution)","2023","Extensive efforts in cancer research over the past decades have markedly improved diagnosis and treatments, leading to better outcomes for cancer patients. Paradoxically, however, these discoveries have begun to shed light on a level of complexity that rules out the emergence of a universal cancer treatment. As any tumor is now known to be essentially a unique disease, clinicians and researchers are moving towards a new paradigm, termed “precision medicine”, which consists of designing bespoke lines of treatment for each patient.
This paradigm shift has been fueled by international consortia that have characterized large collections of tumors, thereby providing a vast reference for cancer heterogeneity. Two main strategies have been employed: sequencing of tumor biopsies directly extracted from patients, or studying pre-clinical models, i.e., tumor cells cultured in artificial environments. While the first strategy generates clinically faithful data, the second strategy is flexible and cost-effective, and allows for the study of the effects of various drugs at different concentrations.
Based on the large amount of data generated from pre-clinical models, computer scientists have developed various machine learning algorithms to model drug response. However, these models do not take into account the complexity of human tumors and the differences between model systems and human tumors, and are therefore not directly applicable in a clinical setting. In this thesis, we aim to bridge this gap. Specifically, we develop algorithms to integrate and align data generated from the two aforementioned strategies, with the goal of predicting drug response in patients from datasets generated using pre-clinical models.","cell lines; pre-clinical models; translational; cancer; transfer learning; machine learning; gene expression; predictive models; drug response; single cell","en","doctoral thesis","","","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:16e5e2f5-e491-4f86-8a47-6ee0c9ffd000","http://resolver.tudelft.nl/uuid:16e5e2f5-e491-4f86-8a47-6ee0c9ffd000","Development of highly scattering distributed fibre optic sensing for structural health monitoring","Wang, X. (TU Delft Structural Integrity & Composites)","Groves, R.M. (promotor); Benedictus, R. (promotor); Delft University of Technology (degree granting institution)","2023","In this thesis, fibre optic sensing has been investigated as an important technique for structural health monitoring. Distributed fibre optic sensing based on Rayleigh scattering is a fibre optic sensing technique that achieves spatially continuous strain monitoring at critical locations of structures. However, the Rayleigh backscattering intensity in commercial optical fibres is low, which limits Rayleigh scattering based fibre optic sensing. In recent years, methods to improve the intensity of the backscattered light in optical fibres have been proposed. By doping nanoparticles into the optical fibre, the backscattered light increases dramatically. The signal-to-noise ratio may then increase, which would benefit strain measurement with this Rayleigh scattering based method for structural health monitoring. The main research question is ’how can the enhancement of light scattering used in distributed fibre optic sensing be an advantage for structural health monitoring’. The aim of this research is to develop the enhancement of light scattering in distributed fibre optic sensing as an advantage for structural health monitoring. Gold spherical nanoparticles were chosen as the contrast agents for backscattered light enhancement. Their spectral characteristics (light intensity, spectral shift, etc.) have been investigated in detail in this thesis. 
In this dissertation, firstly, a model of light scattering by gold nanoparticles at optical fibre interfaces was proposed to overcome the difficulty of manufacturing nanoparticle doped optical fibre in an optical laboratory. Gold nanoparticle liquids were dropped onto the optical fibre interfaces to evaluate the backscattered light levels from the nanoparticles. Secondly, a model of light scattering by gold nanoparticles in the core of optical fibres was proposed, and an optimisation of light scattering enhancement by gold nanoparticles in fused silica optical fibres was investigated. By comparing the models of light scattering by gold nanoparticles in the core of the optical fibres and at the optical fibre interfaces, a relationship between them was built to evaluate the light scattering level in the optical fibre from the results obtained at the optical fibre interfaces. Then, the characteristics of the backscattered light spectra from the nanoparticle doped optical fibres, and of the spectral shift under axial strain, were investigated. The backscattered light spectral shifts were compared with those of commercial optical fibres and fibre Bragg gratings. A case study of strain acquisition with gold nanoparticle doped distributed optical fibre sensing based on backscattering was investigated for different typical gauge lengths and spectral ranges. Different noise levels were applied to the spectra to analyse their influence on strain acquisition with signal-to-noise ratio improvement. Lastly, since gold was used as the nanoparticle material, plasmon resonance is induced by the gold nanoparticles. 
The plasmon resonance based gold nanoparticle doped optical fibre strain sensing was studied as a potential auxiliary strain detection method alongside distributed fibre optic sensing based on Rayleigh scattering.","optical fibre sensor; structural health monitoring; strain sensing; nanoparticle; light scattering","en","doctoral thesis","","978-94-6366-663-3","","","","","","2024-03-13","","","Structural Integrity & Composites","","",""
"uuid:6aded12b-5a15-45ba-aba6-a19536388d39","http://resolver.tudelft.nl/uuid:6aded12b-5a15-45ba-aba6-a19536388d39","Open Source Urbanism: A design method for cultivating information infrastructures in the urban commons","Zhilin, S. (TU Delft Organisation & Governance)","Janssen, M.F.W.H.A. (promotor); Klievink, A.J. (promotor); Delft University of Technology (degree granting institution)","2023","Open Source Urbanism (OSU) emerges as citizens self-organise to alter their urban environments by creating Do-It-Yourself (DIY) urban prototypes and sharing their design manuals on the internet. Examples of urban prototypes vary from built structures, such as street furniture and urban gardening equipment, to decentralised energy designs and IT artefacts. They emerge as a natural response of citizens to perceived problems in their urban environments. Urban prototypes are designed, paid for, and implemented by self-organised citizens, instead of being developed by public or private companies and bought on the market. Whereas companies’ staff commonly consist of professionals, and their products are thoroughly tested and standardised to comply with all applicable governmental regulations, urban prototypes are incomplete, as they embody the ongoing experimentation of citizens with their urban environments. Furthermore, amateur designers might have limited experience or background in this area.","","en","doctoral thesis","","978-94-6384-407-9","","","","","","","","","Organisation & Governance","","",""
"uuid:2d76b1f1-a60e-48e0-acbf-051628a76da7","http://resolver.tudelft.nl/uuid:2d76b1f1-a60e-48e0-acbf-051628a76da7","Monitoring the deformation behavior of an immersed tunnel with Distributed Optical Fiber Sensor (DOFS)","Zhang, X. (TU Delft Geo-engineering)","Gavin, Kenneth (promotor); Broere, W. (copromotor); Delft University of Technology (degree granting institution)","2023","","Distributed optical fiber sensor (DOFS); Immersed tunnel; Joint deformation; Daily deformation behavior; Tide impacts; Seasonal deformation behavior; Safety and maintenance","en","doctoral thesis","","978-94-6384-412-3","","","","","","2024-07-01","","","Geo-engineering","","",""
"uuid:df45d4e5-0504-470e-b7e6-0fa3d5231709","http://resolver.tudelft.nl/uuid:df45d4e5-0504-470e-b7e6-0fa3d5231709","A framework to identify and coordinate responsibilities in industrial research and innovation","Sonck, M.M. (TU Delft BT/Biotechnology and Society)","Osseweijer, P. (promotor); Asveld, L. (promotor); Delft University of Technology (degree granting institution)","2023","This doctoral thesis investigates the concept of responsibility in the setting of industrial research and innovation (R&I). Companies have multiple responsibilities in society: profit generation for shareowners, legal and contractual liabilities, as well as socially and morally binding obligations beyond legal compliance. These responsibilities coexist in R&I, and at times, stand in conflict with each other. Moreover, the radical uncertainty of innovation activity raises dilemmas with regard to responsibility. For instance, can R&I practitioners be held responsible for those future impacts of their innovation that still remain unknown at the time of R&I? Furthermore, how should such responsibility be distributed between developers (R&I), enablers (funders, regulators) and appliers (users) of the innovation? To address such questions, the broad notion of responsibility first needs to be opened up, to distinguish between its different meanings and elements. This thesis develops a framework that supports identification and coordination of various responsibilities in the inherently uncertain R&I settings. The main research question of the thesis is: How do different elements of responsibility become identified and carried out in R&I? 
As an outcome, this thesis presents a meta-responsibility map: a tool for industrial R&I teams and consortia to reflect on their responsibilities in situations such as goal-setting, problem-solving, decision-making, and stakeholder interaction.","Research and innovation; Responsibility; Biobased sector; innovation management; Innovation ecosystems; Responsible research and innovation (RRI)","en","doctoral thesis","","","","","","","","","","","BT/Biotechnology and Society","","",""
"uuid:323b614e-c766-4c15-8fff-8d4890c61806","http://resolver.tudelft.nl/uuid:323b614e-c766-4c15-8fff-8d4890c61806","A Data-Driven Approach to Disaster Resilience in Communication Networks","Oostenbrink, J. (TU Delft Networked Systems)","Kuipers, F.A. (promotor); Langendoen, K.G. (promotor); Delft University of Technology (degree granting institution)","2023","Communication networks are critical in business, government, and even our day-to-day life. A prolonged communication outage can have devastating effects, particularly during and after a disaster. Unfortunately, our communication infrastructure is still vulnerable to natural disasters and other events that damage multiple network components within a confined area. In this thesis, we study the disaster resilience of communication networks. We propose scalable, data-driven methods to help stakeholders both assess and improve the resilience of networks to disasters.
We first study the global risk of earthquakes to Internet Exchange Points (IXPs). We find that many facilities are at risk of earthquakes and that, when an earthquake occurs, it is not unlikely that multiple facilities will fail simultaneously. Fortunately, our analysis also shows that larger IXPs tend to be located in less earthquake-prone areas, and that peering at multiple facilities significantly reduces the impact of earthquakes to IXPs and autonomous systems. To help network operators in reducing the impact of earthquakes on their autonomous systems, we propose a novel metric for selecting peering facilities, based on the probability of simultaneous facility failures. We show that applying our metric can significantly increase the resilience of individual autonomous systems, as well as that of the Internet as a whole.
To effectively improve the resilience of communication networks to natural disasters, stakeholders need to make well-informed trade-offs between costs, network performance, and network resilience. To help stakeholders make these decisions, we propose a single-disaster and a successive-disaster framework for assessing the resilience of a network to natural disasters. These frameworks can help stakeholders anticipate potential disasters, and compare the effects of any trade-off on the resilience of their networks.
The main principle behind both frameworks is to assess the disaster resilience of a network based on a large set of representative disaster scenarios (called the disaster set). This approach is flexible with respect to the underlying disaster dataset, and can be applied to datasets of widely varying sizes and properties. Our single-disaster framework allows one to efficiently compute the distribution of a network performance metric, assuming that a single, random disaster strikes the network and damages one or more network components in a confined area. Our method speeds up computation by first computing the distribution of the state of the network after a random disaster (the number of possible states tends to be much smaller than the disaster set itself), and only then computing the performance of the network in each of these states.
In addition to studying the impact of a single disaster on a network, we also address the issue of successive disasters. We first define the concept of successive disasters: a subsequent disaster that strikes the network while the damage due to a previous disaster is still being repaired. We then propose a framework capable of modeling a sequence of disasters in time, while taking into account recovery operations. We develop both an exact and a Monte Carlo method to compute the vulnerability of a network to successive disasters and find that the probability of a second disaster striking the network during recovery can be significant even for short repair times.
Our successive disaster framework can not only be applied to subsequent disasters, but also to potential follow-up attacks. Experiments on two network topologies show that even small targeted attacks can greatly aggravate the network disruption caused by a natural disaster. Fortunately, we find that this effect can be mitigated - at almost no cost to network performance - by adopting a calculated repair strategy that takes into account the possibility of follow-up attacks.
In addition to providing methods for assessing the resilience of networks, we also provide algorithms for improving the resilience of networks to natural disasters. These algorithms can help stakeholders (1) recover network functionality more effectively in the initial period after a disaster, and (2) reduce the initial impact of a disaster on network performance.
After a disaster, a network operator can quickly restore some functionality by replacing nodes with temporary emergency nodes. These emergency nodes should be deployed as soon as possible. However, selecting an optimal set of replacement nodes is computationally intensive, and the complete state of the network might still be unknown after the disaster. Thus, we propose selecting a disaster strategy a priori - before the occurrence of the disaster. We give an algorithm for evaluating such strategies, by extending our single-disaster assessment framework.
An effective, but costly, method of improving the disaster resilience of a network is to add new, geographically redundant, cable connections. These redundant connections ensure that more areas remain connected after a disaster strikes the network, and thus reduce the initial impact of the disaster on the network. We provide algorithms for finding cable routes that minimize a function of disaster impact and cable cost under any disaster set. Since this problem is NP-hard, we give an exact algorithm, as well as a heuristic, for solving it.","Network Resilience; Natural Disasters; Regional Failures; Geographically Correlated Failures","en","doctoral thesis","","978-94-6384-410-9","","","","","","","","","Networked Systems","","",""
"uuid:cc6cd71d-d46c-4db6-8f55-38ca841391f9","http://resolver.tudelft.nl/uuid:cc6cd71d-d46c-4db6-8f55-38ca841391f9","Dry Aerosol Direct Writing for Selective Nanoparticle Deposition","Aghajani, S. (TU Delft Precision and Microsystems Engineering)","Tichem, M. (promotor); Accardo, A. (promotor); Delft University of Technology (degree granting institution)","2023","Microprocessors, long-lasting batteries, and sensors are a few examples of nanotechnology revolutionising our daily lives. Nanotechnology is the study, development, and manufacturing of structures and devices which derive unique and novel properties from nanoscale phenomena. To realise such structures and devices, a set of processes summarised under the term ’nanomanufacturing’ (NM) is required to fabricate at the nanoscale. NM includes a wide range of strategies and methods where nanoparticles (NPs) serve as one of the building blocks. Therefore, NP manipulation is essential to addressing the desired applications. Because of their flexibility and efficiency, direct writing (DW) methods have received considerable attention in many studies. With nanoparticle direct writing, patterns and features can be created locally on a surface without the need for lithography processes. Inkjet printing (IJP) and aerosol jet printing (AJP) are widely used DW NP deposition methods for creating patterns with a resolution of less than 100 μm. Both these methods deposit NPs from the liquid phase and employ a variety of chemical agents, which can lead to contamination, affecting the properties of the film. Additionally, due to liquid-substrate interaction, high-resolution NP deposition using wet techniques necessitates proper surface modification. Compared to NP liquid-phase-based approaches, dry methods do not involve any chemical agent, thus reducing the possibility of contamination. To use dry-synthesised NPs in a direct-writing method, particles in a gas flow should be focused and deposited on a substrate. 
The main challenge in fabricating high-resolution patterns employing dry-synthesised NPs is the deposition of fine NPs (<100 nm) from the gas flow onto a defined location or region on the substrate, due to their extremely small size and low relaxation time (the time required for a particle to adjust its velocity to a new condition). This dissertation presents a novel, simple, and solvent-free method for selective NP deposition on various substrates, enabling the DW of NPs...","Nanoparticle; Dry aerosol direct-writing; Aerodynamic focusing; surface-enhanced Raman scattering (SERS); Thermal Treatment","en","doctoral thesis","","978-94-6384-408-6","","","","","","","","","Precision and Microsystems Engineering","","",""
"uuid:62782ffc-5958-4eff-b429-545c34405200","http://resolver.tudelft.nl/uuid:62782ffc-5958-4eff-b429-545c34405200","Quantifying cybercriminal bitcoin abuse","Oosthoek, K. (TU Delft Cyber Security)","Lagendijk, R.L. (promotor); Smaragdakis, G. (copromotor); Delft University of Technology (degree granting institution)","2023","Cybercrime is negatively impacting everybody. In recent years cybercriminal activity has directly affected individuals, companies, governments and critical infrastructure. It has led to significant financial damage, impeded critical infrastructure and harmed human lives. Defending against cybercrime is difficult, as persistent actors perpetually hunt for soft spots in Internet-connected systems, which exist due to either lax vulnerability management or convenience, complicating adequate detection and mitigation. Cybercriminal actors are financially motivated, and for their doings and dealings they rely on Bitcoin. Alternatives exist, but Bitcoin has proven to be the most liquid digital currency, meaning it is easy to swap and to conceal illicit transactions. The magnitude of many cybercriminal activities is largely unknown. However, Bitcoin runs on a blockchain - an open, decentralized ledger - allowing virtually everyone to analyze financial transactions, as opposed to traditional banking. Furthermore, contrary to popular belief, Bitcoin is pseudonymous, not anonymous, and several techniques exist to identify illicit activity. In this thesis, we illuminate three cybercriminal ecosystems that did not receive significant prior research attention: Bitcoin exchange heists, ransomware and single-vendor shops in the Dark Web. For each of these, we gather datasets from open sources. We first focus on the technical behavior and financial impact of attacks on Bitcoin exchange platforms. We also highlight the ransomware ecosystem, showing how it moved from small to large-scale attacks with similar financial impact. 
We further focus on how small shops in the Dark Web generate significant revenue with niche illicit activity. To understand the financial impact within each of these ecosystems, we analyze associated financial transactions. We also apply heuristics to discover additional Bitcoin addresses controlled by the same actor. We observe that cybercriminal actors successfully extract millions of funds from Bitcoin exchanges through relatively low-level attack vectors. When compared with traditional financial institutions, the lack of sophistication of attacks and the accompanying financial impact is unprecedented. In our analysis of ransomware, we observe that attackers have shifted from attacking individual users, resulting in relatively small ransom amounts, to targeting large organizations with significant financial resources, resulting in multimillion ransom payments. We also find that with this shift, attackers have improved their operational security in address usage and money laundering. For Dark Web shops, we find that this relatively uncharted territory of the Dark Web, compared to the bigger marketplaces, specializes in niches such as sexual abuse material and various forms of financial crime. To allow for future research in this area, we introduce a methodology to estimate illicit revenue based on web scrape results and to cluster it by category.","Bitcoin; Cybercrime; Cybersecurity","en","doctoral thesis","","","","","","","","","","","Cyber Security","","",""
"uuid:e75408f4-b1f3-446a-bfd5-19d9465f7038","http://resolver.tudelft.nl/uuid:e75408f4-b1f3-446a-bfd5-19d9465f7038","Electrochemical Ammonia Synthesis: Hydrogen Permeable Electrodes as Alternative Pathway for Nitrogen Reduction","Ripepi, D. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Mulder, F.M. (promotor); Smith, W.A. (promotor); Delft University of Technology (degree granting institution)","2023","In the last century, the indiscriminate use of fossil energy to power the industrial revolution and technological progress of humankind has led to the depletion of limited natural resources and, most importantly, the emission and accumulation of alarming levels of pollutants and greenhouse gases (GHG) in the atmosphere. One of the major consequences of these emissions is the climate crisis that we are currently facing. As our society is in constant need of energy to live and progress, we are urged to find more sustainable and renewable energy sources, and to decrease the environmental impact of industrial processes. Electrochemistry can be used to temporarily store intermittent renewable electricity, to be reconverted back to electricity later, or it can be used to produce chemicals. As such, the electrification of the chemical industry offers the opportunity to reduce its GHG footprint. The variable supply of renewable electricity can be used by the chemical industry to generate artificial fuel and feedstock. In this way, the synergy between the chemical industry and the energy sector can boost market access, scale and competitiveness.
In particular, this thesis focuses on one of the largest processes in the chemical industry, i.e. ammonia production. An introduction to the topic is given in Chapter 1. Ammonia is produced at large scale (178 million tons per year) and is a commodity essential for the fertiliser and food sectors. The current production of ammonia, via the Haber-Bosch process, relies on fossil fuels and hydrogen derived from steam-methane reforming. Consequently, the sector is responsible for releasing 1.4 % of the global CO2 emissions. The implementation of a fully renewably powered Haber-Bosch process is limited by its large reactor scale and its continuous and steady operation, which clashes with the intrinsic intermittency of sources such as solar and wind. This is one of the reasons why a direct electrochemical route for ammonia synthesis has recently attracted significant attention in the scientific and industrial communities. The concept entails the direct synthesis of ammonia from water, dinitrogen and renewable electricity. Moreover, the possibility of producing ammonia in a sustainable manner may enable a new scenario where ammonia can also be used as a carbon-free energy carrier, thus playing a key role in a decarbonised energy landscape powered by renewables. However, the lack of a selective catalyst and the arduous competition with side reactions, such as the hydrogen evolution reaction, make this process extremely challenging.
The aim of this thesis is to expand the current understanding of the nitrogen reduction reaction at near ambient conditions, addressing both fundamental and practical challenges. The first part of this thesis (Chapters 2-4) provides insights into the implementation of reliable electrochemical nitrogen reduction experiments and sensitive operando ammonia detection. Chapter 2 provides a fast and reliable ammonia detection method to speed up catalyst screening and development of novel sustainable ammonia evolution devices, as it requires significantly less sample handling and preparation compared to other reported methods. The proposed method is based on a gas chromatography technique, and it allows for in situ monitoring of ammonia evolution, down to 150 ppb, from (but not limited to) electrochemical devices. Chapter 3 presents an isotope sensitive gas chromatography-mass spectrometry method for the quantification of NH3 at the low concentration levels typically encountered in electrochemical ammonia synthesis applications. This method allows the discrimination of 15/14NH3, necessary for the required 15N2 isotope labelling control experiments. Additionally, this method can directly and simultaneously measure other species in the analyte, allowing researchers to directly assess reaction selectivity by measuring reaction by-products, as well as the presence of gaseous/volatile contaminants in the experimental setup. Chapter 4 investigates the impact of contamination on electrochemical nitrogen reduction experiments, with the aid of multiple analytical techniques and instrumentation, such as ion chromatography, gas chromatography, mass spectrometry, NOx chemiluminescence analyser and UV-Vis spectrophotometry. This chapter not only provides a comprehensive identification and quantification of the contaminants, but it also critically analyses the effectiveness of different cleaning strategies, establishing a series of guidelines to perform reliable experiments.
The second part of this thesis (Chapters 5-7) investigates the room temperature spontaneous dinitrogen activation on selected metallic surfaces and its hydrogenation to ammonia via electrochemical atomic hydrogen permeation, using a solid metallic hydrogen permeable membrane electrode. Chapter 5 demonstrates a novel strategy for ambient condition ammonia synthesis from water and dinitrogen, designed to limit the competition between nitrogen activation and other competing adsorbates at the catalytic surface. As such, a hydrogen permeable nickel membrane electrode is used to spatially separate the electrolyte and the hydrogen activation side from the nitrogen activation and hydrogenation sites. With this approach, ammonia is produced catalytically directly in the gas phase and in the absence of electrolyte. Gaseous nitrogen activation at the nickel electrode is confirmed with 15N isotope labelling control experiments and it is attributed to a Mars-van Krevelen mechanism enabled by the formation of N-vacancies upon hydrogenation of surface nitrides. Chapter 6 reports on the interactions of adsorbing N and permeating H at the catalytic interface of nickel, iron and ruthenium based hydrogen permeable electrodes during electrolytic ammonia synthesis. In situ near ambient pressure X-ray photoelectron spectroscopy (XPS) is used to measure modifications in the surface electronic structure of the catalyst and the nature of the adsorbed molecules. This chapter shows that permeating atomic hydrogen reduces surface Ni oxide and hydroxide species, under conditions at which gaseous H2 does not. Moreover, the results demonstrate that the availability of surface Ni0 sites is a primary requirement for the chemisorption of gaseous N2. In situ XPS measurements reveal that nitrogen gas chemisorbs on the generated metallic sites, followed by hydrogenation via permeating H, as adsorbed N and NH3 are found on the Ni surface. 
Our findings indicate that the first hydrogenation step to NH and the last NH3 desorption step might be limiting under the operating conditions used. Finally, the study was extended to Fe and Ru surfaces. However, the formation of surface iron oxide and nitride species on iron blocks the H permeation and prevents the reaction from advancing, while on ruthenium the stronger Ru-N bond might favour the recombination of permeating hydrogen to H2 over the hydrogenation of adsorbed nitrogen. Chapter 7 provides a systematic investigation of the effect of operating temperature (in the range 25 to 120 °C) and H permeation flux on the N2 reduction reaction on Ni, leading to a considerably improved NH3 synthesis process. At 120 °C, stable operation was achieved for over 12 h, with a 10 times higher cumulative NH3 production and an almost 40-fold increase in faradaic efficiency compared to the room temperature operation reported in Chapter 5. The results obtained in this chapter indicate that increasing the operating temperature enhances nitrogen adsorption and NH3 desorption, maintaining a steady N surface coverage throughout the NH3 synthesis cycle. Moreover, to operate the nitrogen reduction reaction in a stable and efficient manner, control over the population of N, NHx and H species at the catalyst surface is critical, as is the capability of oxides to be reduced by permeating H. As such, the adoption of H permeable electrodes allows the N activation and the H permeation to be controlled independently, to a large extent.","Ammonia; Nitrogen; Reduction; Electrochemistry; hydrogen; permeation; electrode; Detection; XPS; Chromatography; Mass spectrometry","en","doctoral thesis","","978-94-6366-633-6","","","","","","2023-04-03","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:49d4dd26-d228-4362-ba2b-ad9c70fa29fe","http://resolver.tudelft.nl/uuid:49d4dd26-d228-4362-ba2b-ad9c70fa29fe","Heat-affected zone in welded cold-formed rectangular hollow section joints","Yan, R. (TU Delft Steel & Composite Structures)","Veljkovic, M. (promotor); Hendriks, M.A.N. (promotor); Delft University of Technology (degree granting institution)","2023","High-strength steel (HSS) has higher strength but lower ductility than mild steel. The cross-section of the structural members may be reduced using HSS instead of mild steel, provided the buckling of elements does not govern the failure. The reduced member size benefits the environment and economy by means of lower energy consumption, lower carbon dioxide emissions, and less labour during the fabrication and construction stages.
The current design rules in prEN 1993-1-8 for welded hollow section joints are developed based on extensive experimental and numerical studies on joints made of mild steel (S235 and S355). A material factor Cf is stipulated to reduce the design resistance of the joint given the lower ductility of HSS than mild steel. In addition, the design yield strength of the material should be lower than 0.8 times the ultimate strength (fu) to calculate the resistance of punching shear failure and tension brace failure. However, these two strength restrictions are proposed based on limited experimental and numerical investigations on welded HSS tubular joints. The mechanical background behind the two restrictions is vague. Applying both Cf and the 0.8fu restriction would eliminate the benefits of using HSS, reducing the competitiveness in the market. Besides, the heat-affected zone (HAZ) often has the lowest strength in a weld region. The strength difference between HAZ and the base material (BM) is more significant for HSS than mild steel, indicating that HAZ plays a more critical role in welded HSS joints. Hence, the HAZ constitutive model should be considered in the numerical study of welded HSS joints in order to predict the load-deformation relationship and failure mode correctly.
This dissertation proposes a systematic approach to include HAZ in the finite element (FE) analysis of welded joints considering ductile failure mode. First, the mechanical and geometrical properties of HAZ were obtained from tensile tests on the milled welded coupon specimen, the low-force Vickers hardness test, and the microstructure observation. The full-field deformation of the milled welded coupon specimen was measured using the digital image correlation (DIC) technique. Using the DIC result, a method is proposed to identify the boundaries of different regions in the milled welded coupon specimen. The identified boundary matches the hardness result well. Based on the identified boundaries, the width of HAZ and the weld metal (WM) are determined, which provides geometric information for the measuring range of the virtual extensometer in DIC and for creating the FE model with different partitions (HAZ, BM, and WM). Due to the transverse constraint imposed by BM and WM, HAZ was under a biaxial or triaxial stress state during the tensile coupon test. The measured stress of HAZ is higher than that under the uniaxial stress state at a given strain. Hence, a method is proposed to correct the measured stress-strain relationship of HAZ. The modified stress-strain relationship is successfully validated against the tensile coupon test regarding the load-deformation relationship and the strain distribution on the specimen surface.
In order to accurately predict the load-deformation relationship and failure mode of welded joints, the Gurson-Tvergaard-Needleman (GTN) damage model is employed to simulate the failure of HAZ and BM. A computational homogenization analysis using representative volume element models was carried out to calibrate the yield-surface-related parameters (q1, q2, and q3). The effects of the hydrostatic pressure, the accumulated initial hardening strain, and the void volume fraction (VVF) on the yield surface were evaluated. An equation is proposed to describe the relationship between VVF and the q1 value with a constant q2. The fracture-related parameters (fc and ff) were calibrated against the tensile coupon test. In addition, as the procedures for modifying the constitutive model and calibrating the damage model are rather complicated, a semi-empirical material damage model for HAZ correlating to the mechanical properties of BM is proposed to facilitate the FE analysis of welded joints.
Monotonic tensile tests were conducted on 18 welded cold-formed rectangular hollow section (RHS) X-joints made of S355, S500, and S700 to investigate the validity of Cf and the 0.8fu restriction. The test result shows that a conservative resistance is predicted using the current design rules without applying Cf and the 0.8fu restriction. The calibrated GTN damage model for HAZ and BM was implemented in the fracture simulation of welded X-joints. The FE results agree well with the experimental results concerning the load-deformation relationship and the failure mode. Based on the validated X-joint FE model, the importance of including HAZ in the FE model was revealed by the FE analysis without the HAZ constitutive model. Finally, the semi-empirical material damage model for HAZ was employed to predict the tensile behaviour of all 18 welded X-joints.
In the first part of this thesis, we illustrate how model-based imaging can be used for 3D ultrasound imaging with a single ultrasound transducer equipped with a plastic coding mask. The plastic mask acts as an analog coder that scrambles the transmitted and received waves in a location-dependent manner. As a result, the temporal shape of an ultrasound echo can be used instead of the traditional method of using phase differences between sensors in a sensor array. Imaging is instead accomplished in a model-based fashion: by measuring the pulse-echo response of each pixel, we can form an image by solving a regularized linear least squares problem that takes into account the measured pixel-specific pulse-echo signals. The proposed device and imaging method are then verified experimentally.
In the following chapter, a coding mask design method is proposed for the aforementioned imaging device. A measurement model is formulated where the mask geometry is an explicit parameter to be optimized. After forming this model, a numerical optimization method is proposed and numerically tested. Our numerical experiments show that optimized mask geometries exhibit an energy-focusing effect on the region of interest, whilst simultaneously decorrelating echo signals between pixels.
In the second part of this thesis, in contrast, we consider methods for calibrating propagation models when the pulse-echo response per pixel is not known. The most important calibration challenge we consider is that of imaging through an aberrating layer in front of an ultrasound array. This could be subcutaneous fat or the human skull, for example. In this thesis we formulate a measurement model consisting of a part where wave propagation is known (i.e., the assumed homogeneous region behind the aberrating layer, where the contrast image of interest is located), and an unknown propagation part, consisting of the Green’s functions from an array sensor to any point on the interface of the aberrating layer and the imaging medium. We then investigate methods for finding this set of Green’s functions without explicitly measuring them (so-called ‘blind’ calibration).
The first proposed method exploits the singular value decomposition of the measurement data in combination with the assumed Toeplitz structure of the matrices representing the aberrating layer’s Green’s functions. However, the method is lacking in practicality since an additional set of measurements is required with a phase screen mounted on the interface of the aberrating layer and the imaging medium. The second method resolves these practical issues by utilizing a covariance matching technique. A sufficiently large set of measurements is obtained where each measurement is different due to, e.g., moving particles such as those in blood flow or micro-bubbles. Using the covariance of the data, algorithms are then defined that can estimate the transfer functions of the aberrating layer from the measurement covariance data.
Finally, we propose a method for estimating the electro-mechanical impulse response of an ultrasound sensor, by simply measuring its pulse-echo response from a flat plate reflector in front of the sensor. Estimating the one-way (electro-mechanical) impulse response then becomes a de-autoconvolution problem, which we address by solving a semi-definite relaxation of the de-autoconvolution problem.
In this PhD thesis, we develop tools for system-theoretical analysis of discrete-event systems when purely (max-plus) algebraic models, derived from timing constraints among events, are enriched with automata-theoretic conflict resolution schemes to treat variable schedules. We follow the hybrid dynamical systems approach that offers a powerful description of the interplay between the logical and timing aspects of discrete-event systems. On the one hand, the resulting hybrid automata allow a continuous-variable dynamic representation of discrete-event systems analogously to time-driven systems. On the other hand, the framework is convenient when timing constraints are of explicit concern in system dynamics and performance specifications. We address issues related to the stability, reachability, and solvability of discrete-event systems in this PhD thesis.
Firstly, we focus on formalising the discrete-event modelling framework as a novel max-plus-algebraic hybrid automaton analogously to the hybrid automaton framework in conventional algebra. There are mainly two phenomena of concern: synchronisation and choice of event occurrences. We illustrate how the proposed framework offers explicit flexibility in modelling the interplay of synchronisation and choice phenomena among event occurrences. We show that the proposed framework unifies and extends the existing max-plus-algebraic models of discrete-event systems with the variable ordering of events. We derive equivalence relations between the proposed framework and other automata-theoretic models with timing features such as weighted automata.
Stability analysis plays an important role in the operation and control of dynamical systems. There has been considerable research on generalising the notions of stability from linear time-invariant systems to hybrid systems in conventional algebra. The research for the counterpart in max-plus-algebraic systems is still limited. This motivates us to study the stability of discrete-event systems in the second part of the thesis. We present a novel stability analysis framework under the broad setting of max-plus-algebraic hybrid automata. We achieve this by reformulating various notions of stability of discrete-event systems phrased in the classical Lyapunov sense. We then integrate tools from max-plus algebra and Lyapunov theory to demonstrate the decision-making capabilities of the proposed approach.
In the last part of the PhD thesis, we focus on the parametric modelling of constrained discrete-event systems. This allows capturing variations in the timing and ordering of event occurrences within the framework of max-plus-algebraic hybrid automata analogously to the conventional time-driven linear parameter-varying systems. The analysis of the effect of parameter variations on the existence of admissible trajectories is of paramount importance in model-based decision-making for discrete-event systems. Therefore, we focus on validating the coherence of the obtained model in the presence of nonlinear implicitness in the system dynamics. In our analysis, we borrow tools from max-plus algebra, monotone functions theory, graph theory, and computational geometry. Finally, we study the application of the proposed approach to an urban railway system.","Max-plus algebra; Discrete-event systems; Hybrid systems; Lyapunov stability; Piecewise-affine systems","en","doctoral thesis","","978-90-833032-4-6","","","","","","2023-12-31","","","Team Ton van den Boom","","",""
"uuid:291baefe-c4b9-46ea-b250-a6c8f4e6ece8","http://resolver.tudelft.nl/uuid:291baefe-c4b9-46ea-b250-a6c8f4e6ece8","Pressure-assisted CU sintering for SiC Die-attachment application","Liu, X. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Ye, H. (copromotor); Microelectronics (degree granting institution); Delft University of Technology (degree granting institution)","2023","","nano Cu sintering; Silicon carbide power electronics packaging; Shear Strength; Mechanical reliability; Thermal conductivity; Molecular dynamics; Static and dynamic test; Nanoindentation","en","doctoral thesis","","978-94-6473-018-0","","","","","","2025-01-30","","","Electronic Components, Technology and Materials","","",""
"uuid:ac1af560-c551-46ef-94b8-8c2dbca21176","http://resolver.tudelft.nl/uuid:ac1af560-c551-46ef-94b8-8c2dbca21176","DISARMing viral invaders of bacteria","Aparicio Maldonado, C. (TU Delft BN/Stan Brouns Lab)","Brouns, S.J.J. (promotor); Luzia de Nobrega, F. (copromotor); Delft University of Technology (degree granting institution)","2023","The research performed in this thesis focuses on understanding the interactions between bacteria and phages that occur via anti-phage defense system mechanisms. It comprises literature research and experimental studies that provide an overview of the diverse defense systems described at the time of submission.","","en","doctoral thesis","","978-90-8593-548-3","","","","","","","","","BN/Stan Brouns Lab","","",""
"uuid:9a32a2ae-0ba7-4686-87df-24cb8be66133","http://resolver.tudelft.nl/uuid:9a32a2ae-0ba7-4686-87df-24cb8be66133","Probing the hyperfine structure of Fe-based water-gas shift catalysts","Ariëns, M.I. (TU Delft RST/Radiation, Science and Technology)","Brück, E.H. (promotor); Hensen, E.J.M. (promotor); Dugulan, A.I. (copromotor); Delft University of Technology (degree granting institution)","2023","Hydrogen gas is an essential reagent in numerous industrial processes including ammonia synthesis. Ammonia is a key intermediate in the synthesis of nitrogen-based fertilisers, e.g. nitrates and urea. According to recent estimates (2008), approximately half of the world population is fed by nitrogen-based fertilisers of synthetic origin. Therefore, statistically speaking, every other person reading this sentence owes their existence to ammonia synthesis. Nowadays, most hydrogen gas is produced from natural gas via steam reforming followed by a dual-stage water-gas shift reaction. The catalyst used in high-temperature water-gas shift (HTS) is chromium/copper-promoted iron oxide. Chromium is known to stabilise the active iron-oxide phase magnetite (Fe3O4) against sintering and over-reduction to α-Fe and Fe-carbides, while copper enhances the activity by providing additional active sites. The chromium stabiliser has been used for over a century because of the excellent stability it provides and its low cost. Chromium is added to the catalyst precursor via a co-precipitation/calcination route. An unintended side effect of calcination is that some of the chromium can oxidise to chromium-6, which is subject to strict handling regulations and partial bans. The active magnetite phase has an inverse spinel structure composed of a 1:1:1 mixture of tetrahedral Fe3+, octahedral Fe3+, and octahedral Fe2+, resulting in an octahedral Fe3+/Fe2+ redox couple. The active sites of the bulk magnetite catalyst are the surface octahedral Fe3+/Fe2+ redox couple.
Rational design of catalysts with alternative dopants to chromium is severely hindered because of a poor understanding of chromium incorporation into the inverse spinel magnetite structure. Accordingly, the position of chromium and its effect on the magnetite structure and the octahedral Fe3+/Fe2+ redox couple were investigated in detail...","Water-gas shift; Fe-based catalysts; industrially relevant conditions; doped magnetite; (in situ) Mössbauer spectroscopy; (NAP)-XPS","en","doctoral thesis","","978-94-6419-713-6","","","","","","","","RST/Radiation, Science and Technology","","","",""
"uuid:65db94ec-c7b6-4c54-82b8-273dc18b3081","http://resolver.tudelft.nl/uuid:65db94ec-c7b6-4c54-82b8-273dc18b3081","Adsorption and Electrokinetics at Silica-Electrolyte Interfaces: A Molecular Simulation Study","Döpke, M.F. (TU Delft Complex Fluid Processing)","Padding, J.T. (promotor); Hartkamp, Remco (copromotor); Delft University of Technology (degree granting institution)","2023","Experimentally investigating the nanoscale behavior at oxide-electrolyte interfaces has proven to be extremely challenging. Molecular Dynamics (MD) simulations have arisen as a potential computational alternative to gain atomic-level insights at these interfaces. But how accurately do these simulations represent the physics and chemistry at the interface? In many situations we in fact do not know, as validation at the interface remains challenging. The force fields used in MD simulations, which describe the inter-particle interactions, are generally optimized for purposes deviating considerably from interfaces. Yet, these same force fields are blindly used to model surface-fluid interactions, yielding wildly varying results for, for example, ion adsorption. This dissertation tackles the problem of simulating interfaces by critically looking at MD simulations and proposing novel solutions, both for MD simulations in general and specifically targeting their validity and limitations with regards to modeling interfaces…","molecular simulation; molecular dynamics simulation; interface; solid-fluid interface; amorphous silica; electrolyte; silica-electrolyte interface","en","doctoral thesis","","978-94-6384-404-8","","","","","","","","","Complex Fluid Processing","","",""
"uuid:8b974567-3db2-48e1-a54c-19ad6f615449","http://resolver.tudelft.nl/uuid:8b974567-3db2-48e1-a54c-19ad6f615449","Spectroscopy and imaging of spin waves and valley excitons in two dimensions","Carmiggelt, J.J. (TU Delft QN/vanderSarlab)","Steele, G.A. (promotor); van der Sar, T. (promotor); Delft University of Technology (degree granting institution)","2023","Knowledge about the magnetic and electronic properties of materials does not only expand our fundamental understanding of nature, but is also crucial for the development of new technologies. Today’s electronic devices rely on electrical currents to transport and process information, which generate a lot of heat waste via Joule heating. These devices could become much more energy efficient by using the electron’s spin or valley degree of freedom to encode information, rather than its charge. In this dissertation we study the elementary excitations of magnetic and semiconducting materials, called spin waves and excitons, which have been proposed as respectively spin and valley information carriers in future electronic devices...","Spin waves; valleytronics; excitons; nonlinear magnonics; magnetism; NV centers","en","doctoral thesis","","978-90-8593-551-3","","","","","","","","","QN/vanderSarlab","","",""
"uuid:5ac369fa-595c-41b1-a5e0-d96337abe1e8","http://resolver.tudelft.nl/uuid:5ac369fa-595c-41b1-a5e0-d96337abe1e8","“But, it’s just a really good idea!”: Investigating the guidance of design feedback processes to mitigate pupils' fixation and stimulate their creative thinking","Schut, A. (TU Delft Science Education and Communication)","de Vries, M.J. (promotor); Klapwijk, R.M. (copromotor); Delft University of Technology (degree granting institution)","2023","","","en","doctoral thesis","","978-94-6419-686-3","","","","","","","","","Science Education and Communication","","",""
"uuid:7d0e8676-8323-448a-af4e-9fc6b01b775f","http://resolver.tudelft.nl/uuid:7d0e8676-8323-448a-af4e-9fc6b01b775f","Spatio-Temporal Multi- Objective Optimization of Agricultural Best Management Practices","Uribe, N. (TU Delft Water Resources)","Solomatine, D.P. (promotor); Corzo Perez, G.A. (copromotor); Delft University of Technology (degree granting institution)","2023","Farmers around the world are facing the need to improve crop yield due to substantial increase in food demand. However, in an effort to meet the global growing food demand, nutrient pollutants in runoff have also increased due to intensified agricultural practices. For this reason, stakeholders and decision-makers have tried to shift from conventional agricultural practices to other types of practices, commonly referred to as best management practices (BMPs). The emphasis of agricultural BMPs (Ag-BMPs) is on environmental protection, which in this research is extended to consider food production, as well as environmental, economic, and social factors as a part of Ag-BMPs.","","en","doctoral thesis","","978-90-73445-48-2","","","","","","","","","Water Resources","","",""
"uuid:f5c3f45b-6862-4f8c-900a-fa4de4ce731e","http://resolver.tudelft.nl/uuid:f5c3f45b-6862-4f8c-900a-fa4de4ce731e","Landslide hazard assessment: Hydro-meteorological thresholds in Rwanda","Uwihirwe, J. (TU Delft Water Resources)","Bogaard, T.A. (promotor); Hrachowitz, M. (promotor); Delft University of Technology (degree granting institution)","2023","For the development of regional landslide early warning systems, empirical-statistical thresholds are of crucial importance. The thresholds indicate the meteorological and hydrological conditions initiating landslides and are an affordable approach towards reducing people’s vulnerability to landslide hazards. This thesis defined different landslide hydro-meteorological thresholds in Rwanda and evaluated their predictive capabilities. Chapter 1 identifies the landslide problem to society, opportunities for possible solutions, overview of the previous research and knowledge gap. It defines the research concepts, research objectives and outlines...","Landslide; Hydro-geology; Hydro-Meteorology; Groundwater; Soil Moisture","en","doctoral thesis","","978-94-6366-644-2","","","","","","","","","Water Resources","","",""
"uuid:7e4f868b-7716-4c36-8fa0-b55572d1572b","http://resolver.tudelft.nl/uuid:7e4f868b-7716-4c36-8fa0-b55572d1572b","Physics and Control of Transonic Buffet","D'Aguanno, A. (TU Delft Aerodynamics)","van Oudheusden, B.W. (promotor); Schrijer, F.F.J. (copromotor); Delft University of Technology (degree granting institution)","2023","The flight envelope of an aircraft operating at high subsonic velocities is bounded by several limitations, one of which is the wing experiencing oscillations of a shock wave on its suction side for a certain range of Mach number (Ma), angle of attack (α) and Reynolds number (Re). This phenomenon is referred to as transonic buffet and it may ultimately result in violent structural oscillations of the wing (the so-called buffeting), in addition to the oscillations of the aerodynamic loads. Notwithstanding the relevance of this topic, there is not yet a single accepted explanation of its mechanism; therefore, the first aim of this experimental project is to obtain further insight into the physics of transonic buffet (Part I). As a second objective, in Part II different strategies for the control of buffet have been investigated. The experiments of this study have been carried out in the transonic-supersonic wind tunnel of TU Delft on a supercritical airfoil and wings based on the OAT15A airfoil. The behavior of this phenomenon has been scrutinized using optical experimental techniques, such as particle image velocimetry (PIV), schlieren, and background-oriented schlieren (BOS)...","Transonic buffet; Shockwave; Control systems; Supercritical airfoil; Swept wings; PIV; Schlieren; BOS","en","doctoral thesis","","978-94-6366-652-7","","","","","","","","","Aerodynamics","","",""
"uuid:88ce92a5-153e-47e4-bb21-999237161ab7","http://resolver.tudelft.nl/uuid:88ce92a5-153e-47e4-bb21-999237161ab7","Extracellular Polymeric Substances of ""Candidatus Accumulibacter"": Composition, application and turnover","Tomas Martinez, S. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Weissbrodt, D.G. (promotor); Lin, Y. (promotor); Delft University of Technology (degree granting institution)","2023","The majority of bacteria grow in the form of microbial aggregates known as biofilms. In these biofilms, microorganisms are embedded in a mixture of extracellular polymeric substances (EPS) produced by the microorganisms themselves. EPS is a complex mixture of biopolymers of different nature, such as polysaccharides, proteins, nucleic acids or lipids, among others. In spite of the significant progress over the last decades, EPS is still a black box waiting to be opened, in terms of specific composition, function, structure and production.
Biofilms have great importance in many environmental engineering processes, such as aerobic granular sludge (AGS). AGS is a novel biological wastewater treatment process in which microorganisms are stimulated to form compact granules. Among the complex microbial community in AGS, polyphosphate accumulating organisms (PAOs) are of great importance, due to their role in phosphate removal and granule stabilization. Because of their dominance in AGS and their rapid anaerobic carbon sequestration, they are assumed to be the main EPS producers in AGS. Therefore, PAOs (specifically the well-studied “Candidatus Accumulibacter phosphatis”) can be used as a model microorganism for the study of EPS of AGS.
The goal of this thesis is to study the EPS of “Ca. Accumulibacter” in terms of specific composition, application and synthesis/consumption. A better characterization of the EPS of “Ca. Accumulibacter” will lead to a comprehensive understanding of this microorganism and further optimization of the granular sludge processes, and their application...
This Ph.D. research aims to develop a computational design methodology for configurational layout optimization of hospital buildings concerning physical matters & human factors, which are directly attributable to the layout/configuration of the hospital. In the optimization models, the considered performance indicators relate to patients (e.g. ease of way-finding), staff (e.g. average walking-time), and operations (e.g. fitness for workflows). Two case studies are considered: (1) reconfiguration of existing hospitals; and (2) design of new hospitals, focusing on “layout planning” and “corridor design”. The developed models are programmed in the form of design tool-kits for supporting conceptual design phases.
Effectively, this project presents an interdisciplinary methodological framework that can tackle hospital layout design problems by integrating Computational Design workflows, Graph Theory techniques, Operations Research, and Computational Intelligence into the field of Architectural Space Planning.
When employees interact with any system in their organization, interventions can be aimed at employees and at the system. When investigating possibilities to improve the system, it is important to take into account how employees interact with the system. We need to be able to predict human behavior, and in order to do that, we need to understand human behavior...","Human error; safety; SPAD; Incidental learning; human factors","en","doctoral thesis","","","","","","","","","","","Safety and Security Science","","",""
"uuid:df26ce2c-4b0f-41ee-93ff-301aa82457c3","http://resolver.tudelft.nl/uuid:df26ce2c-4b0f-41ee-93ff-301aa82457c3","Development of a teaching-learning sequence for scientific inquiry through argumentation in secondary physics education","Pols, C.F.J. (TU Delft ImPhys/Practicum support)","de Vries, M.J. (promotor); Dekkers, P.J.J.M. (copromotor); Delft University of Technology (degree granting institution)","2023","Enabling students to engage in independent scientific inquiry is a highly valued but seemingly elusive goal of (secondary school) science education. Therefore, this study aims to determine and understand how to effectively develop inquiry knowledge in students. The chosen approach to enable students to plan, carry out and evaluate a physics inquiry, is to regard an inquiry as the construction of a scientifically cogent argument for a specific claim. In an authentic scientific inquiry, the researcher invests - from the very start of the inquiry - time and effort in making the inquiry’s claim as indisputable as possible. The researcher strives for optimal cogency of the argument in support of that claim. Throughout the various studies in this thesis it is argued that this idea can be translated to classroom situations: fostering the insight that students’ inquiry should result in a complete, correct and substantiated answer to the research question. It is shown that this is a meaningful strategy in enabling them to engage in independent scientific inquiry: it results in a cognitive need in students to develop the knowledge that allows them to produce such an answer. As such, this thesis shows that argumentation is an indispensable part of teaching scientific inquiry. Explicit attention for argumentation promotes development of students’ inquiry knowledge.","scientific inquiry; argumentation; practical work; Physics Education","en","doctoral thesis","","","","","","","","","","","ImPhys/Practicum support","","",""
"uuid:6f0520c8-791d-41fe-a3d8-b7bc993e3b38","http://resolver.tudelft.nl/uuid:6f0520c8-791d-41fe-a3d8-b7bc993e3b38","Safe Online and Offline Reinforcement Learning","Simão, T. D. (TU Delft Algorithmics)","Spaan, M.T.J. (promotor); Stikkelman, R.M. (copromotor); Delft University of Technology (degree granting institution)","2023","Reinforcement Learning (RL) agents can solve general problems based on little to no knowledge of the underlying environment. These agents learn through experience, using a trial-and-error strategy that can lead to effective innovations, but this randomized process might cause undesirable events. Therefore, to enable the adoption of RL in our daily lives, we must ensure their reliability and safety. Safety requirements are often incompatible with the naive random exploration usually performed by RL agents. Safe RL studies how to make such agents more reliable and how to ensure they behave appropriately. We investigate these issues in online settings, where the agent interacts directly with the environment, and in offline settings, where the agent only has access to historical data and does not interact directly with the environment.
While safety has numerous facets in RL, in this thesis, we focus on two of them. First, the safe policy improvement problem, which considers how to compute a policy offline reliably. Second, the constrained reinforcement learning problem, which investigates how to learn a policy that satisfies a set of safety constraints. Next, we detail these perspectives and how we approach them.
The first perspective is of particular interest in offline settings. In this setting, we can imagine some decision mechanism has been operating the system; we refer to this mechanism as the behavior policy. Assuming these past decisions were recorded in a database, we would like to use RL to compute a new policy using such a database. It would be difficult to convince stakeholders to switch to the policy computed by RL if there were a chance that the new policy would cause considerable performance loss compared to the behavior policy. Therefore, developing algorithms that reliably compute policies that outperform the behavior policy is essential, as this gives confidence to decision-makers that the new policy will not degrade the performance of the underlying system. The safe policy improvement problem formalizes these issues.
Considering that real-world data is limited and costly, in Chapter 3, we investigate how to improve the sample complexity of safe policy improvement algorithms by exploiting the factored structure of the underlying problem. In particular, we consider problems where the dynamics of each state variable depend only on a small subset of the state variables. Exploiting this structure, we develop RL algorithms that require orders of magnitude fewer data to find better policies than their counterparts that ignore such structure. This method also generalizes samples from one state to another, which allows us to compute improved policies if the data only partially cover the problem.
In many real-world applications such as dialogue systems, pharmaceutical tests, and crop management, data is collected under human supervision, and the behavior policy remains unknown. In Chapter 4, we apply safe policy improvement algorithms with an estimated policy built from data. We formally provide safe policy improvement guarantees over the behavior policy even without direct access to it. Our empirical experiments on tasks with finite and continuous states support the theoretical findings.
The second safety perspective is relevant for online RL agents. Engineering a reward signal that allows the agent to maximize its performance while remaining safe is not trivial. Therefore, it is better to decouple safety from reward using constrained Markov decision processes (CMDPs), where an independent signal models the safety aspects. In this setting, an RL agent can autonomously find trade-offs between performance and safety. Unfortunately, most RL agents designed for the constrained setting only guarantee safety after the learning phase, which prevents their direct deployment.
In Chapter 6, we investigate settings where a concise abstract model of the safety aspects is given, a reasonable assumption since a thorough understanding of safety-related matters is a prerequisite for deploying RL in typical applications. We propose an RL algorithm that uses this abstract model to learn policies safely. During the training process, this algorithm can seamlessly switch from a conservative to a greedy policy without violating the safety constraints. We prove that this algorithm is safe under the given assumptions. Empirically, we show that even if safety and reward signals are contradictory, this algorithm always operates safely, while when they are aligned, this approach also improves the agent's performance. Finally, we study how to reduce the performance regret of this algorithm without sacrificing the safety guarantees.
To summarize, we develop new RL methods exploiting prior knowledge about the structure of the problem. We propose reliable offline algorithms that can improve the policy using less data and online algorithms that comply with safety constraints while learning. Besides safety and reliability, we also touch on other issues preventing the deployment of RL to real-world tasks, such as data efficiency and learning with a fixed batch of data. Nevertheless, we must recall that other challenges, such as partial observability and explainability, still require attention. We hope this thesis serves as a stepping stone toward combining different types of prior knowledge to improve various aspects of RL.
Semiconductors with strong spin-orbit coupling proximitized with a superconductor are another prominent example of hybrid devices. Although semiconductors and conventional superconductors have been well understood for decades, their combination is predicted to yield a new state of matter known as topological superconductivity. Topological superconductors host Majorana bound states: topologically protected quasiparticles with non-abelian statistics that are promising candidates to realize fault-tolerant qubits. Reliably creating and manipulating Majorana modes remains one of the outstanding challenges in modern condensed matter physics.","","en","doctoral thesis","","978-94-6366-641-1","","","","","","","","","QN/Akhmerov Group","","",""
"uuid:f2047227-f838-4527-a751-acddfee08c13","http://resolver.tudelft.nl/uuid:f2047227-f838-4527-a751-acddfee08c13","Climate Change and the Resilience of Collective Memories: The Case Study of Fındıklı in Rize, Türkiye","Aktürk, Gül (TU Delft Marketing and Consumer Research; TU Delft History, Form & Aesthetics)","Hein, C.M. (promotor); van Bergeijk, H.D. (copromotor); Delft University of Technology (degree granting institution)","2023","Vernacular heritage sites encompass customs, practices, places, objects, artistic expressions, and values that are innate to a particular place and time. Climate knowledge of the particular place and time is embedded in vernacular settlements and lifestyles along with other environmental, cultural, and societal determinants of the place. Rebuilt, restored, and adapted, vernacular settlements evolved with changing climate, cultural practices, community aspirations, and
a gradual influx of modernization and urbanization. However, their legacy —as represented by traditional houses from the pre-industrial period that were built by laypeople— is challenged by climate and disaster risks, e.g., loss of lands, food sources, water resources, intangible values, and displacement. Although the impacts of climate change combined with anthropic influences have been recognized as a threat to cultural heritage by scholars, this underappreciated form
of cultural heritage has not been the focus of integrated discussions of climate and disaster risks. The aim of this dissertation, therefore, is to reveal the deterioration caused by changing climate and anthropic interventions on vernacular heritage, both in spatial planning decisions such as urban development projects and in local-level practices such as maladaptation, drawing on the case of Fındıklı of Rize in Türkiye. The factors behind the deterioration of vernacular heritage sites under a changing climate and the ways to achieve climate resilience are analysed through interviews with local people and observations from on-site visits conducted in January and July 2019, in addition to mapping.","vernacular heritage; climate resilience; river flooding; landslides; disaster risk management","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-645-9","","","","","","2023-01-11","","","Marketing and Consumer Research","","",""
"uuid:af2ee2aa-564e-4702-bd36-e3c724ac76b8","http://resolver.tudelft.nl/uuid:af2ee2aa-564e-4702-bd36-e3c724ac76b8","It's a Trap: Studying the quantum dot surface on an atomistic scale","du Fossé, I. (TU Delft ChemE/Opto-electronic Materials)","Houtepen, A.J. (promotor); Grozema, F.C. (promotor); Delft University of Technology (degree granting institution)","2023","Due to their size-dependent properties, high photoluminescence quantum yield and relatively cheap solution-based processing, colloidal quantum dots (QDs) are of great interest for application in optoelectronic devices. However, the efficiency of these devices is often limited by the presence of trap states: localized electronic states that lead to energy levels in the bandgap. Although much research has been geared to passivating (i.e., removing) these trap states, our understanding of the atomic configurations that lead to traps remains limited. Therefore, the work presented in this thesis is aimed at investigating trap states and the QD surface on an atomistic scale. We use a combination of experimental and computational techniques to show that reduced metal sites can lead to trap-formation, and that these trap states can be dynamic in nature. In addition, we find suggestions that the QD surface is more complex than often assumed and that surface reconstructions may play a pivotal role in the delocalization of the wavefunction. Lastly, we study the formation of deep traps in CsPbBr3 perovskite nanocrystals. We find that the traditional picture of defect tolerance in these materials is incomplete and should also include the local electrostatic potential in order to explain deep traps.","quantum dots; dft; trap states; semiconductors; nanocrystals; surface","en","doctoral thesis","","978-94-6421-957-9","","","","","","2024-01-01","","","ChemE/Opto-electronic Materials","","",""
"uuid:c4d83571-9445-44b7-b495-5d18ca66ef4f","http://resolver.tudelft.nl/uuid:c4d83571-9445-44b7-b495-5d18ca66ef4f","Induced superconductivity in antimony-based two-dimensional electron gases","Möhle, C.M. (TU Delft QRD/Goswami Lab)","Kouwenhoven, Leo P. (promotor); Goswami, S. (copromotor); Delft University of Technology (degree granting institution)","2023","Majorana zero modes (MZMs) are a topic of intense research as they constitute the main building block of topological qubits - a qubit type with potentially enhanced coherence time. A promising way to create these quantum states is to couple a one-dimensional (1D) semiconducting segment with spin-orbit interaction to a superconductor, in the presence of an external magnetic field. Growing the active semiconductor as a 2D layer and creating 1Dstructures by top-downprocessing might allowto realize complex multiqubit devices in the future. This thesis explores antimony-based two-dimensional electron gases (2DEGs), known for their favorable material properties, as platforms for topological superconductivity....","Two-dimensional electron gases; planar Josephson junctions; mesoscopic superconductivity","en","doctoral thesis","","978-90-8593-549-0","","","","","","","","","QRD/Goswami Lab","","",""
"uuid:36b8a133-2a5a-49b3-9701-f75f102bbe3d","http://resolver.tudelft.nl/uuid:36b8a133-2a5a-49b3-9701-f75f102bbe3d","Cyclic behaviour of laterally loaded (mono)piles in sand: With emphasis on pile driving effects","Kementzetzidis, E. (TU Delft Offshore Engineering)","Metrikine, A. (promotor); Pisano, F. (copromotor); Delft University of Technology (degree granting institution)","2023","At the end of 2019, the European Union (EU) put forward the European Green Deal to facilitate the technological progress necessary to achieve CO2-neutrality by 2050. Such a monumental achievement would require massive investments in infrastructure for the harvesting, storage and the transnational transportation of green energy. To date, the more mature of the scalable (cf. to hydroelectric) green-energy resources is offshore wind, with joint academic and industry efforts allocated to reduce its capital expenditure. Approximately 13-37% of the required investment for offshore wind farms is currently expended on the design, manufacturing, and installation of the substructure. Further reduction in the cost of offshore wind can be achieved by addressing the main technical challenges associated with the predominant offshore wind foundation, i.e., the monopile. The main challenges typically relate to its lifetime operations, namely, (i) the identification of the wind turbine's fundamental frequencies, which are strongly dependent on the monopile-soil interaction, (ii) and the prediction of the lifetime foundation tilt, but also the current installation technology (impact driving); the current norm in the offshore industry. In particular, impact driving is associated with (i) long installation times, especially in the presence of competent soils, (ii) excessive use of construction material (steel) to avoid pile damage under many hammer blows, and (iii) costly underwater noise mitigation measures to reduce noise the levels of installation-borne noise emissions harmful to marine life.
In an attempt to accelerate the growth of offshore wind, the Netherlands, country of origin of this study, has supported several research initiatives to reduce the engineering and manufacturing costs of the prevalent offshore wind foundation in the country (the monopile). This study elaborates upon the experimental findings of two major research projects, namely the DISSTINCT (2014-2018) and the Gentle Driving of Piles (GDP, 2018-2022) projects, each designed to address specific technical uncertainties associated with the foundation concept. The DISSTINCT project (launched in 2014) aimed to improve the understanding of the natural frequency of installed monopiles as well as the engineering procedures used in the identification thereof. By conducting experiments at full scale on a monopile installed in the IJsselmeer lake in the Netherlands, the experimental campaign produced invaluable data on the dynamic response of monopiles during small-amplitude lateral vibrations. Later, the GDP project (launched in 2018) was designed to propose, engineer, and demonstrate a novel monopile installation procedure foreseen to alleviate most of the aforementioned installation-related challenges: the Gentle Driving of Piles (GDP) method. Moreover, the project would provide answers to questions concerning the long-term response of (mono)piles in sandy soils, relative to the installation method. For these reasons, an extensive experimental campaign was conducted in the port of Rotterdam (Maasvlakte II), where a total of 9 piles were driven into the sandy Maasvlakte soil via different driving procedures, namely with the established impact hammering, the traditional axial vibro-driving, and the new GDP method. 
Subsequently, the cyclic lateral performance of four of these piles (which were heavily instrumented) was evaluated via an elaborate 82,000-load-cycle (≈42 hours) loading programme of slow (0.1 Hz) high-amplitude and fast (0.1 - 4 Hz) low-amplitude cyclic forces applied to the (mono)piles' heads.
This study elaborates and builds upon experimental findings from the above-mentioned test campaigns. These measurements were first carefully examined, and later interpreted using a variety of modelling tools (both 1D and 3D FE modelling) formulated and adapted to meet the particular geotechnical and loading challenges of the examined fieldwork. Enabled by the diversity of the field and numerical work performed, this study addresses a number of engineering challenges and knowledge gaps related to the design of monopiles, namely i) their post-installation resonance frequency, ii) the long-term response to environmental loading, and iii) the impact of the installation method on the long-term operations. In particular, 3D FE modelling was adopted to successfully simulate the dynamic response of the examined monopile in the DISSTINCT project. The modelling efforts enabled the interpretation of the field test measurements, and in turn, inspired confidence in the suitability of available simulation tools to identify the resonance frequencies of monopile foundations, and accurately calculate dynamic soil-monopile interactions. For the interpretation of the GDP field test data, 1D FE modelling was employed. In the field, the elaborate lateral loading programme returned a fairly complex cyclic pile response, with pronounced differences in the performance of piles installed by different installation methods. The particular geotechnical conditions at the GDP site, i.e., site inhomogeneity and the 4 m deep unsaturated topsoil, prevented the direct comparison of the installation methods. This was later achieved through the formulation of a cyclic soil reaction p-y model able to simulate soil ratcheting and gapping effects. 
The results provided rich insights into the impact of relevant installation effects on the cyclic pile response over many loading cycles and indicated that the GDP-installed piles performed excellently overall under lateral cyclic loading.","monopiles; cyclic loading; cyclic p-y modelling; monopile installation (effects); GDP driving","en","doctoral thesis","","978-94-6419-699-3","","","","","","","","","Offshore Engineering","","",""
"uuid:ac470aba-0029-4fc5-aa8e-71deae8ac3ca","http://resolver.tudelft.nl/uuid:ac470aba-0029-4fc5-aa8e-71deae8ac3ca","Design as Exploration: Multi-Objective and Multi- Disciplinary Optimization (MOMDO) of Indoor Sports Halls","Yang, D. (TU Delft Design Informatics)","Sariyildiz, I.S. (promotor); Sun, Y. (promotor); Turrin, M. (copromotor); Delft University of Technology (degree granting institution)","2022","There are an increasing number of optimal-design paradigms used in architectural design nowadays. In these paradigms, a design task is formulated, or partially formulated, as an optimization problem. Multi-Disciplinary Optimization and Multi-Objective Optimization, as two important optimal-design paradigms, have shown their great potential in improving the performances of complex buildings in recent decades. Nevertheless, current paradigms for ill‑defined conceptual architectural design still lack ways to ensure the achievement of a reliable optimization problem, which hinders reliable design solutions despite the use of advanced optimization algorithms.
To address this problem, it is necessary to shift the focus from Optimization Problem Solving to Optimization Problem Formulation. This research particularly focuses on knowledge‑supported, dynamic and interactive Optimization Problem Re-Formulation in order to construct a new Multi‑Objective and Multi-Disciplinary Optimization (MOMDO) method suitable for use in ill‑defined conceptual architectural design. The proposed method consists of two subtype methods: Non‑dynamic, Interactive Re-formulation method (Subtype-I) and Dynamic, Interactive Re‑formulation method (Subtype-II), which can be used to explore design space in a convergent and divergent manner respectively. To support the re-formulation, various kinds of information and knowledge need to be extracted by utilizing different computational techniques, such as advanced sampling algorithms, Self-Organizing Map, Hierarchical Clustering, Smoothing Spline Analysis of Variance, Two-Level Variable Structure and modular programming. Moreover, a software workflow that can provide these computational techniques is developed; it integrates McNeel’s Grasshopper, ESTECO's modeFRONTIER and simulation software tools Daysim, EnergyPlus and Karamba3D. With the support of this software workflow, the proposed method is demonstrated via two case studies concerning the conceptual design of indoor sports halls.","","en","doctoral thesis","","978-94-6366-643-5","","","","","","","","","Design Informatics","","",""
"uuid:b266966a-b97b-4f7a-9db3-f85137766c80","http://resolver.tudelft.nl/uuid:b266966a-b97b-4f7a-9db3-f85137766c80","Towards A Poetics of Dwelling: Exploring Nearness Within the Chinese Literati Garden","Lu, L. (TU Delft History, Form & Aesthetics)","Hein, C.M. (promotor); Bracken, G. (copromotor); Delft University of Technology (degree granting institution)","2022","This thesis starts with a worrisome observation tied to various phenomena across modern built environments: humans today are experiencing a weakened relatedness to and reduced intimacy with the world around them. In stark contrast to the general trend, however, most Chinese literati gardens maintain their traditional rich conditions, enabling their visitors to experience a unique, high-quality experience of relatedness to and intimacy with the world, which may serve as an antidote to the existing disruptive modern condition. What lessons can be learned from the Chinese literati gardens to address this weakened intimacy of relatedness in modern built environments? Motivated by this question, this thesis takes the Heideggerian notion of Nearness as its foundation. Through a contextually relevant interpretation of the meaning of Nearness in Heideggerian discourse, it first establishes a theoretical framework through which to assess how the experience of Nearness—the ontological relatedness to and intimacy with the world— generally occurs within built environments. Next, taking the Master of the Nets Garden as a case study, it reveals the various embedded spatial-experiential settings and complex mechanisms that continuously facilitate rich, strong, and multi-dimensional experiences of Nearness. Finally, it reflects on some of the key relevant issues, including what benefits and enlightenments the findings of this thesis could bring to current architectural practices. 
Overall, by exploring this essential aspect of the literati garden, the thesis equips contemporary spatial practitioners with the theoretical and practical tools necessary to recapture the high-quality experiences of Nearness within their works in the modern era.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-638-1","","","","","","","","","History, Form & Aesthetics","","",""
"uuid:0fb850a4-4294-4181-9d74-857de21265c2","http://resolver.tudelft.nl/uuid:0fb850a4-4294-4181-9d74-857de21265c2","Wave shape prediction in complex coastal systems","de Wit, F.P. (TU Delft Environmental Fluid Mechanics)","Reniers, A.J.H.M. (promotor); Tissier, M.F.S. (copromotor); Delft University of Technology (degree granting institution)","2022","When waves propagate towards the coast, nonlinear interactions occur under the influence of decreasing water depth and variable ambient currents. This changes the initially harmonic wave shape into a nonlinear wave shape due to the presence of bound waves accompanying the freely propagating primary waves. The nonlinear wave shape ranges from skewed waves with steeper crests and flatter troughs to asymmetric waves where the wave front has pitched forward creating a saw-tooth wave shape at breaking. Analogous to the nonlinear surface elevation, also the near-bed orbital wave velocity is nonlinear. This results in a wave-shape driven sediment transport, generally directed in the direction of wave propagation. For accurate predictions of the sediment transport, it is thus important to know the wave shape. This is especially important in complex coastal systems with strong variations in bathymetry where wave-induced sediment transport gradients affect the subsequent morphological evolution. Therefore, this thesis focuses on measuring and modelling of the nonlinear wave shape....","wave shape; bound wave height; bispectrum; triads; BWE model","en","doctoral thesis","","978-94-6384-400-0","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:412a4272-9ec2-4aba-852d-981e392d64d0","http://resolver.tudelft.nl/uuid:412a4272-9ec2-4aba-852d-981e392d64d0","Rebuilding Cytokinesis One Molecule at a Time","Baldauf, L. (TU Delft BN/Gijsje Koenderink Lab)","Koenderink, G.H. (promotor); Idema, T. (copromotor); Delft University of Technology (degree granting institution)","2022","Cells are the fundamental units of life. They make up all living things, from bacteria that live in the soil, to archea that give thermal springs their bright colors, to trees and humans. All of these cells share some common functions: they build themselves from basic building blocks, following the instructions of their genetic blueprint, and procreate by growing and dividing. The building blocks must be taken up from the environment and metabolized, and cell division requires the cell to be able to control its own shape. While these basic tasks are shared across the tree of life, different types of organisms have evolved distinct molecular machineries to complete them. In this thesis, we take a close look at animal cells, and ask how they control their shape as they must do in order to move, eat, sense, and divide. In animal cells, shape is controlled by the cytoskeleton, and in particular by the actin cortex. This cortex is a thin layer of actin _laments that sit underneath the plasma membrane, supporting the cell surface. The _laments are highly dynamic: they are constantly growing, shrinking and being remodeled, with well over 100 different proteins regulating their length and architecture. This molecular complexity, combined with the small size and high density of _laments and their rapid remodeling, makes it extremely difficult to disentangle different functions performed by the actin cortex in living cells. 
To better understand what fundamental principles govern cortex-based shape control of animal cells, we thus pursue a different approach: instead of studying living cells directly, we build minimal versions, so-called ‘synthetic cells’, from the bottom up. In such a bottom-up reconstitution approach, we isolate proteins (for instance actin) from their native environment, purify them, and bring them back together in vitro, following rational design principles. Consequently, we drastically reduce the complexity of the system, giving us a chance to actually understand what is going on. This allows us to test our assumptions about how cellular processes work in vivo, and discover new functions that are normally hidden in the complexity of the living cell. In this thesis, we use bottom-up reconstitution to ask how animal cells control their shape, with the ultimate aim to build a minimal actin-based cell division machinery…","synthetic cell; cell division; reconstitution; actin cytoskeleton; vesicle fusion; myosin; cell mechanics","en","doctoral thesis","","978-90-8593-545-2","","","","","","","","","BN/Gijsje Koenderink Lab","","",""
"uuid:94eeb60e-12cf-4be4-82ad-9b067987842f","http://resolver.tudelft.nl/uuid:94eeb60e-12cf-4be4-82ad-9b067987842f","Housing Justice as Expansion of People's Capabilities for Housing: Proposal for Principles of Housing Policy and Evaluation of Housing Inequality","Kimhur, Boram (TU Delft Urban Development Management)","Delft University of Technology (degree granting institution)","2022","Housing inequality is a growing concern in our society. In recent decades, this inequality has been exacerbated by the phenomenon of housing being financialized and commodified as a means for wealth accumulation. Management of financial institutions and housing markets has become the centre of attention in policy discussion. The questions of how to promote the moral values tied to housing, such as human rights, dignity and freedom, and how to better enable people to access suitable housing have been marginalized. As a way forward, the states’ re-intervention and re-distribution policies, and the human rights-based approach to housing policies are discussed, but this thesis advocates for a more ambitious paradigm shift. By extending Amartya Sen’s capability approach to housing, the thesis argues for resetting the primary goal of housing policies as expansion of people’s capabilities for housing—expanding opportunity, ability and security to lead their valued ways of residing—beyond the distribution of monetary and material resources for housing, such as housing benefits and dwelling units. This thesis presents the theoretical foundations of this argument and proposes basic principles to guide housing policies, which can serve as a normative basis of housing debates on necessary policy actions. An essential tool to guide housing policies towards this newly proposed goal is to evaluate policy outcomes and housing affairs of people—well-being, deprivation and inequality in housing—with capability considerations. 
The thesis suggests how this evaluation can be done and can help policies address the inequalities in what people can do to pursue their suitable housing options and how well they are actually residing.","Housing; Housing Policy; Inequality; Capability approach; Social justice; Housing justice; Capability approach operationalisation; Evaluation approach; Well-being in housing","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-639-8","","","","A+BE I Architecture and the Built Environment No 23 (2022)","","2023-06-30","","","Urban Development Management","","",""
"uuid:6db863de-fe94-4d0c-abdf-2b268ae5df2a","http://resolver.tudelft.nl/uuid:6db863de-fe94-4d0c-abdf-2b268ae5df2a","Expanding the scope of H2O2-driven biocatalysis","Xu, X. (TU Delft BT/Biocatalysis)","Hollmann, F. (promotor); Paul, C.E. (copromotor); Delft University of Technology (degree granting institution)","2022","H2O2 is a relatively 'green' oxidant because its by-products are only H2O. In recent years, an increasing number of enzymatic synthesis methods based on H2O2 have been estab-lished. H2O2-driven reactions are usually applied as an alternative to NAD(P)H-dependent reactions to avoid complicated cofactor regeneration systems.
The aim of this thesis was to develop H2O2-driven peroxyzyme-catalysed reactions. Four approaches were studied: (1) UPO-ADH combinations for the synthesis of enantiomerically pure (R)- and (S)-phenylethanol derivatives; (2) UPO-catalysed selective oxidation of silane to silanol; (3) VCPO-catalysed oxidative decarboxylation of glutamic acid to the corresponding nitrile at semi-preparative scale; (4) the investigation of the formate oxidase (FOx)-driven H2O2 generation system.
In Chapter 3, UPO-catalysed hydroxylation of ethylbenzene produced (R)-phenylethanol exclusively. We therefore developed a bienzymatic reaction to produce not only (R)- but also (S)-phenylethanols with the combination of a peroxygenase and complementary alcohol dehydrogenases. The results obtained are promising (10 samples, >91% ee). Reaction conditions for this one-pot two-step system would require further study and optimisation.
In Chapter 4, a peroxygenase-catalysed hydroxylation of organosilanes is reported. AaeUPO enabled efficient conversion of a broad range of silane starting materials with attractive productivities (up to 300 mM h-1) and catalyst usage (up to 84 s-1 and more than 120,000 catalytic turnovers). As this enzymatic Si-H oxyfunctionalisation route is a completely new application of UPOs, there are still some limitations that need further research and investigation.
In Chapter 5, the chemoenzymatic oxidative decarboxylation of glutamic acid to the corresponding nitrile using the vanadium chloroperoxidase (CiVCPO) has been investigated. 1,630,000 turnovers and a kcat of 75 s-1 were achieved using 100 mM glutamate. The semi-preparative enzymatic oxidative decarboxylation of glutamate was also demonstrated. Product inhibition was identified as a major limitation.
In Chapter 6, the formic acid oxidase (AoFOx)-driven H2O2 generation system was used to drive the AaeUPO-catalysed hydroxylation of ethylbenzene derivatives. The investigation of factors such as formate and enzyme spiking, pH, oxidase concentration, co-solvent, O2 supply and product inhibition did not solve the premature ending of the reaction. In our opinion, the nature of the electron donor for AoFOx could be the breakthrough.
The results of this thesis contribute to the application of H2O2-driven peroxyzymes. The achievements and challenges noted in the thesis will promote the future implementation and popularisation of enzymatic oxyfunctionalisation reactions.","biocatalysis; hydrogen peroxide; Selective oxyfunctionalization; peroxygenases; haloperoxidases","en","doctoral thesis","","978-90-832797-7-0","","","","","","2023-11-22","","","BT/Biocatalysis","","",""
"uuid:ee7b5513-3917-44d0-9307-7689be201153","http://resolver.tudelft.nl/uuid:ee7b5513-3917-44d0-9307-7689be201153","Sensor data fusion for automated driving: Toward robust perception in adverse weather conditions","Domhof, J.F.M. (TU Delft Intelligent Vehicles)","Gavrila, D. (promotor); Kooij, J.F.P. (copromotor); Delft University of Technology (degree granting institution)","2022","The aim of the thesis is to develop methods and algorithms for the development of a robust perception system that is capable of dealing with adverse weather conditions. Robust environmental perception is important in order to guarantee safety for the automated vehicle and the road users in the neighborhood. To create a robust perception system, a sensor setup should be selected with multiple sensing modalities. Commonly used sensing modalities in the field of intelligent vehicles are lidar, camera and radar sensors. This thesis addresses three subjects that are important for robust perception, namely sensor selection, extrinsic calibration and object tracking....","sensor data fusion; adverse weather; object tracking","en","doctoral thesis","","978-94-6419-651-1","","","","","","","","","Intelligent Vehicles","","",""
"uuid:7356bf23-6e9e-4237-90f2-63be8b6a8a4c","http://resolver.tudelft.nl/uuid:7356bf23-6e9e-4237-90f2-63be8b6a8a4c","At the crossroads of Architecture and Landscape: Preservation Strategies of Historic Military Systems: a Comparison between Italy and the Netherlands","Marulo, F. (TU Delft Heritage & Values)","Wagenaar, C. (promotor); van Thoor, M.T.A. (promotor); Russo, V. (promotor); Delft University of Technology (degree granting institution)","2022","In the context of rapid urban transformations, this thesis explores the possible preservation strategies for historic military systems that used to be embedded in extra-urban settings, but that now are absorbed in the development dynamics of complex metropolitan areas. The research stems from the main peculiarity of these heritage systems: namely, the coexistence of cultural and natural values, and their being at the crossroads of the architecture and landscape domains. Although the need to address nature-culture interlinkages has become a topical issue in the field of heritage preservation, military landscapes have been almost completely left out of this debate. Moreover, the lack of inter-scale strategies in current preservation practices for historic military systems further complicates the way nature-culture interlinkages are addressed. The development of a conceptual framework on this topic has required considering the diversity of existing approaches to landscape, architectural heritage and their interconnection. Italy and the Netherlands were selected as relevant contexts in Western Europe for comparison on this topic. Linking archival research, interviews and field observations, Italian and Dutch contemporary experiences with the revitalization and reuse of historic military systems (NL: New Dutch Waterline; IT: Entrenched Field of Mestre) were compared. Both national and international initiatives promoted in the frame of the World Heritage Convention were analysed. 
To understand the historical roots of the recent approaches, the evolution of landscape protection in the two contexts has been investigated, highlighting the different influences exerted by the national discourse on architectural heritage and spatial planning. This historical background, together with the cross-reading of the case studies, has led to the definition of a transnational conceptual framework on the possible preservation strategies for historic military systems with an inter-scale approach. Taking into account the peculiarities of each context, it provides a tool for facilitating the decision-making process, bringing historic military systems into the international discussion on nature-culture interlinkages. Ultimately, it can serve as a reference for other historic landscape systems sharing similar characteristics and preservation issues.","historic military systems; nature-culture interlinkages; Italy; Netherlands; heritage preservation","en","doctoral thesis","Delft University of Technology","978-94-6366-647-3","","","","A+BE | Architecture and the Built Environment. No. 22 (2022)","","","","","Heritage & Values","","",""
"uuid:769d3d81-8a84-4f59-80a6-2d237aa878a4","http://resolver.tudelft.nl/uuid:769d3d81-8a84-4f59-80a6-2d237aa878a4","Recommender Systems for DevOps","Maddila, C.S. (TU Delft Software Engineering)","van Deursen, A. (promotor); Nagappan, Nachiappan (promotor); Gousios, G. (promotor); Delft University of Technology (degree granting institution)","2022","The software development life cycle (SDLC) for a developer has increased in complexity and scale. With the advent of DevOps processes, the gap between development and operations teams reduced significantly. Developers are now expected to perform different roles from coding to operational support in the new model of software development. This shift demands the evolution and improvement of software development practices and deliver products at a faster pace than organizations using traditional software development and infrastructure management processes. As a consequence, the demand for more intelligent and context sensitive DevOps tools and services that help developers increase their efficiency is increasing. A lot of research went into developing recommenders for DevOps, by leveraging the advancements made by the recommender system community. However, a lot of existing tools still work in ‘silos’ and does not take into account a holistic view of DevOps processes and the data generated at phase of the DevOps lifecycle while making recommendations.
By contrast, in this thesis, we propose a unified framework for developing recommenders for DevOps: collecting data, building models, deploying them, and evaluating the effectiveness of such recommenders in large-scale cloud development environments quickly and efficiently. We study the effect of such recommenders on DevOps processes by applying empirical research and mixed-method approaches (qualitative and quantitative analyses) to each of the deployed recommenders, to better understand the productivity gains and the impact created by them.
Our results show that developers benefit greatly from smart recommenders such as Nudge, ConE, Orca, and MyNalanda. We also show, through rigorous experiments, technical action research methods, and empirical analyses, that these recommenders provide as much as 65% gains in terms of change progression and 73% accuracy in automatically root-causing service incidents. We also conduct large-scale surveys and interviews to support our empirical analysis and quantitative results. Our unified data framework and the platform we developed for building these recommenders are generic enough to encourage reusability of vital functions of such recommender systems, such as data collection, model training, inference, deployment, and evaluation.","DevOps; Recommender Systems; Artificial Intelligence; Machine learning (ML); Software Engineering; Programming Languages","en","doctoral thesis","","","","","","","","","","Software Engineering","","",""
"uuid:f37f4e0c-4869-434a-ba93-24db7662eabb","http://resolver.tudelft.nl/uuid:f37f4e0c-4869-434a-ba93-24db7662eabb","Mapping the reaction landscape for the C1 chemistry","Khramenkova, E. (TU Delft ChemE/Inorganic Systems Engineering)","Pidko, E.A. (promotor); Li, Guanna (copromotor); Delft University of Technology (degree granting institution)","2022","In this thesis, we have presented and investigated the possible strategies for modelling complex heterogeneous catalytic systems in operando regimes. By introducing modern computational approaches to sample the potential energy surface of the catalytic active site, we have attempted to account for the reactive conditions, solvent presence, additives inclusion, and structural dynamics of the active site. There is growing spectroscopic and theoretical evidence of the critical role of the active site dynamics for the catalytic performance, advocating for the active site representation as an ensemble of possible isomers. Challenged by the complexity of the reactive environment and common heterogeneous catalysts, we strongly believe that addressing these factors and incorporating them explicitly into the model description will contribute to a more realistic representation of the catalytic system.","","en","doctoral thesis","","","","","","","","","","","ChemE/Inorganic Systems Engineering","","",""
"uuid:5b758b09-292f-4a3d-8fc3-221ae9cc02f3","http://resolver.tudelft.nl/uuid:5b758b09-292f-4a3d-8fc3-221ae9cc02f3","Development and application of ultrafast scanning electron microscopy","Garming, M.W.H. (TU Delft ImPhys/Microscopy Instrumentation & Techniques)","Hoogenboom, J.P. (promotor); Kruit, P. (promotor); Delft University of Technology (degree granting institution)","2022","Scanning electron microscopes (SEMs) can capture detail on the single nanometer length scale through the interaction of a tightly focused electron beam with a sample, but this impressive spatial resolution is not matched with a capability to resolve dynamic processes on the ultrafast time scale. A variety of processes occur at nanosecond and faster time scale, and at spatial scales out of reach of conventional light optical microscopes, for example in nanoscale solid state devices and nanomechanical resonators. An imaging tool combining high spatial and temporal resolution is therefore required. In recent years, some research groups have worked on a technique to add ultrafast imaging to the capabilities of a SEM, building on concepts developed for transmission electron microscopy. In so-called ultrafast scanning electron microscopy (USEM), the combination of a pulsed laser and a pulsed electron beam enables the formation of movies capturing dynamics much faster than possible with a conventional SEM. Dynamics are initiated with femtosecond laser excitation of the sample and probed with electron beam pulses arriving with tightly controlled delay. The temporal resolution of this pump-probe scheme is determined by the laser and electron pulse duration. Secondary electrons, emitted from the top few nanometer of the sample, are collected and used to construct ultrafast movies. 
The aim of this thesis is to further develop the technique by making multiple improvements in our implementation, gaining additional insight into the contrast mechanism of USEM, and exploring new applications.","","en","doctoral thesis","","978-94-6384-395-9","","","","","","","","","ImPhys/Microscopy Instrumentation & Techniques","","",""
"uuid:9bf59202-4b7a-4313-b972-c12b7d272c06","http://resolver.tudelft.nl/uuid:9bf59202-4b7a-4313-b972-c12b7d272c06","Singular value decomposition for time series analysis with applications to smart energy systems","Khoshrou, A. (TU Delft Intelligent Electrical Power Grids)","la Poutré, J.A. (promotor); Pauwels, Eric J. (copromotor); Delft University of Technology (degree granting institution)","2022","In a world replete with observations (physical as well as virtual), many data sets are represented by time series. In its simplest form, a time series is a set of data collected sequentially, usually at fixed intervals of time. In a number of applications, the mean and the variance of the time series is time-invariant and there is no seasonality in the data (such time series is called stationary). However, in many more applications, e.g., time series that are related to smart energy systems, the data have non-stationary characteristics. This thesis focuses primarily on matrices as an alternative representation of the latter type of time series, in order to take advantage of matrix decomposition methods. The rationale is straightforward: numerically stable matrix decomposition techniques enable us to extract underlying patterns in the data and use them to construct approximations of the corresponding time series. In particular, we will focus on singular value decomposition (SVD) as a powerful and numerically stable matrix factorization technique. Therefore, as the first step in this thesis, the SVD and its geometrical interpretation are extensively studied, in order to acquire a firm understanding of how it performs. That in turn enables us to look at different problems in time series analysis from a fresh perspective. For most of the applications of SVD in various fields, it is important to understand the properties of the SVD of a matrix whose entries show some degree of random fluctuations. 
Therefore, to determine how the noise level affects the singular value spectrum, it is essential to study the singular value decomposition of random matrices. As we will explain in the introductory chapter, one of the early applications of the SVD in time series analysis is in periodicity detection of the time series data. Therefore, we explore how the geometry of a matrix (the position of the data points with respect to the origin) and the aspect ratio of the matrix (the ratio between the number of columns and the number of rows) can affect its SVD results. Matrix factorisation techniques such as principal component analysis (PCA) and singular value decomposition (SVD) are both conceptually simple and effective. However, it is well-known that they are sensitive to the presence of noise and outliers in input data. One way to mitigate this sensitivity is to introduce regularisation. To this end, we hark back to the interpretation of SVD and PCA in terms of low-rank approximations, which involve the minimisation of specific functionals. We then derive algorithms for the minimisation of the regularised version of such functionals...","","en","doctoral thesis","","978-94-6384-396-6","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:0dbff3c1-752b-4211-ab1d-c8ec71ffed9d","http://resolver.tudelft.nl/uuid:0dbff3c1-752b-4211-ab1d-c8ec71ffed9d","Integrated silicon carbide sun position sensor system-on-chip for space applications","Romijn, J. (TU Delft Microelectronics)","Sarro, Pasqualina M (promotor); Zhang, Kouchi (promotor); Vollebregt, S. (copromotor); Delft University of Technology (degree granting institution)","2022","","harsh environments; integrated optics; opto-electronics; SiC CMOS; silicon carbide; sun position sensors; System Integration; wafer-level packaging; wide bandgap semiconductors","en","doctoral thesis","","","","","","","","2024-11-19","","Microelectronics","","","",""
"uuid:01a3ced0-59bb-484a-beeb-0bfbe4d904b8","http://resolver.tudelft.nl/uuid:01a3ced0-59bb-484a-beeb-0bfbe4d904b8","Incorporating institutions into optimization-based energy system models","Wang, N. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Verzijlbergh, R.A. (copromotor); Heijnen, P.W. (copromotor); Delft University of Technology (degree granting institution)","2022","The pledge for a carbon-free energy system in 2050 requires significant investments into renewable energy sources (RES). The relevant questions are: what technologies to select, where to build them, how much the capacities are, and at what cost. In order to answer these techno-economic questions, optimization models are commonly used to sketch a least-cost future energy system. However, the energy system is far more complex than a mathematical model. Although optimization models can provide the least-cost system design, they do not guarantee that we can realize this design because some key aspects are not captured by such models: the impact of public acceptance issues, conflicting interests among stakeholders, and the imperfection of markets. These non-technical aspects are generalized as institutions in this thesis. In a socio- technical system like the energy system, considering both the social aspects, the institutions, and the technical system, is pivotal. Therefore, the goal of this thesis is to improve optimization models by including institutions in energy system planning.
Since institutions are not commonly mentioned in energy system planning models, this thesis starts by standardizing institutions through a literature review. The goal is to provide a common ground for discussing institutions and to find research trends and gaps in the state-of-the-art. We identified the following research gaps that need deliberate attention: spatial policies, collective decision-making, and bilateral trading with externalities. In this thesis, we developed three models to deal with these institutions. Since these institutions are indispensable in a socio-technical system, including them in optimization models results in socio-technically optimal future energy system designs beyond only the techno-economic optimums.","socio-technical systems; optimization; energy system planning; institutions; spatial policies; energy system optimization models; multi-objective optimization; multi-criteria decision-making; bilateral trading; externalities","en","doctoral thesis","","978-94-6366-630-5","","","","","","","","","Energie and Industrie","","",""
"uuid:4f4196f3-fadb-4170-b6d2-c0923dbd325a","http://resolver.tudelft.nl/uuid:4f4196f3-fadb-4170-b6d2-c0923dbd325a","Dynamical behavior of trampoline membranes","de Jong, M.H.J. (TU Delft QN/Groeblacher Lab; TU Delft Dynamics of Micro and Nano Systems)","Groeblacher, S. (promotor); Norte, R.A. (copromotor); Delft University of Technology (degree granting institution)","2022","This thesis comprises several experiments involving silicon nitride trampoline membranes. These membranes are excellent mechanical resonators and can be fabricated with desirable optical properties. Their dynamical behavior, particularly the interaction of their mechanical motion with light, is of interest for optomechanics and sensing applications. In the first experiment, I study the dissipation of trampoline membranes due to coupling to the substrate modes. It is known that the clamping of the substrate can affect the dissipation of its resonators, and this experiment provides a systematic investigation into this effect. The results show a clear reduction of mechanical Q-factor (increase of dissipation) when a resonator is resonant with a substrate mode. This highlights the design of the substrate modes for high-Q mechanical resonators. In the second experiment, I study the appearance of mechanical frequency combs in trampoline membranes. The interaction of a standing wave light field with the silicon nitride membrane through the dielectrophoretic force is similar to an optical trap. If the mechanical motion is sufficiently large, the periodicity of that force creates perfect integer multiple copies of the original motion frequency, which form a frequency comb. This makes it possible to generate mechanical frequency combs using a simple setup with little technical requirements. In the third experiment, I study the behavior of ringdown measurements involving near-degenerate modes. 
When both modes are within the detection bandwidth of the setup, their signals interfere and the ringdown displays ‘ringing’. It is possible to extract the linear and non-linear parameters of both near-degenerate modes, and extract their relative coherence in the Brownian motion regime. This provides a characterization method for systems with near-degenerate mechanical modes. In the fourth experiment, I study the interaction of two trampoline membranes with a single optical cavity mode. The optical field couples the mechanical motion of the two membranes, but with a time delay based on the cavity lifetime. The associated phase-shift of the mechanical responses causes destructive interference, which leads to mechanical noise cancellation. This could be used to improve sensors suffering from mechanical thermal noise, and is important when studying optomechanical multi-resonator interactions.","Optomechanics; Dynamics; Microresonators; Mode coupling; Frequency combs; Noise cancellation","en","doctoral thesis","","9789085935438","","","","","","","","","QN/Groeblacher Lab","","",""
"uuid:6aed720a-b970-43c2-9c84-f028a8127230","http://resolver.tudelft.nl/uuid:6aed720a-b970-43c2-9c84-f028a8127230","Supporting Electronic Mental Health with Artificial Intelligence: Thought Record Analysis and Guidance","Burger, Franziska (TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); Brinkman, W.P. (promotor); Delft University of Technology (degree granting institution)","2022","This thesis investigates how artificial intelligence can support e-mental health for depression, i.e. the delivery of treatment and prevention interventions for depression using technology. E-mental health for depression is a promising means for bridging the treatment gap since it addresses many of the barriers that prevent people in need of help from seeking or obtaining it. Additionally, many systems have been found to be effective in controlled trials. However, as human support for e-health interventions decreases so do their effectiveness and users’ adherence. While one possible explanation is that human support is a necessary ingredient of a successful intervention, another is that the technology is not satisfying the needs of users to the best of its abilities. This finding inspired us to take a closer look at the technological implementation of the functionality of these systems. To this end, we developed a set of scales that assess the technological sophistication of the functional components of systems, the e-mental health degree of technological sophistication (eHDTS) scales. In a systematic literature review of the field, we then divided all systems developed between 2000 and 2017 for the prevention or treatment of depression reported in the scientific literature into their functional components and rated those components with the eHDTS scales. We found that most systems that had been developed until 2017 were low-tech implementations, consisting mostly of psychoeducation and having a one-way information stream from system to user. 
This clearly contrasts with face-to-face therapy, in which the therapist closely attends to the patient and provides his or her knowledge and insight strategically to signal understanding and empathy, foster self-reflection, teach, or obtain more information. Based on this consideration, we set out to develop a conversational agent that, when completing a thought record together with a user in dialog, could signal to the user that it had processed the content of what it had been told, with the hypothesis that this would motivate the user to complete more thought records and feel more engaged. Thought recording is a core technique of cognitive therapy in which patients are asked to systematically monitor their thinking in situations that caused a maladaptive response. Cognitive theory posits that the negative cognitive appraisals that are responsible for the low mood experienced in patients with depression stem from maladaptive schemas, i.e., beliefs that we hold as truths about the world, ourselves, and the future. To get the conversational agent to “understand” the thoughts provided by the user from this cognitive theory perspective, we collected a corpus of thought records from Amazon Mechanical Turk workers, manually coded the thoughts with respect to the underlying schema, and trained various machine learning models to do the same labeling. A set of deep neural networks outperformed the other algorithms and was then deployed in the conversational agent. We used a between-subjects design to expose 308 participants recruited from Prolific to the conversational agent. 
The three conditions differed with respect to the feedback-giving capabilities of the conversational agent in response to a thought record: low feedback richness entailed an acknowledgment of the completion of the thought record (thanking the user), medium feedback richness entailed the acknowledgment plus feedback on the process (how many steps the user did in relation to his or her previous thought records), and rich feedback richness entailed medium feedback richness combined with feedback on the content (an interpretation of the thought record with respect to the underlying schema). While all users were able to complete the thought records with the conversational agent, we did not find supportive evidence that the agent’s feedback strategy could increase users’ motivation to complete more thought records or their self-reported engagement in self-reflection. Future research may investigate why we observed these null results by studying whether the feedback is processed correctly, whether a population with depression that is motivated by a wish to get healthy might behave or experience the system differently from our sample that was recruited online and did not meet diagnostic criteria for depression, or whether more advanced social and interaction capabilities need to accompany the complex feedback for it to be believable.","computerized therapy; conversational agents; natural language processing; cognitive therapy","en","doctoral thesis","","978-94-6469-147-4","","","","","","2022-11-24","","","Interactive Intelligence","","",""
"uuid:d9943bd3-f988-4af2-bddf-f798a44a03e2","http://resolver.tudelft.nl/uuid:d9943bd3-f988-4af2-bddf-f798a44a03e2","Maximisation of energy recovery from waste activated sludge via mild-temperature and oxidative pre-treatment","Gonzales, A. (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); de Kreuk, M.K. (promotor); Delft University of Technology (degree granting institution)","2022","The overall objective of the present study was to investigate the effects of thermal pre-treatment of waste activated sludge (WAS) at 70 °C with addition of H2O2 to enhance sludge hydrolysis and subsequent methane production during WAS anaerobic digestion. The research was divided into four parts: Firstly, a bibliographical part, in which literature research revealed that WAS can be considered a mixture of proteins, humic substances, cells (and others).
Subsequently, the effects of several pre-treatment techniques on these constituents and on the biochemical and physicochemical properties of WAS, such as methane production and dewatering, were analyzed. This part reviews the response of WAS subjected to pre-treatments of different nature (e.g., thermal, acid-base, oxidative) at different energy intensities. It also summarizes the effects of pre-treatment techniques on sterilization, dewatering and methane production.
Ultimately, it was made clear that the mechanisms of most of the pre-treatments still remain unknown, hindering a fair comparison of their effects.
In the second part, the effects of low-temperature pre-treatment with the addition of H2O2 on WAS were analyzed in both lab- and pilot-scale scenarios to detect and quantify its effects. During lab-scale experiments, it was found that the application of low-temperature thermal pre-treatment combined with H2O2 (70 °C, 30 minutes, 15 mg H2O2/g TS) increased the methane production rate, which consisted of two distinguishable parts. The high rate, kCH4,rapid, increased from 0.44 ± 0.01 to 0.47 ± 0.01 d-1, and the low rate, kCH4,slow, from 0.09 ± 0.00 to 0.11 ± 0.01 d-1. There were inconclusive results regarding an increase in specific methane production. The lab-scale observations were reproduced during a pilot-scale experiment, although due to methodological restrictions, pre-treatment was applied together with two-staged compartmentalized digestion. It was observed that due to the adoption of pre-treatment and compartmentalized digestion, organic loading rates could be increased from 1.4 to 4.2 kg volatile solids (VS)/(m3·d), which resulted in a solids retention time (SRT) decrease from 23 to 15 days without apparent process impairment. It was considered that most of the observed effects were caused by the pre-treatment, while the influence of compartmentalized digestion remained marginal in this study.
In the third part, further study at lab-scale was conducted to determine the individual contributions of the separate components of the pre-treatment, i.e., thermal and oxidative. Thermal pre-treatment solubilized most of the EPS, deactivated catalase, and accelerated the reaction rate of H2O2, while H2O2 decreased the apparent viscosity of WAS by 12-30%, resulting in a synergistic effect on the WAS digestibility. As suggested by other rheological parameters, the addition of H2O2 improved the flowability of WAS at 70 °C. The cause of the decrease in viscosity was not determined. However, the presence of hydroxyl radicals via the Fenton’s reagent, the decrease in particle size of WAS, and the combination of H2O2 with conditioning agents were discarded as causes. On the other hand, results suggested that the reason behind the decrease in viscosity was the molecular modification of the carbohydrates in WAS as a result of their reaction with H2O2.
The above-described experiments were restricted to the grab samples taken at 3 wastewater treatment plants (WWTPs). However, WAS is a matrix of variable composition, depending on location and season. Therefore, in the fourth and last part, the applicability of the pre-treatment methods to WAS with a different composition was tested, using lab-grown sludge. Based on the results, it was inferred that the concentration of metals embedded in lab-grown sludge was relevant for the effectiveness of pre-treatment in terms of methane production, both rate and extent.
The evidence obtained in this study suggests that the lower viscosity of the pre-treated WAS was reflected in the viscosity of the digestate, which allowed better mass transfer during non-ideal mixing and therefore a higher methane production rate. Since full-scale digesters are very often poorly mixed, the applied pre-treatment conditions might be a possible strategy to improve mixing and increase the biochemical methane potential (BMP) without increasing the mixing energy.
3 in the Netherlands). On the other hand, DWDSs also contain thermal energy as a surplus of cold or heat. Depending on the drinking water temperature within the distribution network, thermal energy can be used for either heating or cooling purposes. Thermal energy recovery potential from drinking water has been explored recently. Cold thermal energy recovery from drinking water (TED) can provide cooling for buildings and spaces with high cooling requirements as an alternative to traditional cooling, and thus TED helps reduce greenhouse gas (GHG) emissions.
The effects of increased water temperature induced by TED on the drinking water quality and biofilm development within DWDSs are not yet known. Hence this thesis was initiated with the objective to investigate the effects of TED on microbial water quality and biofilm development within DWDSs. The first part of this thesis investigated the impacts of TED at 25 °C on microbiological drinking water quality, using pilot distribution systems. The first study revealed that a water temperature increase to 25 °C in a pilot distribution system as a result of cold recovery does not affect the bacterial water quality in the drinking water phase. However, it does affect the concentration and community composition of biofilms (Chapter 2). Hence, in the second part of this thesis, the effect of TED on biofilm was investigated extensively. In pilot-scale distribution systems, both water and biofilm phases were studied with water temperatures increased to 25 °C and 30 °C after TED. It was concluded that the timeline for biofilm microbial development was influenced by temperature: the higher the temperature, the faster the microbial development of a biofilm took place. Simultaneously, higher biomass activity (ATP and cell concentration) was also observed in the water phase. In the biofilm phase, the initial faster microbial development did not lead to differences in microbial diversity and composition at the end of the experimental period (Chapter 3).
Similarly, biofilm development after TED at 25 °C, followed over a long period of 99 weeks, showed that an instantaneous increase in water temperature influenced the early stages of biofilm development. Higher temperature initiated faster growth of primary colonizers (Betaproteobacteriales, Sphingomonadaceae) (Chapter 4). Both studies unequivocally showed that, as a result of the constant elevated water temperature after TED, biofilms reached a steady phase faster when compared to the fluctuating drinking water temperatures in the reference and control systems (Chapters 3 and 4).
After studying the microbial water quality in unchlorinated drinking water distribution systems for both water and biofilm phases, an initial investigation of TED application within chlorinated networks was also performed. Compared with unchlorinated DWDSs, chlorine dramatically reduced the biofilm biomass growth and raised the relative abundances of the chlorine-resistant genera (i.e. Pseudomonas and Sphingomonas) in bacterial communities. As a result of TED, no significant effects were observed on chlorine decay, microbial water quality and biofilm composition during the experimental period (Chapter 5).
After extensively studying the changes in the microbial drinking water quality as a result of TED, the last part of this thesis was carried out to determine what raising the maximum temperature limit (Tmax) after recovery of cold would entail in terms of energy savings, GHG emission reduction and water temperature dynamics during water transport. A full-scale TED system was used as a benchmark, where Tmax is currently set at 15 °C. By raising Tmax to 20, 25 and 30 °C, the retrievable cooling energy and GHG emission reduction could be increased by 250, 425 and 600%, respectively. The drinking water temperature model predicted that within a distance of 4 km after TED, water temperature resembles that of the surrounding subsurface soil. Hence, a higher Tmax will substantially increase the TED potential of DWDSs while keeping the same comfort level at the customer’s tap (Chapter 6).
All of these observations indicate that increasing Tmax up to 25-30 °C in TED can be safe in terms of microbiological drinking water quality. However, this is specifically the case for unchlorinated DWDSs with microbiologically stable water (AOC <10 µg C/L). More insight is required in terms of microbiological assessment of TED to further explore the potential within chlorinated systems. Further research on the effects of cold recovery on DWDSs already in operation is highly recommended, in order to gain better insight into the response of already developed biofilm to the increase in temperature after TED. Moreover, specific opportunistic pathogens that are sensitive to temperature increase should be investigated thoroughly in order to provide hygienically safe water after recovery of cold from both chlorinated and unchlorinated drinking water distribution systems.
to gain empirical knowledge of ship behavior in real-life sailing environments and to empirically investigate the influencing mechanisms of intrinsic and external factors.","","en","doctoral thesis","","978-90-5584-319-0","","","","","","","","","Rivers, Ports, Waterways and Dredging Engineering","","",""
"uuid:31cb9dc9-0bc0-4b87-8ed2-53e9190f2f4f","http://resolver.tudelft.nl/uuid:31cb9dc9-0bc0-4b87-8ed2-53e9190f2f4f","Beamforming Strategies for Medical Ultrasound Imaging","Mozaffarzadeh, M. (TU Delft ImPhys/Medical Imaging)","de Jong, N. (promotor); Verweij, M.D. (promotor); Renaud, G.G.J. (copromotor); Delft University of Technology (degree granting institution)","2022","","","en","doctoral thesis","","978-94-6384-371-3","","","","","","2023-05-01","","","ImPhys/Medical Imaging","","",""
"uuid:bb3f0f87-ea66-4e75-b414-21425344248e","http://resolver.tudelft.nl/uuid:bb3f0f87-ea66-4e75-b414-21425344248e","Fair Mechanisms for Smart Grid Congestion Management","Hekkelman, B. (TU Delft Intelligent Electrical Power Grids)","la Poutré, J.A. (promotor); Delft University of Technology (degree granting institution)","2022","We consider energy systems in the built environment. With the transition to a more sustainable, distributed, and 'smart' energy system, such local grids are undergoing significant changes. Among other developments, the new role of end-users as 'prosumers' - users that can either produce or consume power depending on the situation - is turning energy systems in the built environment into autonomous microgrids with complex internal interactions.
One of the primary challenges for these local grids is maintaining grid stability, which requires constant balancing of supply and demand. Because local grids were not designed for distributed energy generation and large loads such as electric vehicle charging, their limited capacity is now leading to congestion. Since the responsibility for resolving congestion falls increasingly on the individual prosumers and their flexibility, the concept of fairness must take a central role in congestion management.
In this dissertation we present our research on supply-demand matching mechanisms for fair congestion management. The local networks populated by users can be represented by radial multi-agent commodity flow systems. For the resource allocation problems in this setting we draw on the fields of mechanism design and fair division to design provably fair congestion management mechanisms. We evaluate the merit of different notions of fairness and present algorithmic mechanisms that align agent incentives with fair allocations.
We find that notions of fairness regarding congested commodity flow networks can either focus on local or global fairness. Agents can have differing opinions on the two, depending on how wide they draw the circle of peers that they compare themselves to. We find that the mix of producers and consumers requires slight adaptation of notions of fairness, with agents envying one group while welcoming the other. Furthermore, we find that it is possible to combine notions of fairness with welfare optimization by letting individual agents decide which of the two is more important, and protecting their fair shares.
We are able to use the radial structure prevalent in energy systems in the built environment to design algorithmic mechanisms of consistently low computational complexity. The congestion solutions of these mechanisms satisfy different local and global fairness criteria, for which we provide rigorous proofs. We prove that our mechanisms are individually rational and, for variations of egalitarian fairness, also incentive compatible. Finally, we introduce a congestion aftermarket where agents compensate their peers for flexibility.","Fairness; Smart Grid; Multi-Agent Systems; Mechanism Design; Algorithm Design; Congestion Management; Resource Allocation; Fair Division; Power Systems","en","doctoral thesis","Delft University of Technology","978-94-6366-632-9","","","","","","2022-12-12","","","Intelligent Electrical Power Grids","","",""
"uuid:983cffe4-f7ac-47bc-9fdc-8b671008c23c","http://resolver.tudelft.nl/uuid:983cffe4-f7ac-47bc-9fdc-8b671008c23c","Visual Detection and Pose Estimation of Vulnerable Road Users for Automated Driving","Braun, M. (TU Delft Intelligent Vehicles)","Gavrila, D. (promotor); Kooij, J.F.P. (copromotor); Delft University of Technology (degree granting institution)","2022","This thesis addresses the topic of visual person detection and pose estimation. While these tasks are relevant for a broad range of applications, this thesis focuses on the domain of intelligent vehicles in urban traffic scenes. This domain is particularly interesting due to specific challenges related to visual perception from a moving vehicle. Accident statistics show that a great proportion of traffic fatalities affect vulnerable road users such as pedestrians and riders. This motivates the interest in reproducing or even surpassing the capabilities of an attentive human driver for driver assistance systems and fully automated driving to improve safety. Deep learning contributed to narrowing the performance gap between computer vision methods and human visual perception. Especially the capability of convolutional neural networks to learn powerful features is helpful for person detection and pose estimation. Throughout this thesis new deep learning methods for these tasks will be presented. The thesis not only focuses on methodical extensions but also on the creation of new datasets for training, evaluation, and benchmarking in the intelligent vehicles domain.
First, a novel approach for joint object detection and orientation estimation with a single deep convolutional neural network is presented. The orientation estimation is implemented by extending an existing convolutional network architecture with several carefully designed layers and an appropriate loss function. The network depends on external proposals for object candidate regions, whose accuracy is crucial for the overall performance. Therefore, two proposal methods are introduced that make use of 3D sensor data - specifically stereo and lidar data. The KITTI dataset, which is commonly used for object detection benchmarking in the automotive domain, serves for training and evaluation. The experiments on the KITTI dataset show that by combining proposals of both sensor modalities, high recall can be achieved while keeping the number of proposals low. Furthermore, the method for joint detection and orientation estimation is competitive with other state of the art approaches. It outperforms the state of the art for a test scenario of the bicycle class.
Big data has had a great share in the success of deep learning in computer vision. Still, the number of pedestrians and riders in the KITTI dataset is rather limited and previous works suggest that there is significant further potential to increase object detection performance by utilizing bigger datasets. Regarding benchmarking, small datasets are prone to dataset bias and overfitting.
Therefore, the second part of this thesis introduces the EuroCity Persons dataset, which provides a large number of highly diverse, accurate, and detailed annotations of pedestrians, cyclists, and other riders in urban traffic scenes. The images for this dataset were collected onboard a moving vehicle in 31 cities of 12 European countries. With over 238200 person instances manually labeled in over 47300 images, EuroCity Persons is nearly one order of magnitude larger than datasets used previously for person detection in traffic scenes. The dataset furthermore contains a large number of person orientation annotations (over 211200). Four state of the art deep learning approaches are thoroughly optimized to serve as baselines for the new object detection benchmark. In experiments with previous datasets, the generalization capabilities of these detectors when trained with the new dataset are analyzed. Furthermore, this thesis studies the effect of the training set size, the dataset diversity (day- vs. night-time, geographical region), the dataset detail (i.e., availability of object orientation information), and the annotation quality on the detector performance.
The qualitative and quantitative analysis of error sources for the best-performing detector reveals methodical weaknesses in dense traffic scenes. For these, the commonly used (greedy) implementation of non-maximum suppression, which is needed in the post-processing of the analyzed deep learning methods, poses a tradeoff between recall and precision.
As the robustness of detection and pose estimation is also important in dense groups of persons, the third part of the thesis focuses on improving both tasks for such scenarios. Learning the task of non-maximum suppression with a neural network architecture incorporating the head boxes of pedestrians as further attributes to discriminate persons in groups does not improve performance. Yet, the experiments reveal issues with ambiguities in detection and attribute estimation (e.g. head box estimation) for pedestrians that highly overlap each other. To solve this ambiguity for pairwise constellations of persons a new pose estimation method is proposed that relies on pairwise detections as input and jointly estimates the two poses of such pairs in a single forward pass within a deep convolutional neural network. As the availability of automotive datasets providing poses and a fair amount of crowded scenes is limited, the EuroCity Persons dataset is extended by additional images and pose annotations, which are made publicly available as the EuroCity Persons Dense Pose dataset. This dataset is the largest pose dataset recorded from a moving vehicle. The experiments on this dataset with the new method show improved performance for poses of pedestrian pairs in comparison with a state of the art method for human pose estimation in crowds.
The final chapter of the thesis draws conclusions from the content of the previous chapters of the thesis and discusses the required performance for automated driving. Furthermore, it reasons about efficiency aspects regarding the collection, annotation, and usage of data for deep learning and presents potential future work regarding methodical improvements and end-to-end training of the functional chain for automated driving including the integration of multiple sensors.","Person detection; Human pose estimation; Benchmarking; Intelligent vehicles; Automated driving","en","doctoral thesis","","978-94-6384-397-3","","","","","","","","","Intelligent Vehicles","","",""
"uuid:949a6032-ddbf-4e2d-9308-8ee30bf9fe84","http://resolver.tudelft.nl/uuid:949a6032-ddbf-4e2d-9308-8ee30bf9fe84","From waste to self-healing concrete: the missing PHA-link","Rossi, E. (TU Delft Materials and Environment)","Jonkers, H.M. (promotor); Kleerebezem, R. (promotor); Copuroglu, Oguzhan (copromotor); Delft University of Technology (degree granting institution)","2022","Self-healing concrete has attracted increasing attention of researchers and industry over the last decades. Given the brittle nature and the relatively low tensile strength of concrete, the occurrence of cracks is an almost unavoidable phenomenon affecting (reinforced) concrete structures. Cracks allow harmful agents present in the environment
to penetrate more easily in structures, accelerating the material degradation and compromising their service life. Repair and maintenance interventions are often needed in practice to maintain the serviceability of structures and to avoid any premature collapse. However, the costs of these interventions are socially and economically impactful. The
aim of both academia and industry is (and it has been) therefore to limit the need of these interventions through different technologies such as, among others, self-healing
concrete. Several technologies (or healing agents) have been proposed and investigated to improve the self-healing capacity of concrete, such as mineral and crystalline admixtures, superabsorbent polymers, micro- or macro-encapsulated polymers, and bio-concrete. Any technology has its own working principle, and the beneficial effects that the healing agents have on concrete properties are continuously under investigation. Despite the relatively high amount of available healing agents, this research focuses on the potential of innovatively using waste-derived polyhydroxyalkanoates (PHAs) as bacterial substrate for self-healing bio-concrete applications. Differently from other healing agents, PHAs are biodegradable and they can be extracted from biomass. The application of PHAs as substrate for bacterial healing agents in concrete does not ideally need high material requirements (e.g., purity) as other applications may do, hence opening the possibility to obtain efficient and relatively low costs healing agents, of which production would significantly contribute to the principles of circular economy. The aim of this research is, therefore, to investigate the applicability of PHAs as bacterial
substrate for self-healing agents in concrete. To do so, a PHAs extraction procedure has been designed, as well as the healing agent particle formulation process, as reported
in Chapter 3. This chapter aimed to formalize the production of PHAs-based healing agents (AKD), and to demonstrate its compatibility with bacterial metabolic activity. In Chapter 4, an extensive experimental campaign on the compatibility between AKD and Ordinary Portland Cement (OPC) mortar has been conducted. The feasibility of this application was demonstrated by the marginal effect that AKD healing agents have on some fundamental functional properties of cementitious materials (e.g., strength) and
by the improved self-healing capacity of the proposed system compared to plain mortar. In Chapter 5, the effect of self-healing on chloride penetration resistance in mortar specimens has been investigated. The results of this chapter firstly showed that the chloride penetration resistance in sound specimens increased thanks to the formation of a
thin calcium carbonate layer at the surface of the specimens and through a slight densification of the matrix. Secondly, self-healing of cracks was beneficial since it delayed the penetration of chlorides. Nevertheless, the initial chloride penetration resistance (e.g., that of sound specimens) could not be completely restored through self-healing.
In Chapter 6, the applicability of AKD in Blast Furnace Slag Cement (BFSC) mortar was investigated. Results of this chapter demonstrated the compatibility between AKD and low-alkaline mixtures, since functional properties did not get negatively affected and the self-healing capacity significantly improved. These results are different from those
observed when applying poly-lactic acid (PLA) healing agents in BFS mixtures, which on the contrary showed incompatibility with the mixture due to its lower alkalinity. In
Chapter 7, analysis and reflections on the potentially positive impact that self-healing could have on the service-life of infrastructures have been conducted based on idealized
scenarios. The environmental impact of AKD healing agents has been also discussed based on the eco-costs related to available processing treatments of PHAs. In Chapter 8, a methodology based on gas chromatography has been proposed to detect and quantify bio-based healing agents in added in cementitious materials. Even though the applicability of this methodology has been demonstrated for PLA healing agent, gas chromatography can be extended to AKD healing agent as well. In Chapter 9 conclusions based on the analysis conducted over the whole research have been drawn, and recommendations for future research have been made. With the present research the author hopes to provide a valuable contribution for those interested in the field of self-healing materials.","self-healing; concrete; bacteria; healing agent","en","doctoral thesis","","978-94-6419-667-2","","","","","","","","","Materials and Environment","","",""
"uuid:06c11b2f-2df0-422f-802c-1b083e205323","http://resolver.tudelft.nl/uuid:06c11b2f-2df0-422f-802c-1b083e205323","Ageing of plastic pipes in urban drainage systems","Makris, K. (TU Delft Sanitary Engineering)","Clemens, F.H.L.R. (promotor); Langeveld, J.G. (promotor); Horoshenkov, K.V. (promotor); Delft University of Technology (degree granting institution)","2022","Plastic materials, such as PVC and HDPE, have become dominant construction materials for sewer systems, mainly due to their reputed chemical resistance. Nonetheless, plastic sewer pipes have operated for decades in a hostile environment, raising concern among sewer managers over the longevity of their drainage systems. Therefore, the main aim of this research was to expand the knowledge about the ageing and failure mechanisms of plastic (especially PVC) sewer pipes and thereby contribute to more efficient sewer management. This research was conducted in three main parts, each part being based on the findings of the previous one. The first step was a literature study into ageing of PVC pipes and an analysis of inspection data from four Dutch municipalities. This resulted in an interesting discrepancy: although the literature indicates that PVC is a durable material with a theoretical lifespan of at least 100 years, the inspection images show that PVC pipes show numerous defects even shortly after installation and that the number of defects increases over time. This discrepancy formed the basis for the second step in this research: the excavation and extensive testing of PVC pipes with defects found in practice through inspections in order to determine the cause of the defects. The main causes of failure were found to be low-quality installation and excavation activities in the direct vicinity of the pipes. However, testing also showed that PVC ages physically, which means that the pipe material gradually becomes more brittle. 
This last aspect formed the basis for the third part of this research, the development of a non-destructive vibro-acoustic technique that can estimate the physical ageing of plastic pipes in practical conditions.","","en","doctoral thesis","","978-94-93315-13-6","","","","","","","","","Sanitary Engineering","","",""
"uuid:5b00119f-793c-4a5c-8528-bace6934ad2e","http://resolver.tudelft.nl/uuid:5b00119f-793c-4a5c-8528-bace6934ad2e","Kinematic Methods for the Rational Design of Mechanical Metamaterials","Broeren, F.G.J. (TU Delft Precision and Microsystems Engineering)","Herder, J.L. (promotor); van der Wijk, V. (copromotor); Delft University of Technology (degree granting institution)","2022","Most materials around us have properties that are determined at extremely small scales. Often the atoms and molecules that make up these materials determine how they behave under load. While this already leads to a great variety of material properties, a lot more is possible.
Mechanical metamaterials use structure to extend the available range of material properties. In this way, we can design material properties that are not found in nature. An example of this is materials with a negative Poisson’s ratio. When these materials are compressed, they will not expand in the direction perpendicular to the applied deformation, as we would expect from natural materials. Rather, they will contract.
In general, mechanical metamaterials allow us to design materials with properties that are tailored to their intended application. This provides more design freedom; instead of choosing from a list of available materials, the material properties themselves now become variables that can be designed. Additionally, this makes it possible to integrate multiple functions within a material. In this way, the material is no longer passive but can react based on applied forces, deformations, or changes in the environment.
In practice, designing mechanical metamaterials has turned out to be difficult. While there are many examples of mechanical metamaterials with exceptional properties, their discovery has rarely been based on a rational and structured design process. The lack of such a design strategy makes the design of metamaterials with exactly the desired properties, at least for now, difficult and unreliable.
This dissertation explores this design problem and presents a method to aid in the structured and rational design of mechanical metamaterials. This method is based on a pseudo-rigid body approach, borrowed from the field of compliant mechanisms. Following this approach, the metamaterial is modeled as a collection of rigid parts, which are connected by joints to which we assign a stiffness. This allows us to model both the deformation and the stiffness of the material while keeping the complexity of the models as low as possible.
Because of the limited complexity of the models, this approach allows the designer to understand the effects of design decisions and adaptations. This enables directed and conscious changes to the design, of which the consequences are known beforehand. This is different from alternative methods where highly complex and time-consuming computer models are used to calculate the effects of changes. By using less complex models and making directed choices, new design iterations can be generated more quickly. Especially at the start of a design process, this is expected to quickly lead to new insights. These can then at a later stage be refined using more detailed methods.","","en","doctoral thesis","","978-94-6384-398-0","","","","","","","","Precision and Microsystems Engineering","","","",""
"uuid:7f589ae2-a71b-43f5-a605-1791522bfc8d","http://resolver.tudelft.nl/uuid:7f589ae2-a71b-43f5-a605-1791522bfc8d","Becoming a design professional through coping with value-based conflicts in collaborative design practice","van Onselen, L. (TU Delft Methodologie en Organisatie van Design)","Snelders, H.M.J.J. (promotor); de Lille, C.S.H. (promotor); Valkenburg, A.C. (promotor); Delft University of Technology (degree granting institution)","2022","Junior designers may experience struggles with collaborative partners (e.g. clients, managers, and stakeholders) who may prioritise values differently, which may yield frustration, conflict, and stress. In particular, junior designers may find coping with value-based conflicts challenging as support is lacking. Senior designers can develop effective ways of coping to promptly reduce frustration. Design schools and professional development courses could offer support to address and facilitate learning from value-based conflicts. This study aims to answer the following question: how can junior designers cope more effectively with value-based conflicts in collaborative design practice?","Designer Identity; Professionalisation; Collaborative design; Value Conflicts; Collaborative Development; professional development; Reflective Practice","en","doctoral thesis","","978-94-6421-964-7","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:b219e3e7-285b-44cd-a0ff-47f03fb9f3ac","http://resolver.tudelft.nl/uuid:b219e3e7-285b-44cd-a0ff-47f03fb9f3ac","Predictive Aircraft Maintenance: Integrating Remaining-Useful-Life Prognostics into Maintenance Optimization","Lee, J. (TU Delft Air Transport & Operations)","Mulder, Max (promotor); Mitici, M.A. (copromotor); Delft University of Technology (degree granting institution)","2022","Current aircraft maintenance ensures safe and reliable flight operations based on inspections repeated at fixed time intervals. The time interval between inspections is often much shorter than the average life of aircraft components, in an effort to timely detect potential failures. While this approach successfully prevents most potential failures, it is not the most efficient since airlines frequently need to ground aircraft for visual inspections. Furthermore, most inspections do not find any fault; thus, nothing is actually repaired after these frequent inspections.
Predictive aircraft maintenance (PdAM) is a newly emerging approach to maintenance which is expected to be more efficient, while providing the same or higher levels of reliability. PdAM uses the data produced by the plethora of on-board sensors installed on modern aircraft to monitor the health condition of aircraft components, without the need to ground these aircraft for visual inspections. These health condition data are analyzed to predict the Remaining-Useful-Life (RUL) of aircraft components. The core idea of PdAM is to plan maintenance tasks based on the estimated RUL. PdAM is currently not fully implemented in practice, however. Regulatory bodies have only recently started to discuss the integration of aircraft health monitoring (AHM) systems into the aircraft maintenance process.
This dissertation aims to identify and address the challenges in implementing PdAM. The first challenge is the lack of mathematical models to assess the performance of PdAM. Before implementing PdAM in actual aircraft, the expected performance needs to be quantified to understand the impact on reliability and cost-efficiency. Although a few studies have proposed aircraft maintenance models, these studies only consider cost as a single performance metric. However, it is clear that aircraft maintenance should also be evaluated in terms of reliability and other key performance indicators (KPIs) representing the various (and often conflicting) interests of all stakeholders involved. In this dissertation, we construct a mathematical model of PdAM to evaluate the balance in maximizing various KPIs altogether. Our model captures the stochastic degradation and failure of aircraft components, and the interactions between stakeholders during the maintenance decision making process.
The second challenge for PdAM is the lack of optimization frameworks to plan PdAM considering RUL prognostics. In the last decades, most researchers have focused on predicting the RUL of aircraft components, but only a few studies address the question of how to actually integrate RUL prognostics into maintenance planning. Aircraft maintenance planning is a very complex process that should consider different aircraft components, a fleet of aircraft, their flight schedules, the limited hangar availability, tight safety margins, and strict regulations. Considering all these together in a single optimization framework is overly demanding. Hence, in this dissertation, the optimization of the PdAM planning is performed at three levels: component level, fleet level, and strategy level.
At the component level, we propose probabilistic RUL prognostics and a deep reinforcement learning (DRL) approach for predictive maintenance planning. The probabilistic RUL prognostics estimate the probability distribution of RUL, instead of a point-estimation. This approach quantifies the uncertainty associated with RUL prognostics. Based on the estimated RUL distribution, the DRL approach determines the optimal moment to replace an aircraft component. In the case study for the maintenance of an aircraft turbofan engine, the proposed DRL approach reduces the total maintenance cost by 29.3% and prevents 94.3% of unscheduled maintenance, compared to the case when the point-estimation of RUL is used.
At the fleet level, PdAM is planned by simultaneously considering a fleet of aircraft having multiple components. The main interest of fleet-level PdAM is to integrate RUL prognostics and operational requirements, such as the flight schedules and the limited hangar availability. We formulate these in an integer linear programming problem that minimizes the cost of fleet-level PdAM. This approach reduces the usage of hangars by grouping the schedule of maintenance tasks when the RULs of the components are similar. Considering the maintenance of aircraft landing gear brakes for a fleet of aircraft, the total maintenance cost is reduced by 20% compared to the traditional maintenance strategies.
At the maintenance strategy level, we optimize the design parameters of PdAM, such as safety margins and thresholds of RUL, considering multiple objectives: cost-efficiency and reliability. Since this multi-objective optimization problem is computationally intensive, we propose an efficient search algorithm using Gaussian process (GP) learning models to identify Pareto optimal design parameters of PdAM. Compared to other state-of-the-art multi-objective optimization algorithms, the proposed GP learning-based algorithm identifies more Pareto optimal solutions within the same computational time. The identified Pareto front shows that PdAM using RUL prognostics dominates traditional maintenance strategies by achieving the beneficial balance between efficiency and reliability indices. With only a 1% reduction in the efficiency index, the Pareto optimal PdAM strategy achieves a 95% improvement in the reliability index.
The three optimization frameworks at the three different levels of PdAM are proposed and illustrated for case studies on the maintenance of aircraft engines and landing gear brakes. These case studies show three main benefits of PdAM: 1) the maintenance cost is minimized by scheduling maintenance tasks only when necessary; 2) failures and unscheduled maintenance are prevented by considering RUL prognostics; and 3) Pareto optimal performance is achieved considering the balance between reliability and efficiency.
Finally, this dissertation identifies the emerging challenges associated with the introduction of PdAM. Such challenges are often attributable to the introduction of new technologies, such as aircraft health monitoring systems, RUL prognostics algorithms, and decision support systems to plan PdAM. Based on structured brainstorming sessions with domain experts and end-users, three major challenges of future PdAM are identified: 1) the (often unknown) reliability of new technologies, 2) the timeliness and accuracy of communication between the stakeholders of the new PdAM, and 3) the end-users' trust in the new technologies.
Throughout this dissertation, we have focused on decision support systems of PdAM, in the form of optimization frameworks. These frameworks provide substantial support for the implementation of PdAM in practice. Even so, it remains future work to build users' trust in PdAM, to integrate it into strict aviation legislation, and to adopt PdAM at the business level. The strongest support for trust, legislation, and business regarding PdAM should be based on mathematical models and optimization frameworks. Therefore, this dissertation is a starting point for an informed discussion on the future of predictive aircraft maintenance.","Aircraft Maintenance; Predictive Maintenance; Remaining-Useful-Life Prognostics; Scheduling; Optimization; Modeling and simulation","en","doctoral thesis","","978-94-9329-948-1","","","","","","","","","Air Transport & Operations","","",""
"uuid:955fab3d-fc33-4ccb-8c58-c6eaf7b56c9c","http://resolver.tudelft.nl/uuid:955fab3d-fc33-4ccb-8c58-c6eaf7b56c9c","Silicon/silicon-germanium heterostructures for spin-qubit quantum processors","Paquelet Wuetz, B. (TU Delft QCD/Scappucci Lab)","Vandersypen, L.M.K. (promotor); Scappucci, G. (promotor); Delft University of Technology (degree granting institution)","2022","Spin qubits in silicon have emerged as a promising candidate for a scalable quantum computer due to their small footprint, long coherence times, and their compatibility with advanced semiconductor manufacturing. However, all known spin qubit material hosts come with specific challenges that limit the performance of quantum information processing. In this thesis we study Si/SiGe heterostructures, comprising a strained silicon (Si) quantum well which is sandwiched between two silicon-germanium (SiGe) barriers. Si/SiGe heterostructures designed to act as a solid-state matrix to host spin qubits have three intrinsic material challenges that limit performance: hyperfine interaction, valley splitting, and charge noise. Therefore, to realize a scalable quantum computer in Si/SiGe heterostructures we first quantify the performance limiting parameters and subsequently, we improve them systematically with statistical significance.
Acquiring data with statistical significance, however, proves challenging for quantum devices in Si/SiGe heterostructures due to complicated and time-consuming fabrication schemes for device manufacturing, and the need to use dilution refrigerators that cool samples down to sub-Kelvin temperatures with only a limited number of wires for electrical characterization of devices. Therefore, in this thesis we demonstrate fast growth-fabrication-measurement feedback cycles to accelerate our understanding of the materials and devices.
We realize such fast feedback cycles by first establishing a unique workflow at TU Delft, allowing 100~mm wafer growth and fabrication. Subsequently, in our first experiment, we overcome the wiring bottleneck by presenting a cryogenic multiplexing platform that multiplies DC wires inside a dilution refrigerator. This cryogenic multiplexer platform uses commercially available CMOS components, is compatible with any dilution refrigerator, and allows us to measure thirteen chips in the same cooldown at a temperature of 50~mK and at magnetic fields of up to 10~T. We confirm these extreme measurement conditions by showing statistically significant quantum transport properties of industrially grown 300~mm $^{\text{nat}}$Si/SiGe wafers.
In the following experimental chapters we then leverage the cryogenic multiplexer to successively tackle the performance-limiting parameters of spin qubit processors in Si/SiGe heterostructures. In the second experiment we first analyze valley splitting in two-dimensional electron gases and observe that valley splitting increases linearly with the electric field at the quantum Hall edge states of the device, at a rate consistent with theoretical predictions. In turn, this observation allows us to evaluate valley splitting on a micron length scale with relatively simple Hall-bar measurements.
In the third experiment we show two major improvements in our heterostructures. First, we measure valley splitting in quantum dots with varying quantum well interface sharpness with statistical significance. We then proceed to analyze the atomic composition of the quantum well interfaces in several samples using atom probe tomography and show that Ge atoms are distributed randomly in each atomic layer. Subsequently using the atom probe tomography results as input, we simulate valley splitting and show that valley splitting depends on the atomistic details of the interface and needs to be treated as a statistical distribution. We then propose a strategy to increase valley splitting on average above a chosen threshold by introducing a small concentration of Ge atoms into the quantum well. Second, all electrical measurements in this experiment are performed in isotopically purified $^{28}$Si quantum wells, which reduces the hyperfine interaction and hence increases qubit coherence times. While we do not explicitly discuss this improvement in this chapter, it is a crucial baseline for all following experiments in this thesis and for all qubit experiments using Delft grown $^{28}$Si/SiGe heterostructures.
We then move on to show wafer-scale improvements of the disorder landscape of Si quantum wells in our fourth experiment. There, we challenge the common approach of growing an epitaxial Si cap on the $^{28}$Si/SiGe heterostructure by replacing the Si cap with an amorphous Si-rich layer. We compare these two heterostructures by monitoring the statistical performance of mobility, percolation density, maximum electric field before hysteresis, and single-particle relaxation time, and observe a statistical performance increase in both the mean value and the standard deviation.
In the fifth experiment we study a heterostructure with a thin quantum well and compare its statistical performance of mobility, percolation density, and charge noise with the performance of the heterostructures from the preceding experiment. Importantly, we find that misfit dislocations arising from strain relaxation are significantly reduced in thin quantum wells, as confirmed by geometrical phase analysis of transmission electron microscope images. As a consequence, we observe a statistical performance increase of all key metrics in the novel heterostructure, made possible by our approach of engineering the critical material layers. Finally, we see promising simulated qubit coherence times and qubit error rates when using our charge noise results as simulation input, hinting at a practical advantage of our novel $^{28}$Si/SiGe heterostructures for quantum processors.
In the last experimental chapter we demonstrate how our improved $^{28}$Si/SiGe heterostructures have enabled two key experiments in the field of spin-based quantum computing. First, we show that our purified heterostructures can host high-quality qubits that in turn serve as a testbed for demonstrating CMOS-based cryogenic control of silicon quantum circuits. Second, we show how our isotopically purified, low-disorder heterostructures host a 6-qubit quantum processor with high-fidelity initialization, high-fidelity gate operation, and high-fidelity readout.
We conclude this thesis by highlighting key improvements of our $^{28}$Si/SiGe heterostructures that have contributed to state-of-the-art spin qubit experiments. However, our heterostructures still require further improvements if we want to achieve error rates around 10$^{-6}$ and scale to large spin qubit arrays with more than a million qubits. Therefore, we discuss additional material changes that could further lower spin qubit error rates and we consider how to assess the uniformity of the material over different length scales, relevant when striving for larger qubit arrays.","Silicon quantum wells; valley splitting; charge noise; quantum dots; quantum processors; Hall effect; quantum Hall effect","en","doctoral thesis","","978-90-8593-546-9","","","","","","","","","QCD/Scappucci Lab","","",""
"uuid:1da4c818-dc1a-44d7-a89d-a4047184d854","http://resolver.tudelft.nl/uuid:1da4c818-dc1a-44d7-a89d-a4047184d854","Mechanisms and mitigation of short pitch rail corrugation","Zhang, P. (TU Delft Railway Engineering)","Li, Z. (promotor); Nunez, Alfredo (copromotor); Delft University of Technology (degree granting institution)","2022","Short pitch corrugation is a (quasi-) periodic rail surface defect with shiny crests and dark valleys. It primarily occurs on tangent tracks or gentle curves with a typical wavelength in the range of 20-80 mm. Short pitch corrugation excites high-frequency wheel-rail dynamic contact forces and generates a high level of noise, which is a nuisance to both the passengers and the residents near the railway lines. The resulting large dynamic forces accelerate the degradation of the track components and may induce other rail defects (such as squats), which increase the maintenance cost. The goal of this dissertation is to better understand the formation mechanism of short pitch corrugation and develop root-cause solutions to mitigate it. Three steps are taken to achieve this goal: 1) identification and control of rail vibration modes which are crucial to short pitch corrugation formation; 2) design of a new rail constraint to mitigate short pitch corrugation; 3) experimental study of short pitch corrugation using an innovative V-Track test rig.
Step 1 focuses on the identification and control of rail vibration modes. First, the vibration modes and dispersive waves of a free rail are simulated employing a finite element (FE) approach. The modal behaviors, wavenumber-frequency dispersion relations, and phase and group velocities of six types of propagative waves are derived and discussed in detail in the 0–5 kHz range. The operating deflection shape (ODS) approach distinguishes different types of rail vibration modes experimentally. A synchronized multiple-acceleration wavelet (SMAW) approach is proposed to experimentally study the propagation and dispersion characteristics of these waves. Both the laboratory and in-situ experimental results demonstrate the effectiveness of the ODS measurement for coupled rail mode identification and the SMAW approach for wave dispersion analysis. Afterward, the ODS and SMAW approaches are further applied to investigate rail vibration modes and wave propagation under fastening constraint. A three-dimensional (3D) FE rail-fastening model is also developed and validated against the ODS and SMAW measurement results. Subsequently, a sensitivity analysis of fastening parameters using this FE model is performed to gain insights into the control of rail vibrations. The results indicate that under fastening constraint, ODS measurement identifies vertical bending modes, longitudinal compression modes and lateral bending modes with shifted frequencies and significantly reduced vibration amplitude compared to the free rail. Fastenings constrain the rail longitudinal vibrations less strongly than the vertical and lateral vibrations. The variation of fastening parameters can control rail mode frequencies and their vibration amplitudes, and influence the wave propagation velocities and attenuation along the rail.
Step 2 proposes a methodology to design a new rail constraint to mitigate short pitch corrugation. First, a parametric investigation of fastenings is conducted to understand the corrugation development mechanism and gain insight into a new rail constraint design for corrugation mitigation. A 3D FE vehicle-track dynamic interaction model is employed, which considers the coupling between the structural dynamics and the contact mechanics, and the damage mechanism is assumed to be differential wear. Various fastening models with different configurations, boundary conditions, and dynamic parameters are built and analyzed. The results indicate that the fastening longitudinal constraint to the rail is the major factor determining the corrugation development. The fastening vertical and lateral constraints influence corrugation features in terms of spatial distribution and wavelength components. The increase of fastening constraint in the longitudinal dimension helps to mitigate corrugation, and the inner fastening constraint in the lateral dimension is necessary for corrugation alleviation. Based on these insights, a methodology is proposed to mitigate short pitch corrugation by rail constraint design. First, short pitch corrugation is numerically reproduced employing a 3D FE vehicle-track interaction model. Then, the corrugation initiation mechanism is identified by examining the ODSs of rail longitudinal compression modes. Afterward, different rail constraints are designed, and their effects on longitudinal compression modes are analyzed. Models of these rail constraints are also built and validated. Finally, the rail constraint models are applied to the 3D FE vehicle-track interaction model, and their validity for short pitch corrugation mitigation is evaluated.
It is found that a relatively rigid constraint can completely suppress rail longitudinal compression modes and significantly reduce the fluctuation amplitude of the longitudinal contact force to mitigate corrugation. A direction is pointed out for corrugation mitigation in the field by strengthening the rail longitudinal constraint.
Step 3 performs an experimental investigation of short pitch corrugation using the downscaled V-Track test rig. First, a force measurement system, named the dynamometer, is developed in the V-Track to measure the wheel-rail contact forces for short pitch corrugation experiments. The dynamometer consists of four 3-component piezo-electric force sensors and is mounted between the wheel assembly and the steel frame, enabling it to measure the forces transmitted from the wheel-rail interface to the frame. Static tests are first carried out to calibrate the dynamometer in three directions. Then, several tests are performed in the V-Track to examine the reliability and validity of the dynamometer for measuring the wheel-rail contact forces under running conditions. Experimental results show that the dynamometer is capable of reliably and accurately measuring these forces. Utilizing the measurement results from the dynamometer, the control of the wheel-rail contact forces in the V-Track has also been achieved. Afterward, the V-Track test rig is used to investigate the development mechanism of short pitch corrugation experimentally. The loading conditions of the V-Track are designed to simulate the vehicle-track interaction on tangent tracks, where short pitch corrugation mainly occurs in the field. Short pitch corrugation is successfully reproduced in the V-Track, and its spatial distribution, wavelength components, and hardness variation are captured by the 3D HandyScan and the hardness tests. Based on the measurement results of wheel-rail contact forces and track dynamic behaviors, the development mechanism of short pitch corrugation is identified. It is found that rail longitudinal and lateral vibration modes contribute to the consistent development of short pitch corrugation.
Overall, the major contribution of this dissertation is threefold: 1) a better understanding of vibration modes and wave propagation of the rail in free condition and under fastening constraint is obtained by ODS and SMAW measurement, which is essential to understand and mitigate short pitch corrugation; 2) a new rail constraint is designed which can effectively suppress rail longitudinal compression modes and mitigate short pitch corrugation; 3) experimental evidence is provided to demonstrate that initial excitation and longitudinal compression modes play a significant role in the consistent growth of short pitch corrugation.
The hindcasting of levee failures can provide valuable information about the factors and uncertainties that dominate levee performance and reliability. Systematic forensic engineering approaches to evaluate failed structures and methods of hindcasting have been developed in the field of structural engineering, but these are not readily applicable to failed levees. This is mostly due to the scarcity of relevant information prior to, during, or after the levee failure, which leaves multiple scenarios and alternative model choices possible to characterize the event.
This thesis proposes and demonstrates methods for systematic analysis of levee failures at the individual section and system level. These methods of hindcasting are expected to contribute to the overall quality, repeatability, transferability, transparency and recognisability of the analysis of levee failures. In this thesis, existing approaches for evaluating structural failures have been adapted to analyse levee failures using both deterministic and probabilistic techniques. This thesis focuses on the levee failure mechanism of slope instability of the inner slope.
First, a deterministic method is proposed and applied to a slope failure. In this method, the uncertainties in possible causes and computational models are modelled by defining possible scenarios explaining the failure based on all the information available. The influence of the identified scenarios and possible alternatives in model choices is analysed through a sensitivity analysis. Results of the computations are confirmed or refuted by observation information of the failure, such as the shape of the failure surface. To illustrate the method, it is applied to the levee failure near Breitenhagen, Germany (2013) in Chapter 2 of this dissertation. The levee near Breitenhagen is located at the confluence of the Saale and the Elbe, and it failed due to instability of the slope at the polder side of the levee. Previous reports identified unexpected saturation of the levee, the steep slope of the levee, and the influence of tree roots as causes of the levee failure. However, in the present study, an old breach was found at the location (the first indication was a pond next to the levee, likely caused by this old breach; the old breach was later confirmed with archive research). This old breach and pond resulted in a scenario with low strength and high water pressures in both the levee and the aquifer, which was identified as the most likely scenario explaining the failure. The results indicate that locally low values of shear strength (low values of pre-overburden pressure or cohesion) explain the failure. Other scenarios that were evaluated resulted in a situation that was not likely to fail, or resulted in a slip surface that differs from the observed failure surface.
The deterministic method does not quantify uncertainties explicitly, which makes it difficult to uniquely identify the most likely scenario explaining the failure. Therefore, the deterministic method is advanced in Chapter 3 by making it probabilistic and by including Bayesian techniques. Thereby, better insight is provided into the relative likelihoods of the various scenarios explaining the failure. Failure observations (water level at failure, the shape of the slip surface, etc.) and a priori levee information (soil layering, shear strength, etc.) are systematically taken into account to quantitatively identify the most likely scenario explaining the failure and the most representative model choices to characterise the failure most accurately. The Bayesian techniques are also used for updating the scenario and possible alternatives in model choices using observations of the actual failure (if present), such as the shape of the slip surface. To illustrate the method, it is also applied to the levee failure near Breitenhagen (2013) in Germany. Similar to the deterministic method, the old breach, resulting in a scenario with locally weak soil and an aquifer connection, is found to be the most likely scenario. Further, limit equilibrium using Spencer’s approach with undrained soil response is identified as the most representative model choice. The shear strength ratio is identified as the most dominant contributor to the failure. Compared to the “deterministic method” introduced in Chapter 2, the probabilistic method adds the possibility to quantitatively substantiate the identification of the most likely scenario explaining the failure as well as the most representative model choices.
Both methods of hindcasting have had little application and validation. Therefore, both methods have been applied to a large-scale levee failure experiment. The levee of the Leendert de Boerpolder, in the Netherlands, was brought to failure under controlled circumstances. As a result, very detailed information is available. The levee was brought to failure by gradually lowering the water level in an excavated ditch at the polder side of the levee. Since the water level drawdown is known at the time of failure, this information is used to validate the outcome of both methods of hindcasting. The available levee information was used in two steps. In the first instance, only basic information was used in the hindcasting. In the second step, the geometry of the observed slip surface is also included. The probabilistic method using Bayesian techniques required some adjustment to account for the survival of previous load phases during a stepwise increase of the load. Both methods of hindcasting identified the same water level drawdown at the moment of failure, but different model choices. In addition, the identified water level drawdown is confirmed by the observed water level drawdown at the time of the failure, i.e. 1.6 m.
Finally, this thesis introduces a method to quantify the influence of deviating conditions on the failure rate of a levee by looking at failures on a system level. The annual failure rate of a levee section is assessed based on information from historical floods. The return period of past events is also taken into account. The presence of deviating conditions at failed and survived levee sections is analysed based on satellite observations. Bayesian techniques and likelihood ratios are used to update the average failure rate as a function of the presence of a deviation. The river system of Sachsen-Anhalt, Germany, is used as a case study. It experienced severe floods with many levee failures in the years 2002 and 2013, resulting in the failure of 41 levee sections due to internal erosion, instability or overflow. It is found that the presence of geological deviations has a significant influence on the observed failure rate and that the failure rate increases with the magnitude of the hydraulic loading. The results show that in the case of the occurrence of a visually identifiable geological deviation in the subsurface, the updated failure rate of a section is about 14 times higher than when there is no visually identifiable deviation. The presence of other deviations, such as bushes or trees, or permanent water near the levee, also results in a somewhat higher failure rate (20–30% higher) than the calculated average annual failure rate. It is also discussed how the expected number of failures in a system during a high water event with a certain magnitude can be estimated. The results of this research can be used to further optimize soil investigations, calibrate the results of more advanced reliability analyses, and complement risk assessments. The method offers opportunities in particular in environments where little data is available.
Overall, the methods and insights developed in this thesis can contribute to a better understanding of the performance and reliability of flood defence systems.","Hindcasting; Back analysis; Forensic engineering; River levee; Slope instability; Levee failures; Bayesian techniques; Failure rate; Likelihood ratio; Fragility curve","en","doctoral thesis","","978-94-6469-161-0","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:eb5ed3b2-4210-489e-b329-59722a0c50a0","http://resolver.tudelft.nl/uuid:eb5ed3b2-4210-489e-b329-59722a0c50a0","Time-dependent development of Backward Erosion Piping","Pol, J.C. (TU Delft Hydraulic Structures and Flood Risk)","Jonkman, Sebastiaan N. (promotor); Kok, M. (promotor); Kanning, W. (copromotor); Delft University of Technology (degree granting institution)","2022","Structural flood protection systems such as levees are an important component in flood risk reduction strategies. Levees can fail through various failure mechanisms; this thesis focuses on the mechanism Backward Erosion Piping (BEP) which occurs when a sandy levee foundation is eroded by groundwater flow. To assess whether a levee's reliability complies with safety standards, authorities use models which describe the levee properties and failure mechanisms.
This thesis aims to extend the current failure model by considering piping as a time-dependent erosion process instead of the current assumption of immediate failure once a critical threshold is exceeded. Therefore, it is shown how time-dependent development of backward erosion piping can be quantified and how it affects levee reliability analyses. This is achieved by a combination of literature review, analysis of previous experiments, additional experiments on different scales, numerical modeling and probabilistic modeling.
The following key findings were established. Analysis of historical levee failures due to BEP and previous experiments indicates that there can be significant time between initiation and breach, highlighting the importance of time-dependence for piping. The rate of pipe progression in experiments can be explained by the sediment transport rate, which is shown to depend on the pipe flow conditions. A numerical groundwater flow model which includes this sediment transport process can predict the pipe development in small-scale experiments. Relations between the progression rate and levee properties and hydraulic loads as derived with this numerical model can be used efficiently in reliability analyses. These analyses show that including time-dependent pipe development in BEP analyses has a significant impact on the levee failure probability, both in coastal and riverine water systems.
First, considering land registration as the initial activity that guarantees legal tenure of land, this study carried out a review of the scholarly literature on the effect of land registration on these relations. 85 studies were included. The review focused on the common claim that land registration’s facilitation of formal, document-based land dealings leads to investment in a more productive agriculture. I found this claim problematic for three reasons. First, most studies offer no empirical evidence to support this claim. Second, there are suggestions that land registration can actually threaten ‘de facto’ tenure security or even lead to insecurity of tenure. Third, the gendered realization of land registration and security may lead to an uneven distribution of costs and benefits. These effects are, however, often ignored. Next to suggesting the importance of land information updating and the efficiency of local land management institutions, this review also found that more research with a combined, locally-set approach is needed to better understand any relation(s) between land tenure security and agricultural productivity.
In the second part, this study addressed the first and the last problems identified above by the literature review. I designed a locally-defined Farmland Tenure Security Index (FLTSI) and applied it to the four case studies in Rwanda. On the basis of a data set collected from four research sites over the course of three agricultural years (2006/2007, 2012/2013, 2016/2017), this study empirically assessed the relations between land tenure security and smallholder farms’ crop production in Rwanda. We show that the general assumption that secure land tenure improves farm-level harvests is not confirmed for smallholder farms in Rwanda. My FLTSI is based on plausible threats as conveyed by smallholder farmers at each research site. The findings additionally indicate that the harvest of main crops neither statistically correlated with this index nor showed differences from the mean at all research sites. Instead, factors mainly related to the ongoing crop intensification program, though threatening tenure security, contributed to the increase of small-farm harvests. Lower land tenure security did not affect farmers’ satisfaction with the crop intensification program. Most of them claimed that in the end what matters most is that their harvests continue to increase. The second part concluded that in Rwanda, a new wave of agriculture strategizing simultaneously contributed to increasing small farms’ harvest of prioritized crops and decreasing farmland tenure security.
Third, motivated by the conclusion of part two, this study assessed the effect of farmland use change on agricultural production. It sought to determine whether fragmented or consolidated farmland use earns higher yields for the smallholder farmer in Rwanda. When the agricultural reform started in 2007, the country introduced the Crop Intensification Program (CIP), which promotes Land Use Consolidation (LUC). Using data collected at the four research sites and considering the three agricultural years, the study confirmed that the CIP/LUC program led to the conversion of perennial crops, mainly banana plantations, into seasonal crops prioritized by the program. Overall, this shift in farmland use has created an increase in both the harvest and the monetary yield of prioritized crops. However, within that general trend, I observed differences: farmers with smaller and/or fewer farm plots did not realize as much yield increase as those who joined the CIP/LUC program with larger and/or multiple farm plots.
Furthermore, this study made a first attempt to understand the implications of the studied relations for food security. The link between yield and meals per day allowed us to demonstrate the farmer’s household food access. However, the available data did not allow us to extend the analysis to include the nutritional value of the food. Nevertheless, I clearly showed that following the start of the CIP/LUC program, farmers increased their yield and number of meals per day. Future research is needed to study the types of food available on the market.
The locally-defined research approach designed for this study combined statistical and qualitative analysis of the information collected from interviews and focus group discussions at a local level. I argue that this approach has contributed to an understanding of those relations that would be overlooked if the research had used a larger-entity setting and econometric methods. This research recommends that a similar approach be applied when studying locally-defined assessments of the relations between land tenure security, farmland use and agricultural productivity. Future research should concentrate on examining these relations from a more operational perspective, taking into account the local socio-economic and institutional patterns at work. There is a need for a mixed-methods approach utilizing experiments as well as randomization, where feasible, in combination with increasing flows of spatial and time-series data from diverse sources. Household-farm panel data collected over long periods of time, combined with simulations, can also provide valuable insights about the relations between land tenure security, farmland use and agricultural productivity.","Land tenure security; farmland use; agricultural productivity; Rwanda","en","doctoral thesis","","978-94-6366-637-4","","","","","","","","","Water Resources","","",""
"uuid:58d1c84e-2fbf-4ce0-98b0-3e229d566c34","http://resolver.tudelft.nl/uuid:58d1c84e-2fbf-4ce0-98b0-3e229d566c34","Analysis of the Hydro-Climatic Regime of the Snow Covered and Glacierised Upper Indus Basin Under Current and Future Climates","Nazeer, A. (TU Delft Water Resources)","McClain, M.E. (promotor); Maskey, Shreedhar (promotor); Delft University of Technology (degree granting institution)","2022","In the high elevation Hindukush Karakoram Himalaya (HKH) mountain region, the complex weather system and sparse measurements make the elevation-distributed precipitation among the most significant unknowns and limit a realistic and comprehensive assessment of precipitation. In addition, due to local orographic effects, precipitation can vary highly over short horizontal distances. Accurate quantification of precipitation, however, is critical for understanding hydro-climatic dynamics. Moreover, snow and glacier dynamics, and their contribution to river flow in the HKH region, are also mostly unknown, leading to serious concerns about current and future water availability. The recent acceleration in climate change (CC) heightens concerns about future water availability from high elevation mountain regions. The HKH region heavily depends on its upstream frozen water resources, and an accelerated melt may severely affect future water availability. In line with rapid population growth in the Indo-Gangetic plain, there will be increased water, food and energy demands in the future. Therefore, increasing knowledge of the hydro-climatic regime and glacier and snowmelt contributions to the river flow under current and future climate change scenarios is essential. The Indus basin, with a downstream population of around 250 million, is among three highly populated river basins originating from the HKH mountains, followed by the Ganges and Brahmaputra.
This PhD research was designed to quantitatively and comprehensively assess precipitation and its distribution for the Gilgit and Hunza sub-basins of the Upper Indus Basin (UIB). In addition, the hydrological regime and snow and glacier dynamics were investigated, and the future hydro-climatic regime and water availability from the highly glaciated Hunza basin were analysed. For the present-day investigations, the elevation-distributed precipitation was derived from better performing global precipitation datasets, which include the high resolution (0.1°x0.1°) and newly developed ERA5-Land, and the coarser resolution (0.55°x0.55°) JRA-55. These estimates were used to force a data-parsimonious precipitation-runoff model, Distance Distribution Dynamics (DDD), with its energy balance and temperature index approaches for snow/glacier melt simulation. The model was calibrated from 1997–2005 and validated from 2006–2010. For the future scenarios, the ERA5-Land precipitation, corrected against the observed flow, was employed to bias-correct the precipitation from two global circulation models (GCM) using the newly released Coupled Model Intercomparison Project Phase 6 (CMIP6) climate projections. The DDD model was set up again using these bias-corrected GCM projections for baseline (1991–2010), mid-century (2041–2060) and end-century (2081–2100) projections under Shared Socioeconomic Pathways (SSP) SSP1, SSP2 and SSP5 emission scenarios.","","en","doctoral thesis","","978-90-73445-47-5","","","","","","","","","Water Resources","","",""
"uuid:de50235b-1d2a-43f3-bcfa-1dd5f570ab5a","http://resolver.tudelft.nl/uuid:de50235b-1d2a-43f3-bcfa-1dd5f570ab5a","Improving the Techno-Economic Performance of Wave Energy Converters: From the Perspective of Systematic Sizing","Tan, J. (TU Delft Offshore and Dredging Engineering)","Miedema, S.A. (promotor); Polinder, H. (promotor); Jarquin Laguna, A. (copromotor); Delft University of Technology (degree granting institution)","2022","Ocean wave energy has huge potential to contribute to the global energy transition. However, the high Levelized Cost of Energy (LCOE) is currently a major hurdle to the development of wave energy converters (WECs). This thesis aims to improve the techno-economic competitiveness of WECs. It focuses on the effects of systematic sizing of WECs. ""Systematic sizing"" is reflected in this thesis by considering the effects of sizing on the two main components of WECs, namely the buoy and the power take-off (PTO) system. The main body of this thesis starts with a literature review. First, it provides an overview of current wave energy technologies and the application of PTO systems. Second, the studies relevant to sizing of WECs are reviewed, and the sizing methods used in this context are discussed and compared. The review indicates that existing studies mainly focus on the effects of buoy sizing and that PTO sizing has received little consideration. In addition, the sizing methods based on the Budal diagram and Froude scaling can only be used for buoy sizing and cannot cover the influence of PTO sizing. Numerical simulation can be applied to take into account the effects of both buoy sizing and PTO sizing, but it is usually associated with low computational efficiency. As sizing can be regarded as a kind of optimization, which normally requires a number of iterations, an efficient method is beneficial for accelerating the design process of WECs.
Following the literature review, Chapters 3 to 7 of this thesis are dedicated to accomplishing two main research objectives...","","en","doctoral thesis","","978-94-6458-745-6","","","","","","2023-06-15","","","Offshore and Dredging Engineering","","",""
"uuid:cdca9bf1-3e6b-4bfc-9d9d-b5acdd3f900d","http://resolver.tudelft.nl/uuid:cdca9bf1-3e6b-4bfc-9d9d-b5acdd3f900d","Generalized Models of Sequential Decision-Making under Uncertainty","Neustroev, G. (TU Delft Algorithmics)","de Weerdt, M.M. (promotor); Verzijlbergh, R.A. (copromotor); Delft University of Technology (degree granting institution)","2022","Sequential decision-making under uncertainty is an important branch of artificial intelligence research with a plethora of real-life applications. In this thesis, we generalize two fundamental properties of the decision-making process. First, we show that the theory on planning methods for finite spaces can be extended to infinite but countable spaces. Second, we propose a unified model of reinforcement learning algorithms that employ the principle of optimism in the face of uncertainty. This model is used to explain why these methods are efficient. We use the developed theory to design novel algorithms. Depending on the user's needs, these algorithms can either automate the decision-making process completely, or provide advice in decision-support systems.
We start by presenting the basic concepts from the theory of decision-making and discuss the two approaches to it: planning and reinforcement learning. We look at a few typical sequential decision-making problems of increasing difficulty. In particular, we present a game that involves grid navigation and the problems of warehouse management and wind farm operation. Next, we survey the state-of-the-art methods for solving such problems.
Based on this analysis, we identify the following research opportunities. In planning, models with non-stationary and countably-infinite data remain relatively untreated because they are equivalent to infinite-dimensional optimization problems, which are notoriously difficult to solve even approximately. In reinforcement learning, optimistic approaches lead to computational efficiency, yet the theory of optimism remains undeveloped. Moreover, while reinforcement learning shines at playing games, such as chess, shōgi, Go, and StarCraft II, its practical applications remain few.
Next, we overview a mathematical framework of sequential decision-making under uncertainty known as the Markov decision process. We explain how the goal of the decision-maker can be expressed as an optimization problem and present two approaches to achieving this goal. The first—more common—approach assigns so-called values to different actions. The other approach uses so-called occupancies that tell how often the agent should choose the actions instead of evaluating how good these actions are. In fact, the two approaches are known to be dual to each other. While this duality is well studied in the finite case, the infinite case is less explored. To address this knowledge gap, we present a new dual formulation for countable problems, both finite and infinite.
Afterwards, we use the dual formulation to design a new planning algorithm for infinite-horizon problems with non-stationary data. These problems are essentially infinite-dimensional optimization problems and as such are impossible to solve exactly using the standard approaches. We show that they can be solved by changing what is defined as optimal behavior: instead of seeking universally optimal policies, we consider initial-decision-optimal ones. Instead of planning all of the actions beforehand, these policies can be used to plan given the currently observed data. When the next decision is required, the process can be repeated in the same manner, leading to an optimal decision-making strategy. Our approach uses the occupancy-value duality to rule out suboptimal actions based on so-called truncations: finite-time approximations of the infinite-horizon decision-making problem.
We extend the truncation approach to a more general setting of decision-making problems with countably-infinite state spaces. Instead of time-based truncations, we consider state-based ones. This allows us to limit the amount of data required to make the decisions and to design an algorithm for a class of problems that are otherwise unsolvable to optimality. This approach belongs to a family of methods called policy iteration: starting from an initial policy, it constructs a series of improvements in the decisions while ruling out choices that are provably suboptimal.
After that, we turn to reinforcement learning. For a long time, the only provably efficient reinforcement-learning methods were model-based ones; recently, a family of model-free optimistic methods emerged, each of them accompanied by an analysis of how sample-efficient the method is. We, too, study optimistic reinforcement learning, but in contrast to the existing research, we seek to understand not how efficient it is, but why it is efficient. Our analysis results in a formula that explains the three factors that cause regret—the efficiency loss—in optimistic reinforcement learning: the problem size, the measure of exploration, and the estimation error caused by the mismatch between the realized transitions and their true distribution. It can be applied to all of the existing algorithms as well as new ones. We design one such new algorithm and show how our theoretical framework can facilitate the proof of its efficiency.
Finally, we consider a high-impact real-world sequential decision-making problem known as active wake control. Wind turbines can negatively impact each other with their wakes. These wake-induced losses can be reduced by changing the turbine orientations. Unfortunately, the optimal control strategy is non-trivial. To address this, existing approaches use simplified wake models in combination with numerical optimization methods; instead we propose to use model-free reinforcement learning. As a first step towards this goal, we present a wind farm simulator that is suitable for reinforcement learning and better reflects the realities of wind farm operation than other existing tools. Using this simulator, we show that previous research used a suboptimal action representation in this problem; we identify two alternatives, both of which improve the learning efficiency. Additionally, we demonstrate that reinforcement learning is robust to errors in the observations, providing further evidence that it is a fitting approach to active wake control.
Our contributions advance the state of the art in the theory of sequential decision-making under uncertainty and its applications. These advances hint at unexplored connections between countably-infinite planning and optimistic learning, which may lead to even more efficient algorithms for sequential decision-making under uncertainty in the future.","sequential decision-making under uncertainty; optimization; Markov decision processes; planning; linear programming; duality; reinforcement learning; optimistic learning","en","doctoral thesis","","978-94-6366-624-4","","","","","","","","","Algorithmics","","",""
"uuid:0b046c92-2aeb-4260-94ec-3e13876e1712","http://resolver.tudelft.nl/uuid:0b046c92-2aeb-4260-94ec-3e13876e1712","Complex Simplicity: Towards reconstituting Cdc42-based polarity establishment","Tschirpke, S. (TU Delft BN/Liedewij Laan Lab)","Laan, L. (promotor); Jakobi, A. (copromotor); Delft University of Technology (degree granting institution)","2022","Saccharomyces cerevisiae proliferates through budding, where a daughter cell grows by budding off one side of the mother. The first step towards budding is polarity establishment: here Cdc42 accumulates in one spot on the membrane, marking the site of bud-emergence. Cdc42 accumulation arises through at least two interconnected regulatory feedback loops, based on I) a reaction-diffusion mechanism and II) the actin cytoskeleton. Cdc42 is a highly regulated protein, and dissecting the molecular mechanisms and coupling between the different feedback loops has turned out to be controversial, because of both the parameter sensitivity and the high level of observed redundancy and interdependence within and between the feedback loops. This calls for the development of a minimal in vitro system. In this thesis I show our progress towards reconstituting Cdc42-based polarity establishment. Based on theoretical predictions, our system comprises three proteins: the GTPase Cdc42, its GDP/GTP exchange factor Cdc24, and the scaffold protein Bem1. Such a minimal system, where proteins can be added and removed at will, will not only facilitate mechanistic studies, but also help to understand how molecular functions necessary for pattern formation are distributed within the polarity network.","Cdc42; reconstitution; minimal systems; in vitro; prenylation; Protein kinetics","en","doctoral thesis","","978-94-6384-393-5","","","","","","","","","BN/Liedewij Laan Lab","","",""
"uuid:c6011a64-4c85-486c-abf7-bf4332543a16","http://resolver.tudelft.nl/uuid:c6011a64-4c85-486c-abf7-bf4332543a16","Towards Proactive Adaptive Vehicle Settings","Melman, T. (TU Delft Human-Robot Interaction; Group Renault; ENSTA Paris)","Abbink, D.A. (promotor); de Winter, J.C.F. (promotor); Delft University of Technology (degree granting institution)","2022","In recent years, cars have become increasingly computerized, and the handling of the vehicle can be changed to accommodate individual needs. One specific feature in current vehicles that can alter the vehicle’s dynamic behavior is driving modes: predetermined vehicle settings that drivers can select by the press of a button. Unfortunately, user studies showed that the option to switch modes is underutilized. Possible explanations include mode confusion: drivers may not know when certain vehicle settings could be used best, or they may simply forget the current mode (or forget to change mode). Besides being changed when the vehicle is stationary, driving modes also offer the possibility of switching while driving. In theory, this could mean that during a sportier maneuver, such as curve driving or an overtaking maneuver, the driver benefits from dynamic vehicle settings. However, in practice, it is unlikely that drivers will select their preferred vehicle setting in dynamic driving situations or for short periods. A system that automatically changes the vehicle settings for the driver could potentially solve these issues.
The aim of this dissertation is to provide new quantitative and qualitative insights into the underlying principles to design a system with proactive adaptive vehicle settings: A system that automatically changes the vehicle settings to fit the individual and context-dependent needs of the driver.
The first part of this thesis (Chap 2–4) investigates how people adapt to different road environments (road width and curvatures), task instructions, and car characteristics. This kind of knowledge would help to develop a system that adapts according to what the human driver would want when the location (where they drive), the target (i.e., eco vs. normal vs. sport), or the vehicle changes.
The second part of the thesis (Chap 5–7) investigates how offline changes in vehicle settings (e.g., sound, powertrain settings, steering settings) affect the vehicle's dynamic behavior, driving behavior and driver experience. In this part, these questions are addressed for offline vehicle setting changes: changes that occur between driving trials and not while driving. In this way, transient effects in the data can be removed.
The final part of the thesis (Chap 8–9) combines all the learned principles from the previous chapters and investigates how online changes in vehicle settings affect driving behavior and driver experience.
Finally, the individual contributions of each chapter are integrated towards overarching conclusions, limitations, and future work. In short, five overarching conclusions were drawn:
1. Motivational driving models that use emotions or experiences as a construct are theoretically insightful but impractical; driving behavior could better be predicted by car state or location-specific variables.
2. A large part of the variability in driving behavior can be explained by location; location should be included in the design of an adaptive vehicle setting system.
3. The tested sport mode led to objectively more ‘sporty’ vehicle dynamics.
4. Sport mode settings are clearly perceived but do not cause speeding behavior.
5. Proactive adaptations of vehicle settings can objectively improve acceleration performance, lane-keeping, and steering performance, but are not always accepted by drivers.
many of the petroleum-based plastics that are used nowadays.","Circular Economy; Bioplastics; Biopolymers; Waste activated sludge (WAS)","en","doctoral thesis","","978-94-9183-750-0","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:c56e5ef7-0e80-4768-8016-5abb3f539c3d","http://resolver.tudelft.nl/uuid:c56e5ef7-0e80-4768-8016-5abb3f539c3d","Converting Wastewater Treatment Plants into Polyhydroxyalkanoate Production Factories","Pei, R. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Kleerebezem, R. (promotor); Werker, A. (promotor); Delft University of Technology (degree granting institution)","2022","Fossil-derived plastics offer a wide range of services in applications, from packaging to building and construction, to electronics and water distribution networks. The use of fossil resources to produce plastics releases stored CO2 and contributes to climate change. Due to their durability and resistance to degradation, discarded plastics accumulate in the environment and affect the ecosystems. Polyhydroxyalkanoates (PHA) are considered a bio-based and biodegradable alternative. PHA is a family of polyesters that is naturally synthesised by microorganisms. After extraction and purification, PHA shows thermoplastic properties similar to polypropylene and polyethylene. However, the current market share of all bioplastics in the overall plastic industry is small (3%) (Plastics Europe, 2020). Efforts are urgently needed to expand the global potential for greater capacity in bioplastics production. PHA may be accumulated directly using the waste activated sludge from wastewater treatment plants. Currently, methods and experiences for PHA accumulation directly using waste activated sludge are still at the pilot scale. This thesis critically evaluated the current status of the technology, identified knowledge gaps and then focused on deepening insight and fundamental understanding to help facilitate a scaling up to industrial scale.","","en","doctoral thesis","","978-94-9183-751-7","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:cdea2e9d-e069-4683-a8ca-c436b43fa64c","http://resolver.tudelft.nl/uuid:cdea2e9d-e069-4683-a8ca-c436b43fa64c","Micromechanics-guided development of strain-hardening alkali-activated composites: Towards a low-carbon built environment","Zhang, Shizhe (TU Delft Materials and Environment)","van Breugel, K. (promotor); Ye, G. (promotor); Delft University of Technology (degree granting institution)","2022","Alkali-activated Materials (AAMs), including those classified as geopolymer, are obtained through the reaction between a solid precursor and an alkaline solution. Compared with ordinary Portland cement (OPC) binders, these materials maintain comparable mechanical properties but offer the advantages of reduced greenhouse gas emissions and the utilization of industrial by-products and residuals, which helps to meet sustainability goals. AAMs are thus considered an environmentally friendly construction material with great potential for next-generation concrete.
AAMs are inherently brittle. The low ductility of AAMs makes them prone to cracking and corresponding performance degradation, which is detrimental to their durability. One solution, based on the concept of strain-hardening cementitious composites (SHCC), is a family of fiber-reinforced composites with high tensile ductility and multiple-cracking characteristics: strain-hardening geopolymer composites (SHGC). While much effort has been devoted to developing conventional SHCC, scientific and technical knowledge of SHGC is still at a very early stage. This PhD project deals with the development of a cement-free SHGC as a high-performance construction material using industrial wastes and by-products through alkaline activation technology:
The fracture properties and other mechanical properties of the alkali-activated slag/fly ash (AASF) paste as the matrix for SHGC were experimentally tested. The microstructure and chemistry of the reaction products were investigated to understand the fracture mechanism. It was found that the fracture properties of pastes are strongly related to the chemical composition (Ca/Si ratio) of the main reaction product, i.e., C-(N-)A-S-H gel. The fracture properties were also found to be dominated by a cohesion/adhesion-based mechanism. Furthermore, the compressive strength of AASF paste is primarily determined by its capillary porosity.
The fiber/matrix properties, including chemical bonding energy, initial frictional bond, and slip-hardening behavior of fiber during the pullout process were also experimentally studied. The chemistry and microstructure of the reaction product in the fiber/matrix interfacial transition zone (ITZ) were characterized. Their influence on the interface bonding properties was also investigated. It was found that the chemical bonding between PVA fiber and AASF matrix increases with increasing Ca/Si and Ca/(Si+Al) ratio of C-(N-)A-S-H gel. Hence, changing the slag content and the alkali activator Ms appears to be an effective way to modify chemical bonding. Unlike the formation of portlandite near the PVA fiber surface in conventional SHCC, a high-Ca C-(N-)A-S-H phase was formed in the fiber-matrix ITZ of SHGC. This explains the higher chemical bonding energy found in SHGC compared to that in conventional SHCCs. Furthermore, the adhesion mechanism of the PVA molecule in reaction products was studied using MD simulation. The study suggests that the adhesion between PVA fiber and C-(N-)A-S-H gel is primarily due to electrostatic interactions rather than van der Waals interactions.
Based on the result of fracture properties of the matrix and fiber/matrix interface properties, the SHGC is then systematically developed following a micromechanics-based design approach. The experimentally-attained matrix and interface properties served as input for the numerical micromechanics model to simulate the crack bridging behavior. Through the micromechanical modeling, the optimal fiber length and volume were selected and the behavior of mixtures with different fiber/matrix combinations was predicted. With this approach, researchers and materials engineers can design and tailor future SHGC more efficiently than by using the commonly used trial-and-error method.
Finally, the environmental impact of the SHGC with the most promising performance was also evaluated. This evaluation was conducted using a cradle-to-gate life-cycle assessment (LCA) of SHGC compared to that of conventional SHCC materials. The developed SHGC demonstrates a very promising environmental profile. It has a significant reduction of the global warming potential (GWP) and a lower or similar total environmental impact compared to conventional SHCC materials. In addition, the results also provide recommendations for further improvements in mixture design for the future development of SHGC.
This study successfully developed a sustainable slag/fly ash-based SHGC with a lower carbon footprint than conventional SHCC. It is a good example of utilizing industrial by-products as secondary resources while contributing to a circular economy. Furthermore, this study helps to understand the fracture properties of AAMs. It also clarifies the adhesion mechanism of PVA fiber in AAMs. All of these provide promising guidance for researchers and engineers to design fiber-reinforced AAMs with the required fracture properties and interface bonding properties. In particular, it contributes to design and tailoring strategies for high-performance composites, such as SHGC, through proper mixture design.
The formulated requirements for the fabrication of bone scaffolds pose a serious challenge to the development of such synthetic biomaterials. In order to mimic the structure of living bony tissue, scaffolds need to be composed of a large number of small, interconnected unit-cells, thereby providing a large surface area for cell attachment and tissue ingrowth. In addition, the mechanical properties of the scaffold should match those of the native bone tissue. Too high a stiffness could lead to stress shielding and associated implant loosening, while a weak scaffold offers limited load-bearing capacity. Finally, synthetic biomaterials need to be biocompatible.
This thesis presents a number of strategies for the development of synthetic bone scaffolds using shape-shifting techniques. As compared with alternative approaches, such as additive manufacturing, self-folding materials allow for the employment of planar fabrication techniques to embed the initially flat material with a variety of surface-related functionalities. Examples of such surface-features are bactericidal or osteogenic nano-patterns. Upon activation, the initially flat construct folds to create complex 3D constructs with embedded surface-features, which are highly beneficial in the context of porous biomaterials.
The first two chapters of this thesis are related to fundamental aspects of shape-shifting materials. More specifically, in Chapter 2, we review the different mechanisms for the programming of shape-shifting within flat materials. In addition, we describe the development of analytical and computational models to study the theoretical stiffness limits of self-folding hinges (Chapter 3). We found a maximum effective stiffness of 1.5 GPa for shape-memory polymer self-folding elements.
In the second part of this thesis, we present the development of three different shape-shifting techniques. The first technique is based on the 4D printing of shape-memory polymer materials. During the extrusion of the filaments, the polymer chains align along the printing direction. This deformation is then stored as memory inside the structure of the material. Heating the as-printed construct allows for the relaxation of the programmed stress. Based on the alignment of the extruded filaments, different shape-shifting behaviors can be programmed. Both the fabrication of 2D-to-3D shape-shifting materials (Chapter 4) as well as the production of deployable materials and devices (i.e., 3D-to-3D shape-shifting) (Chapter 5) were studied.
In Chapter 6, we focus on the development of a purely mechanical shape-shifting method. By incorporating different kirigami patterns within the material, large amounts of elastic and permanent deformations can be programmed upon stretching the material. Subsequent release of the pre-stretch allows for the recovery of the elastic deformations, driving the shape-shifting of the material. The main advantage of such a mechanical approach is that it could be applied to many different materials.
The third shape-shifting technique is inspired by sheet metal forming processes (Chapter 7). Miniaturized automated folding devices were developed for the folding of cubic lattice structures. As a demonstration, metamaterials comprising 125 cubic unit-cells with a unit-cell dimension of 2.0 mm were fabricated. In contrast to conventional self-folding methods, sharp folds can be realized in metal sheets using the presented approach. Therefore, metamaterials with a high stiffness can be folded. In addition, a variety of surface-patterns can be incorporated into the initially flat sheets. Protected by a thin layer of coating, the applied surface-related functionalities remain undamaged during the folding process. Finally, a series of cell culture experiments were performed to demonstrate the ability of the folded functionalized materials to serve as a tissue engineering scaffold.
In general, the presented shape-shifting techniques are of relevance to a variety of applications, such as optical metamaterials and 3D electronics. However, the specific aim of this thesis is the development of self-folding materials that can be applied as tissue engineering scaffolds. Considering the listed requirements, the folding technique inspired by sheet metal forming meets the necessary requirements best. However, additional research towards further miniaturization of the scaffolds resulting from the different methods, as well as an increase in their stiffness, is required for application in clinical settings. In the case of the automatically folded scaffolds, the initial cell culture experiments showed promising results, and the proposed folding method can, indeed, serve as a platform for further biological testing.","self-assembly; 4D printing; metabiomaterials; Shape-shifting; self-folding","en","doctoral thesis","","","","","","","","","","","Support Biomechanical Engineering","","",""
"uuid:35343fa4-0dd3-4039-ac58-b2999a9aa2d0","http://resolver.tudelft.nl/uuid:35343fa4-0dd3-4039-ac58-b2999a9aa2d0","The practice and opportunities in re-operating dams for the environment","Owusu, A.G. (TU Delft Policy Analysis)","Slinger, J. (promotor); van der Zaag, P. (promotor); Mul, M. (copromotor); Delft University of Technology (degree granting institution)","2022","For most of the 20th century, the design and operation of dams prioritized traditional economic considerations such as hydropower generation, flood risk reduction and provision of water for irrigation and domestic use. This resulted in altered river flow regimes, degraded riverine ecosystems and ecosystem services, and biodiversity loss. Implementation of environmental flows (e-flows), freshwater flows for the environment, is a means to restore some of the benefits of naturally flowing rivers and halt the rapid deterioration of freshwater and estuarine habitats, flora and fauna. Since its early days in the 1940s, e-flows science has grown and there now exists a wide array of methodologies for establishing flow-ecology relationships. The concept of e-flows also has a firm place in many national water laws and policies across the world. Despite this progress, actual implementation of e-flows has not followed suit and remains limited.
This research was aimed at generating insights into how e-flows evolve from recommendation into practice and the trade-offs that are identified between conventional water uses and e-flows during conception or implementation. The study focused in particular on e-flows implementation through the re-operation of existing dams. This study addressed two major shortcomings in e-flows science, specifically, the lack of a global record of e-flows implementation and the lack of insight into why certain e-flow recommendations have been implemented while others have not.
The research followed an exploratory, sequential, mixed methods approach, beginning with a systematic literature review and a global survey of practical cases of dam re-operation for e-flows. A logic model of the process was used to develop a conceptual framework of how e-flows are implemented in practice. While the systematic literature review identified the inputs, activities and outputs of dam re-operation in successful cases, the global survey of stakeholders with first-hand experience of dam re-operation attempts revealed, through a statistical comparison of the survey responses of the two groups, how stalled attempts at dam re-operation significantly differed from successful attempts.
This extensive research phase looking at cases of dam re-operation across the globe formed the first part of this research and was then followed by a case study to investigate the synergies and trade-offs between water users when dam operations are changed to implement e-flows. The Akosombo and Kpong dams in the Lower Volta River, Ghana, were chosen as the case study. The choice of case study was partly informed by the findings from the systematic literature review and survey. Attempts at dam re-operation in this location have stalled despite it possessing some of the key characteristics of successful cases. It thus presented an interesting case for further investigation. While past studies had already developed e-flows for the Lower Volta, these were based on the natural flow paradigm: an e-flows design approach based on the natural, pre-dam flow regime of a river. An additional e-flow was designed based on the designer e-flows paradigm whereby components of a river’s hydrograph are compiled to meet a desired ecological outcome. Owing to the data scarce situation in the case study, a Bayesian Belief Network (BBN) was used to link river flows to the state of the Volta clam fishery, an important artisanal industry in the basin. Finally, a simulation-optimisation technique, Evolutionary Multi-Objective Direct Policy Search (EMODPS), was applied to the case study to determine the trade-offs and synergies between the environment and key water users in the Lower Volta Basin. The new e-flow recommendation developed for the Lower Volta River, together with the past recommendations based on the natural flow of the river, served as inputs to this trade-off analysis.
This research reveals that e-flow recommendations are usually implemented through a collaborative analytical process which makes use of existing supporting frameworks such as legislation, but also takes advantage of opportunities that may arise, such as flow experiments, to advance the process of dam re-operation for e-flows. The process is usually non-linear, and it is important to emphasize the local context, which makes each process of dam re-operation unique. A global database of successful e-flow implementations through dam re-operation has also been created. This records the inputs, activities, and outputs as well as the stakeholders involved and the e-flows implementation approaches in successful cases.
Moreover, with regard to stalled re-operation attempts, four hypotheses were derived for further study on why some attempts at dam re-operation are at an impasse, namely:
1. In undertaking scientific studies for the determination of e-flows, a consensus on the priorities, knowledge gaps, and solutions must first be reached together with local stakeholders.
2. Genuine, carefully designed consultations and negotiations between stakeholders can overcome hurdles encountered in the process.
3. Local-level legislation and policy on e-flows provide the enabling environment for dam re-operation for e-flows.
4. Scientists are important stakeholders in the process of dam re-operation, but should play a supportive role rather than drive the process.
Through the in-depth, context-dependent examination of a unique stalled case, the Lower Volta, this research demonstrated that a parsimonious, ecologically grounded designer e-flows assessment method using a BBN can be applied successfully in data-scarce areas. This resulted in an alternative designer e-flow recommendation for the Lower Volta River for low-flow releases during the Volta clam veliger larva and recruitment life stages from November to March. Two other complementary management strategies were also recommended for the Lower Volta: annual full breaching of the sandbar which regularly builds up at the Volta Estuary, and prohibition of sand winning from the river bed.
The multi-objective trade-off analysis of water users in the Lower Volta highlighted the dominance of hydropower in the river basin and quantified the amount by which firm hydropower demand from the Akosombo and Kpong dams would have to decrease for the implementation of e-flows under current and future climate scenarios. Notably, and curiously, both an increase and a decrease in annual inflows to the Akosombo Dam reduce the trade-off and create synergies between e-flows and hydropower generation. This is because climate change leading to increased annual inflows to the Akosombo Dam results in increased water availability for both hydropower and e-flows while climate change resulting in lower inflows provides the opportunity to strategically deliver dry season e-flows, that is, reduce flows sufficiently to meet low flow requirements for key ecosystem services such as the clam fishery.
This research has generated knowledge on the process of dam re-operation for e-flows implementation; the enabling factors for successful dam re-operation; the hurdles typically encountered and how they have been overcome in successful cases; as well as inter-sectoral trade-offs that must be made between e-flows and conventional water uses in delivering e-flows in a unique case study. These insights inform attempts to scale up efforts in e-flows implementation through the sustainable and equitable operation of dams for people and the environment.
In model-based robot control, kinematics forms the fundamental knowledge used to build the mathematical connection between control parameters and robot status. Unlike rigid robots, whose kinematics are well studied and have fast (analytical) solutions, effective and general kinematics computing methods for soft robot systems are still lacking. From the modeling perspective (i.e., forward kinematics (FK)), predicting the whole-body shape of soft robots under actuation is a non-trivial task, since the non-linear deformation of robot bodies and the hyperelastic properties of soft materials create challenges in balancing accuracy and computational cost in existing FK models. The lack of modeling tools further creates difficulties in developing advanced algorithms for inverse kinematics (IK) and, thereafter, statics control. This Ph.D. project aims to develop a general soft robot kinematics computing pipeline that can contribute to the effective control of soft robot systems to accomplish given tasks.
A fast numerical simulator for soft robots is first presented in this thesis, in which the shape of the robot body is discretely represented by volumetric elements. The development of this simulator was inspired by the fact that the hard-to-model actuation input (e.g., cable force, pressure, or electric field) in soft robot systems can be directly modeled as, or transformed into, a shape change of the actuation elements. An optimization pipeline was built to minimize the elastic energy in the body elements and compute the deformed shape with the actuation parameters as input. As a general numerical simulator, it supports the modeling of various types of actuation, and hyperelastic soft material properties are integrated. A fast collision checking and response model was added to predict the behavior of soft robots under robot-robot collisions and robot-environment interactions. The numerical computing process of our simulator shows good convergence, even for soft robots with large (rotational) deformation in their bodies, and can therefore balance computational cost and model precision. In comparison to commercial finite element analysis (FEA) software, this geometry-based simulator demonstrates a 20-fold faster computing speed, and the simulation result closely matches the shape captured from the physical setup.
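The core idea of computing a deformed shape by minimizing elastic energy under an actuation input can be sketched in a toy form. The chain-of-joints model, the cable-as-constant-moment actuation, and all numerical values below are illustrative assumptions, far simpler than the volumetric-element simulator of the thesis.

```python
import math

# Sketch of a geometry-based FK solver: the robot is a planar chain of rigid
# segments with rotational springs at the joints; a cable tension is modeled
# as a constant moment tau at every joint (an assumed, simplified actuation
# model). The deformed shape is the minimizer of the total elastic energy
# minus the work done by the actuation.

N, L, k = 10, 0.01, 0.05   # segments, segment length [m], joint stiffness [Nm/rad]

def energy(thetas, tau):
    # Elastic energy stored in the joints minus the cable work.
    return sum(0.5 * k * t * t - tau * t for t in thetas)

def solve_shape(tau, iters=2000, lr=0.5):
    # Plain gradient descent on the joint angles; each angle converges
    # to the analytical minimizer tau / k.
    thetas = [0.0] * N
    for _ in range(iters):
        thetas = [t - lr * (k * t - tau) for t in thetas]
    return thetas

def tip_position(thetas):
    # Forward kinematics: accumulate joint angles along the chain.
    x = y = phi = 0.0
    for t in thetas:
        phi += t
        x += L * math.cos(phi)
        y += L * math.sin(phi)
    return x, y

thetas = solve_shape(tau=0.01)   # each angle converges to 0.01 / 0.05 = 0.2 rad
print(tip_position(thetas))
```

The point of the sketch is the structure of the pipeline: actuation parameters enter only through the energy function, and the shape follows from energy minimization rather than from a closed-form kinematic map.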
The IK problem of soft robots is defined as computing the actuation parameters that drive a soft robot to accomplish a given task. In this thesis, task-specific IK objectives (which are mainly geometrically defined) are formulated, and the optimal actuation parameters are determined using gradient-based iteration. Through the developed simulator, the gradients of the objective functions are estimated by numerical differentiation. Sequences of motion can be successfully computed using this IK solver, and its efficiency has been verified in two case studies: path following and object pick-and-place.
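The gradient-based IK loop with numerically estimated gradients can be sketched as follows. A two-joint rigid arm stands in for the simulator-based FK of a soft robot here; the arm, the target, and all step sizes are illustrative assumptions.

```python
import math

# Sketch of a gradient-based IK solver where the objective gradient is
# estimated by forward finite differences through a black-box forward model.
# The two-link planar arm is an assumed stand-in for a soft robot's FK.

L1, L2 = 1.0, 1.0

def fk(q):
    # Black-box forward model: joint parameters -> end-effector position.
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def objective(q, target):
    # Geometrically defined task objective: squared distance to the target.
    x, y = fk(q)
    return (x - target[0]) ** 2 + (y - target[1]) ** 2

def ik(target, q=(0.3, 0.3), steps=500, lr=0.1, h=1e-6):
    q = list(q)
    for _ in range(steps):
        grad = []
        for i in range(len(q)):
            qp = list(q)
            qp[i] += h   # perturb one parameter for the forward difference
            grad.append((objective(qp, target) - objective(q, target)) / h)
        q = [qi - lr * gi for qi, gi in zip(q, grad)]
    return q

q = ik(target=(1.2, 0.8))
print(fk(q))   # should land close to the target
```

Because each gradient component costs one extra forward evaluation, the speed of the underlying FK model directly bounds the speed of this IK loop, which motivates both the fast simulator and the learned surrogates discussed later.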
For the final stage of this Ph.D. project, the speed and precision of the IK solver are enhanced through machine learning. Fully connected neural networks are used to fit the FK function and the Jacobians of the IK-related objectives. Thanks to the high efficiency of the networks' forward propagation (in analytical form), the gradient-based IK solver can run in real time. Sim-to-real transfer learning is applied to close the reality gap and make the computed actuation parameters more precise on physical setups. Applying sim-to-real transfer learning also benefits the efficiency of the data generation process: in our pipeline, a large volume of training data is first generated in a virtual environment using the fast simulator; thereafter, a lightweight network layer is employed to map the simulation results to the physical hardware. As a result, the amount of physical data needed to train a network that accurately computes IK solutions can be reduced by 60%.
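The sim-to-real idea of a lightweight correction fitted on few physical samples can be sketched with a deliberately minimal stand-in: a 1-D affine map fitted by least squares instead of the thesis' network layer. The synthetic simulator bias (scale 0.9, offset -0.002) and all data are illustrative assumptions.

```python
# Sketch of the sim-to-real correction: abundant data comes from the
# simulator, and a small mapping fitted on a handful of physical samples
# corrects simulated outputs. A 1-D affine fit stands in for the thesis'
# lightweight network layer; all numbers are illustrative assumptions.

sim  = [0.010, 0.020, 0.030, 0.040, 0.050]   # simulated tip deflections [m]
real = [0.9 * s - 0.002 for s in sim]        # assumed physical measurements

def fit_affine(xs, ys):
    # Closed-form least squares for y = a * x + b.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

a, b = fit_affine(sim, real)
corrected = a * 0.025 + b   # correct a new simulator prediction
print(a, b, corrected)
```

The design choice mirrored here is that only the small correction is learned from expensive physical data, while the bulk of the input-output behavior is supplied by cheap simulated data.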
In conclusion, this dissertation presents a pipeline that computes kinematics solutions for soft robots. A fast geometry-based simulator is presented to contribute to building an iteration-based numerical IK solver. Machine learning is applied to accelerate IK computing to real-time speed with enhanced precision. Task-specific kinematics control is realized in different soft robot systems to verify the effectiveness of the proposed method. The algorithms and code presented in this Ph.D. thesis are open-sourced for researchers and designers, and have the potential to become a general tool for designing and controlling soft robots. Future studies on the design optimization and high-level control of soft robots can all benefit from the research outcomes of this project.","Soft robot; kinematics; geometry computing; Machine Learning","en","doctoral thesis","","978-94-6366-634-3","","","","","","","","","Materials and Manufacturing","","",""
"uuid:93fc3c7b-b94a-4792-9cfb-4023ebebc31a","http://resolver.tudelft.nl/uuid:93fc3c7b-b94a-4792-9cfb-4023ebebc31a","Applications of supersaturated oxygenation to biological wastewater treatment with high biomass content","Kim, S. (TU Delft BT/Environmental Biotechnology)","Brdjanovic, Damir (promotor); Garcia, H. (copromotor); Delft University of Technology (degree granting institution)","2022","The operation of membrane bioreactors (MBRs) at high mixed liquor suspended solids (M1SS) concentrations (higher than 15 g 1-1) may enhance the loading rate treatment capacity, while minimizing even further the MBR system's footprint. However, oxygen transfer in wastewater treatment is significantly influenced by the M1SS concentrations. Particularly, conventional diffused aeration systems (fine and coarse bubble diffusers) exhibit a poor oxygen transfer in wastewater treatment applications; particularly, when operating at M1SS concentrations higher than 15 g 1-1• The oxygen transfer performance of the supersaturated dissolved oxygen (SDOX) system was evaluated in activated sludge with M1SS concentrations from 4 to 40 g 1-1 as a promising technology for uncapping such limitation. The operational conditions exerted by the SDOX technology did not affect the concentration of active biomass. Moreover, the biological performance of the MBR was not affected by the introduction of the SDOX technology. In addition, the microbial community was relatively stable although some variations at the family and genus level were evident during each of the study phases. Indeed, the membrane filtration performance was affected by the SDOX technology. A combination of several factors ( certainly including particle size distribution of sludge) resulted in the serious membrane fouling imposed by the high-pressure and shear effects. However, this could be influenced due to the scale of the laboratory-based research. 
More research would be needed to confirm those findings.","","en","doctoral thesis","IHE Delft Institute for Water Education","978-90-73445-44-4","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:21994a92-e365-4679-b6ac-11a2b70572b7","http://resolver.tudelft.nl/uuid:21994a92-e365-4679-b6ac-11a2b70572b7","Topology optimization of compliant mechanisms with multiple degrees of freedom","Koppen, S. (TU Delft Computational Design and Mechanics)","van Keulen, A. (promotor); Langelaar, Matthijs (promotor); Delft University of Technology (degree granting institution)","2022","High-tech equipment critically relies on the precise and reliable fine alignment of components such as mirrors and lenses for calibration and adaptation of instrumentation. To meet the ever-increasing requirements on precision, engineers typically resort to monolithic compliant mechanisms. These mechanisms gain mobility by deformation of the material, eliminating any friction and backlash. The design of compliant mechanisms with multiple degrees of freedom, so-called multi-DOF compliant mechanisms, is complex, and the resulting designs are sensitive to exhibit crosstalk between the actuation modes. The manual manipulation of coupled mechanisms is unintuitive and time-consuming, and automated actuation requires complex control scenarios. Computational approaches can greatly improve designing multi-DOF compliant mechanisms without such undesired characteristics. Topology optimisation methods take a mathematical approach to designing a structure. Such methods optimize the material layout in a design domain for a given performance measure, considering a provided set of boundary conditions, loads and design constraints. Topology optimization methods have demonstrated capable as synthesis tools for designing single-DOF compliant mechanisms. The development of topology optimisation approaches for solving multi-DOF compliant mechanism design problems is relatively undeveloped and comes with severe challenges. 
These design problems typically involve many different loading conditions and stringent design requirements, increasing the complexity of the optimisation problem and the required computational effort. Available formulations only partly address these issues and tend to be complex to understand, implement, and use, or have limited applicability. This dissertation focuses on developing topology optimisation approaches for synthesising multi-DOF compliant mechanisms with relatively short strokes, which justifies the use of linear elasticity theory. The objective is the development of a topology optimisation problem formulation that is simple to understand, implement and use, applicable to a wide range of problems, and relatively computationally efficient. When parts of the structure are forced into a prescribed motion, the energy contained in a compliant system is an indirect measure of the resistance to this motion. One can thus capture the characteristic stiffness of arbitrarily complex kinematics using a single energy measure. The main discovery of this study is that topology optimisation problem formulations based on specific combinations of such energy measures provide a unique combination of simplicity, versatility and computational efficiency. While similar to the classic compliance minimisation problem, the proposed generalisation for compliant mechanism problems retains similar advantageous optimisation properties. It minimises the number and strictness of design constraints, simplifying the optimisation problem. Despite these advantages, such integrated measures come with the loss of exact control over individual displacements and stiffnesses. This dissertation demonstrates the broad applicability of this formulation to the design of high-resolution decoupled multi-DOF compliant mechanisms, as well as flexures and shape-morphing structures.
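The energy measure under prescribed motion can be made concrete with a tiny linear-elastic sketch: for a prescribed motion u, the stored energy E = 0.5 u^T K u quantifies the stiffness felt along that motion. The 2x2 stiffness matrix below is an illustrative assumption, not a matrix from the dissertation.

```python
# Sketch of the energy measure: when degrees of freedom are forced into a
# prescribed motion u, the stored elastic energy E = 0.5 * u^T K u is a
# scalar stiffness measure along that motion. K is an assumed example of a
# condensed stiffness matrix at two ports of a compliant structure.

K = [[3.0, -1.0],
     [-1.0, 2.0]]

def strain_energy(K, u):
    # E = 0.5 * u^T K u, written out with plain lists.
    Ku = [sum(K[i][j] * u[j] for j in range(len(u))) for i in range(len(K))]
    return 0.5 * sum(ui * kui for ui, kui in zip(u, Ku))

# Stiffness felt along each of two prescribed unit motions:
print(strain_energy(K, [1.0, 0.0]))
print(strain_energy(K, [0.0, 1.0]))
```

In an optimisation formulation, such scalar energies can be maximised (to stiffen a support direction) or minimised (to free a motion direction), which is what lets a few energy measures encode complex kinematic requirements.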
Furthermore, this dissertation studies the impact of design-for-additive-manufacturing constraints on the optimization of compliant mechanisms. A critical observation for designing practically relevant compliant mechanisms is that design-for-additive-manufacturing considerations predominantly impact thin flexural elements. One may exploit this local impact to reduce the typically negative effect of design-for-additive-manufacturing constraints on the performance of the optimised compliant system. This dissertation introduces a computationally efficient approach to redesign the most critical regions of compliant mechanisms considering design-for-additive-manufacturing constraints while minimizing the negative influence on the mechanism performance. This redesign approach allows for high-resolution design and accurate modelling of sensitive flexures, providing solutions that are superior to imposing the same restrictions on the entire design domain, without substantial additional computational cost. This dissertation also addresses computational effort. The relationship between input and output ports defines the working principle of a compliant mechanism. As a result, the response functions standard in multi-DOF design problems are typically a function of the motion at those ports, and the loads often apply to the same ports. This property provides the possibility to reduce computational costs. Such optimisation problems are typically characterised by multiple combinations of boundary and loading conditions and many constraint functions, substantially increasing the computational cost of calculating the response functions and the accompanying sensitivity analysis. By exploiting the characteristics of the multi-DOF compliant mechanism design problem and using static condensation, we demonstrate increased computational efficiency in solving problems with different boundary conditions.
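Static condensation itself reduces a stiffness system to the port degrees of freedom by eliminating internal ones via a Schur complement. The sketch below shows the operation on an assumed 3-DOF system with a single internal DOF; the matrix values are illustrative, not taken from the dissertation.

```python
# Sketch of static condensation: internal (slave) degrees of freedom are
# eliminated so that the response only involves the master/port DOFs.
# With one internal DOF the Schur complement is scalar. All matrix values
# below are illustrative assumptions for a symmetric stiffness system.

K_mm = [[4.0, 0.0],
        [0.0, 4.0]]      # port-port stiffness block (2 master DOFs)
K_ms = [[-2.0],
        [-2.0]]          # port-internal coupling block
K_ss = 4.0               # scalar block for the single internal DOF

def condense(K_mm, K_ms, K_ss):
    # Schur complement K_c = K_mm - K_ms * K_ss^-1 * K_sm (K_sm = K_ms^T).
    n = len(K_mm)
    return [[K_mm[i][j] - K_ms[i][0] * K_ms[j][0] / K_ss
             for j in range(n)] for i in range(n)]

Kc = condense(K_mm, K_ms, K_ss)
print(Kc)   # the ports are now coupled through the eliminated internal DOF
```

The condensed matrix captures the exact static behavior at the ports, so repeated response evaluations during optimisation only need the small condensed system rather than the full one.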
Although static condensation is a well-known technique, its use and corresponding advantages have not been studied in depth in this context. The sensitivities of the procedure can be calculated without solving additional systems of equations of high dimensionality, making this approach very suitable for use in gradient-based optimisation methods. In addition to problems with varying boundary conditions, there is significant potential for reducing the computational cost of problems involving similar boundary conditions, which are common in multi-DOF compliant mechanism design problems. Although not commonly detected, such problems contain linear dependencies between the encountered applied loads and adjoint loads. Manually keeping track of such dependencies becomes tedious for real-world design problems, which become increasingly involved. This dissertation introduces a linear-dependency-aware solver that can efficiently detect such linear dependencies between all loads to automatically avoid solving unnecessary equations. In summary, insights and tools are provided to efficiently and effectively (re)design practically relevant high-resolution three-dimensional multi-DOF compliant mechanisms. Energy-based measures under prescribed motion scenarios offer a versatile and straightforward basis for optimisation problem formulations, allowing quantitative control over mechanism stiffness and motion transmission. We envision that such problem formulations will find widespread use in industry to design complex compliant systems such as implants, optical mounts and manipulation stages.","Topology optimization; compliant mechanisms; Computational mechanics","en","doctoral thesis","","978-94-6366-627-5","","","","","","","","","Computational Design and Mechanics","","",""
"uuid:3b180cf0-8ff7-4dba-a76a-09709271141a","http://resolver.tudelft.nl/uuid:3b180cf0-8ff7-4dba-a76a-09709271141a","PHA biosynthesis, recovery, and application: A circular value chain for production of self-healing concrete from waste","Vermeer, C.M. (TU Delft BT/Environmental Biotechnology)","Kleerebezem, R. (promotor); Jonkers, H.M. (promotor); Delft University of Technology (degree granting institution)","2022","Polyhydroxyalkanoates (PHA) are a family of biopolymers produced intracellular
by a range of different bacteria.PHA have attracted widespread attention as
an environmental friendly replacement of fossil-based polymers, because they
have thermoplastic and/or elastomeric properties, and are also biobased and
biodegradable.Moreover, the properties of PHA can be adjusted by tuning the
monomeric composition of the polymer.Currently, more than 150 different monomers
have been discovered which can form the building blocks of the PHA polymer.
PHA production can be divided into three parts: biosynthesis, recovery, and application. The first part is the biotechnological production of bacteria with PHA inside their cells. First, an organic substrate can be anaerobically converted into volatile fatty acids (VFA). These VFA form the preferred substrate for PHA production by bacteria in the next steps. An approach to make PHA biosynthesis cost-effective is to use organic waste streams as substrate in combination with mixed microbial communities. This reduces the relatively large costs for raw materials and for sterilization of the equipment. Thus far, at least 19 pilot projects have been operated to produce PHA from municipal or industrial organic waste streams using this approach. In nearly all cases, the random copolymer poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV) was produced, indicating that the biosynthesis of this specific polymer is reasonably well-established. Research on the production of other types of PHA from waste streams is still scarce (Chapters 2 and 3).
The main obstacles that prevent the large-scale industrial implementation of waste-derived PHA are the recovery and application steps. First of all, the PHA recovery costs are responsible for a large fraction of the total production cost due to high energy and chemical demand. Another challenge of the PHA recovery step is to achieve a high and consistent product quality when waste is used as substrate. More research is required to accurately predict the relationship between raw material input, process parameters, and the final mechanical properties of the produced PHA (Chapter 4).
For the application of PHA, it appeared that introducing waste-derived PHA into the conventional plastic market is a lengthy and complicated process. This is mainly caused by a lack of distribution channels, a lack of experience in bioplastic processing, and by the small scale at which PHA is currently produced compared to petrochemical plastics. Therefore, the market entry of waste-derived PHA could have a higher chance of success if the initial aim is not to produce bioplastics. Instead, the focus should be on new applications where minor fractions of impurities and small variations in polymer characteristics are not regarded as problematic. Such a niche application can stimulate the introduction of waste-derived PHA into the market, while avoiding the obstacles and complexity of the conventional plastic industry. Moreover, these applications can potentially exploit the unique properties of PHA (e.g., biodegradability) more effectively (Chapter 5).
The aim of this thesis was to optimize and balance waste-derived PHA biosynthesis with recovery, and to target a niche application of PHA in self-healing concrete. To this end, research was conducted on all parts of the value chain from waste to self-healing concrete: PHA biosynthesis (Chapters 2 and 3), PHA recovery (Chapter 4), and the application of PHA (Chapter 5).
Chapter 2 investigates isobutyrate as the sole carbon source for a microbial enrichment culture, in comparison to its structural isomer butyrate. Isobutyrate is a VFA appearing in multiple waste valorization routes, such as anaerobic fermentation, chain elongation, and microbial electrosynthesis, but had never been assessed individually for its PHA production potential. The results reveal that the isobutyrate enrichment has a very distinct character regarding microbial community development, PHA productivity, and even PHA composition. Although butyrate is a superior substrate in almost every aspect, this research shows that isobutyrate-rich waste streams have a noteworthy PHA-producing potential. The main finding is that the dominant microorganism, a Comamonas sp., is linked to the production of a unique PHA family member, poly(3-hydroxyisobutyrate) (PHiB), up to 37% of the cell dry weight. This chapter is the first scientific report identifying microbial PHiB production, demonstrating that mixed microbial communities can be a powerful tool for the discovery of new metabolic pathways and new types of polymers.
In Chapter 3, another uncommon VFA, octanoate, was examined for PHA production. Several enrichment strategies were tested to select for a community with a high medium-chain-length PHA (mcl-PHA) storage capacity when feeding octanoate. Based on the analysis of the metabolic pathways, the hypothesis was formulated that mcl-PHA production is more favorable under oxygen-limited conditions than short-chain-length PHA (scl-PHA). This hypothesis was confirmed by bioreactor experiments showing that oxygen limitation during PHA accumulation resulted in a higher fraction of mcl-PHA over scl-PHA (i.e., a PHA content of 76 wt% with an mcl-fraction of 0.79 with oxygen limitation, compared to a PHA content of 72 wt% with an mcl-fraction of 0.62 without oxygen limitation). Physicochemical analysis revealed that the extracted PHA could be separated efficiently into a hydroxybutyrate-rich fraction and a hydroxyhexanoate/hydroxyoctanoate-rich fraction. The ratio between the two fractions could be adjusted by changing the environmental conditions. Almost all enrichments were dominated by Sphaerotilus sp. This chapter is the first scientific report that links this genus to mcl-PHA production, demonstrating that microbial enrichments can be a powerful tool to explore mcl-PHA biodiversity and to discover novel industrially relevant strains.
In solvent extraction of PHA, the choice of solvent has a profound influence on many aspects of the process design.
Chapter 4 provides a framework to perform a systematic solvent screening for PHBV extraction. First, a database was constructed of 35 solvents that were assessed according to six different selection criteria. Then, six solvents were chosen for further experimental analysis: 1-butanol, 2-butanol, 2-ethylhexanol (2-EH), dimethyl carbonate (DMC), methyl isobutyl ketone (MIBK), and acetone. The main findings are that the extractions with acetone and DMC obtained the highest yields (91-95%) with reasonably high purities (93-96%), where acetone had the key advantage that water can be used as an anti-solvent. Moreover, the results provided new insights into the mechanisms behind PHBV extraction by pointing out that at elevated temperatures the extraction efficiency is less determined by the solvent's solubility parameters and more determined by the solvent size. Although case-specific factors play a role in the final solvent choice, we believe that this chapter provides a general strategy for the solvent selection process.
In Chapter 5, a niche application for waste-derived PHA is proposed and tested: using it as a bacterial substrate in self-healing concrete. Self-healing concrete is an established technology developed to overcome the inevitable problem of crack formation in concrete structures by incorporating a so-called bacteria-based healing agent. Currently, this technology is hampered by the costs involved in the preparation of this healing agent. This chapter provides a proof-of-concept for the use of waste-derived PHA as a bacterial substrate in the healing agent. The results show that a PHA-based healing agent, produced from PHA unsuitable for thermoplastic applications, can induce crack healing in concrete specimens, thereby reducing the water permeability of the cracks significantly compared to specimens without a healing agent. For the first time, these two emerging fields of engineering, waste-derived PHA and self-healing concrete, both driven by the need for environmental sustainability, are successfully linked. We foresee that this new application will facilitate the implementation of waste-derived PHA technology, while simultaneously supplying circular and potentially more affordable raw materials for self-healing concrete.
Chapter 6 provides a general discussion in which overarching topics are selected for thorough analysis. Finally, recommendations for further research are proposed and an outlook for the field is given.
Focusing on this knowledge gap, the aim of this work is to develop an understanding of the effect of the casting parameters on the meso-level structure of cast glass, and thereupon of the relationship between this meso-level structure and the strength, stiffness and fracture resistance of cast glass components. Towards this aim, the dissertation adopts an experimental approach based on physical prototyping by kiln-casting, and destructive and non-destructive testing. The experimental work shows that by kiln-casting, a larger variety of chemical compositions can be cast, even at relatively low processing temperatures. As a consequence, a broad range of mechanical properties arises, especially when waste cullet is employed. Depending on the casting parameters, combinations of different defects, grouped in meso-level structures, are commonly found in cast glass, yet these can often be tolerable when situated in the glass bulk. The dissertation highlights the potential of recycling-by-casting of currently challenging-to-recycle glass waste into reliable and aesthetically unique structural components, and the advantages of engineering composite cast glasses. It also underlines the need for manufacturing guidelines, test data, product certifications and quality control protocols for the successful implementation of cast glass in the built environment.
Training has always been the traditional answer to help pilots deal with flying vehicles and scenarios that, without adequate preparation, would otherwise be unforgiving. Due to the risks and costs involved in training for such critical circumstances, the exclusive use of in-flight training is untenable, especially for helicopters. A combination of simulator and in-flight training is the solution adopted to reduce accident rates and human fatalities in a safe and efficient manner and to fulfill the ever-harsher mandate for flawless performance required by the military domain. Inevitably, the use of simulation to support pilot training brings forward the issue of skills and performance transfer from the simulator to the actual aircraft, which is addressed in this thesis in relation to helicopters.
The primary focus of flight simulation transfer-of-training research is to assess how learning a task in a flight simulator affects the trainee's performance capabilities in the same task in the actual aircraft. To explicitly measure the transfer of behavior learned in a certain setting (e.g., a simulator) to the evaluation setting of interest (e.g., a real aircraft), transfer-of-training experiments are one of the few available methods for direct evaluation of training effectiveness. To measure pilot transfer of skills, at least two groups of participants are required: the speed of learning in the actual aircraft by one (or more) ""experimental"" group(s), previously trained in the simulator, needs to be compared with the learning performance of a ""control"" group that received no special previous training. While this design enables direct assessment of the effectiveness of a simulator, it requires strictly balanced groups according to participants' relevant prior training and experience to deliver meaningful results.
Several variations of this basic transfer model, named a true-transfer design, have also been proposed. The most popular is the simulator-to-simulator transfer model, also known as quasi-transfer design. In quasi-transfer experiments, participants are not transferred to the real-world setting, but to a different, often more realistic or enhanced, simulation environment. The quasi-transfer paradigm relies on the assumption that the more realistic simulator acts as a valid replacement for the actual aircraft. Although this is a strong assumption, its effectiveness for evaluating skill transfer is corroborated by experimental evidence. Furthermore, a quasi-transfer design avoids the costs, hazards, and scheduling hindrances (e.g., interruptions due to bad weather) of a true-transfer experiment and offers the possibility of safely investigating dangerous situations such as engine failures. Another issue arising from true-transfer studies (and from flight tests in general) is the reliability of the performance measurements in the real-world setting. Moreover, there are inevitable psychological differences in a pilot's mindset between training in a simulator or in the actual aircraft. This is not necessarily a disadvantage from a training perspective, because relieving the trainee of the stress and the workload deriving from auxiliary duties (e.g., safety and flight regulation aspects, communication, periodic systems monitoring, etc.) enables devoting more mental resources to learning.
For this thesis, three quasi-transfer-of-training experiments were conducted to test the effectiveness of flight simulator training for two different helicopter tasks: hover and autorotation. The ability to hover, i.e., to remain in a nearly stationary flight condition, is the main capability that differentiates helicopters from fixed-wing aircraft. The ability to autorotate, i.e., to keep the rotor spinning by means of the airflow passing through it, is an essential emergency maneuver that often enables helicopter pilots to safely reach the closest suitable landing site in the event of an engine failure.
These two maneuvers were not randomly chosen. The choice was based on the fact that hover and autorotation pertain to different phases of the helicopter pilot training syllabus. While both maneuvers need to be mastered by helicopter pilots, hover is generally the very first maneuver that student pilots learn to perform, whereas autorotation is practiced only when the trainee demonstrates a sufficient level of proficiency in maintaining/controlling the airspeed and the rotorspeed. Therefore, hover and autorotation can be characterized as a ""basic"" and an ""advanced"" maneuver, respectively. Furthermore, hover is performed in normal operating conditions, whereas an autorotation represents an abnormal mode of operation for helicopters and is thus performed only in emergency circumstances. On the other hand, hover is performed by helicopter pilots on a daily basis (or at least every time they fly). Fortunately, engines nowadays are highly reliable and seldom fail, meaning that real power-out autorotations are not performed often. However, to be prepared for a potential occurrence, simulated engine failures (generally with a power recovery, i.e., terminating in a hover) are practiced during recurrent training and proficiency checks. It is therefore evident that issues in simulator training of the hover maneuver need to be assessed especially in relation to novices (ab-initio training), while those in simulator training of the autorotation maneuver require a focus on experienced pilots (recurrent training).
The type of maneuver (e.g., basic or advanced), the operating condition (e.g., normal or abnormal mode of operation), and the trainees' characteristics (e.g., novice or experienced pilots) are all factors that play a role in the level of simulator fidelity needed for effective training. In contrast to the unquestioning and unceasing pursuit of high fidelity, which is typical of the simulation industry and is also supported by current regulations for flight simulator training devices, there is increasing evidence that adding more fidelity beyond a certain point results in a diminished degree of transfer of skills, especially for nonexpert pilots. Indeed, high fidelity also means high complexity, which generally requires more cognitive effort, thus increasing the trainee’s workload, which may, in turn, impede simulator learning.
To gain more clarity on the relation between fidelity and training effectiveness, a first quasi-transfer-of-training experiment was conducted, in which the simulator's objective fidelity (i.e., the quality of the cueing systems) was the independent variable. Two groups of task-naïve learners (a total of twenty-four participants) underwent a hover part-task training program, formulated according to Cognitive Load Theory, an instructional design theory that reflects the way humans process information. The experimental group first trained in a low-fidelity simulator (a Computer Based Trainer at the Max Planck Institute for Biological Cybernetics) and then transferred to a high-fidelity setting (the CyberMotion Simulator at the same institute), while the control group received all its training in the high-fidelity simulator. The two groups were balanced according to participants' manual control skills, which were evaluated through a pre-experimental aptitude test (a two-axis compensatory tracking task). During the evaluation phase, which both groups performed in the high-fidelity simulator, no statistically significant differences were found between the two groups in any of the dependent measures. Of course, this does not directly imply that the two simulators are equally effective, as the hover part-task training program likely had a mitigating effect, supporting the idea that a lack of simulator objective fidelity can be compensated for by the use of instructional design (i.e., a proper training program tailored to the trainees' needs). This hypothesis can be verified in future experiments using a third group of task-naïve learners, trained with a different hover training program in the low-fidelity simulator and then transferred to the high-fidelity setting.
This thesis also describes two quasi-transfer-of-training experiments that focused on autorotation and had the same setup (the SIMONA Research Simulator at Delft University of Technology), but used two different helicopter flight mechanics models, characterized by a different level of fidelity. The lower-fidelity model was chosen to gain a simple understanding of the flight dynamics in autorotation, which could then be more easily extended to a higher-fidelity model. These experiments, in which the helicopter dynamics were chosen as the independent variable, were motivated by an example of in-flight-to-in-flight negative transfer of training reported in several helicopter accidents. Indeed, many engine failure accidents result from an apparent loss in rotor performance (different helicopter dynamics), which is unexpected for pilots who only practiced autorotations with a power recovery (i.e., terminating in a hover). This is because, for helicopters with free-turbine engines, the engine still transmits some power to the rotor even in a ground-idle setting, so the rotor performance experienced during practice differs from that of a real power-out autorotation. This is a clear example of in-flight-to-in-flight negative transfer of training: practicing power-recovery autorotations (task A) interferes with learning or performing real power-out autorotations (task B) for helicopters with free-turbine engines, because there is a crucial mismatch between the helicopter dynamics characteristics in the two task situations. Here, a pilot's mental model of the helicopter is not representative of the actual helicopter, which requires a different control strategy than the one learned during training.
Experienced helicopter pilots participated in the two quasi-transfer-of-training experiments on autorotation. They were divided into two groups, which were balanced according to participants' background (license type) and experience (flight hours), and performed a straight-in autorotation maneuver with two different helicopter dynamics presented in a different sequence. The two dynamics used in the experiments were selected to require a different level of pilot control compensation. To this end, a sensitivity analysis on the helicopter eigenmodes was performed to understand which design parameters control the autorotative flare index (a metric that evaluates autorotative performance in terms of available energy over required energy) and thus influence the helicopter dynamics in autorotation. This was achieved through the structural evaluation and comparison of the helicopter natural modes of motion in steady-descent autorotation. Thirty-two configurations were compared by individually varying the main rotor blade chord, the main rotor radius, the main rotorspeed, and the helicopter weight from the baseline value (Bo-105 helicopter) to obtain eight different values of the autorotative flare index, spanning from 5 to 40 ft³/lb. This range was chosen after comparing the autorotative flare indices of various existing helicopters. Among these configurations, the two requiring the most and the least pilot control compensation were selected.
In the first experiment on autorotation, fourteen pilots performed the straight-in autorotation maneuver controlling a 3-degrees-of-freedom (DOF) longitudinal dynamics + rotorspeed DOF model. Ten pilots performed the same task with a 6-DOF rigid-body dynamics + rotorspeed DOF model in the second experiment. In both experiments, clear positive transfer was found from the most to the least demanding helicopter dynamics, but not in the opposite direction. This was observed especially for the rate of descent at touchdown, which is considered the key indicator of a smooth landing. The outcome of these two experiments suggests the need to update the current simulator training syllabus for autorotation to include a wide range of helicopter configurations with different handling characteristics. Such configurations can be obtained, for example, by considering different models of the same helicopter family, giving the trainee the opportunity to become familiar with helicopters of different sizes, dynamics, and ""feel"". This can help inexperienced pilots better understand that an autorotation is not a ""by-the-numbers"" procedure and that the pilot's adaptability and judgement should always play a prominent role in the accomplishment of the task.
To strengthen the experiments on autorotation, a thorough analysis was conducted to investigate the effects of the rotorspeed degree of freedom in autorotation on the classical rigid-body modes. Although the developed 3-DOF and 6-DOF models are characterized by a different level of fidelity, good agreement in terms of the stability characteristics of the longitudinal modes of motion was found between the two models. In particular, the phugoid and the heave-subsidence modes are strongly affected by the additional rotorspeed degree of freedom, meaning that autorotation requires a different stabilization strategy from the pilot with respect to straight-and-level flight. In contrast, the pitch subsidence in both models and the lateral-directional modes in the 6-DOF rigid-body helicopter model do not change significantly in steady-descent autorotation with respect to straight-and-level flight.
In conclusion, this thesis provides enhanced insight into helicopter pilot training in flight simulators by addressing two critical training tasks, hover (Part I of this thesis) and autorotation (Part II of this thesis), that represent two of a helicopter's most distinctive capabilities. With these new insights, this thesis lays the foundations for an enhanced understanding of the future requirements for helicopter pilot training in flight simulators, which will become even more important considering the current trends towards Urban Air Mobility. Indeed, the transition from helicopters as a niche sector in the aerospace industry to the widespread future use of personal aerial vehicles (PAVs) based on rotorcraft concepts needs to be accompanied by a disruptive change in aviation regulations, encompassing every aspect of safety, including training. Even though these future PAVs will be characterized by a high level of automation, human operators will continue to play an important role in the safe operation of the flight, hence raising the need to develop training requirements for PAV pilots.","helicopter dynamics; hover; autorotation; flight simulation; transfer of training; training effectiveness","en","doctoral thesis","","978-94-6366-626-8","","","","","","","","","Control & Simulation","","",""
"uuid:a88fbe40-ff1e-47a7-bc8e-030f92213066","http://resolver.tudelft.nl/uuid:a88fbe40-ff1e-47a7-bc8e-030f92213066","Chasing H- in Rare-earth Metal Oxyhydride Thin Films","Chaykina, D. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Dam, B. (promotor); Eijt, S.W.H. (copromotor); Delft University of Technology (degree granting institution)","2022","Rare-earth metal oxyhydride thin films show a photochromic effect, where their transparency decreases (reversibly) with exposure to light with energy greater than its optical band gap. The precise underlying mechanism behind this effect is unknown, but is investigated in this thesis by using techniques such as muon spin relaxation, or materials science methods (aliovalent doping, changing the RE-cation, or altering the O:H ratio of the film). Rare-earth metal oxyhydrides have also been reported as hydride-ion conductors in their bulk form (powder pellets). Since some theories about photochromism involve diffusion, there was a suspicion that these two properties are related. Herein, ion mobility is addressed by electrochemical impedance spectroscopy, along with the other aforementioned methods. In summary, this thesis asserts that (1) photochromism does not involve long-range ion mobility and (2) some of the thin films made here are dominated by electronic rather than ionic mobility. An alternative idea for photochromism is given, where neutral hydrogen (H0) is formed alongside the reduced RE-cation (RE2+), and no mobility is required to prolong this darkened state. The local composition of the film under illumination, therefore, may be a H-deficient phase which is optically dark and highly conductive.","photochromism; rare-earth metal; thin films; oxyhydrides; electrochemical impedance spectroscopy; aliovalent doping; muon spin relaxation","en","doctoral thesis","","978-94-6384-391-1","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:b8315d98-97b7-4509-b33c-bd3ac8627179","http://resolver.tudelft.nl/uuid:b8315d98-97b7-4509-b33c-bd3ac8627179","Strategic design for remanufacturing","Boorsma, N.E. (TU Delft Climate Design and Sustainability; TU Delft Circular Product Design)","Balkenende, A.R. (promotor); Bakker, C.A. (promotor); Peck, David (copromotor); Delft University of Technology (degree granting institution)","2022","BoorsmaRemanufacturing is one of the product recovery approaches in a circular economy. In the remanufacturing production process, used products are returned to their original product specifications. When compared to original manufacturing, remanufacturing results in lower raw material consumption and reduces carbon emissions. Despite having been operative for decades, across multiple industries, remanufacturing remains a niche company activity. The engineering knowledge on how to design products to improve the fit with the remanufacturing process is widely reported in academic literature. This thesis explores the role of strategic design in the context of remanufacturing and aims to promote a wider implementation of remanufacturing in industry. Strategic design determines the extent to which a product meets market needs, what its functionality should be, and whether it supports the company’s long-term objectives.","Remanufacturing; Circular Economy; Strategic Design; Design Management; Circular Product Design; Implementation; Design Method; Case study","en","doctoral thesis","","978-94-6384-388-1","","","","","","","","","Climate Design and Sustainability","","",""
"uuid:300f7a64-53e6-4af7-b352-2805c551611c","http://resolver.tudelft.nl/uuid:300f7a64-53e6-4af7-b352-2805c551611c","Self-Organisation for Survival","Banerjee, I. (TU Delft System Engineering)","Brazier, F.M. (promotor); Warnier, Martijn (promotor); Helbing, D. (promotor); Delft University of Technology (degree granting institution)","2022","In a dynamic or disruptive situation, such as disasters, a fully mobile and decentralized infrastructure-less network seems to be a viable option for communication. Citizens are confronted with challenges such as complicated deployment of these networks, resource-constrained mobile phones and mobility. This requires communication networks to adapt to these changing spatial-temporal-resource contexts. Additionally, mobile and immobile citizens stuck in impoverished, highly populated areas, so-called ‘islands of inequity’ where often initial infrastructure is missing need to be able to connect with disaster response teams requiring hybrid communication approaches. In this thesis, the right to communicate and remain connected is considered the core of the design process. To serve this purpose the thesis focuses on investigating values that can be delivered with the design of a resilient communication network. This involves defining values of ""participatory fairness"", ""inclusion"" and ""continuity"". The main research question addressed by this thesis is:
How to design a value-based citizen-centric adaptive mobile communication system?
To answer this question, this thesis uses self-organization as an approach to design a decentralized context-aware mobile communication system for citizens that: (I) is robust, reliable and scalable; delivering all functional requirements; (II) fulfils the value of participatory fairness at the system level, and (III) seamlessly and automatically integrates with other available infrastructure for inclusive and continuous message delivery for all phones despite energy disparities and varied population densities of a disaster area. This thesis explores, for the first time, the effects of introducing a value-sensitive design approach for citizen-centric communication networks.
Device characterization is an indispensable step in building models for circuit designers. Foundries characterize their technology over the standard military temperature range (-55 to 125 °C) and generally do not (yet) supply compact models that are valid at deep-cryogenic temperatures. Therefore, designers of cryogenic circuits have to rely on back-of-the-envelope calculations and must build in margins to allow for parameter shifts, as they are unable to fully simulate their designs with the existing electrical simulators. These margins most likely cause circuits to occupy more silicon area than required and thus to operate at lower speeds and with increased power dissipation compared to an optimized circuit. As the power budget is severely limited, this is a very important challenge of current cryogenic circuit design. Worst of all, circuits deviating from the stringent specifications for quantum control can lead to lower fidelity of quantum operations.
To overcome these challenges, cryogenic device characterization needs to be carried out to investigate and capture the impact of low temperatures on different device parameters. A convenient temperature at which to operate cryogenic circuits is that of liquid helium, around 4.2 K. Therefore, most characterizations are carried out at this temperature.
Other groups have already devoted effort to characterization and modeling at these temperatures; however, little attention has been paid to the impact of these extreme temperatures on device matching and self-heating in advanced processes.
The work presented in this thesis, therefore, focuses on the design and characterization of test chips in an advanced 40-nm process, and the subsequent modeling of device mismatch and self-heating at cryogenic temperatures.","Cryogenic; CMOS; Modeling; Device Mismatch; Self-Heating","en","doctoral thesis","","978-94-6419-629-0","","","","","","","","","QCD/Sebastiano Lab","","",""
"uuid:46bc9dd8-c8a9-4de4-ab18-9b53b700e4bd","http://resolver.tudelft.nl/uuid:46bc9dd8-c8a9-4de4-ab18-9b53b700e4bd","Enabling Social Situation Awareness in Support Agents","Kola, I. (TU Delft Interactive Intelligence)","Jonker, C.M. (promotor); van Riemsdijk, M.B. (copromotor); Delft University of Technology (degree granting institution)","2022","The use of support agents that help people in their daily lives is steadily growing. While there have been continuous developments in integrating and modelling internal aspects of the user in these support agents, research shows that people's behavior is also shaped by their environment. While there have been attempts at integrating elements of the physical environment such as location, support agents generally lack the ability to take into account the effect of the user's social situation on their behavior. This is important since the majority of our daily life situations have a social nature.
This thesis proposes a social situation awareness framework for allowing support agents to take into account the user's social situation in order to offer more comprehensive support. The framework is inspired by existing work on situation awareness from research in human factors and computer science, instantiated with concepts from social sciences.
This thesis demonstrates how to integrate social situation awareness components in support agents. The studies presented in the thesis provide insight into the concepts and techniques that are needed for social situation awareness, and how they can be used in practice through a hypothetical case study involving a socially aware agenda management agent. This contribution serves as a blueprint for designers of support agents, and provides a basis towards more comprehensive support for users.","","en","doctoral thesis","","978-94-6469-099-6","","","","","","","","","Interactive Intelligence","","",""
"uuid:f8361576-f35d-4334-8bee-68a48ed70037","http://resolver.tudelft.nl/uuid:f8361576-f35d-4334-8bee-68a48ed70037","Revealing loss and degradation mechanisms in metal halide perovskite solar cells: The role of defects and trap states","Caselli, V.M. (TU Delft ChemE/Opto-electronic Materials)","Savenije, T.J. (promotor); Grozema, F.C. (copromotor); Delft University of Technology (degree granting institution)","2022","For centuries we have relied on fossil fuels to produce energy for our needs, causing significant damage to the environment and our own health. To make an energy transition possible, technology has to step up, providing solutions for cleaner and cheaper energy production. In the field of solar energy, perovskite-based devices can offer a feasible alternative to conventional technologies, involving less energy intensive and cheaper manufacturing processes. Despite the great technological advancements of the past years, open circuit voltage losses and especially poor long-term stability are two of the main bottlenecks that still have to be overcome in order to bring the technology to market. In this thesis we have addressed such issues by investigating the origin and impact of electronic trap states on charge carrier dynamics in perovskite thin films of different composition...","","en","doctoral thesis","","978-94-6421-953-1","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:fb0fb932-2a5a-44a0-8495-c10b300584e8","http://resolver.tudelft.nl/uuid:fb0fb932-2a5a-44a0-8495-c10b300584e8","Transverse and longitudinal vibrations in axially moving strings","Wang, J. (TU Delft Mathematical Physics)","van Horssen, W.T. (promotor); Wang, J.M. (promotor); Delft University of Technology (degree granting institution)","2022","Varying-length cable systems are widely applied in a vast class of engineering problems which arise in industrial, civil, aerospatial, mechanical, and automotive applications. Due to external excitations, large oscillations can occur when cables are lifted up or down. This phenomenon is caused by resonance. In general, resonance is harmful, and can cause significient deformations and dynamic stresses in machinery and structures, and even can lead to accidents. Therefore, this doctoral dissertation is devoted to the study of transverse and longitudinal resonance phenomena and output feedback stabilization of varying-length cables....","Axially moving string; Resonance; Boundary excitation; Time-varying length; Singular perturbation; Averaging; Backstepping; Vibration control","en","doctoral thesis","","","","","","","","","","","Mathematical Physics","","",""
"uuid:5f127a15-5fa4-4eae-94fb-d0f30bdebe8c","http://resolver.tudelft.nl/uuid:5f127a15-5fa4-4eae-94fb-d0f30bdebe8c","Alternative Representations and Techniques for Accelerated Realistic Image Synthesis: Improved Sampling, Reconstruction and Modelling Methods","Guo, Jerry Jinfeng (TU Delft Computer Graphics and Visualisation)","Eisemann, E. (promotor); Billeter, M.J. (copromotor); Delft University of Technology (degree granting institution)","2022","Realism has always been a major goal in visual content creation - from oil painting to motion pictures, from graphic arts to scientific data visualization. Computer graphics creates a virtual reality with digital representations. Trading between accuracy and speed, realistic rendering either creates photorealistic renders that follow strict rules of physics or approximates them with interactive alternatives.
The two approaches have their distinctive strengths and constraints. In this thesis, we focus on working from both ends towards realistic rendering. We study the highly accurate process of physically based rendering and introduce three novel methods dedicated to making it more efficient. We also investigate real-time depth of field rendering and develop a new model for an effect that is currently missing in state-of-the-art systems.
Part i concerns sampling in numerical integration. Two methods are presented in Chapter 2 and Chapter 3. We first propose to perform path guiding in primary sample space, resulting in an effective and efficient scheme that is easy to plug into existing rendering pipelines. Secondly, we map visibility relations in a matrix-like table to steer the sampling process, which improves steps such as the selection of visible light samples and light subpaths.
Part ii tackles the subsequent step after sampling in numerical integration. We present a new integration scheme in Chapter 4 that associates a weight to samples based on their adjacency, while remaining unbiased. The method delivers similar performance using uniform random samples as one can obtain with costly low-discrepancy sequences.
In Part iii this thesis revisits the optics behind depth of field effects and models distortion and shrinking effects that are missing in modern real-time rendering solutions. We are able to deliver effects similar to those of ray-traced results at a fraction of the cost.
Each chapter provides a detailed description and evaluation that helps readers better understand the methods and gain insight into their applications.","Rendering; Path tracing; Realistic rendering; Global illumination; Realtime rendering; Monte Carlo; Sampling","en","doctoral thesis","","978-94-6384-394-2","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:6f0eaa80-e52e-4a03-b92b-cae847f12906","http://resolver.tudelft.nl/uuid:6f0eaa80-e52e-4a03-b92b-cae847f12906","Similitude augmentation in sub-scale flight test model design: An MDAO based similarity maximization approach","Raju Kulkarni, A. (TU Delft Flight Performance and Propulsion)","la Rocca, G. (promotor); Veldhuis, L.L.M. (promotor); Delft University of Technology (degree granting institution)","2022","By 2050, the aviation industry is expected to grow by 250-300% of its current air traffic. If left unchecked, the corresponding increase in atmospheric and noise pollution will have a catastrophic impact on the environment. Thus, improvements in aircraft design are urgently needed for a sustainable growth of the aviation industry. The performance and behaviour of existing tube-and-wing aircraft have been refined and improved so much in the past decades that further improvements are barely possible. Consequently, by 2050, breakthrough solutions in the form of unconventional design configurations, novel propulsion systems and operation models are required to meet the ambitious goals stated in Europe’s vision for Aviation. However, the design of unconventional aircraft is particularly challenging as it often involves integrated multi-functional components for which legacy data is unavailable. Thus, appropriate means to assess their performance and behaviour are needed to lower their industrial development risk. Sub-scale Flight Testing (SFT), an experimental approach involving the free-flight testing of sub-scale models with an on-board powerplant, shows promise in the evaluation of the in-flightmotion of a given aircraft configuration and its response to control inputs. In the past, SFT has been used in a wide-range of flight tests to study the effect of novel technologies on the aircraft flight behavior, to assess systems integration feasibility and as a proof-of concept for unconventional designs. 
The actual benefit and validity of SFT mainly depends on the design of the SFT model used for the test. A well-designed SFT model can show a behaviour similar to the full-scale aircraft, such that any observation on the scaled device can be directly used to predict the full-scale performance. However, ensuring similarity between the SFT model and the full-scale aircraft is challenging due to the differences in their size and flight conditions. In addition to guaranteeing similitude, the SFT model must comply with multidisciplinary design requirements, such as safe completion of the mission, adherence to restrictions imposed by local authorities, and selection of suitable flight-control and measurement equipment...","Sub-scale Flight Testing (SFT); Similitude Augmentation; Similitude for Flight Dynamics; Design Automation; MDAO; KBE","en","doctoral thesis","","978-94-6384-386-7","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:31c7c7ec-a510-424e-a6af-24e2d6d00433","http://resolver.tudelft.nl/uuid:31c7c7ec-a510-424e-a6af-24e2d6d00433","Host-guest complexation integrated in chemical reaction networks","Li, G. (TU Delft ChemE/Advanced Soft Matter)","Eelkema, R. (promotor); van Esch, J.H. (promotor); Delft University of Technology (degree granting institution)","2022","Nature has proven to be a great source of inspiration for scientific research and technological innovation in various areas: food, medicine, architecture, chemistry, materials, algorithms, and many other fields. At the basis of sophisticated functions associated with life in nature are all kinds of chemical reactions which are mainly regulated by enzymes through molecular recognition of the substrates. Meanwhile, chemical signals are able to tune the catalytic activities of enzymes through noncovalent bonding or structural modification. Concomitantly, the formation of transient structures that are used temporarily, for instance the mitotic spindle, requires the conversion of energy, mainly in the form of high-energy chemical fuels. All of these phenomena combined endow living systems with high responsivity to various stimuli. Inspired by nature, regulating artificial catalysts in chemical reactions by noncovalent bonding, and controlling formation/deformation of supramolecular materials by chemical reactions are attracting researchers’ attention. This thesis integrates chemical reaction networks with host-guest complexation, aiming to bring about some of these advanced properties.","","en","doctoral thesis","","978-94-6366-619-0","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:d9752495-9c32-4612-b9c0-e1054c1b764f","http://resolver.tudelft.nl/uuid:d9752495-9c32-4612-b9c0-e1054c1b764f","Navigation and coordination of fixed-wing unmanned aerial vehicles under mission uncertainty","Wang, X. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Baldi, S. (promotor); Delft University of Technology (degree granting institution)","2022","Unmanned Aerial Vehicles (UAVs) have been emerging as a promising but challenging platform for studying autonomous and cooperative control. This Ph.D. thesis focuses on fixed-wing UAVs which, with their more efficient aerodynamics, can ensure longer flight durations and more autonomy than multi-rotorUAVs. However, in the current state of the art, limited work has been done on deploying formations of fixed-wing UAVs that can operate autonomously even in the presence of large uncertainties. Uncertainties in fixed-wing UAVs include uncertain wind environments, unmodelled longitudinal/lateral dynamics, uncertain load conditions, uncertain communication conditions among the UAVs, and other uncertain factors.
Within this PhD thesis, we develop novel adaptive and distributed guidance approaches for fixed-wing UAVs. The following three aspects are studied:
* Vector field guidance under uncertainties
* Distributed formation control with uncertain UAV dynamics
* Testing in the real world to achieve Sim-to-Real transfer","fixed-wing UAV; vector field; unknown dynamics; adaptive guidance control; formation control","en","doctoral thesis","","978-94-6384-387-4","","","","","","","","","Team Bart De Schutter","","",""
"uuid:27dcbbc2-7d9e-4f67-925a-5e676ca4e43c","http://resolver.tudelft.nl/uuid:27dcbbc2-7d9e-4f67-925a-5e676ca4e43c","Monocular Vision-Based Pose Estimation of Uncooperative Spacecraft","Pasqualetto Cassinis, L. (TU Delft Space Systems Egineering)","Gill, E.K.A. (promotor); Menicucci, A. (promotor); Delft University of Technology (degree granting institution)","2022","Activities in outer space have entered a new era of growth, fostering human development and improving key Earth-based applications such as remote sensing, navigation, and telecommunication. The recent creation of SpaceX's Starlink constellation as well as the steep increase in CubeSat launches are expected to revolutionize the way we use space and extend the current capabilities of satellite-based technology. However, this steep increase in the number of human-made objects is rapidly leading to higher collision risks in congested Earth orbits. This has led to questioning whether this trend is sustainable on the long term, and ultimately to the need to tackle sustainability in space.
The recent decade has seen considerable efforts by Space Agencies both to prevent major collisions in orbit via Active Debris Removal (ADR) missions and to extend the lifetime of functioning satellites with On-Orbit Servicing (OOS). Unfortunately, the approach and capture of space debris objects is complicated by the fact that these targets are uncooperative and cannot aid close-proximity operations, leading to critical challenges in the estimation of their relative position and attitude (pose) with respect to the servicer spacecraft. Several missions have been proposed as technology demonstrators of debris removal and servicing technologies, in which passive monocular cameras are combined with active sensors to improve the robustness and accuracy of the navigation system. Yet, despite the inherent challenges that come with the use of monocular cameras in space, navigation systems based on a single camera are becoming an attractive alternative to systems based on active sensors, due to their reduced mass, power consumption and system complexity. The research work presented in this thesis aims at developing and validating a robust and accurate monocular camera-based pose estimation system compliant with the navigation requirements of both ADR and OOS missions.
Two fundamental open challenges are addressed: (1) the robustness and applicability of image processing algorithms and pose estimation methods, and (2) the validation of relative navigation filters and their interface with image processing and pose estimation.
This research begins with a survey on the robustness and applicability of existing monocular vision-based pose estimation systems. After identifying the characteristics and limitations of each subsystem implemented in state-of-the-art architectures, a comparative assessment of the current solutions is given at different levels of the pose estimation process, in order to bring a novel and broad perspective. Special focus is put on the improved robustness of novel image processing schemes and pose estimators based on Convolutional Neural Networks (CNN). The limitations and drawbacks of the validation of current pose estimation schemes with synthetic images are further discussed, together with the critical trade-offs for the selection of visual-based navigation filters.
Building on the results of the survey, a novel framework is introduced to enable a robust and accurate pose estimation. Two investigated CNNs are used at image processing level to identify a set of pre-selected features on the target spacecraft, which are fed to a pose estimator prior to the navigation filter (loosely-coupled) or directly to the navigation filter as measurements (tightly-coupled). A novel method to derive covariance matrices directly from the CNN heatmaps is introduced to improve the modeling of the feature detection uncertainty prior to pose estimation. The performance results indicate that a tightly-coupled approach can guarantee an advantageous coupling between the rotational and translational states within the filter, while reflecting a representative measurement covariance. Synthetic monocular images of the European Space Agency's Envisat spacecraft are used to generate datasets for training, validation and testing of the CNN. Likewise, the images are used to recreate a representative close-proximity scenario for the validation of the proposed filter.
This research work then extends the validation from a purely synthetic one to a more comprehensive on-ground validation. To this end, ESA's GNC Rendezvous, Approach and Landing Simulator testbed is used to validate the proposed CNN-based pose estimation system on representative rendezvous scenarios, with special focus on solving the domain shift problem which characterizes CNNs trained on synthetic datasets when tested on more realistic imagery. To solve the domain shift problem, a novel augmentation technique focused on texture randomization is introduced, aimed at improving the CNN robustness against previously unseen target textures. The results show an increase in robustness towards realistic imagery, as randomizing the texture of the target spacecraft during training allows the CNN to generalize textures and to focus on the shape of the target. However, a performance decrease in highly adverse illumination conditions or low camera exposures suggests that additional augmentation techniques are required to tackle the domain shift from an illumination standpoint.
In response to this need and in order to extend the on-ground validation to the entire navigation system, this research work proceeds by introducing the on-ground validation of a CNN-based Unscented Kalman Filter. The validation is carried out at Stanford's robotic Testbed for Rendezvous and Optical Navigation on a dataset of realistic laboratory images, which simulate rendezvous trajectories of a servicer spacecraft to the Tango spacecraft from the PRISMA mission. The validation is performed at different levels of the navigation system by first training and testing the adopted CNN on SPEED+, the next generation spacecraft pose estimation dataset with specific emphasis on domain shift between a synthetic domain and a laboratory domain. A novel data augmentation scheme based on light randomization is proposed to improve the CNN robustness under adverse viewing conditions. Next, the entire navigation system is tested on two representative rendezvous trajectories. Results indicate that the inclusion of a new scheme to adaptively scale the heatmaps-based measurement error covariance improves filter robustness by returning centimeter-level position errors and moderate attitude accuracies at steady-state. Thanks to the proposed adaptive method, the filter does not diverge in periods of low measurement accuracy, suggesting that a proper representation of the measurement uncertainty combined with an adaptive measurement error covariance is key in improving the navigation robustness.","Active Debris Removal; Relative Navigation; Convolutional Neural Networks; Relative Pose Estimation; On-ground Validation; Artificial Intelligence","en","doctoral thesis","","","","","","","","","","","Space Systems Engineering","","",""
"uuid:db97669e-aaba-4fc7-900c-43bf7c6fcfd2","http://resolver.tudelft.nl/uuid:db97669e-aaba-4fc7-900c-43bf7c6fcfd2","Solar carparks for electric vehicle charging in a grid with limited capacity","Ghotge, R. (TU Delft Energy Technology)","van Wijk, A.J.M. (promotor); Lukszo, Z. (promotor); Delft University of Technology (degree granting institution)","2022","The electricity grid in the Netherlands is currently unable to provide sufficient capacity for both the integration of new renewable electricity powerplants as well as for the integration of new electricity demands like electric vehicle charging. Symptoms of this scarcity of capacity, also seen in other countries undergoing an energy transition, are observed in various forms.
On the generation side, newly planned solar photovoltaic projects at both commercial and residential scales are increasingly being denied permission to connect to the grid or face long delays for grid reinforcement before they are connected. Since 2020, new utility-scale solar Photovoltaic (PV) installations have been provided a maximum of 70% grid connection capacity relative to the installed solar capacity. In 2022, this permitted grid connection capacity was further lowered to 50% for new projects larger than 1 MWp.
On the demand side, recent mapping studies by the Dutch grid operators show that a majority of the country faces structural congestion in the distribution and transmission grids. The Dutch ambition, as stated in the Regional Energy Strategy (RES), is to integrate 12 GWp of additional solar installed capacity to the existing 14 GWp by 2030. Also by 2030, the total number of Electric Vehicles (EVs) in the Netherlands is expected to increase from about 390,000 (4.4% of the total Dutch passenger vehicle fleet) today to about 1 million (10%), increasing the peak electricity demand.
The scarcity of capacity in the electricity grid to integrate both low carbon solar generation and electric vehicle charging presents an obstacle to the realisation of both short and long term emissions targets. Even though significant grid expansion is already planned and commissioned, this scarcity of capacity is expected to be a characteristic feature of the electricity grid over the coming decades. This thesis aims to investigate how the coupling of solar carparks and EV charging can enable their integration in a grid with scarce capacity while lowering operational carbon emissions.
Two configurations of solar carparks for EV charging are analysed, with the aim of reducing the grid capacity needed:
Chapter 3 analyses the first configuration: a solar carpark for charging EVs at a workplace in the Netherlands where demand peaks are caused by the simultaneous charging of EVs. The inclusion of EV demand forecasting within the scheduled charging reduces annual peak EV charging power by 36-39% relative to immediate charging. These reductions in peak demand enable a more effective use of the available power capacity (now mandated to 50% of solar installed capacity) as well as increase the utilisation of generated solar energy by reducing the need for solar curtailment.
Chapter 4 investigates the second configuration: an off-grid solar carpark for EV charging at a long-term (>24 hours) parking lot at a Dutch airport. Off-grid solar charging would enable rapid planning of charging facilities for EVs, removing the uncertainty, delays and costs associated with a grid connection. However, these benefits come with a trade-off: not all vehicles are fully charged at the time of departure. With immediate charging, 20% of EVs over the year leave with a state-of-charge lower than 60% and 3% of EVs leave with a state-of-charge lower than 40%. The adequacy of fleet-level charging is lowest during the low irradiance month of December, during which 63% of vehicles leave with a state-of-charge lower than 60% and 11% of vehicles leave with a state-of-charge lower than 40%. Prioritising the charging of plugged-in vehicles with the lowest state-of-charge ensures that no vehicles leave with a state-of-charge lower than 40% over the entire year, even in the low irradiance winter months. Increasing the minimum duration of parking reduces the fraction of vehicles leaving with state-of-charge below 60% by about 2% per day.
The consequences of scheduled charging on greenhouse gas emissions are investigated in Chapters 5 and 6: Chapter 5 analyses a recently constructed solar carpark located in Dronten, the Netherlands, which includes a solar array, a nickel metal hydride battery and charge points for electric vehicle charging. The aim of the study is to quantify the magnitude of offset carbon emissions per year by the solar carport, and the contribution of battery storage to this offset. The prevalent practice of using the annual average carbon intensity is found to be unsuitable for estimating the annual offset carbon emissions since it does not account for the intra-day patterns of solar production, EV charging and battery cycling. To overcome this, we propose a novel method to calculate the annual offset carbon emissions, making use of the hourly average and hourly marginal carbon intensity. The choice of approach is found to make a difference to the calculated values of the annual offset emissions of the solar carpark. The use of hourly average carbon intensity, which takes into account variation in production, generation and storage, leads to a higher calculated value of annual offset carbon emissions by about 7% relative to a method using the annual average carbon intensity. The use of the hourly marginal carbon intensity to calculate the annual offset carbon emissions suggests that solar carparks have about a 55% higher incremental effect on the carbon intensity associated with the new load of EV charging than what is conventionally calculated. When comparing the annual offset carbon emissions from the solar carpark with and without a battery, we find that the use of the battery has a negligible effect on the annual carbon offset by the system. This result is found to be robust across all the methods of calculation. We therefore conclude that the use of batteries in solar carparks has a low contribution to the total carbon offset by the solar carport.
Chapter 6 investigates the effect of price-based scheduling of EV charging on the carbon intensity of the electricity used by a scheduled fleet of EVs. Real data of over 55,000 home charging sessions collected from 1031 charge points in the Netherlands is analysed. A simulation is made with a commercial smart charging algorithm to create a scheduled charging profile ex post from the EV charging data set. The profile results in an average price reduction of 25% for the overall fleet relative to the costs for unscheduled charging of the fleet over the same period. The time dependent hourly carbon intensity of electricity consumed in the Dutch low voltage grid in 2018 is used to find the impact of price-based scheduling on the mean carbon intensity of electricity used by the fleet. A small decrease of 1.2% in carbon intensity used by the entire EV fleet is observed over the year. Although price optimisation has large effects on the carbon intensity in individual sessions, the effect is found to balance out over the large number of sessions in the year.
Chapter 7 investigates the factors affecting the consumer acceptance of Vehicle-to-Grid (V2G) charging, which remains an insufficiently investigated barrier for the use of the full potential of EVs in demand response and storage. The research work comprises two stages of semi-structured interviews: the first with EV drivers who have never experienced V2G charging, and the second with EV drivers who experienced V2G charging. The participants in the second stage are given access to a V2G-compatible Nissan LEAF and the V2G charging facilities set up in a living lab on the University campus for at least a week each, after which they are interviewed. Clear communication of the battery impacts, financial compensation and operational control are all found to foster acceptance and were, in many cases, necessary conditions for acceptance. The main barriers for acceptance found are range anxiety in various forms, concerns about the effects of V2G charging on the EV battery and the perceived loss of freedom associated with private vehicles. A majority of participants interviewed from both groups are found to accept or conditionally accept V2G charging. This suggests that the use of EVs for demand side storage in addition to demand response is already acceptable to a subset of current EV drivers. The study also clarifies the conditions under which V2G charging would be more acceptable to a broader group of EV users.
The results obtained in this thesis show that the coupling of solar photovoltaics and EV charging enables the integration of both in a grid with scarce capacity. We therefore recommend that solar carparks for EV charging be more widely implemented at workplaces and at long-term (>24 hour) parking lots, though without stationary batteries.","solar carport; Electric vehicle (EV); Charging Infrastructure for EV's","en","doctoral thesis","","978-94-6366-616-9","","","","","","","","","Energy Technology","","",""
"uuid:b7f2378b-01df-48f0-b71d-089f17b7b378","http://resolver.tudelft.nl/uuid:b7f2378b-01df-48f0-b71d-089f17b7b378","Optical tools towards the improvement of optogenetic stimulation","Maddalena, L. (TU Delft ImPhys/Microscopy Instrumentation & Techniques)","Hoogenboom, J.P. (promotor); Carroll, E.C.M. (copromotor); Delft University of Technology (degree granting institution)","2022","Optogenetics is a powerful addition to the spectrum of techniques available in neuroscience to investigate neurophysiology and unravel how neural circuit structure is related to circuit function. This technique relies on introducing light-sensitive proteins or molecules as actuators to transduce an optical signal into a physiological perturbation of a living cell in vitro or in a living animal. To date, optogenetics has allowed remote control of neural activity in living and awake animals at different scales, from single cells to complex networks of neurons to the investigation of animal behaviours. This wide range of experimental scales has been accomplished through joint progress on engineering the biological sensors and the optical design of instruments capable of manipulating with cellular spatial precision and millisecond temporal resolution.","Optogenetics; computer generated holography; adaptive optics; light-tissue interactions; microscopy","en","doctoral thesis","","978-94-6384-380-5","","","","","","","","","ImPhys/Microscopy Instrumentation & Techniques","","",""
"uuid:c31c2a87-5713-439a-9d0a-954751fcfccf","http://resolver.tudelft.nl/uuid:c31c2a87-5713-439a-9d0a-954751fcfccf","Distributed constraint optimization for cooperative autonomous vehicles","Fransman, J.E. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Sijs, J. (copromotor); Theunissen, Erik (copromotor); Delft University of Technology (degree granting institution)","2022","After the Second World War, chemical warfare agents and munitions were dumped in the Baltic Sea and the North Sea.
In order to assess the severity of the environmental consequences, it is important that the chemical warfare agents are located and their condition is investigated as soon as possible.
In order to reduce this time, the search could be performed by Autonomous Underwater Vehicles (AUVs).
The goal of this thesis is to develop algorithms that can be applied during underwater operations to allow AUVs to optimize their actions based on a global objective function without centralized communications.
The search problem is modeled within the Distributed Constraint Optimization Problem (DCOP) framework to be able to explicitly define both computational agents and their communications.
In order to be applicable to AUV operations, both benchmark problems and real-world problems with continuous domains are modelled within the Continuous DCOP (C-DCOP) framework.
This preserves the flexibility of modeling inherent in a DCOP while removing the limitations imposed by the discrete definitions.
Two C-DCOP algorithms are presented in this thesis.
The Compression-DPOP (C-DPOP) algorithm discretizes the domain of each of the variables and compresses their domains in order to refine the search space at every iteration.
The Distributed Bayesian (D-Bay) algorithm leverages Bayesian optimization to solve C-DCOPs without any need for discretization by modelling the effects of the variables on the global utility as Gaussian processes.
Results from high-fidelity simulations and real-world experiments are given for real-world multi-agent search problems.
A mine countermeasures operation is simulated in which AUVs update their search areas during the search based on sonar performance.
Assigned areas are re-distributed in order to optimize metrics relating to the expected time of completion and the level of confidence that all mine-like objects have been detected.
Moreover, real-world experimental results are presented for a multi Unmanned Aerial Vehicle (UAV) search problem.
By improving the autonomy of AUVs, the search efficiency can be increased through the cooperative optimization of their actions during the operation.
The research in this thesis contributes to this strategy by means of the developed algorithms and their applicability to real-world problems.","DCOP; Autonomous agents; Multi-agent systems","en","doctoral thesis","","978-94-6384-384-3","","","","","","","","","Team Bart De Schutter","","",""
"uuid:74f8791b-c6e3-48f8-8a78-0c7ace1f67a7","http://resolver.tudelft.nl/uuid:74f8791b-c6e3-48f8-8a78-0c7ace1f67a7","Thinking Perspectives: The Layered Meaning of Heinrich Tessenow’s Drawings (1901 – 1926)","Zeinstra, J.S. (TU Delft Situated Architecture)","Avermaete, T.L.P. (promotor); Havik, K.M. (promotor); Delft University of Technology (degree granting institution)","2022","An architectural perspective drawing gives a naturalistic spatial representation of an architectural project, which is usually represented in orthographic drawings, such as floorplans, sections, and elevations. However, that same perspective drawing can also express theoretical architectural concepts and ideas in a non-verbal, but highly communicative way. To investigate that particular quality, this dissertation takes a systematic look at the historical case of the perspective drawings made by the German architect Heinrich Tessenow (1876-1950), focusing on the period between 1901 and 1926. Tessenow, one of the key figures in early twentieth-century German architecture, was mostly interested in the Kleinwohnung (small workers’ and lower-middle-class house) and the Kleinstadt (small town).
Initially, Tessenow’s perspectives appeared in various well-read architectural journals, such as Bautechnische Zeitschrift and Deutsche Bauhütte. These journals not only offered the drawings (and their maker) a publishing platform but also actively invited various writers to respond to them, thus contributing to a lively public discourse on architecture. As a consequence, perspective drawings played a major role in Tessenow’s first three publications, Zimmermannsarbeiten (1907), Der Wohnhausbau (1909) and Hausbau und dergleichen (1916). In all three books, perspective drawings were much more than illustrations or building visualisations to his texts: they actively contributed to Tessenow’s architectural thinking and his emerging visual theory of architecture.
This dissertation addresses some basic questions that relate to this: what is the meaning of these perspectives in Tessenow’s visual theory of architecture, and what role did they play in the development of his thinking on the Kleinwohnung?
To answer these questions, a great number of perspective drawings are collected from various sources. Quite deliberately, these drawings are detached from their immediate context, regarding the projects they depict, the media in which they appeared and their chronological order. This collection of detached drawings is then subdivided into three main thematic categories that summarize Tessenow’s oeuvre in these years and all relate to the Kleinwohnung: Haus (house); Raum (room or space) and Sache (thing or object).
To relate Tessenow’s perspective drawings to his architectural thinking, three epistemic architectural notions are distilled from writings by both Tessenow and some of his contemporaries. These notions are Empfindung (sensibility), Abstraktion (abstraction) and Gewöhnlichkeit (ordinariness) and their epistemic character follows from the fact that they not only define Tessenow’s architectural thinking but relate to a broader German architectural culture.
By intersecting these notions with the drawings arranged in the categories of Haus, Raum and Sache, it becomes possible to select more than 20 sets of related drawings that are then subjected to a comparative iconographic architectural analysis, in which the typological organization of building, space or object; and the formal composition of its appearance are linked to aspects such as its immediate setting, spatial delineations and material expression. The method of juxtaposing perspective drawings with a similar subject and subsequently comparing these drawings makes it possible to reveal general patterns and qualities related to the depicted subject beyond the individual case. Together, these analyses form the basis of a series of speculative reconstructions of Tessenow’s inquiries into several relevant topics related to the Kleinwohnung.
Besides the historical significance of Tessenow’s case, the analyses presented in this dissertation also demonstrate the significance of perspective drawing. They make clear that this kind of drawing was, and is, able to bring together different scales, elements and atmospheres in one image, which is immediately understandable to both architects and to all the others involved in architecture and building. They also show how perspective drawing can contribute to architectural thinking and thus forms an important theoretical tool that continues to be relevant in the present day.
How do such organisms manage to retain the potential to resume growth while their activity has nearly ceased? Over the last 20 years, scientists have uncovered various physiological and molecular mechanisms that control entry into and exit from dormancy. However, a basic understanding of what exactly happens during dormancy, i.e. how to survive while stopping nearly all internal activity, is still missing. Two reasons are likely responsible for that knowledge gap. The first, conceptual, obstacle is the general view that since, by definition, nothing much happens during dormancy, nothing important happens. The second, technical, obstacle is the lack of sufficiently sensitive instruments to quantify the ""nearly ceased"" activity during dormancy. As a result, very few studies have established a concrete link between a vanishing internal activity and the ability to survive during dormancy in a single experimental system.
In this thesis we propose to focus on dormant Saccharomyces cerevisiae yeast spores to specifically investigate the links between gene expression and the ability to survive during week-long dormancy.","microbiology; dormancy; yeast spores; gene expression","en","doctoral thesis","","","","","","","","","","","BN/Greg Bokinsky Lab","","",""
"uuid:9f88f639-c356-4b93-a98d-331fa9999878","http://resolver.tudelft.nl/uuid:9f88f639-c356-4b93-a98d-331fa9999878","Architectural Photovoltaic Application","Haghighi, Z. (TU Delft Climate Design and Sustainability)","van den Dobbelsteen, A.A.J.F. (promotor); Klein, T. (promotor); Konstantinou, T. (copromotor); Delft University of Technology (degree granting institution)","2022","Today, photovoltaic technology is one of the fastest-growing fields of technology and is becoming the lowest-cost option for electricity generation in the greatest part of the world. Based on IEA projections, the number of households relying on solar PV will grow from today’s 25 million to more than 100 million by 2030. Based on this projection, we must use all surfaces on and around the buildings of an entire city to absorb solar radiation and transform it into usable electricity (or useful heat). However, current attempts to harness these potentials within the built environment leave much to be desired. It is readily apparent that the current roof-top installation approach is neither aesthetically appealing nor technically efficient, and consequently not sustainable, long-term, and reliable. Back in the 1990s, the Integration approach was introduced to address these issues. However, the introduction of this solution has neither increased popularity nor helped with tapping the solar energy potential within the built environment.
The fundamental problem addressed in this dissertation is the lack of appropriate guidance and well-structured knowledge about the approaches and considerations which should be deliberated in the design and decision-making process for deploying PV technology in architecture. The overarching goal of this research is to promote the use of PV technology in the built environment while being thoughtful of the symbiotic and functional relationship between the technology and the urban fabric. Specifically, it aims to support the decision-making process required for the adoption and development of photovoltaic products in the built environment.
This thesis builds upon the interrelations between the concept of Integration, design decisions, and technological decisions. As the starting point, we looked into ‘integration’ as an alternative approach to the existing addition or attachment of PV to buildings. To do so, we explored the definitions and requirements outlined for the concept of Integration within the context of the application of PV in building architecture. In the existing literature, integration is described as the solution for wider adoption and acceptability of PV in the built environment and defined as a situation where a PV module replaces a building material in a building. However, our findings show that integration does not presume photovoltaic products to be used as part of the construction material and serve a secondary or tertiary function. Furthermore, it highlights that, under the definition of Integration, the PV system can still be part of the architecture, remain a building service, and perform a singular function as a renewable energy generator.
In the next step, we looked into how architects have used PV technologies in buildings. We shortlisted 30 projects and categorised them based on those design decisions that made them different from one another. We highlighted that these projects could be categorised based on decisions made on (i) visibility of PV system in the building architecture, (ii) mounting strategy and structural connection of PV panels and building, (iii) the customisation level of PV module, (iv) the building fabric used, and (v) the role of PV in the building system.
Subsequently, 30 architects were interviewed to study their experiences and perceptions about the architectural application of photovoltaics. In this study, we approached two groups of architects: one with experience of using PV technologies and the other with no relevant experience. Based on the input received, we identified three types of motivation for using PV technologies in architectural projects: the first type was related to external incentives that drive the project (e.g., NZEB), the second type was rooted in the architect’s interest in environmentally friendly and climate-responsive technologies in buildings, and the final one was a communicative gesture in which PV technologies were used as a symbol of sustainability mandated by the project owner. The findings also shed light on the differences in opinions between architects who had already applied PV technology and those who had not. Unlike those with experience working with PV technology in their previous projects, who believed that working with this technology is not complex and problematic, the group with no experience believed that working with PV technology is challenging. Furthermore, a common opinion between the two groups was the need for more versatility in colour, transparency, size, and reflectivity of module products.
In the following step, we looked into the existing PV technologies and explored their various features and potential for architectural application. The findings highlight that the first-generation technologies (c-Si) are the most advanced and can perform better for building applications. However, the physical flexibility of this technology for customisation at the cell level remains limited. For the second-generation technologies, higher temperature tolerance is an advantage that makes them compatible with situations where double-sided ventilation is not possible. Even though most of the second-generation technologies are already lightweight and flexible, and although they have some level of transparency in contrast to the first generation, their automated production lines make customisation of size and shape fairly difficult. The third-generation technologies received more attention because they offer lower production costs, reduced environmental impact, and a relatively higher efficiency compared to the first and second generations. This makes them an interesting option for architectural application, even though their limited service life expectancy remains an important disadvantage. Aside from the criteria mentioned for comparing these alternatives, many other factors are involved in finding the most suitable PV technology for a certain application. The architects interviewed highlighted these criteria. We therefore looked into advanced decision-making methods to see whether such methods can be applied in the selection process of PV technology. Through the development of a pilot tool based on a multi-criteria decision-making method, the analytic hierarchy process, and a test within a concept development project, we concluded that such a method can be very helpful in finding the most suitable technology for a certain application.
In the final stage, we worked on the development of new concepts for the application of PV technology in buildings, since, based on the reports reviewed and the results of the interviews, it became apparent that existing PV products cannot fulfil current market demands and consequently the sustainability targets. We then examined the R&D processes of these projects, which showed that despite the differences in scope, objective, and nature of the concepts, several similarities could be articulated into a generalised concept development process. According to this analysis, the R&D process before the commercialisation phase can be divided into 7 steps, namely (i) scoping and definition, (ii) exploration, (iii) concept development, (iv) proof of concept, (v) optimisation, (vi) application design development, and (vii) prototyping.
Overall, the findings of this research can be summarised in three recommendations. First, integration in this context, as perceived and defined in the standards and manuals, cannot be seen as a comprehensive approach covering all the architectural styles and approaches to using PV technologies in buildings; rethinking its definition and requirements is therefore essential. Secondly, if we want PV technology to become a default building service, we need to leave it to architects to accommodate it within the design concept as they wish, and the PV industry should not try to impose this technology on architecture. And lastly, we need to develop a new discipline around the design and engineering of energy-producing buildings: we need to train and equip future practitioners with the insight, know-how, and tools to exploit the full solar energy potential and to produce, store, and utilise the generated energy on-site.
An essential building block to achieving safe autonomous driving is the efficient perception and representation of the vehicle’s environment. The perception and representation need to be as accurate as possible, but at the same time, as efficient as possible, to increase the time in which the vehicle can react to the evolving traffic situation. This thesis discusses various ways to increase the efficiency of perception systems of autonomous vehicles by showing: how a novel acoustic sensor detects traffic before it becomes visible, how to combine traditional machine learning algorithms with deep neural networks for faster inference, how a compact representation for images of traffic scenes can be enriched with object instance information, and how different modalities, such as images and point clouds, contribute to deep representation learning.
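The acoustic early-detection idea rests on the fact that a microphone array encodes the direction of arriving sound in small inter-microphone time delays. The sketch below is not the data-driven classifier developed in this thesis; it merely illustrates that underlying cue with a simulated two-microphone signal and a plain cross-correlation delay estimate (all signals and parameters are invented for illustration).

```python
import numpy as np

rng = np.random.default_rng(0)
true_delay = 12  # samples by which the sound arrives later at microphone B

# Hypothetical broadband source (e.g., engine noise) and a delayed copy
src = rng.standard_normal(2048)
mic_a = src
mic_b = np.concatenate([np.zeros(true_delay), src])[:len(src)]

def estimate_delay(a, b):
    # Estimate the inter-microphone delay as the lag that maximizes
    # the full cross-correlation of the two recordings.
    corr = np.correlate(b, a, mode='full')
    return int(np.argmax(corr)) - (len(a) - 1)

print(estimate_delay(mic_a, mic_b))  # → 12
```

With more than two microphones, such delays can be turned into a direction of arrival, including reflections from walls around a blind corner.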
To detect vehicles earlier than the commonly used sensors in autonomous vehicles, this thesis introduces a passive acoustic perception approach. This acoustic perception system can detect approaching vehicles behind blind corners by sound before such vehicles enter the line of sight. A research vehicle equipped with a roof-mounted microphone array is used to collect data and serves as a demonstrator platform. The data shows that wall reflections provide information on the presence and direction of occluded approaching vehicles. In test scenarios with a static ego-vehicle, a novel data-driven approach achieves an accuracy of 0.92 on the hidden vehicle classification task. Compared to a state-of-the-art visual detector, Faster R-CNN, the acoustic system achieves the same accuracy more than one second ahead, providing crucial reaction time for the situations studied in this work. While the ego-vehicle is driving, acoustic detection shows encouraging results, still achieving an accuracy of 0.84 within one environment type. Further, failure cases are studied across environments to identify future research directions...","Intelligent Vehicles; Machine Learning; Environment perception","en","doctoral thesis","","978-94-6384-383-6","","","","","","","","","Intelligent Vehicles","","",""
"uuid:37464633-9480-4726-9034-f55f9f6e1b16","http://resolver.tudelft.nl/uuid:37464633-9480-4726-9034-f55f9f6e1b16","Towards the Uncertainty Quantification of Fractured Karst Systems: Reactive Transport and Fracture Networks: Where Numerical Modeling Meets Outcrop Observations","de Hoop, S. (TU Delft Applied Geology)","Voskov, D.V. (promotor); Bertotti, G. (promotor); Barnhoorn, A. (promotor); Delft University of Technology (degree granting institution)","2022","Society relies on large amounts of energy to progress and allow for a high standard of living. The recent severe climate changes require advanced technologies related to cleaner energy resources. One such technology beneficial for accelerating this current energy transition is geothermal energy. This type of energy is often found in fractured and karstified carbonate aquifers. Understanding the reservoir properties and reducing the risks of such subsurface-related activities is vital. This thesis attempts to understand the complex fractured carbonate reservoirs better and improve the numerical simulation capabilities toward large-scale uncertainty quantification.","Reactive transport; Multiphase flow; Operator-Based Linearization; Fracture networks; Karst; Uncertainty Quantification","en","doctoral thesis","","978-94-6469-044-6","","","","","","","","","Applied Geology","","",""
"uuid:c40b1935-c5f1-4887-9185-d514ef408d6f","http://resolver.tudelft.nl/uuid:c40b1935-c5f1-4887-9185-d514ef408d6f","Activation, Reactivity and Dynamics of Manganese Pincer Complexes in Hydrogenation Catalysis","Yang, W. (TU Delft ChemE/Inorganic Systems Engineering)","Pidko, E.A. (promotor); Filonenko, G.A. (copromotor); Delft University of Technology (degree granting institution)","2022","The growing demand for sustainable chemical technologies has prompted a wave of searches for new catalysts based on earth-abundant metals. In the field of (de)hydrogenation catalysis, however, a huge performance gap is commonly seen between 3d-metal-based catalysts and their noble metal counterparts, which largely hampers their practical applications. In particular, while Mn-catalyzed (de)hydrogenation has witnessed significant progress since the pioneering work by Beller and co-workers in 2016, most of the reported systems still require relatively high catalyst loadings. Apart from developing new synthetic methodologies based on the hydrogen transfer reactivity of Mn, the search for highly active catalysts for (de)hydrogenation reactions therefore remains one of the central topics in Mn chemistry. The current approach to catalyst development is mainly based on the screening of ligand backbones that proved to be effective for noble metal-based catalysts. However, screening assessments with reaction yields as the sole performance metric do not probe the intrinsic reactivities of the catalysts and can easily cause promising candidates to be overlooked due to a suboptimal choice of conditions. In this thesis, we demonstrate that the catalyst performance is defined by a complex reaction network comprised of multiple stages of catalyst operation, that is, catalyst activation, deactivation, and catalytic turnover. The reactivity of the catalyst itself and the reaction environment of each process together determine the catalytic performance. 
As a result, the catalytic transformation should be viewed from a systems perspective, with the performance being a dynamic and highly condition-dependent characteristic.","Homogeneous catalysis; hydrogenation; manganese catalysts; organometallics; coordination chemistry; operando spectroscopies","en","doctoral thesis","","978-94-6366-615-2","","","","","","","","","ChemE/Inorganic Systems Engineering","","",""
"uuid:bc2849a7-cca7-4b44-b85c-1653e98a4359","http://resolver.tudelft.nl/uuid:bc2849a7-cca7-4b44-b85c-1653e98a4359","Responsible Learning about Uncertain Risks arising from Emerging Biotechnologies","Bouchaut, B.F.H.J. (TU Delft BT/Biotechnology and Society)","Osseweijer, P. (promotor); van de Poel, I.R. (promotor); Asveld, L. (copromotor); Delft University of Technology (degree granting institution)","2022","The current regulatory regime regarding GMOs within the Netherlands and Europe does ensure safety but struggles to balance this notion with innovation. In particular, the way the Precautionary Principle (PP) is operationalized in GMO legislation has resulted in a highly precautionary culture in which there is little room to conduct research with associated uncertain risks or uncertainties – it has resulted in a culture of compliance. Although the debate on how ‘new’ genetic engineering techniques such as CRISPR should be assessed in comparison to recently exempted techniques is ongoing within the European Union (EU), this might not have any consequences for GMO regulation at all. These issues not only stifle innovation but also illustrate that the current regime is not resilient in dealing with emerging techniques. To break free from the impasse between safety and innovation, researchers should be able to learn what uncertain risks entail, for instance, through Safe-by-Design (SbD).
The main question addressed in this thesis is: “How to create an environment that is suitable to learn safely and responsibly what uncertain risks associated with emerging biotechnologies entail?”. I conclude that to enable responsible learning by means of SbD, three conditions are needed: regulatory flexibility, co-responsibility and awareness. Thereby, SbD could be a suitable approach to arrive at responsible learning, given that these three conditions are met. If not, SbD provides guidelines to lower or mitigate known risks but fails to provide a step-by-step approach to gradually learn what uncertain risks entail. This will leave a knowledge gap between known and uncertain risks, which stifles innovation and hinders risk management in ensuring future safety for people, animals and the environment.
In this thesis, we extend the theory and the applications of stochastic duality in the following two contexts:
i) evolution of particles in space-inhomogeneous settings, more precisely, processes in random environment
and processes in a multi-layer system;
ii) evolution of particles in the continuum.","Interacting particle systems; Markov Processes; Hydrodynamic limit; Stochastic Duality; Non-equilibrium steady state; Random environment; Stochastic Homogenization; Boundary driven systems; Inhomogeneous system","en","doctoral thesis","","","","","","","","","","","Applied Probability","","",""
"uuid:b46b14e3-c0cf-4aca-a21d-b7eeda6eb2df","http://resolver.tudelft.nl/uuid:b46b14e3-c0cf-4aca-a21d-b7eeda6eb2df","Surface-related multiple estimation and removal with focus on shallow water","Zhang, D. (TU Delft ImPhys/Computational Imaging)","Verschuur, D.J. (promotor); de Jong, N. (promotor); Delft University of Technology (degree granting institution)","2022","For the exploration and development of the earth, seismic surveys are acquired to provide information about the subsurface, within specifications of accuracy set by geologists and engineers, and within business constraints on budgets and turn-around time for processing and interpretation of the data. The case of seismic surveys that are acquired, partly or entirely, in shallow water is relevant for the industry worldwide. However, the acquisition and processing of shallow water seismic surveys require considerable modifications of standard procedures to meet the survey goals. In this work, the focus is on modifications in processing, in particular with respect to the handling of multiply scattered energy, assuming standard acquisition practices.
Multiple scattering is a significant wave phenomenon when seismic waves propagate through the earth. The corresponding energy, i.e., seismic multiples, is usually unwanted due to its interference with primary reflections. The traditional seismic surface-related multiple estimation and removal method is limited by both the unrecorded data reconstruction (e.g., the missing near offsets and the data gap between the crosslines) and the subsequent multiple adaptive subtraction performance. These issues become even more severe in a shallow-water environment, which in this thesis is typically defined as a depth of around 50-200 m within the exploration seismic frequency range (i.e., 2-120 Hz). Shallow water creates highly curved seismic reflection events with strong lateral amplitude variations and complex overlap between primaries and surface-related multiples. Conventional data reconstruction methods fail to tackle the missing data in shallow water, and are even more problematic in 3D. In addition, the dilemma between primary damage and surface multiple leakage during the adaptive subtraction is very much present for shallow-water data.
To attack the unrecorded data reconstruction issue, an integrated closed-loop surface-related multiple estimation (CL-SRME) and full-wavefield migration (FWM) framework is proposed for better primary and surface-related multiple estimation. This framework supports CL-SRME with good-quality near offsets in order to avoid the primary estimation failure that typically occurs in shallow-water environments. We suggest using multiples to provide information on the missing near-offset data by means of FWM, where primaries and surface multiples together create an image of the shallow subsurface. Taking advantage of FWM - with its closed-loop simultaneous primaries and multiples imaging approach - as the data reconstruction method and feeding the reconstructed near offsets to CL-SRME are the most important components to tackle the shallow-water issues in a physically consistent manner. This new integrated framework will have its main impact on a full 3D implementation with coarse sampling. Therefore, a similar cascaded framework for 3D surface-related multiple estimation in shallow-water scenarios, consisting of a data reconstruction step via 3D FWM and a surface multiple estimation step via a 3D SRME-type method, is also introduced in this thesis. Improvements in estimating surface multiples and primaries, due to good data reconstruction via FWM, have been demonstrated on both 2D and 3D synthetic data. Despite the lack of an accurate subsurface velocity model for the 2D field data, the FWM-reconstructed near-offset water-bottom reflection still improves the quality of the estimated surface multiples and primaries.
In order to mitigate the surface-related multiple adaptive subtraction dilemma, we have also introduced a two-step framework for surface multiple leakage extraction in this thesis, thus extending our seismic multiple processing toolbox. This two-step framework, based on local primary-and-multiple orthogonalization (LPMO), is both versatile and efficient for leaked multiple extraction; therefore, primaries can be better preserved without leaving much multiple energy. The initial estimation step typically uses SRME with a conservative adaptive subtraction (or any other conservative multiple estimation method), after which LPMO compensates the initially estimated primaries and multiples. Promising multiple leakage extraction has been achieved on both synthetic and field data sets. Although effective compared to standard subtraction, LPMO is slow and computationally intensive. Therefore, a fast LPMO (FLPMO) using a scaled point-by-point division, rather than the time-consuming shaping regularization-based iterative inversion, is further introduced to accelerate the whole process. Results on two different field data sets display a very similar multiple leakage extraction performance compared to LPMO, while indicating that the scaled point-by-point division in FLPMO is approximately 40 times faster than the shaping regularization-based inversion in LPMO. Moreover, the complete FLPMO framework is approximately four times faster than the LPMO framework, and is thereby now comparable in cost to the industry-standard L2 adaptive subtraction.
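The FLPMO-style scaled point-by-point division can be sketched in a few lines. The smoothing window, the idealized test signals, and the epsilon stabilization below are illustrative assumptions, not the thesis implementation; the sketch only shows how local orthogonalization weights, obtained by division instead of iterative inversion, extract multiple energy that leaked into a primary estimate.

```python
import numpy as np

def smooth(x, win=25):
    # Moving average, a simple stand-in for the local smoothing used in
    # local orthogonalization.
    return np.convolve(x, np.ones(win) / win, mode='same')

def local_orthogonalization(p0, m0, eps=1e-6):
    # Estimate local orthogonalization weights by a scaled point-by-point
    # division (in the spirit of FLPMO, rather than an iterative
    # shaping-regularized inversion), then extract the leaked multiples.
    w = smooth(p0 * m0) / (smooth(m0 * m0) + eps)
    leaked = w * m0
    return p0 - leaked, leaked  # refined primaries, extracted leakage

# Idealized 1-D demo with non-overlapping primary and multiple arrivals
t = np.linspace(0, 1, 400)
primary = np.sin(2 * np.pi * 5 * t) * (t < 0.45)
multiple = np.sin(2 * np.pi * 9 * t) * (t > 0.55)
p0 = primary + 0.3 * multiple  # primary estimate contaminated by 30% leakage
p_refined, leaked = local_orthogonalization(p0, multiple)
print(np.linalg.norm(p_refined - primary) < np.linalg.norm(p0 - primary))  # → True
```

On real data the events overlap, which is exactly why the smoothing (and, in LPMO, the regularized inversion) is needed to keep the weights stable.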
With the advance of deep learning (DL) technology, the aforementioned two issues in shallow water can also be investigated via a U-Net-based DL neural network (NN) framework. More specifically, a DL-based de-aliasing NN is introduced for the initial surface multiple estimation, where the strong data-fitting power of DL can directly project the aliased multiples, caused by coarse sampling, onto their corresponding unaliased target multiples. Meanwhile, a DL-based adaptive subtraction NN is proposed with both the total full wavefield and the predicted multiples as two input channels to overcome the adaptive subtraction dilemma. In this way, the robust physics, i.e., the estimated multiples, is used and the synthetic primary labels can be helpful to the framework. Note that the data distribution between training and test data plays a significant role in these U-Net-based applications. Training on field data and testing on nearby field data show the best performance due to a similar data distribution.
Shallow water is very challenging for surface-related multiple estimation. Physics-based deterministic approaches, e.g., FWM-based data reconstruction and LPMO, can help geophysicists better understand the essentials of the problem and partially solve them. For poorly described deterministic problems, e.g., adaptive subtraction and multiple de-aliasing, DL can find the underlying relationships that are not easily captured by deterministic methods. Combining deterministic methods and DL will result in an optimal performance; this is where further research should concentrate.
This thesis explores how programmable networks can be used to facilitate emerging low-latency services. Specifically, it combines the advantages of (1) Software-Defined Networking (SDN), a paradigm in networking that centralizes the control plane, and (2) programmable data planes, which enable on-the-fly deployment of novel algorithms to the network switches. In particular, this thesis explores which SDN controller tasks can feasibly be offloaded to the data plane, the trade-offs in doing so, and their benefits for low-latency applications. Moreover, it takes advantage of the more fine-grained monitoring possibilities of programmable data planes and incorporates these measurements into the data plane algorithms. As a result, this thesis develops a set of solutions that enable network switches to react to short-term changes in the networking traffic and act independently (or with limited input), improving the Quality of Service (QoS) of low-latency flows.
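Fine-grained in-network monitoring of this kind is commonly built on compact sketch data structures that fit in switch register arrays. As a rough illustration (in Python rather than P4, with invented flow names and thresholds), a count-min sketch can flag heavy flows as follows; this is a generic textbook structure, not necessarily the specific algorithm implemented in the thesis.

```python
import hashlib

class CountMinSketch:
    # Compact counter for approximate per-flow packet counts; on a real
    # switch this would live in register arrays updated per packet.
    def __init__(self, width=64, depth=3):
        self.width, self.depth = width, depth
        self.tables = [[0] * width for _ in range(depth)]

    def _index(self, key, row):
        # One independent hash per row, folded to a column index.
        h = hashlib.blake2b(f'{row}:{key}'.encode(), digest_size=4)
        return int.from_bytes(h.digest(), 'big') % self.width

    def add(self, key, count=1):
        for row in range(self.depth):
            self.tables[row][self._index(key, row)] += count

    def estimate(self, key):
        # The minimum over rows upper-bounds the true count.
        return min(self.tables[row][self._index(key, row)]
                   for row in range(self.depth))

# Flag flows whose estimated packet count crosses a threshold
cms = CountMinSketch()
packets = ['flowA'] * 120 + ['flowB'] * 5 + ['flowC'] * 8
for pkt in packets:
    cms.add(pkt)
heavy = sorted(f for f in {'flowA', 'flowB', 'flowC'} if cms.estimate(f) >= 100)
print(heavy)
```

The appeal for data planes is that both update and query are a constant number of hashes and memory accesses, independent of the number of flows.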
First, we investigate the limitations of programmable switches and ways to overcome them by developing an application to detect heavy hitters (i.e., flows that consume the most resources in the network). Next, we explore the concept of network slicing, i.e., reserving part of a physical network for a specific service. We demonstrate that network switches can combine data plane measurements and limited (preconfigured) input from the central controller to enable elasticity, i.e., the ability to automatically scale the assigned network resources based on the flows' requirements with negligible delay. Next, we analyze the co-existence and interactions between flows using different congestion control algorithms and/or having different RTTs. We use this information to develop a data plane algorithm to improve their interactions. Finally, we demonstrate how congestion detection and avoidance can be achieved in the data plane without any assistance from the end-hosts.","Programmable data-planes; Software-Defined networking; Programmable networks; Low-latency applications","en","doctoral thesis","","","","","","","","","","","Embedded Systems","","",""
"uuid:85e7e4f5-77dd-40b1-bb87-084d12641630","http://resolver.tudelft.nl/uuid:85e7e4f5-77dd-40b1-bb87-084d12641630","Longitudinal Studies in Travel Behaviour Research","de Haas, M.C. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Chorus, C.G. (promotor); Kroesen, M. (promotor); Delft University of Technology (degree granting institution)","2022","Mobility is an important part of daily life. With modern mobility systems, people have access to a range of transport modes allowing them to basically reach any destination they want. Although people often have multiple options to choose from, personal mobility is dominated by motorized road transport in many countries and cities, also in the Netherlands, owing to the ease of use and high level of flexibility. This popularity poses challenges for governments to keep their countries and cities accessible, attractive, safe and liveable since motorized road transport comes with several negative effects such as increased congestion, damage to the environment, negative effects on human health due to emissions, inefficient use of space and reduced liveability of cities.
This thesis consists of studies on several mechanisms behind travel behaviour change towards sustainable travel modes, based on a large-scale longitudinal travel survey: the Netherlands Mobility Panel (MPN). As this panel has been operating for several years and collects a wide range of relevant information from its respondents, it allows studying numerous aspects of travel behaviour (change). This thesis will help policy makers understand how travel behaviour changes and provide them with knowledge to promote travel behaviour change towards a more sustainable mobility system. The focus is on four topics that are imperative to achieve this goal: the effects of life events on travel behaviour, new technologies to promote a mode shift away from the car (in this case, the e-bike), the links between personal health and active travel, and the effects of the COVID-19 pandemic on mobility.
To correctly study these topics, longitudinal data are needed, as we want to infer the direction of effects from the data rather than making assumptions about this direction, with the risk of drawing wrong conclusions (e.g., we do not know whether active travel has an effect on personal health or whether the effect runs from personal health to active travel). While these longitudinal data are ideally suited to study travel behaviour changes, it is crucial that the data quality is guaranteed. To address one possible cause of low data quality, the thesis includes a fifth study focused on the notion of soft refusal, which describes the tendency of some respondents to use a strategy to lower their response burden, e.g. by claiming they did not leave their house even though they actually did.
A Cutter Suction Dredger is a floating vessel which removes sand, clay or soft rock from sea or river beds. It has a cutter head with pickpoints attached to it. As the cutter head rotates and swings, the pickpoints are pushed into the soil, disintegrating it. The soil enters the cutter head, where it is mixed with water. From inside the cutter head it is hydraulically transported to the vessel via the suction mouth and pipe. The rotational speed of the cutter head can be varied by the vessel operator. Increasing the rotational velocity and swing speed yields more production. However, this leads to an outflow of water and dredged material near the ring, spilling the soil.
When the Cutter Suction Dredger is employed for cutting sand, the sand particles are easily kept in suspension by the rotating motion before they are sucked up. A cutter suction dredger is also used for cutting rock, which produces large pieces that are more influenced by gravity and the centrifugal force. Due to these forces, the pieces are thrown out of the cutter head more easily than smaller sand particles. The pieces of rock which are thrown out of the cutter are considered spilled. This spillage is unfavourable since this material has to be dredged a second time or is left on the sea floor. When the material is left on the sea floor, a thicker layer of soil needs to be dredged to create the same navigable depth.
To reduce spillage, the processes contributing to spillage should be quantified in order to design a better cutter head or working method. This dissertation contributes to this goal by presenting a validated model for simulating the spillage of rock particles inside a rotating cutter head. Such a model can be used to quantify different processes and test new cutter head designs…
Numerous optical techniques and instruments, such as microscopes, wavefront sensors, optical comparators, and interferometers, are now available that achieve subnanometer precision. By studying the properties of the electromagnetic field that is generated from the interaction between the probe and the unknown target, it is nowadays possible to retrieve intricate parameters of the object, such as its shape and roughness. Commonly, measurement in the far-field regime is adopted since it is noninvasive. In the far field, for successful information retrieval of features smaller than the Rayleigh limit, the inverse problem of scatterometry (optical metrology with scattered light) requires a priori information. In many cases, we assume that the target under study is guaranteed to exist and is partially known. For example, one can deposit particles of certified material on top of a surface, measure the intensity of the scattered field, and, by combining this information with electromagnetic models, deduce parameters such as the size and position of the scatterer. However, when the target becomes extremely small, for example, a fraction of the wavelength, the question arises: given the measuring instrument, would we still detect this target? The answer is yes, if the sensitivity of the instrument is high enough. Many areas of optics and physics rely on the detection and localization of tiny objects on top of a surface. The main examples include contamination, nanofabricated features and defects in the semiconductor industry; the study of viruses and bacteria in the biological and medical sciences; and air and water pollution with toxic particles in environmental science.
In this thesis, we will concentrate on the application of scatterometry for isolated nanoparticle detection aimed at quality control in the semiconductor industry.
Yet, WCRS is a specialist branch within the coastal engineering and user community. The technique typically requires a certain amount of user expertise and has mostly been applied in research settings. While data can be retrieved on the kilometre scale with X-band radars and cameras, it was historically difficult to scale up WCRS to entire coasts, which was a reason to discontinue its application in the Netherlands. Besides land-based instruments (i.e., X-band radars, fixed camera stations), airborne UAVs and space-borne satellites can meanwhile also be used to record a wave field, making WCRS more flexible and scalable. These recording instruments have also become more accessible. Moreover, DIAs – the software required to analyse the wave recordings – can be used interchangeably on data of these different instruments. This means that WCRS becomes potentially attractive to a broad user community of coastal managers, the industry and the coast guard. However, DIAs still restrict broad usage of WCRS: while an important step has been taken in the open accessibility of DIAs, much is still to be gained in their handling and computational speed. This study aims to improve upon that by building towards operational, self-adaptive and intelligent algorithms, which can provide maps of depth, near-surface currents and wave hydrodynamics on-the-fly. For this purpose, video data from a variety of instruments (fixed camera station, UAV, X-band radar, satellite) on different spatial scales O(100 m², 1 km², 10 km², 100 km²) and field sites around the world (Netherlands, UK, USA, Australia, France) are analysed. Combining rapid processing capabilities with broad applicability, this study forms a stepping stone for a potentially broad WCRS user community. The analyses are presented going from land-based to airborne to space-borne WCRS. 
This is done in three stages from (1) applying an operational DIA on XBand radar data, to (2) applying an on-the-fly DIA on camera and UAV data, to finally (3) applying a DIA on temporally sparse satellite data.
First, a DIA named XMFit (X-Band Matlab Fitting) is introduced, which is robust, accurate and fast enough for operational use. This is achieved through an iterative procedure that selects the best result among a series of depth and near-surface current estimates. For this study, video data from X-band radars are analysed. Focusing on depth estimates, XMFit is validated for two case studies in the Netherlands: (1) the “Sand Engine”, a beach mega-nourishment at a uniform open coast, and (2) the tidal inlet of the Dutch Wadden Sea island Ameland, characterizing a more complex coast. Considering both sites, the algorithm performance is characterized by a spatially averaged depth bias of −0.9 m at the Sand Engine (corresponding to an 18 h snapshot of the field site) and a time-varying bias of approximately −2–0 m at the Ameland Inlet (corresponding to a one-year time evolution with varying hydrodynamic conditions). When compared to in-situ depth surveys the accuracy is lower, but the time resolution higher. Dutch in-situ surveys typically occur annually, while depth estimates from the Ameland tidal inlet are produced every 50 min by an operational system using a navigational X-band radar. This enables monitoring of the placement of a 5 Mm³ ebb-tidal delta nourishment – a pilot measure for coastal management. Volumetric changes in the nourishment area over the year 2018, occurring at 7 km distance from the radar, are estimated with an error of 7%. Depth errors statistically correlate with the direction and magnitude of simultaneous near-surface current estimates. Additional experiments on Sand Engine data demonstrate that depth errors may be significantly reduced using an alternative spectral approach and/or a Kalman filter.
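The physical core of such depth inversion is the linear dispersion relation of surface gravity waves, ω² = gk tanh(kh), which links an observed wave frequency and wavenumber to the local depth. A minimal sketch of that single inversion step (with an invented observation, and none of XMFit's spectral fitting or iterative selection):

```python
import math

g = 9.81  # gravitational acceleration (m/s^2)

def depth_from_dispersion(omega, k):
    # Invert the linear dispersion relation omega^2 = g*k*tanh(k*h)
    # for the water depth h, given an observed frequency and wavenumber.
    ratio = omega**2 / (g * k)
    if not 0 < ratio < 1:
        raise ValueError('observed pair is not consistent with finite depth')
    return math.atanh(ratio) / k

# Hypothetical observation: a 10 s swell (omega = 2*pi/10) with an
# observed wavelength of 80 m (k = 2*pi/80)
omega = 2 * math.pi / 10
k = 2 * math.pi / 80
print(round(depth_from_dispersion(omega, k), 1))  # → 7.2
```

In practice many frequency-wavenumber pairs are extracted from the video spectrum and combined, which is where the fitting and quality selection of a full DIA come in.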
Having demonstrated the potential of DIAs for operational application, the next step is to design an algorithm that can self-adapt to video from any field site and can process it on-the-fly. To do so, a DIA is designed whose code architecture for the first time includes the Dynamic Mode Decomposition (DMD) to reduce the data complexity of wave-field video. The DMD is paired with loss functions to handle spectral noise, and a novel spectral storage system and Kalman filter to achieve fast-converging measurements. The algorithm is showcased for videos from ARGUS stations and drones recorded at field sites in the USA, UK, Netherlands, and Australia. The performance with respect to mapping bathymetry is validated using ground truth data. It is demonstrated that merely 32 s of video footage is needed for a first mapping update with average depth errors of 0.9–2.6 m. These further reduce to 0.5–1.4 m as the videos continue and more mapping updates are returned. Simultaneously, coherent maps for wave direction and celerity are achieved, as well as maps of local near-surface currents. The algorithm is capable of mapping the coastal parameters on-the-fly and thereby offers analysis of video feeds, such as from drones or operational camera installations. Hence, the innovative application of analysis techniques like the DMD enables both accurate and unprecedentedly fast coastal reconnaissance.
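The Dynamic Mode Decomposition at the heart of this algorithm can be sketched in a few lines: it fits a best-fit linear operator between consecutive video frames and reads oscillatory modes and frequencies off its eigenvalues. The synthetic 'wave video' below is an invented stand-in for real footage; the code shows only the exact-DMD linear algebra, not the loss functions, spectral storage, or Kalman filtering of the thesis.

```python
import numpy as np

def dmd(X, rank):
    # Exact DMD: fit a linear operator A with X2 ~ A @ X1 in a
    # rank-truncated subspace; eigenvalues carry the temporal dynamics,
    # modes carry the spatial patterns.
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T / s
    eigvals, W = np.linalg.eig(A_tilde)
    modes = X2 @ Vh.conj().T / s @ W
    return eigvals, modes

# Synthetic 'wave video': one traveling wave on 100 pixels over 40 frames
x = np.linspace(0, 2 * np.pi, 100)
frames = np.array([np.sin(x - 0.2 * t) for t in range(40)]).T
eigvals, modes = dmd(frames, rank=2)
print(np.abs(eigvals))  # both magnitudes ~1: a purely oscillatory mode pair
```

Eigenvalues on the unit circle correspond to non-decaying oscillations; their phase gives the wave frequency, from which depth can subsequently be inverted via the dispersion relation.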
With a skilled, intelligent DIA at hand, the question remains whether it can also be used on satellite imagery, as that would further broaden the application range. DIAs commonly analyse video from shore-based camera stations, UAVs or X-band radars with durations of minutes and at framerates of 1–2 fps to find relevant wave frequencies. However, these requirements are typically not met by raw, temporally sparse satellite imagery. To overcome this problem a preprocessing step is utilized. Here, a sequence of 12 images of Capbreton, France, collected over a period of ∼1.5 min at a framerate of 1/8 fps by the Pleiades satellite, is augmented to a pseudo-video with a framerate of 1 fps. For this purpose a recently developed method is used, which considers spatial pathways of propagating waves for temporal video reconstruction. The resulting video is subsequently processed with the self-adaptive DIA. The combination of image augmentation with a frequency-based depth inversion method shows potential for broad application to temporally sparse satellite imagery and thereby aids in the effort towards broad usage of WCRS for mapping coastal bathymetry data around the globe.
By improving DIAs and their application to different instruments, this study has helped to increase the technological readiness of WCRS and its potential to be adopted by end-users. It was shown that WCRS can be performed on wave field records of land-based, airborne and space-borne instruments, and therewith on scales ranging from O(100 m²) (fixed camera) to O(100 km²) (X-band radar, satellite). The cost of WCRS is minor, as existing navigational X-band radars, affordable UAVs and cameras, and accessible satellite data can be used. X-band radars can operationally monitor complex coastal environments and recognize morphological trends, UAVs and cameras can be used for fast lean-and-mean mapping of coastal bathymetry, and by estimating depths from satellite imagery valuable data can be collected in otherwise data-poor environments. Yet, further steps should be taken in the accessibility, multifunctionality, quality, robustness and user-friendliness of WCRS. The key takeaway for effective WCRS monitoring is that future developments should strive towards integrated, self-adaptive software, which gives prompt visual response and requires little user expertise. These measures reduce the difficulty of learning WCRS, increase its compatibility with data from different instruments (X-band radars, cameras, UAVs, satellites) and thereby enable relatively easy coastal measurements. As a consequence WCRS becomes more adoptable by the coastal remote sensing community. With the exponential growth of data volumes worldwide, future data clouds may facilitate storage and offer future perspectives for online integration of data with numerical models and modern data science techniques like neural networks. 
This may create new possibilities for understanding system dynamics and thereby further aid decision makers in coastal management, the industry and the coast guard.","coastal remote sensing; mapping; depth inversion; wave field video; operational monitoring; on-the-fly processing; self-adaptive algorithms; XBand-radar; camera; UAV; drone; satellite","en","doctoral thesis","","978-94-6384-377-5","","","","","","","","","Coastal Engineering","","",""
"uuid:828cef26-6fae-4a12-80a6-b83aed3a8e90","http://resolver.tudelft.nl/uuid:828cef26-6fae-4a12-80a6-b83aed3a8e90","Modeling of Continuous Physical Vapor Deposition: From Continuum to Free Molecular Flow","Vesper, J.E. (TU Delft ChemE/Transport Phenomena)","Kleijn, C.R. (promotor); Kenjeres, S. (promotor); Delft University of Technology (degree granting institution)","2022","Physical Vapor Deposition (PVD) is the resublimation of a substance onto a cold surface, coating it with a thin solid layer. PVD coatings are utilized in industry to modify surface properties and appearance. Since the industrial process requires vacuum conditions, it has mainly been conducted as a batch process. Recently, PVD has been considered a promising alternative coating technology to hot-dip galvanization for applying a corrosion-protective coating on steel. However, a continuous PVD process for manufacturing protective coatings for strip steel on an industrial scale is still missing.
First approaches suggest the following process: The steel strip is pulled into a vacuum chamber through air-tight seals to ensure a non-reactive coating atmosphere and avoid impurities; then its surface is treated to obtain high adhesion during the coating process; afterwards it passes a Vapor Distribution Box (VDB) from which vapor jets (or plumes) emerge and coat the steel surface; eventually the strip leaves the vacuum chamber again via air-tight seals, is coiled and shipped.
To make this process usable on a large scale - or even superior to galvanization - multiple challenges need to be overcome: ensuring the tightness of the seals, cleaning the strip, preventing stray coating of the vacuum chamber, guaranteeing a uniform coating thickness, and providing a uniform high vapor mass flow to maintain a high speed of the production line.
This thesis tackles the last challenge by modeling the vapor transport both inside the VDB and inside the vacuum chamber.
First, the flow inside the VDB is modeled using a SIMPLE-/PISO-based algorithm for transonic flows. To account for the evaporation at the melt surface, a boundary condition for the inlet pressure is implemented based on the Hertz-Knudsen equation. The total mass flow rate for different melt temperatures is compared to experimental values as well as an analytical, isentropic estimation. Furthermore, the sensitivity of the model to material properties and process conditions is studied.
The total mass flow rate of the system is found to depend on evaporation and choking. With higher melt temperatures the total mass flow rate increases. The trend found in the simulations resembles the one from the experiment. Both yield only 33%–54% of the mass flow rate estimated by the analytical isentropic relation. This low efficiency improves with higher melt temperature. A comparison of the pressure loss across the VDB reveals that the main losses occur in the viscous boundary layer in the nozzles connecting the VDB with the vacuum chamber. The simulation overpredicts the experimental result by a factor of 1.3. This may be due to the assumption of an idealized evaporation coefficient of unity; a value of approximately 0.3 would produce a better match between simulations and experiments. Impurities found in the experiment may cause this reduction of the evaporation coefficient.
When expanding from the nozzles into the vacuum chamber, the flow accelerates to supersonic speeds and rarefies. We study the interaction of two planar sonic plumes, which causes a shock next to the interaction plane. This in turn produces peaks in the deposition rate and thus in the coating. The Direct Simulation Monte Carlo (DSMC) method is applied to the flow, which ranges from continuum at the nozzles to rarefied and free molecular flow downstream. The results are compared to the analytical effusion solution and the inviscid continuum solution from a Riemann solver. The expansion and shock regions of the DSMC simulation are visualized with the Method of Characteristics (MOC). The mass flow distribution is studied as a function of the degree of rarefaction, the nozzle separation distance and the inclination of the nozzles.
The DSMC result of plume interaction outside the VDB closely resembles the inviscid continuum solution at low degrees of rarefaction. While the flow structure with expansions and shocks coincides, deviations are apparent in the actual number density, velocity and temperature, especially in the shock region. With higher rarefaction, the shock structure diminishes and the flow field approaches the free molecular flow field. However, the rarefied flow field is not bounded by the inviscid continuum and free molecular flow fields, but may exceed them in both the deposition peaks and the temperature peak in the shock region.
Using the MOC for visualization reveals that with higher rarefaction the shock bends away from the interaction plane, which can be explained by the increased temperature in the secondary expansion. While the shock location shifts with the nozzle separation distance, the locations collapse onto one when scaled with the nozzle separation distance. Bending the nozzle outlets towards each other produces a stronger shock starting further upstream, which in turn causes a stronger secondary expansion and thus a smoother deposition.
In addition to studying the impact of geometry changes in the PVD setup, the effect of adding a light, inert carrier gas on the plume interaction and the resulting deposition uniformity is investigated. To this end, the carrier-gas mole fraction is varied at a given Knudsen number. Species separation focuses the heavy species along the primary axes, whereas the light inert carrier gas is scattered towards the periphery. Due to the higher mean molecular weight, the speed of sound decreases and consequently the interaction shock occurs farther downstream, is less bent and weaker, producing a more uniform deposition profile. Desirable side effects of the carrier gas are less stray deposition and a higher conductance of the coating material from the inlet nozzle.
The last part of the thesis focuses on the numerical method, since DSMC is accurate but computationally costly. To speed up the algorithm, the collision step in DSMC is substituted with a kinetic relaxation using the Bhatnagar-Gross-Krook (BGK) operator. The choice of the target distribution for the relaxation is crucial: the Maxwellian velocity distribution produces an incorrect Prandtl number; the Ellipsoidal-Statistical BGK (ES-BGK) model corrects the Prandtl number by taking the stress into account; the Shakhov model (S-BGK) corrects it by considering the heat flux vector. The implemented models are verified against literature data and evaluated for their accuracy in simulating the interacting-plumes case. In addition, we evaluated a hybrid coupling of the various kinetic relaxation models in dense, near-continuum regions with DSMC in rarefied, non-continuum regions. The switching criterion for the hybrid coupling was the gradient-length local Knudsen number. The implemented kinetic models compare well to literature data for rarefied Poiseuille flow. Their less strict resolution criteria lower the computational cost to approximately 30% of that of DSMC. For the planar jet interaction, the BGK model (using the Maxwellian target distribution) overestimates the shock strength and the S-BGK model overpredicts the diffusion of the shock, whereas the ES-BGK model's results are in good agreement with the DSMC results. This indicates that the velocity sorting and breakdown of temperature isotropy in the expansions have a more significant influence on the flow field than the shock, which skews the velocity distribution. Coupling the kinetic models with DSMC in the highly rarefied regions improves the flow field for the BGK and S-BGK models, but not significantly.
In short, this thesis examines the influence of process conditions, geometry and carrier-gas use on the mass flow rate and deposition uniformity in continuous PVD for coating steel strips with anti-corrosive coatings. It provides modeling tools for the mass transport both inside and outside the VDB which can be used for further investigation and optimization.","CFD; DSMC; Rarefied gas flow; shock waves; Fluid dynamics","en","doctoral thesis","","978-94-6384-378-2","","","","","","2023-04-01","","","ChemE/Transport Phenomena","","",""
"uuid:4e3e9128-8c27-41b7-8229-934e2e0834f2","http://resolver.tudelft.nl/uuid:4e3e9128-8c27-41b7-8229-934e2e0834f2","Buckling, Post-buckling and Vibrations of Composite Plates under Combined Thermomechanical Loads","Gutierrez Alvarez, J. (TU Delft Aerospace Structures & Computational Mechanics)","Bisagni, C. (promotor); Delft University of Technology (degree granting institution)","2022","A big technological leap in Carbon Fibre Reinforced Plastics (CFRPs) has brought to the market materials that are able to operate under severe thermomechanical loading conditions and yet remain lightweight. This is of high interest for new developments in aerostructures, such as supersonic airliners or hydrogen aircraft. A continuous need for improvement in weight efficiency motivates this research, which delves into the possibility of using thermomechanical buckling to increase structural performance in future CFRP aerostructures. In this regard, it is well understood that plates that buckle under mechanical loads can operate safely, and they can even carry a significant amount of load before they experience material failure. Since future high-speed composite aircraft will have to endure thermomechanical loads, it is fair to consider that some parts of these new vehicles could, to some degree, be capable of operating under thermomechanical post-buckling.
The goal of this PhD dissertation is to advance the knowledge on thermomechanical buckling of composite plates, by investigating diverse aspects of this phenomenon. In particular, special attention is given to studying the occurrence of mode jumps in post-buckling. Mode jumping phenomena alter the shape of a plate and can impact certain aspects of its functionality, e.g. aerodynamics, yet they could have potentially interesting future applications if appropriately controlled. Three kinds of methodologies, analytical, numerical and experimental, have been followed during this research. However, a clear emphasis has been put on the latter, since this thesis is mostly focused on the design and execution of experiments on thermomechanical buckling of composite plates.
This dissertation is composed of four independent investigations, the first of which is an analytical study on linear thermomechanical buckling, while the other three are experimental studies on deep thermomechanical post-buckling behaviour. The four studies are presented in Chapters 2 to 5 of this thesis. In Chapter 2, an analytical, closed-form solution for the study of linear buckling of thin, symmetric and balanced composite laminated plates subjected to thermomechanical loads was derived. The formulation is based on a Duhamel-Neumann constitutive approach and laminate theory to derive the plate governing equations. The mechanical load was introduced in the formulation as a plate size variation, and the heating load was implemented as a uniform temperature increment. The formulation was limited to simply supported boundary conditions. The obtained formula relates critical buckling temperatures to the initially applied plate size variation. In Chapter 3, a numerical-experimental study of thermal buckling under heating is presented. A set of parametric analyses was performed to identify composite plates that present a mode jump when heated. Two composite plates were identified and subsequently tested in a newly developed test setup for thermal buckling of composite plates. The setup was devised around a frame with a low coefficient of thermal expansion, so that the plate could experience buckling and mode jumping when heated, and it successfully reproduced thermal buckling and mode jumping in the tested plates. Chapter 4 reports a combined numerical-experimental study of composite plates, analysing the interaction between mechanical and thermal loads in relation to buckling and mode jumping. A novel test setup for thermomechanical testing was designed. This setup made use of the frame from the previous chapter to restrain thermal expansion, and by applying compression to the frame, mechanical shortening could be indirectly applied to the plate.
In this way, it was possible to study interactions between thermal and mechanical loading states. Experimental results revealed a linear decrease of the mode-jumping temperature for increasing levels of compression, and the same held when the order of application of the loads was inverted. Chapter 5 presents an experimental investigation on vibrations of heated composite plates leading to thermal buckling. The experiments were performed with two main goals: the application of the Vibration Correlation Technique for the detection of thermal buckling in composite plates, and the exploration of the frequency variations before and after the occurrence of a mode jump in the post-buckling regime. Two variations of the test setups used in the previous chapters were employed. The setups shared the thermal expansion frame but differed in the type of heating source and the mechanical boundary conditions. The plates were excited acoustically using a loudspeaker, and the vibration frequencies were monitored and acquired using a laser vibrometer. Buckling temperatures were successfully predicted using the Vibration Correlation Technique. Changes in frequency, potentially related to the occurrence of the mode jump, were also detected.
This thesis is part of a project that aims to develop an additional option for passenger vehicle mass reduction, more specifically by replacing steel in the exhaust system with fibre-reinforced plastic. The principle behind this solution is that fibre-reinforced plastic has better mechanical properties per kilogram of material than steel. Yet, no plastic could endure direct exposure to exhaust gas flows because of the maximum gas temperature of 800–1000 °C....
The goal of this thesis is to manipulate fluorescent molecules using low-energy electrons for superresolution microscopy in the vacuum of a scanning electron microscope (SEM). By manipulating these molecules and understanding the electron-induced effects, a versatile platform for LM could become available. Using low-energy electrons of a few electronvolts, the number of different electron-induced mechanisms would become limited and thus more controlled. For all of this to work, a setup suitable for integrated CLEM, with electron energies available down to a few eV, needs to be built. Furthermore, the electron-induced mechanisms for fluorescent molecules, and their effects on the fluorescence, should then be understood, characterized, and verified as suitable for LM.
In chapter 2, we show how we modified a commercially available platform for integrated microscopy to achieve electron landing energies down to 0 eV, with 0.3 eV energy spread. For this we use a retarding field by applying a negative voltage to the setup’s stage. We show, by reflecting the electron beam and detecting it with an in-column detector, that we can determine the electron beam's landing energy and energy spread. In addition to this, we show that the setup improves the signal acquired for tissue sections optimized for simultaneous correlative microscopy. These tissue sections often have lower signals than samples optimized for one imaging modality only. For in-resin samples especially, this leads to a poor EM signal for tissue sections 100 nm thick or thinner. Using the negative stage bias, we show that these in-resin CLEM samples can be imaged without extremely long dwell times or high beam currents, even for ultrathin (50 nm) sections.
We use the setup presented in chapter 2 to study the effect of different electron landing energies down to a few eV on different fluorescent molecules in chapter 3. We find that fluorescent molecules can act as reporters for different electron-molecule reaction mechanisms. We show how electron irradiation of perylene-diimide (PDI) leads to a remarkable recovery in fluorescence after irradiation. We monitor this recovery continuously for different electron landing energies down to 0 eV and find, based on the strength of the recovery component, that electron attachment to a transient anionic dark state is the main contributor to this process. This transient dark state can be manipulated by depositing the emitters on a conducting substrate, or by using a different dye whose anionic dark state can be excited using a different excitation wavelength. With Rhodamine B ITC, we show an instantaneous recovery of the electron-induced dark state close to 0 eV landing energies using a short 405 nm excitation. Finally, we also demonstrate the versatility of low-energy electron irradiation by showing a dye that increases in fluorescence after electron irradiation.
Based on the electron-induced dynamics reported in chapter 3, we aim to determine what sort of strategy would be feasible for superresolution microscopy in the vacuum of a SEM. In chapter 4, we assess the resolution and quality of the reconstructed images of different molecular arrangements using simulations and different localization microscopy analysis techniques. We studied how extended photobleaching lifetimes in vacuum could improve easy-to-implement bleaching-assisted localization techniques, and how low-energy electron-induced fluorescence fluctuations could be distinguished using Haar wavelet kernel filters and used to improve the resolution. We find that the latter approach results in both a higher resolution and a higher number of correct localizations, even if the photoswitching is switched off and only photobleaching occurs. We also propose new techniques relying on sparsity in each frame, using instantaneous photoswitching of electron-induced dark states or temporarily switching emitters on with electrons. In general, we find that these approaches lead to a higher resolution of tens of nanometres, but that the currently available experimental photoswitching parameters are insufficient for resolving small molecular arrangements down to tens of nanometres.
The results presented in chapter 4 show promising prospects for superresolution microscopy in the vacuum of a SEM. However, with the setup presented in chapter 2 and the currently experimentally verified photoswitching parameters, resolutions down to tens of nanometres are still unfeasible. In chapter 5, we present an integrated microscope modified with a laser and an easy-to-customize excitation and imaging path. The increased laser power should allow for higher-accuracy localizations, but also allows for faster image acquisition. By introducing a photomultiplier tube in the imaging path, we can monitor the electron-induced dynamics down to sub-millisecond timescales. Using the experimental approach presented in chapter 3, we quantify the fluorescence recovery timescales of perylene diimide for electron landing energies ranging from 1000 eV down to 2 eV. We find that the fluorescence recovery can be described with a double-exponential behaviour characterized by time constants varying between 5–150 ms and 0.2–2 s, respectively. For 2 eV electron landing energy, close to the resonance energy of electron attachment, we find a reduction in the slower exponential recovery term. Potential mechanisms responsible for these observed dynamics and follow-up experiments are then discussed.
With the results presented throughout this thesis we show how low-energy electrons could be used to manipulate fluorescent molecules to achieve higher optical resolutions in an integrated light and electron microscope. While the first steps have been made, considerable effort is still needed to (i) understand the electron-induced dynamics and optimize fluorescent dyes, and (ii) perform the electron-induced dynamics on biological specimens. In our outlook chapter, we discuss experimental approaches for these next steps, and other applications of low-energy electrons in integrated microscopy beyond superresolution microscopy.","","en","doctoral thesis","","978-94-6384-372-0","","","","","","","","","ImPhys/Microscopy Instrumentation & Techniques","","",""
"uuid:0bc3677d-dc15-4f93-a079-8d600e967c5a","http://resolver.tudelft.nl/uuid:0bc3677d-dc15-4f93-a079-8d600e967c5a","Interaction-Aware Motion Planning in Crowded Dynamic Environments","Ferreira de Brito, B.F. (TU Delft Learning & Autonomous Control)","Alonso Mora, J. (promotor); Babuska, R. (promotor); Delft University of Technology (degree granting institution)","2022","Autonomous robots will profoundly impact our society, making our roads safer, reducing labor costs and carbon dioxide (CO2) emissions, and improving our quality of life. However, to make that happen, robots need to navigate among humans, which is extremely difficult. Firstly, humans do not explicitly communicate their intentions and use intuition to reason about others' plans to avoid collisions. Secondly, humans exploit interactions to navigate efficiently in cluttered environments. Traditional motion planning methods for autonomous navigation in human environments use geometry, physics, topologies, and handcrafted functions to account for interaction, but plan only one step ahead. In contrast, trajectory optimization methods allow planning over a prediction horizon, accounting for the evolution of the environment. Yet, these methods scale poorly with the number of agents and assume structured scenarios with a limited number of interacting agents. Learning-based approaches overcome the latter by learning a policy's parameters offline, e.g., from data or simulation. However, to date, learned policies show poor performance and unpredictable behavior when employed in reality, as the conditions differ from the learning environment. Moreover, learning-based approaches do not guarantee collision avoidance or feasibility with respect to the robot dynamics. Therefore, this thesis aims to develop motion planning algorithms generating online predictive and interaction-aware motion plans to enable robots' safe and efficient navigation among humans.
The first main contribution of this thesis is a predictive motion planning algorithm for autonomous robot navigation in unstructured environments populated with pedestrians. The proposed method builds on nonlinear model predictive contouring control, proposing a local formulation (LMPCC) to generate predictive motion plans in real time. Static collision avoidance is achieved by constraining the robot's positions to stay within a set of convex regions approximating the surrounding free space computed from a static map. Moreover, an upper bound for the Minkowski sum of a circle and an ellipse is proposed and used as an inequality constraint to ensure dynamic collision avoidance, modeling the space occupied by the robot as a circle and that of dynamic obstacles, for instance pedestrians, as an ellipsoid. The LMPCC approach is analyzed and compared against a reactive and a learning-based approach in simulation. Experimentally, the method is tested fully onboard a mobile robot platform (Clearpath Jackal) and an autonomous car (Toyota Prius).
In real scenarios, pedestrians do not explicitly communicate their intentions, and therefore, LMPCC uses a constant velocity (CV) model to estimate their future trajectories. However, CV predictions ignore the environment constraints, e.g., static obstacles, the interaction between agents, and the inherent uncertainty and multimodality of the pedestrians' motion. Hence, this thesis presents a variational recurrent neural network architecture (Social-VRNN) for interaction-aware and multi-modal trajectory predictions. The Social-VRNN fuses information about the pedestrian's dynamics, static obstacles, and surrounding pedestrians and outputs the parameters of a Gaussian Mixture Model (GMM). A variational Bayesian learning approach is employed to learn the model's parameters by optimizing the evidence lower bound (ELBO). Experimental results on real and simulated data are presented, showing that our model can effectively learn to predict multiple trajectories capturing the different courses that a pedestrian may follow.
Enhancing the LMPCC method with interaction-aware predictions is insufficient to enable safe and efficient autonomous navigation in cluttered environments. The LMPCC is a local trajectory optimization method and considers a limited planning horizon to enable online motion planning. Consequently, LMPCC plans can be only locally optimal, which may result in catastrophic failures, such as deadlocks and collisions, in the long term. To overcome the latter, providing global guidance, e.g., cost-to-go heuristics, to the optimization problem is an option. In contrast to optimization-based methods, learning-based methods, e.g., deep reinforcement learning (DRL), allow learning policies that optimize long-term rewards in an offline training phase. Therefore, this thesis introduces two novel frameworks enhancing state-of-the-art online optimization-based planners with learned global guidance policies, applied to mobile robot navigation in cluttered environments and to autonomous vehicles driving in dense traffic.
Firstly, the Goal-Oriented Model Predictive Controller (GO-MPC) is introduced, tackling the problem that the robot’s global goal is often located far beyond the planning horizon, resulting in locally optimal motion plans. The framework proposes to use DRL to learn an interaction-aware policy providing the next optimal subgoal position to an MPC planner. The recommended subgoal helps the robot progress towards its end goal and accounts for the expected interaction with other agents. Based on the recommended subgoal, the MPC planner then optimizes the inputs for the robot, satisfying its kinodynamic and collision avoidance constraints. Simulation results are presented demonstrating that GO-MPC enhances navigation performance in terms of safety and efficiency, i.e., travel time, compared to purely MPC-based and deep RL frameworks in mixed settings, i.e., with cooperative and non-cooperative agents, and in multi-robot scenarios.
Secondly, the interactive Model Predictive Controller (IntMPC) for safe navigation in dense traffic scenarios is presented. While GO-MPC learns a subgoal policy, IntMPC learns a velocity reference policy, exploiting the connection between human driving behavior and velocity changes when interacting. Hence, the IntMPC approach learns, via deep reinforcement learning (RL), an interaction-aware policy providing global guidance as a velocity reference, allowing the planner to control and exploit the interaction with other vehicles. Simulation results are presented demonstrating that the learned policy can reason about the cooperativeness of other vehicles and enables the local planner to pro-actively merge into dense traffic with interactive behavior while remaining safe in case the other vehicles do not yield.
Overall, this thesis contributes to enhancing autonomous robots with predictive behavior, the ability to infer others' trajectories, and the capacity to operate in cluttered environments.
However, two important limitations remain: the proposed motion planner computes open-loop interaction-aware motion plans and does not account for interaction in closed loop, and the prediction model and guidance policies rely solely on offline learning. Future work may investigate how to account for interaction in the planning stage and how online data streams can be used to improve the navigation algorithm's performance over time.","Motion planning; Machine learning; Interaction; Collision avoidance; Autonomous Vehicles; Navigation among humans; Decision-making; Dynamic environments; Reinforcement Learning; Trajectory prediction; mobile robots","en","doctoral thesis","","978-94-6366-605-3","","","","","","","","","Learning & Autonomous Control","","",""
"uuid:0149cbb5-e3fe-4bc5-8333-8f888638055e","http://resolver.tudelft.nl/uuid:0149cbb5-e3fe-4bc5-8333-8f888638055e","Building Affordable, Durable and Desirable Earthen Houses: Construction with Materials Derived from Locally Available Natural and Biological Resources","Kulshreshtha, Y. (TU Delft Materials and Environment)","Jonkers, H.M. (promotor); Vardon, P.J. (promotor); Mota, Nelson (copromotor); Delft University of Technology (degree granting institution)","2022","Building with unfired earth (mud) is an ancient practice that is regaining popularity due to the rising concern about the impact of the construction sector on the climate. However, the low image of earthen materials is a major barrier to their acceptance in India, and is caused by their poor water resistance. This research focuses on developing a low-cost, water-resistant and desirable earthen building material for rural housing in India. This thesis contributes toward a better understanding of water ingress and water resistance in unstabilised and biologically stabilised earthen materials, especially cow-dung stabilised earthen blocks. Moreover, it addresses the three key aspects identified for rural earthen housing in India: 1. Affordability, by using inexpensive techniques and binders; 2. Durability, by enhancing the water resistance of both unstabilised and cow-dung stabilised earthen material; and 3. Desirability, by producing Compressed Earth Blocks with a good finish and aesthetics and using a widely acceptable stabiliser. The research work is expected not only to provide scientific insights that facilitate the understanding and adoption of earthen materials, but also knowledge that can be directly applied in the construction of earthen houses.","Earthen construction; Rural housing; Durability; CEB; Biological stabiliser","en","doctoral thesis","","978-90-832797-2-5","","","","","","","","","Materials and Environment","","",""
"uuid:951afaeb-1f03-45cb-8a86-cacbf9ca0f64","http://resolver.tudelft.nl/uuid:951afaeb-1f03-45cb-8a86-cacbf9ca0f64","Integrated Array Tomography: Development and Applications of a Workflow for 3D Correlative Light and Electron Microscopy","Lane, R. (TU Delft ImPhys/Microscopy Instrumentation & Techniques)","Hoogenboom, J.P. (promotor); Carroll, E.C.M. (copromotor); Delft University of Technology (degree granting institution)","2022","Multi-modal imaging techniques have become essential for better understanding fundamental questions in cell biology such as disease progression. While individual microscopy methods have rapidly advanced in recent years, the information content of any one imaging technique is limited to the type of contrast that particular technique is sensitive to. By tagging particular biomolecules with a fluorescent protein, fluorescence microscopy (FM), for example, can relay dynamic information about the distribution of these biomolecules in their cellular environment. It struggles, however, to convey information regarding the structure of the organelles that might contain these biomolecules or the surroundings of their cellular environment. Electron microscopy (EM), on the other hand, can provide detailed layouts of cellular structure by staining membranes with heavy metals. Thus, by correlating these modalities (correlative light and electron microscopy, CLEM), a more holistic understanding of the relationship between structure and function at the (sub-)cellular level can be achieved. Array tomography (AT) is a technique combining FM and EM for volumetric imaging, first introduced in 2007 for studying brain tissue. The technique has since expanded, but the approach has largely remained the same. Biological material is cut into a series of ultrathin (∼100 nm) sections (an array) and prepared for sequential FM and EM imaging by applying a series of immunofluorescence and heavy metal stains. 
Correlative images of the serial sections are then computationally aligned to reconstruct the 3D structure (tomography). Compared to other volumetric imaging techniques in the life sciences, AT offers the ability to correlate structure and function at high resolution across large fields of view. Moreover, it enables high axial resolution for both EM and FM, as determined by the section thickness...","correlative light and electron microscopy; volume electron microscopy; array tomography","en","doctoral thesis","","978-94-6366-603-9","","","","","","","","","ImPhys/Microscopy Instrumentation & Techniques","","",""
"uuid:515c29f9-d086-4427-9f35-c062e743c027","http://resolver.tudelft.nl/uuid:515c29f9-d086-4427-9f35-c062e743c027","On Fine-grained Temporal Emotion Recognition in Video: How to Trade off Recognition Accuracy with Annotation Complexity?","Zhang, T. (TU Delft Multimedia Computing)","Cesar, Pablo (promotor); Hanjalic, A. (promotor); El Ali, Abdallah (copromotor); Delft University of Technology (degree granting institution)","2022","Fine-grained emotion recognition is the process of automatically identifying the emotions of users at a fine granularity level, typically in the time intervals of 0.5s to 4s according to the expected duration of emotions. Previous work mainly focused on developing algorithms to recognize only one emotion for a video based on the user feedback after watching the video. These methods are known as post-stimuli emotion recognition. Compared to post-stimuli emotion recognition, fine-grained emotion recognition can provide segment-by-segment prediction results, making it possible to capture the temporal dynamics of users’ emotions when watching videos. The recognition result it provides can be aligned with the video content and tell us which specific content in the video evokes which emotions. Most of the previous works on fine-grained emotion recognition require fine-grained emotion labels to train the recognition algorithm. However, the experiments to collect these fine-grained emotion labels are usually costly and time-consuming. Thus, this thesis focuses on investigating whether we can accurately predict the emotions of users at a fine granularity level with only a limited amount of emotion ground truth labels for training.
We start our technical contribution in Chapter 3 by building the baseline methods, which are trained using fine-grained emotion labels. This can help us understand how accurate the recognition can be if we take advantage of the fine-grained emotion labels. We propose a correlation-based emotion recognition algorithm (CorrNet) to recognize the valence and arousal (V-A) of each instance (fine-grained segment of signals) using physiological signals. CorrNet extracts features both inside each fine-grained signal segment (instance) and between different instances for the same video stimuli (correlation-based features). We found that, compared to sequential learning, correlation-based instance learning offers higher recognition accuracy, less overfitting, and lower computational complexity.
Compared to collecting fine-grained emotion labels, it is easier to collect only one emotion label after the user has watched the stimulus (i.e., the post-stimuli emotion label). Therefore, in the second technical chapter (Chapter 4) of the thesis, we investigate whether emotions can be recognized at a fine granularity level by training with only post-stimuli emotion labels (i.e., labels users annotated after watching videos), and propose an Emotion recognition algorithm based on Deep Multiple Instance Learning (EDMIL). EDMIL recognizes fine-grained valence and arousal (V-A) labels by identifying which instances represent the post-stimuli V-A annotated by users after watching the videos. Instead of fully-supervised training, the instances are weakly supervised by the post-stimuli labels in the training stage. Our experiments show that weakly supervised learning can reduce overfitting caused by the temporal mismatch between fine-grained annotations and input signals.
Although the weakly-supervised learning algorithm developed in Chapter 4 can obtain accurate recognition results with only a few annotations, it can only distinguish the annotated (post-stimuli) emotion from the baseline emotion (e.g., neutral), because only post-stimuli labels are used for training. The non-annotated emotions are all categorized as part of the baseline. To overcome this, in Chapter 5, we propose an Emotion recognition algorithm based on Deep Siamese Networks (EmoDSN). EmoDSN recognizes fine-grained valence and arousal (V-A) labels by maximizing the distance metric between signal segments with different V-A labels. According to the experiments we ran in this chapter, EmoDSN achieves promising results using only 5 shots (5 samples in each emotion category) of training data.
Reflecting on the achievements reported in this thesis, we conclude that the fully-supervised algorithm (Chapter 3) can produce more accurate fine-grained emotion recognition results if the annotation quantity is sufficient. The weakly-supervised learning method (Chapter 4) can achieve better recognition results at the instance level compared to fully-supervised methods. We also found that the weakly-supervised learning methods perform best if users annotate either their most salient but short emotions or their overall, longer-duration (i.e., persisting) emotions. The few-shot learning method (Chapter 5) can handle more emotion categories (more than the weakly-supervised learning) using fewer training samples (better than the fully-supervised learning). Its limitation, however, is that accurate recognition results can only be achieved with a subject-dependent model.","Emotion Recognition; Physiological Signals; Machine Learning; Video Watching","en","doctoral thesis","","978-94-6384-376-8","","","","","","","","","Multimedia Computing","","",""
"uuid:8dbbf209-ef24-48c4-bfe2-9b029e2f97dc","http://resolver.tudelft.nl/uuid:8dbbf209-ef24-48c4-bfe2-9b029e2f97dc","Less Machine (=) More Vision: Approaches towards Practical and Efficient Machine Vision with Applications in Face Analysis","Gudi, A.A. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); van Gemert, J.C. (copromotor); Delft University of Technology (degree granting institution)","2022","Machines that interact with humans can do so better if they can also visually understand us, but they have limited resources to do so. The main topic of this dissertation is contrasting the use of resources by machine vision systems against the accuracy obtained by them. This thesis focuses on reducing the need for data, memory, and computation in real-world machine vision systems, applied to human observation and face analysis.
This dissertation tackles annotation effort by exploring how weakly-supervised object/person detectors can be improved. Findings show that prior knowledge about objects' bounds in images helps the detector learn the spatial extent of objects using only weak image-level labels. The proposed implementation enables single-shot detection, thus improving computational efficiency of this data-efficient method.
The thesis also demonstrates how prior knowledge about eye locations can be used to reduce the computational burden of gaze tracking: non-vital parts of the input image can be discarded without losing accuracy. Additionally, the thesis finds how a priori known geometrical relations can be exploited to project gaze onto a screen with little human annotation effort.
Findings of this dissertation further suggest that spatial structures in images can be exploited for improving efficiency of vision tasks. The proposed solution allows for learning detection of facial occlusions and anomalies from only a few examples. Results also indicate that this solution can be used as a loss function for unsupervised pre-training of neural networks when resources are constrained.
Lastly, this thesis showcases how prior know-how about blood-flow physiology in faces can be applied in a camera-based vital signs estimator. Even when data is available, this hand-crafted method performs better than deep learning methods — both in terms of accuracy and efficiency. At the same time, the results also reveal the pitfalls of assumptions made in the prior knowledge when exposed to more complex tasks — such as video compression noise filtering.
Through its common theme of incorporating prior knowledge, this dissertation brings attention to the costs incurred by machine vision systems to achieve high accuracy.","Computer Vision; Machine learning; Artificial Intelligence (AI); Efficiency; Computational Efficiency; Data Efficiency; Face Analysis; Human Observation; Remote Photoplethysmography","en","doctoral thesis","","978-94-6366-602-2","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:37f7367f-bc5e-4cde-a7fd-47d12621f853","http://resolver.tudelft.nl/uuid:37f7367f-bc5e-4cde-a7fd-47d12621f853","Cyber Threat Intelligence: Analysis of adversaries and their methods","Griffioen, H.J. (TU Delft Cyber Security)","Lagendijk, R.L. (promotor); Dörr, C. (promotor); Delft University of Technology (degree granting institution)","2022","The growing dependency on interconnected devices makes cyber crime increasingly lucrative. Together with the rise of premade tools to perform exploits, the number of cyber incidents grows rapidly each year. Defending against these threats becomes increasingly difficult as organizations depend heavily on the Internet and have many different connected devices, all with their own protocols and vulnerabilities. The rise in cyber crime and plethora of devices make it difficult for organizations to detect and mitigate all attacks targeting their business.
Cyber Threat Intelligence (CTI) provides defenders with information about cyber threats and thus the ability to scope defensive efforts towards the areas with the highest risk of damage. This information comes in different forms, from lists of indicators that are directly ingestible into the defensive infrastructure of a company to documents describing the Tactics, Techniques and Procedures (TTPs) of adversaries.
A major challenge in CTI is identifying indicators that describe more abstract features of adversaries, such as the tools that are used, to automatically detect mitigation attempts in defensive infrastructure. Furthermore, the identification of adversarial campaigns remains challenging, but the analysis of the campaigns that are identified provides valuable information about actor capabilities and the threat landscape.
In this thesis, we focus on improving CTI by gaining a better understanding of adversarial behavior and evolution. We first create metrics to measure the quality of CTI feeds and address some measurement bias in network-based measurements. To obtain a better understanding of adversaries, we focus on tool fingerprinting, adversarial evolution and campaign analysis.
We find a surprising lack of sophistication and evolution among adversaries. We also find that the quality of CTI feeds is poor, with an average response time of 21 days between an indicator becoming active and its addition to a feed. We show that by fingerprinting adversarial tools and performing campaign analysis on individual attacks, we can learn the sophistication of adversaries and obtain a better understanding of the threat landscape. In addition, following attacker campaigns over time allows us to better understand the evolution of actors and their objectives. To allow for this campaign analysis in DDoS attacks, we introduce a new model to describe attacks and cluster them by behavior. Finally, we utilize adversarial TTPs to devise a method to disrupt malware propagation and evaluate this method on a real-world botnet.","Cyber Threat Intelligence; Network Security; Internet Measurement","en","doctoral thesis","","978-94-6384-369-0","","","","","","","","","Cyber Security","","",""
"uuid:8eee4c39-3a63-41b3-86fa-d83724f90eff","http://resolver.tudelft.nl/uuid:8eee4c39-3a63-41b3-86fa-d83724f90eff","Sensor Technology for Unobtrusive Athlete Monitoring","Steijlen, A.S.M. (TU Delft Electronic Instrumentation)","French, P.J. (promotor); Jansen, K.M.B. (promotor); Bossche, A. (copromotor); Delft University of Technology (degree granting institution)","2022","","Wearables; Sweat Sensors; Movement Tracking; Athlete Monitoring","en","doctoral thesis","","978-94-6458-478-3","","","","","","","","","Electronic Instrumentation","","",""
"uuid:12fc390e-4d18-4b49-95bd-849cdedfda13","http://resolver.tudelft.nl/uuid:12fc390e-4d18-4b49-95bd-849cdedfda13","Design guidelines for the monetary and financial system in the digital age","van der Linden, M.J. (TU Delft Economics of Technology and Innovation)","van Beers, Cees (promotor); Janssen, M.F.W.H.A. (promotor); Delft University of Technology (degree granting institution)","2022","This thesis applies design science to the monetary and financial system as a whole. The application of this novel methodology offers new possibilities to examine this complex system. The contribution of this thesis is threefold. First, different theories on money, banking and systemic financial crises have been researched through an extensive literature review and balance sheets. Second, those theories have been used to develop design requirements and guidelines. Finally, the consensus and pivotal dissensions about the systemic problem(s) of the current monetary and financial system, requirements and guidelines among experts have been identified through semi-structured interviews. This research process results in widely supported requirements that demarcate the design space and widely supported guidelines that aim to give direction within the design space, that is, to the future development of the monetary and financial system.
The main artifact of this research is the following set of three guidelines:
GDG 1: Develop and gradually introduce public digital money.
GDG 2: Move the financial system towards funding based on securities offering market liquidity.
GDG 3: Move financial regulation towards transparency.","money; monetary system; financial system; systemic financial crises; digital technologies; design science; design guidelines","en","doctoral thesis","","","","","","","","2022-09-29","","","Economics of Technology and Innovation","","",""
"uuid:ed292367-ed2b-4bd3-a236-9f10d9c01da8","http://resolver.tudelft.nl/uuid:ed292367-ed2b-4bd3-a236-9f10d9c01da8","Restoring mangroves with structures: Improving the mangrove habitat using local materials","Gijón Mancheño, A. (TU Delft Hydraulic Structures and Flood Risk)","Uijttewaal, W.S.J. (promotor); Reniers, A.J.H.M. (promotor); Delft University of Technology (degree granting institution)","2022","Mangrove forests effectively function as natural flood defences, and their deforestation has exposed millions of people worldwide to coastal erosion and flooding. Since mangroves require a stable sedimentary environment, stopping coastal erosion is a necessary step for their restoration. Bamboo structures have thus been built to induce accretion at the coast by attenuating waves. However, these structures often fail to rehabilitate mangroves, likely due to the lack of guidelines for their design.
This thesis investigates the effect of structures formed by bamboo poles on waves, currents, and sediment transport, in order to develop physics-based models for structure design. These effects were studied through flume experiments with scaled structure prototypes, field experiments in Demak (Indonesia), 1D morphodynamic modelling (with the model XMgrove, calibrated with field measurements), and remote sensing.
Models to predict structure performance were developed for waves and currents. Flume experiments showed ways to optimize structure designs. For instance, wave dissipation per pole is maximum for dense rows of poles with large spacing in the wave direction. Modelling scenarios with XMgrove suggest that the optimal structure location is site-dependent, and that subsidence rates in Demak may be too high to be counteracted with structures. A large-scale method to find potential restoration sites was also developed and applied in Bangladesh.
As such, the physics-based tools, together with the mapping method presented in this thesis, open up the path to optimize and generalize mangrove restoration efforts.","mangroves; restoration; bamboo structures","en","doctoral thesis","","978-94-6458-601-5","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:5c8a42ec-a26b-4bce-ba8f-6b4a9670634f","http://resolver.tudelft.nl/uuid:5c8a42ec-a26b-4bce-ba8f-6b4a9670634f","Modelling Centuries of Geo-morphological Development of the Ganges-Brahmaputra-Meghna Delta","Akter, J. (TU Delft Coastal Engineering)","Roelvink, D. (promotor); van der Wegen, Mick (promotor); Delft University of Technology (degree granting institution)","2022","The Ganges-Brahmaputra-Meghna (GBM) Delta is a good example of a large estuarine system with sparse data. This study describes the development and validation of a morphodynamic process-based model (Delft3D) as a tool to predict the response of the dynamic system to climate change, sea-level rise, subsidence and other influences. The modelled sediment transport of the Ganges and Jamuna systems is between 200 and 1100 million ton/year, which is in line with observations. On an annual basis, sand accounts for less than 20% of the sediment load in the system, with the remaining sediment being much finer. Analysis of modelled bed level changes over time reveals that only a few river systems are in an aggrading phase. The 2D model shows that about 22% of the supplied sediment deposits in the delta system on floodplains and tidal plains, whereas the remaining 78% of the sediment causes subaquatic delta progradation or is lost to the deep ocean bed. Although the model does not reproduce all natural phenomena at all spatial scales, it will be a valuable tool
to describe and explore the morphodynamic development of the GBM Delta over decadal to centennial timescales for macro-scale understanding, planning, and management.","","en","doctoral thesis","IHE Delft Institute for Water Education","9789073445437","","","","","","","","","Coastal Engineering","","",""
"uuid:4d1c2a64-6ea2-491c-820c-20811a8b49b7","http://resolver.tudelft.nl/uuid:4d1c2a64-6ea2-491c-820c-20811a8b49b7","Adaptive control of interconnected and multi-agent systems","Tao, T. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Baldi, S. (copromotor); Delft University of Technology (degree granting institution)","2022","This thesis deals with the adaptive control of interconnected systems and multi-agent systems, where adaptive control is used to deal with the presence of uncertainties. Generally, two types of uncertainties can occur. The first is parametric uncertainty, which is most commonly addressed in the literature, and for which several design approaches for adaptive laws have been proposed. The second type of uncertainty is state-dependent uncertainty, which typically arises from a lack of structural knowledge about the dynamics of the system (a typical example being the presence of unmodelled dynamics). Guaranteeing stable adaptation in this scenario poses a major challenge, since this type of uncertainty cannot be bounded a priori.","multi-agent systems; interconnected systems; distributed adaptive control; state-dependent uncertainty","en","doctoral thesis","","978-94-6384-365-2","","","","","","","","","Team Bart De Schutter","","",""
"uuid:f2b07624-8a6d-41c9-b931-2423849182a7","http://resolver.tudelft.nl/uuid:f2b07624-8a6d-41c9-b931-2423849182a7","3D Steering: Additive Manufacturing in Snake-Like Surgical Devices","Culmone, C. (TU Delft Medical Instruments & Bio-Inspired Technology)","Breedveld, P. (promotor); Smit, G. (copromotor); Delft University of Technology (degree granting institution)","2022","The minimally invasive approach has revolutionized the standard in surgery. In conventional open procedures, the surgeon exposes the diseased area with a relatively large incision. In contrast, in minimally invasive surgery, several small incisions are used to insert the surgical instruments and reach the target area, reducing the risk of infections and surgical trauma. The surgical instruments currently used are straight and rigid, allowing only straight paths to be followed. An alternative is passively flexible instruments, such as endoscopes and catheters, that require external guidance, e.g., the blood vessel wall, and therefore cannot provide a stable platform to operate from. Areas with a high density, like the brain, or situations that demand actively deciding the path to follow, such as in the peripheral bronchi of the lungs, require snake-like instruments that are able to follow multi-curved paths and can maintain their position without external support. Because of the great potential advantages that these types of instruments could offer and because of the new surgical possibilities that might be explored, companies and researchers are working on creating solutions. However, the complexity of such instruments creates difficulties in the surgical implementation and remains a major challenge.
In this context, additive manufacturing, also known as 3D printing, offers a new paradigm for design, manufacturing, and assembly, allowing the production of complex geometries that are difficult to produce with conventional manufacturing. Using additive manufacturing might help to solve some of the major challenges in snake-like surgical instruments, such as a large number of components and long assembly times. Therefore, the main purpose of the research described in this thesis is to explore how the combination of additive manufacturing and mechanical solutions can help in designing snake-like instruments, while minimizing the assembly and device complexity.
This thesis is organized into three parts, corresponding to the main components of a snake-like surgical instrument: Part I, Control, focuses on the control side of the instrument with particular attention to mechanical solutions. Part II, Shaft, focuses on the possibility of fabricating snake-like instruments with additive manufacturing technology, and Part III, End-Effector, on the use of 3D printing to enhance end-effector functions.","","en","doctoral thesis","","978-94-6384-370-6","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:431f98e1-3b0b-4c61-87ba-e6b458748428","http://resolver.tudelft.nl/uuid:431f98e1-3b0b-4c61-87ba-e6b458748428","Nonlinear dynamic atomic force microscopy","Chandrashekar, A. (TU Delft Dynamics of Micro and Nano Systems)","Staufer, U. (promotor); Alijani, F. (copromotor); Delft University of Technology (degree granting institution)","2022","Most physical phenomena, be they mechanical, chemical or biological, are inherently nonlinear in nature. In fact, it is the linear phenomenon that is the exception rather than the rule. By harnessing these nonlinearities one can obtain far greater information about the underlying physics and develop more sensitive and efficient devices. This is especially true in the micro- and nanoscale world, where forces tend to be highly nonlinear, and the go-to tool for studying such forces is atomic force microscopy (AFM). Ever since its inception, AFM has revolutionized the world of nanotechnology through its ability to manipulate and characterize matter with atomic resolution. With the gradual development of novel characterization techniques, AFM has slowly transitioned from a traditional imaging technique to a powerful nanomechanical characterization tool capable of estimating material properties of a wide variety of samples with ease. This transition is fueled by the greater interest in understanding the highly nonlinear tip-sample interaction forces that exist between an AFM probe and the sample of interest. However, in order to advance our understanding of nanoscale interactions, one must fully embrace the nonlinear nature of the system and develop parameter identification techniques based on nonlinear dynamics.
In this regard, this thesis focusses on both fundamental and applied nonlinear dynamical studies to develop novel identification techniques for dynamic AFM applications.","Nonlinear dynamics; AFM microscopy; Identification methods; Nanomechanical characterization; cantilever dynamics; Machine learning; viscoelasticity; Continuum mechanics","en","doctoral thesis","","978-94-6384-366-9","","","","","","2023-09-14","","","Dynamics of Micro and Nano Systems","","",""
"uuid:87ade27f-bf2d-483c-af31-37176ebb7db9","http://resolver.tudelft.nl/uuid:87ade27f-bf2d-483c-af31-37176ebb7db9","Cooperative Control of Autonomous Multi-Vessel Systems for Floating Object Manipulation","Du, Zhe (TU Delft Transport Engineering and Logistics)","Negenborn, R.R. (promotor); Reppa, V. (copromotor); Delft University of Technology (degree granting institution)","2022","This thesis provides a set of cooperative control schemes for autonomous multi-vessel systems to manipulate a floating object through physical interconnections in onshore (inland waterways and ports) and offshore areas. Thanks to the maturity and popularity of advancing technologies in information, communication, sensors, automatic control, and computational intelligence, the application scenarios of autonomous vessels have gradually extended from fundamental research to civil and commercial uses. In recent years, to ensure that the regulatory framework for autonomous vessels keeps pace with technological developments, the International Maritime Organization (IMO) has started to include the issue of autonomous vessels in its sessions. Maritime operations have become more complex and larger in scale, requiring the involvement of multi-vessel systems. In recent decades, the formation control of multiple vessels has been investigated and several mature control methods have been proposed to cope with different typical missions. However, there is a lack of research focusing on floating object manipulation by multiple autonomous vessels through physical interconnections. Thus, the research question of this thesis is: How to design a scalable control scheme for multiple ASVs to manipulate a floating object through physical interconnections?
Through the analysis of four typical manipulation methods in the maritime field, towing is selected in this thesis as the basic physical manipulation model, as it offers good maneuverability of the floating object, better safety of the manipulation system, and more flexibility in the operational scenarios. The dynamic model of the towing system is built using the 3 DOF vectorial representation, where the towing forces and towing angles are the kinetic and kinematic interconnections between the floating object and the tugboats, respectively. Considering the multiple control inputs, multiple control constraints, and limited maneuverability of the towing system, the model predictive control (MPC) strategy is the research approach of this thesis. Furthermore, to achieve a distributed control architecture, the alternating direction method of multipliers (ADMM) is used. In this thesis, the proposed method is applied in three different operational environments, port areas, inland waterways, and the open sea, for floating object manipulation...","Cooperative control; Model Predictive Control; Floating Object Manipulation; Distributed control; Multi-vessel systems; Physical-connected systems","en","doctoral thesis","","978-90-5584-316-9","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:c6d607fe-11d7-4c14-870e-97d1a9d6e0d5","http://resolver.tudelft.nl/uuid:c6d607fe-11d7-4c14-870e-97d1a9d6e0d5","Tidal Dynamics of Moons with Fluid Layers: From Ice to Lava Worlds","Rovira Navarro, M. (TU Delft Astrodynamics & Space Missions)","Vermeersen, L.L.A. (promotor); van der Wal, W. (copromotor); Delft University of Technology (degree granting institution)","2022","In the last fifty years, the space missions Voyager, Galileo, Cassini-Huygens and Juno explored the moons of the outer Solar System and revealed a wide spectrum of worlds. While some of these worlds are barren, others are among the most geologically active in the Solar System. The innermost Jovian moon, Io, showcases spectacular volcanic activity; its three companions, Europa, Ganymede and Callisto, are likely ocean worlds that harbour subsurface oceans beneath their icy crusts. Similarly, the biggest Saturnian moon, Titan, is believed to have a subsurface ocean concealed beneath its icy surface and dense atmosphere. Saturn’s collection of smaller icy bodies features different levels of activity, Enceladus being the most remarkable. Above its limb, water plumes rise more than a hundred kilometers, spilling its internal ocean into Saturn’s E-ring. At Neptune, the captured moon Triton orbits in a peculiar retrograde orbit; its barely cratered surface is similar to that of other ocean worlds and shows signs of cryovolcanic activity. The spectrum of geological activity displayed by the moons of the outer Solar System is thought to be mainly the consequence of tides.","Icy moons; Io; exomoons; tides; thermal-orbital evolution; interiors","en","doctoral thesis","","978-94-6421-819-0","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:02cd1165-9fef-4613-a12c-dc3c4bd9d731","http://resolver.tudelft.nl/uuid:02cd1165-9fef-4613-a12c-dc3c4bd9d731","Building blocks for atomically assembled magnetic and electronic artificial lattices","Rejali, R. (TU Delft QN/Otte Lab)","Otte, A. F. (promotor); van der Sar, T. (copromotor); Delft University of Technology (degree granting institution)","2022","This thesis focuses on possible platforms for a bottom-up approach towards realizing and characterizing atomically assembled magnetic and electronic artificial lattices. For this, we make use of the scanning tunneling microscope (STM), which provides a local probe of the magnetic and electronic properties of the sample and allows for the atom-by-atom construction of extended lattices. On the one hand, to address avenues for constructing extended spin lattices, we study single Fe atoms coordinated on the four-fold symmetric nitrogen binding site of the Cu2N/Cu3Au surface—a system which permits large-scale atomic assembly, and allows for independent access to both the orbital and spin degrees of freedom. On the other hand, we investigate the viability of laterally confined vacuum resonances on the chlorinated Cu(100) surface as a basis for constructing electronic lattices. We atomically assemble dimers and trimers of various geometries to determine the tight-binding parameters, and as a proof of concept, experimentally realize a looped Su-Schrieffer–Heeger chain using this platform. These studies are made possible by means of a low-temperature, ultra-high vacuum STM, which allows for atom manipulation and, via spectroscopic techniques, permits us to locally probe the sample density of states and detect inelastic excitations of the spin and orbital angular momentum.","scanning tunneling microscopy (STM); artificial lattices; inelastic electron tunneling spectroscopy; field-emission resonances; single atom magnetism","en","doctoral thesis","","978-94-6366-589-6","","","","","","","","","QN/Otte Lab","","",""
"uuid:91c17930-79ad-4c6f-921f-3870c2f1a33d","http://resolver.tudelft.nl/uuid:91c17930-79ad-4c6f-921f-3870c2f1a33d","Reconceptualizing Autonomy in Elderly Care in the Robot Era: A Relational Perspective","Li, S. (TU Delft Ethics & Philosophy of Technology)","Roeser, S. (promotor); van den Hoven, M.J. (promotor); Ziliotti, E. (copromotor); Delft University of Technology (degree granting institution)","2022","In response to the pressing demand for elderly care, care robots (robots used by care receivers and/or caregivers for care purposes in various settings, such as hospitals, nursing homes, and personal residences) were introduced and have been gaining traction as a technological solution to improve care quality and to enhance the value of autonomy. Despite all the benefits offered by robotic innovations, the consequent ethical issues in human-robot relationships in elderly care warrant sustained scrutiny. Due to the technological breakthroughs in robotics and its impacts on relationships in elderly care, the conventional dyadic human-robot interaction (HRI) model, which focuses on one human and one robot, and the dominant Western individualistic understanding of autonomy are insufficient for the ethical evaluation of robots in elderly care.","autonomy; robot ethics; elderly care receivers; caregivers; care robots; human-robot-system interaction (HRSI); relational autonomy; value sensitive design","en","doctoral thesis","","978-94-6366-586-5","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:1a99cc80-3fea-4ed3-894f-bf3afeca6745","http://resolver.tudelft.nl/uuid:1a99cc80-3fea-4ed3-894f-bf3afeca6745","Parametric numerical study on two-way bending capacity of unreinforced masonry walls: Evaluation of the influence of geometric parameters to improve analytical formulations","Chang, L. (TU Delft Applied Mechanics)","Rots, J.G. (promotor); Esposito, R. (copromotor); Delft University of Technology (degree granting institution)","2022","Investigations on unreinforced masonry (URM) walls subjected to natural hazards, such as earthquakes and wind loads, identify the out-of-plane (OOP) failure as one of the most common failure mechanisms. Concerning the OOP failure, two types of failure mechanisms can be distinguished in URM walls: one-way bending in which lateral edges of walls are not supported; two-way bending in which at least one lateral edge of walls is supported in addition to the supports at the top and bottom. Compared with walls in one-way bending, walls in two-way bending are more widely encountered in practice considering that the lateral edges of walls are usually connected with pillars or return walls. Therefore, the failure of URM walls in OOP two-way bending can be more common. Even so, research on the geometric parameters that can have a major influence on the two-way bending capacity of URM walls, such as the aspect ratio, pre-compression and opening, is quite scarce. Due to a lack of experimental evidence and systematic numerical study, the current analytical formulations, namely the Yield Line Method (YLM) incorporated in the European Standard Eurocode 6, and the Virtual Work Method (VWM) incorporated in the Australian Standard AS3700 and Dutch Practical Guideline NEN-NPR 9998, assessing these geometric parameters can be limited in accuracy and application range.
This thesis aims at improving the analytical formulations assessing the influence of geometric parameters on the two-way bending capacity of URM walls. As a starting point, the accuracy and application range of the current analytical formulations are assessed and the geometric parameters having a crucial influence on the two-way bending capacity are revealed (Chapter 2). A dataset of 46 testing specimens from 8 international testing campaigns is created and used for the assessment. The analytical formulations based on the VWM are found to return the most accurate predictions for the testing specimens. Even so, drawbacks and limitations are identified for the VWM. In addition, the pre-compression, wall aspect ratio and openings are identified to have a crucial influence on the two-way bending capacity of URM walls....","Unreinforced masonry; Out-of-plane; Two-way bending; Geometric parameters; 3D simplified brick-to-brick modelling; Analytical formulation","en","doctoral thesis","","978-94-6384-374-4","","","","","","","","","Applied Mechanics","","",""
"uuid:b1c97841-fda7-420c-8feb-f1145faa531c","http://resolver.tudelft.nl/uuid:b1c97841-fda7-420c-8feb-f1145faa531c","Autonomous Smart Morphing Wing: Development, Realisation & Validation","Mkhoyan, T. (TU Delft Aerospace Structures & Computational Mechanics)","De Breuker, R. (promotor); de Visser, C.C. (promotor); Delft University of Technology (degree granting institution)","2022","With the increasing desire of the aerospace industry to reduce emissions and fuel consumption, morphing wings have gained much interest due to the ability to adapt the wing shape in-flight for improved energy efficiency and aerodynamic performance. Active wing morphing is a technology that can improve aerodynamic performance continuously through different flight phases. However, a multidisciplinary approach is needed, which integrates the design, modelling, sensing and control methodologies in a multi-objective framework, and allows the smart autonomous morphing wing system to adapt its shape autonomously.
The SmartX project was initiated for this purpose at the Delft University of Technology, Faculty of Aerospace Engineering, Department of Aerospace Structures and Materials, aiming to investigate energy-efficient wing concepts through smart wings.
This dissertation presents the Development, Realisation & Validation of a smart morphing wing, the SmartX-Alpha, capable of meeting various real-time objectives with distributed seamless morphing modules. This is done through a holistic approach considering all building blocks of a morphing system presented in four Parts of the dissertation.
Part I tackles the vision-based sensing approach required to reconstruct the shape of the wing in real time. Part II presents the design, development, realisation and experimental testing of a distributed modular morphing concept, SmartX-Alpha. Part III presents the multi-objective control framework developed to meet the gust and manoeuvre load alleviation objective and the real-time shape optimisation strategy to improve online aerodynamic performance. Furthermore, a vision-based control strategy is proposed to mitigate nonlinearities in the actuation system arising from mechanical imperfections. A series of wind tunnel experiments is conducted in the OJF to validate the methodologies on the SmartX-Alpha, ensuring the objectives are satisfied autonomously, in real time. The final part, Part IV, presents the development of a second wing demonstrator, the SmartX-Neo, with distributed discretised control surfaces incorporating the previous learnings.","morphing design; over-actuated system; control, sensing","en","doctoral thesis","","978-94-6421-868-8","","","","","","2023-09-26","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:fccd4c49-9e04-4d57-b949-cb86153257a5","http://resolver.tudelft.nl/uuid:fccd4c49-9e04-4d57-b949-cb86153257a5","Improving Acoustic Measurements with Cavities in Closed Test Section Wind Tunnels","VanDercreek, Colin (TU Delft Aircraft Noise and Climate Effects)","Snellen, M. (promotor); Ragni, D. (promotor); Avallone, F. (copromotor); Delft University of Technology (degree granting institution)","2022","br/>Aerodynamic noise produced by aircraft, wind turbines, and other objects subjected to airflow contribute to environmental noise pollution, which adversely affects human and animal health. Consequently, governments impose restrictions on aircraft and wind turbine noise levels. These restrictions can have an economic impact by limiting aircraft traffic and reducing wind turbine energy production. Accordingly, improving the design of aerodynamic surfaces to reduce their noise levels benefits health while enabling improved operational efficiency. Therefore, aeroacoustic research focuses on identifying and understanding the physical mechanisms behind aerodynamic noise to improve noise mitigation technologies. This research relies on acoustic wind tunnel measurements to validate simulations, theories, and design improvements.
Closed test section wind tunnels are widely used for aerodynamic testing but are less suitable for acoustic measurements because microphones must be installed in the wall. This location subjects the microphones to pressure fluctuations from the turbulent boundary layer (TBL), which contaminates acoustic measurements and reduces the signal-to-noise ratio (SNR). The impact of the TBL can be mitigated by recessing microphones within cavities and covering them with an acoustically transparent material. Modifying existing wind tunnel walls by installing cavity-mounted microphones is a straightforward and cost-effective improvement that enables combined aerodynamic and acoustic measurement campaigns.
The cavity geometry, i.e., depth, aperture size, wall angle, and presence of a covering, determines the amount of TBL attenuation and consequently the improvement to SNR. While several studies have shown empirically that these parameters have an effect, few studies focus on identifying the physical mechanisms that explain the relationship between geometry and the reduction in TBL pressure fluctuations at the microphone. Thus, this thesis aims to identify these physical mechanisms through experiments and different modeling approaches to better explain the relationship between cavity geometry, the amount of TBL attenuation, and the subsequent impact on the measured acoustic signal.
Experimental data were collected to develop an empirical model to quantify how varying cavity geometry affects the measured pressure spectra. Moreover, experiments were also performed to validate simulation results and to quantify the SNR improvement when applying a beamforming algorithm to microphone array data. The modeling and simulation efforts focus on explaining the trends and phenomena identified in the experimental data. Initially, a physical model was developed that assumed acoustic propagation into an axisymmetric cavity with a constant cross-section. This model decomposes a pressure field, resulting from a TBL, into circular duct modes and was used to evaluate the relationship between cavity geometry and the propagation of these acoustic modes into the cavity. This model was followed up with a finite element method (FEM) simulation to study the influence of different cavity geometric parameters and wall materials on the acoustic response of the cavity when subjected to an acoustic wave.
The FEM simulation showed that the cavity's acoustic response is determined by the presence of standing waves in the form of acoustic depth modes. This simulation showed that cavities with angled walls have depth modes with lower amplitude waves and thus distort the acoustic signal less. Furthermore, it is shown that the acoustic responses of cavities formed out of sound-absorbing foam are driven by the shape of the foam holder and not the cavity shapes within the foam. Thus, the holder can be optimized to minimize the acoustic response, while the cavity itself can be optimized to reduce the influence of the TBL. Building upon these simulations, a Lattice Boltzmann based computational fluid dynamics (CFD) method was used to simulate the pressure and flow fields within three uncovered cavities and covered cavities resulting from the presence of a turbulent boundary layer.
The CFD simulations confirmed a significant finding of the physical model, that the amount of TBL attenuation increases as the cavity aperture size increases relative to the TBL streamwise coherence length. This is due to the resulting modal decomposition of the pressure field above larger cavities having more energy distributed across higher-order modes than for smaller cavities. These higher-order modes decay exponentially into the cavity, resulting in increased attenuation of the TBL. Smaller cavities have most of their energy in their first mode, which does not decay with increasing cavity depth. Furthermore, these simulations showed that the pressure field within covered cavities is primarily acoustic and can be decomposed into acoustic circular duct modes. Since the propagation of TBL pressure fluctuations into covered cavities is primarily acoustic, the shape of future cavities can be efficiently optimized using FEM simulations.
Finally, beamforming used with cavities improved the acoustic measurement SNR. Analysis shows that the improvements due to beamforming are independent of those attributed to the cavity geometry. Thus, combining the two approaches improves the SNR of acoustic measurements in closed test section wind tunnels.
The focus of human-centred design expanded in the last decades from designing user-friendly products to designing a system of products and services (PSS) that provide good user experiences (UX). In a PSS design process, many actors and disciplines are involved: various professionals with different values depending on their expertise in the process of product design, service design, or business development. Put differently, PSS design can be seen as a networked process with many actors involved who are potential design decision makers in addition to the design professionals. Next to designers, professionals such as product managers, marketers, and service engineers make design decisions that influence how products and services will be experienced. These design decision makers seem not to continue using the UX insights gained earlier in their decision making. As a result, changes to the original design are made that reduce UX quality.
This research addresses the challenge of supporting design decision makers in continuing to use UX insights in networked design projects. The main research question guiding the research is what designers can do to prevent UX insights from getting lost in a networked design process. The research addresses this main question by exploring how and where UX insights get lost in networked design projects, and what barriers and opportunities can be identified to make networked design a human-centred project.","","en","doctoral thesis","","","","","","","","","","","Design Conceptualization and Communication","","",""
"uuid:4a63988e-dcd7-4305-b86a-5e8fe82b8519","http://resolver.tudelft.nl/uuid:4a63988e-dcd7-4305-b86a-5e8fe82b8519","Multifunctional implants Prevention is better than cure","van Hengel, I.A.J. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Apachitei, I. (copromotor); Delft University of Technology (degree granting institution)","2022","Millions of people around the globe receive orthopedic implants every year. These implants help people to regain their mobility and contribute tremendously to improve the quality of life. However, a significant number of patients suffer from complications, such as implant associated infections (IAI) and aseptic loosening. The number of orthopedic implants is expected to increase due to an aging and increasingly obese population. As a result, the number of complications will rise too. In addition, the treatment of IAI is complicated by the development of antibiotic resistant bacteria. The focus of researchers has, therefore, shifted more and more towards the prevention of complications. In the words of Desiderius Erasmus: “Prevention is better than cure.”","implant associated infection; surface biofunctionalization; biomaterials; additive manufacturing; bone tissue engineering; multifunctional implants","en","doctoral thesis","","978-94-6384-359-1","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:fe6bb4fa-b690-4b21-81de-4ed56a79718d","http://resolver.tudelft.nl/uuid:fe6bb4fa-b690-4b21-81de-4ed56a79718d","Regionale gebiedsontwikkeling: De invloed van de provincie op ruimtelijke planning in tussenstedelijke gebieden","van Steenbergen, A.A.C. (TU Delft Spatial Planning and Strategy)","Zonneveld, W.A.M. (promotor); van Bueren, Ellen (promotor); Delft University of Technology (degree granting institution)","2022","Nationale opgaven zoals het woningbouwbeleid, de energietransitie en klimaatadaptatie zijn vraagstukken die door regionale instanties concreet moeten worden gemaakt in samenhang met de vraagstukken van de regio zelf. De regio, het niveau tussen de gemeenten en de provincie, heeft meestal geen duidelijke bestuurlijke grens en geen formele plaats in het Nederlandse bestuur.
In het proefschrift is gezocht naar de invloed van de provincie als regionale gebiedsautoriteit in een omgeving waarin diverse publieke en private partijen samenwerken en maatschappelijke organisaties hun stem laten horen. Het onderzoek is gericht op gebieden tussen grote steden waar een aanmerkelijke verstedelijking, ingebed in groen- en waterstructuren, was voorzien. Plannen en planprocessen zijn geanalyseerd, net als het handelen van de provincie in drie regio’s: de regio Rotterdam-Zoetermeer-Gouda, het gebied tussen Arnhem en Nijmegen en de regio Eindhoven-Helmond.
Het proefschrift laat onder meer zien dat planologische sturing een zwakker instrument is dan financiële sturing. Na de decentralisatie van ruimtelijke ordening en landinrichting is de invloed van de rijksoverheid op regionale gebiedsontwikkeling groot gebleven. Voorts blijkt dat planconcepten in de praktijk flexibel zijn. Als de uitvoering aan de orde komt neemt het risico toe dat het planconcept uit elkaar valt. Het onderzoek toont aan dat de provincie haar rol als regionale gebiedsregisseur actiever zou kunnen invullen. De Omgevingswet zou dat mogelijk moeten maken.
Imagine you sit down on a couch, while holding a saucer with a coffee cup on it. The coffee is too hot, so you set it down next to you on the couch and while you wait you get lost in your thoughts... You feel like getting up and walking around in the room. While you are still lifting your body up from the sitting position, you hear the clinking of the cup against the saucer, and you see from the corner of your eye that the coffee cup is shaking. Without having touched the cup you managed to spill the coffee. The surface of the couch was deformed and pulled tight because you were sitting on it. When you got up, the surface relaxed back to its original shape. This movement of the surface caused the cup to shake and spill its contents (see fig. 1 for an artist’s impression). Membrane mediated interactions are just like that! When something deforms a membrane in one place, that deformation is felt at a distance by other objects that inhabit the membrane. Exactly how far and how strongly such signals are felt depends on how floppy or tense a membrane is. When we study membrane mediated interactions, our main goal is to quantify the effects membrane deforming objects can have on each other.","Membrane mediated interactions; Canham-Helfrich energy; Dynamically triangulated membrane Monte Carlo simulations","en","doctoral thesis","","978-90-8593-535-3","","","","","","","","","BN/Timon Idema Lab","","",""
"uuid:92bc5717-c27d-4cd2-b2b2-560c6551e437","http://resolver.tudelft.nl/uuid:92bc5717-c27d-4cd2-b2b2-560c6551e437","Eda tools and methodologies for reliable nanoelectronic systems","Augusto da Silva, F. (TU Delft Computer Engineering)","Hamdioui, S. (promotor); Wong, J.S.S.M. (promotor); Delft University of Technology (degree granting institution)","2022","In recent years, advances in technology have enabled the employment of automated systems to control driving tasks. The idea of electronic devices having complete control over a vehicle promises to change the concept of mobility soon. However, allowing computers to control all the tasks in a vehicle demands sophisticated systems and significant safety concerns. Furthermore, the increasing complexity in such applications is causing a shift in the traditional design flow. For example, the development of semiconductors implementing safety-critical functionalitiesmust incorporate mechanisms to reduce the risk of failures avoiding life-threatening situations. This dissertation addresses the role of the EDA industry in supporting the safety aspects of automotive electronic systems. We propose methodologies to deploy the traditional EDA technologies into functional safety verification, improving compliance to Automotive Safety Standards, like ISO 26262, and ensuring automotive devices’ safety integrity levels. For such, we must comprehend how the guidelines of ISO 26262 establish a comprehensive safety lifecycle that supports the analysis of Systematic Failures and RandomHardware Failures. Afterward,we investigate the many possibilities to advance the state-of-the-art by deploying EDA technologies in compliancewith safety requirements. As a result,we identify research possibilities at different safety lifecycle stages. 
Furthermore, we propose methodologies to support such development phases, enabling compliance with ISO 26262…","Functional Safety; Verification; ISO 26262; Fault Space Analysis; Tool Qualification; Fault Injection Simulation; Formal Methods; Automotive benchmark; Safe Faults; Software Test Library; Safety Metrics; SPFM; ASIL","en","doctoral thesis","","978-94-6366-596-4","","","","","","","","","Computer Engineering","","",""
"uuid:3a065972-1fbc-483a-b9d6-7bc4570d4bef","http://resolver.tudelft.nl/uuid:3a065972-1fbc-483a-b9d6-7bc4570d4bef","Microseismic event detection and localization: A migration-based and machine-learning approach using full waveforms","Vinard, N. (TU Delft Applied Geophysics and Petrophysics)","Drijkoningen, G.G. (promotor); Verschuur, D.J. (promotor); Delft University of Technology (degree granting institution)","2022","When humans started started exploiting the abundant underground natural resources the Earth has to offer such as hydrocarbons, minerals and heat, we started to experience earthquakes that are related to this exploitation, so called induced earthquakes. Under certain conditions those can damage local infrastructure. However, most events are weak and only sensed by seismic sensors. Microseismic monitoring plays a vital role to optimize and insure the safety of these underground activities and new technologies such as carbon capture and storage. One key task besides the detection of microseimsic events is to determine the source location of these events using data recorded at the surface. In this thesis we investigate a method to localize weak microseismic events, using a deterministic approach, assuming a dense network of sensors. In simple words this method takes the seismic signals recorded at the Earth’s surface and sends them back into the Earth, where the signals start to focus at the point they originated from. This focusing method uses one-way wavefield extrapolation with an estimate of the background velocity model. The advantage of this method is that the weak signals recorded by the different sensors at the surface are amplified as they approach the location of the event that emitted the signal due to constructive interference. 
However, this is not enough to reliably recover the source location, because earthquakes typically do not radiate seismic waves evenly; complex radiation patterns are observed depending on the mechanical properties of the rupture. To obtain a strongly focused signal at the optimal source location, we therefore perform a grid search over possible source mechanisms and increase the strength of the signal by deconvolution. Without taking the source mechanism into account, we are not able to obtain accurate source locations, especially at low signal-to-noise ratios. By taking the source mechanism into account, however, we are able to retrieve accurate source locations while also retrieving information about the source mechanism. Good results were obtained for 2D synthetic data for both a simple subsurface model and the realistic Annerveen salt model, even when realistic noise was added...","Machine learning; induced seismicity; microseismic monitoring","en","doctoral thesis","","","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:a7676dc2-8003-4495-839d-b900f95fd061","http://resolver.tudelft.nl/uuid:a7676dc2-8003-4495-839d-b900f95fd061","Novel insights into the physics of fatigue crack growth: Theoretical and experimental research on the fundamentals of crack growth in isotropic materials","van Kuijk, J.J.A. (TU Delft Structural Integrity & Composites)","Alderliesten, R.C. (promotor); Benedictus, R. (promotor); Delft University of Technology (degree granting institution)","2022","Fatigue crack growth in metals has been studied intensively for more than half a century. This research field has been closely connected to the engineering industry throughout history. It is therefore no surprise that the main focus has mostly been on developing better prediction models rather than on understanding the phenomenon of fatigue crack growth. This dissertation focuses on improving our understanding of the underlying physics, and presents several novel insights primarily relating to crack closure modeling and crack growth rate modeling....","Fatigue crack growth; crack opening; crack closure; potential drop technique; physics","en","doctoral thesis","","978-94-6421-830-5","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:8445f322-fbf7-4e50-a509-1f8671e6153f","http://resolver.tudelft.nl/uuid:8445f322-fbf7-4e50-a509-1f8671e6153f","Failure prevention and restoration in power systems","Fu, J. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Nunez, Alfredo (copromotor); Delft University of Technology (degree granting institution)","2022","In typical power system operations, failures, e.g., caused by degradation, may result in service interruption and loss of power supply. Furthermore, low-occurrence-probability but high-impact extreme events, e.g., natural disasters and cyber-attacks, may also damage the power grid. This thesis aims to develop innovative strategies for handling these two kinds of failures in power grids. In particular, we develop preventive maintenance strategies, a pre-disaster electrical vehicle charging control strategy, an accurate fault location algorithm, and an unmanned aerial vehicles routing strategy for post-disaster distribution networks....","Power systems; Preventive maintenance; Fault location; Restoration strategy","en","doctoral thesis","","978-94-6366-590-2","","","","","","","","","Team Bart De Schutter","","",""
"uuid:88a142a9-3ac4-4d97-ab1c-1cf811b07abb","http://resolver.tudelft.nl/uuid:88a142a9-3ac4-4d97-ab1c-1cf811b07abb","Tuning Giant Magnetocaloric Materials: A Study of (Mn,Fe)2(P,Si) and NiCoMnTi Heusler Compounds","Zhang, F. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Brück, E.H. (promotor); van Dijk, N.H. (promotor); Delft University of Technology (degree granting institution)","2022","Solid‐state caloric effects as intrinsic responses from different physical external stimuli (magnetic‐, uniaxial stress‐, pressure‐ and electronic‐ fields) have been evaluated near magnetic phase transformations. In the last decades the magnetically driven caloric changes in various magnetocaloric materials (MCMs) have been exploited extensively for magnetic refrigeration and magnetic heat pumping scenarios near room temperature. This thesis systematically investigates the magnetocaloric effect (MCE) for the representative magnetoelastic (Mn,Fe)2(P,Si) system. Special emphasis has been directed towards the giant MCE in nanoscale particles and the influence of doping with elements that show a strong electronegativity on the magnetic properties of this metal‐metalloid system. Meanwhile, two optimization strategies (decoupling and light element B doping) are successfully introduced to regulate the thermal hysteresis ΔThys, the ferromagnetic phase transition TC and improve the reversibility of the MCE for magnetostructural transition in the all‐d‐metal NiCoMnTi Heusler alloys.","Magnetocaloric effect; phase transition; magnetic refrigeration; (Mn,Fe)2(P,Si); Heusler compounds","en","doctoral thesis","","978‐94‐6458‐555‐1","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:50a1ca63-83c5-4603-9680-a11277f828a6","http://resolver.tudelft.nl/uuid:50a1ca63-83c5-4603-9680-a11277f828a6","Turbulent flow through a ribbed pipe: An experimental study","Schenker, M.C. (TU Delft Fluid Mechanics)","Westerweel, J. (promotor); Delft University of Technology (degree granting institution)","2022","This thesis describes the results of an experimental study of turbulent flow through a ribbed pipe. The ribbed pipe used is a pipe segment of 50 mm inner diameter with ribs inserted at various pitches (15 to 50 mm). Rectangular and rounded ribs are used, with a rib height of 7.5 or 11 mm. Pressure measurements result in friction factors showing a strong dependency on rib shape and pitch, and for the rounded rib shape also a dependency on Reynolds number. Dynamic pressure measurements show the presence of a frequency that coincides with second harmonic of vortex shedding of the ribs. Measurements with single ribs, rather than ribbed segments, show that the friction behaviour of a ribbed segment cannot be predicted based on a single rib, as the observed friction behaviour scales differently. With PIV measurements, mean flow patterns, shear strength and Reynolds Stress distributions are obtained, confirming the significant impact of rib pitch and shape on the behaviour of the flow that was indirectly observed during the pressure measurements. For the rectangular shaped ribs, axially averaging the results over one pitch flow length results in flow and stress profiles. These profiles are used as input in the theoretical description of flow over a rough wall. With some adjustment, specifically fitting the Von Kármán constant instead of assuming a value a-priori, the results fit within the theoretical description. The friction factors derived in this way, agrees well with the friction factors based on the pressure measurements up to a pitch of 35 mm. 
A statistical analysis of the flow dynamics shows that the large contributions to the stresses are caused by relatively rare but strong fluctuations, which are, depending on the location in the flow, either ejections or sweeps. The observation of these fluctuations and their location in the flow match with the second harmonic vortex shedding that was hypothesised based on the frequencies in the dynamic pressure spectrum.","","en","doctoral thesis","","978-94-6419-562-0","","","","","","","","","Fluid Mechanics","","",""
"uuid:fcd43f22-5a25-495a-a15c-546f01e21c6d","http://resolver.tudelft.nl/uuid:fcd43f22-5a25-495a-a15c-546f01e21c6d","Selective ion separation by supported liquid membranes under electrodialysis conditions","Qian, Z. (TU Delft ChemE/Advanced Soft Matter)","Sudhölter, Ernst J. R. (promotor); de Smet, L.C.P.M. (promotor); Delft University of Technology (degree granting institution)","2022","Electrodialysis (ED) is a membrane-based process in which ions are transported under the influence of an externally applied electrical potential. Ion-exchange membranes (IEMs) are key components in ED processes. There are two types of IEMs: (1) cation-exchange membranes (CEMs), which contain fixed, negatively charged groups, and (2) anion-exchange membranes (AEMs), which contain fixed, positively charged groups. ED processes have been widely applied for water desalination. This thesis investigates the application of ED in the treatment of drainage water of greenhouses. A key objective in sustainable greenhouse horticulture is the recirculation of drainage water, thereby minimizing the water volume used, which would otherwise be disposed into the environment.[1] The drainage water of greenhouses contains both K+ and Na+. Whereas K+ is a valuable nutrient, Na+ is detrimental for plant growth. Because of its toxicity, the Na+ level should be controlled below the crop-specific threshold.[2-4] Because Na+ is not taken up by plants, it accumulates and the excess needs to be removed. The main challenge here is to selectively separate and remove Na+ without removing K+ and other key nutrients like Ca2+ and Mg2+. Na+ and K+ are two competitive cations ion separations as they have the same valence (+1), quite similar crystal and hydrated radii and a rather similar transport behavior (i.e. electrophoretic mobility), causing that separation by charge, size, and/or mobility is challenging. 
This thesis focuses on the development and characterization of a membrane-based process for the selective removal of Na+....","","en","doctoral thesis","","978-94-6366-585-8","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:6b36a004-0b37-417c-a77a-ab423a19cbb9","http://resolver.tudelft.nl/uuid:6b36a004-0b37-417c-a77a-ab423a19cbb9","Real-time Co-planning in Synchromodal Transport Networks using Model Predictive Control","Larsen, R.B. (TU Delft Transport Engineering and Logistics)","Negenborn, R.R. (promotor); Atasoy, B. (copromotor); Delft University of Technology (degree granting institution)","2022","Container transport is an essential part of the well-functioning, highly specialized, and global production chains society currently relies on. To improve the utilization of resources, it is important to ensure all processes are as efficient as possible. Synchromodal transport is a recent transport paradigm which seeks to increase the efficiency of freight transport by letting transport providers change the mode of transport of goods in real-time. This new flexibility alleviates some of the obstacles to using sustainable transport modes, e.g., barges and trains, as it simplifies the process of changing transport plans if something unpredicted happens, such as delays, cancellations or if shipping requests that were announced later makes different routing smarter. Furthermore, synchromodal transport can improve the utilization of the transport vehicles, as the freight can be routed using up-to-date information about vehicle availability....","synchromodal transport; co-planning; model predictive control; vehicle and container routing","en","doctoral thesis","","978-90-5584-312-1","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:b043d1ac-e8f4-43fc-aa9c-1159875553c7","http://resolver.tudelft.nl/uuid:b043d1ac-e8f4-43fc-aa9c-1159875553c7","Self healing in Fe-based systems: From model alloys to designed steels","Fu, Y. (TU Delft Novel Aerospace Materials)","van der Zwaag, S. (promotor); van Dijk, N.H. (promotor); Brück, E.H. (promotor); Delft University of Technology (degree granting institution)","2022","When high-temperature steels are loaded under under industrially relevant conditions not only creep (i.e. a time dependent strain increase even under nominally constant loading conditions) occurs, but also local damage is formed. At relatively short exposure times quasi-spherical micron-sized cavities form preferentially at the grain boundaries oriented perpendicular to the principal loading direction. These cavities subsequently grow and coalesce into micro and macro cracks, which ultimately lead to failure of the structure. The concept of self healing, in which such damage is healed in-situ and under the applied loading conditions rather than is being prevented by a special microstructure, provides a new principle to extend the creep lifetime. Well-selected supersaturated solute atoms can selectively segregate at the free internal surface of the grain boundary cavities and fill them, thereby preventing the coalescence of cavities. This reduction in coalescence rate leads to an extended lifetime. The potential of the concept has been demonstrated in previous studies for binary Fe-based model alloys in which only one healing reaction can take place. The current work aims to take the validation one step further and to demonstrate it for a Fe-3Au-4W (in weight percent) model system in which two healing reactions can take place simultaneously, but without any intention to achieve decent mechanical properties. 
This work also aims to apply the self-healing concept to two multi-component steels, designed both to have decent mechanical properties and to demonstrate self-healing behaviour when exposed to the right conditions.","Self healing; Fe-based alloys; precipitation; diffusion; electron microscopy; synchrotron; X-ray tomography","en","doctoral thesis","","978-94-6366-587-2","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:b3ffbc58-efa6-4761-a020-5240ea7948f1","http://resolver.tudelft.nl/uuid:b3ffbc58-efa6-4761-a020-5240ea7948f1","Numerical modeling of conjugate magnetohydrodynamic flow phenomena","Blishchik, A. (TU Delft ChemE/Transport Phenomena)","Kenjeres, S. (promotor); Kleijn, C.R. (promotor); Delft University of Technology (degree granting institution)","2022","Steel is found irreplaceable in many industrial applications. It is currently predicted that steel consumption will increase significantly in the coming decades. Humanity is expected to produce more and more steel-based products, such as cables, cars, railways, bridges, stadiums, skyscrapers, etc. The increased demand will pose a serious challenge to steel-producing companies. At the same time, these companies strive to reduce the amount of carbon emissions which are released during the majority of steel-making processes. Thus, the steel-producing corporations currently carry out a lot of reforms aimed at improving production efficiency and making plants environmentally friendly. Continuous casting is a very important part of the steel-making process. During the continuous casting process, steel solidifies and takes the correct shape. There are several important nodes in this process, e.g. the tundish, the mold, the turning zone, etc. Given that the mold is the first stage where the solidification starts, a deep understanding of all physical phenomena in the mold flow could potentially help researchers to increase the process efficiency and the quality of final products. Originally, the mold flow is highly turbulent and unstable due to various physical processes arising simultaneously inside the mold. To control the flow, one of the tools widely used in the steel-making industry is the electromagnetic brake (EMBr). The work of the EMBr is based on magnetohydrodynamic (MHD) principles. A strong magnet is used to impose an external magnetic field on the flow of liquid steel which is highly electrically conductive. 
Hence, the generated electric current inside the liquid steel results in an active Lorentz force affecting the flow. The optimal and most efficient configuration of the EMBr remains an open question, as does a full understanding of the processes caused by the EMBr. In particular, it is not clear whether the electric interaction between the solidified shell (which also has a relatively high electrical conductivity) and the turbulent flow of liquid steel is significant.","Magnetohydrodynamics; turbulence; steelmaking; simulations","en","doctoral thesis","","978-94-6384-360-7","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:e2f4b411-3d8e-4b93-b037-096009c59f61","http://resolver.tudelft.nl/uuid:e2f4b411-3d8e-4b93-b037-096009c59f61","Advanced Measurement Techniques and Circuits for Array-Based Transit-Time Ultrasonic Flow Meters","van Willigen, D.M.","Pertijs, M.A.P. (promotor); de Jong, N. (promotor); Delft University of Technology (degree granting institution)","2022","This thesis describes the design, prototyping and evaluation of matrix-based clamp-on ultrasonic flow meters. Several new measurement techniques are presented as well as an Application-Specific Integrated Circuit (ASIC) designed for accurate measurement of flow velocity with matrix transducers.
The influence of circuit topologies on the zero-flow performance of ultrasonic flow meters has been analyzed and an algorithm is presented to reduce the offset. With a linear transducer array, flow measurements have been performed via two different acoustic paths, demonstrating the ability to accurately measure flow with array transducers through a stainless-steel pipe wall. In order to improve signal quality, an ASIC has been designed that is able to drive and read out 96 piezo transducer elements. The ASIC has been characterized electrically and flow measurements have been performed in combination with the linear transducer arrays.
Several new techniques, enabled by transducer arrays, have also been explored. By tapering the amplitude of the transmit signals, spurious waves can be suppressed. An auto-calibration technique has been developed that uses additional acoustic measurements to estimate the diameter of the pipe and the speed of sound in the pipe wall and liquid. Finally, a simulation study has been performed to explore the possibility of exploiting the beam-steering capabilities of transducer arrays to measure flow velocity profiles by using measurements obtained via multiple acoustic paths.","Ultrasonic; Flowmeters; Ultrasound; Matrix transducer arrays; Clamp-on flow meter; ASIC","en","doctoral thesis","","","","","","","","","","","Electronic Instrumentation","","",""
"uuid:f6a6096d-9eb1-4a77-b376-34e01b817011","http://resolver.tudelft.nl/uuid:f6a6096d-9eb1-4a77-b376-34e01b817011","Statistical post processing of extreme weather forecasts","Velthoen, J.J. (TU Delft Statistics)","Jongbloed, G. (promotor); Cai, J. (copromotor); Delft University of Technology (degree granting institution)","2022","In this thesis we develop several statistical methods to estimate high conditional quantiles to use for statistical post-processing of weather forecasts. We propose methodologies that combine theory from extreme value statistics and machine learning algorithms in order to estimate high conditional quantiles in large covariate spaces. In applications of weather forecasting we show improved predictive skill for precipitation forecasts.","Extreme quantile regression; Statistical post-processing; Extreme value theory; Extreme conditional quantile; Variable selection; Random Forest; Gradient boosting","en","doctoral thesis","","9789083272726","","","","","","","","","Statistics","","",""
"uuid:fad3199c-59c9-4ffc-975c-8a4690085c39","http://resolver.tudelft.nl/uuid:fad3199c-59c9-4ffc-975c-8a4690085c39","The Development and Stability of Palladium-based Thin Films for Hydrogen-related Energy Applications","Verma, N. (TU Delft Team Amarante Bottger)","Bottger, A.J. (promotor); Sietsma, J. (copromotor); Delft University of Technology (degree granting institution)","2022","Hydrogen has been contemplated as a desirable energy source of the future with enormous possibilities to create a carbon-neutral society. Since palladium (Pd) readily absorbs hydrogen even at low pressure and room temperature, Pd and its alloys are suitable for hydrogen production, purification, storage, gas sensors, and fuel cell catalyst. However, primary requirements for industrial applications are not always satisfied, such as usability under operation conditions, minimum capital cost, and sustained hydrogen embrittlement. Therefore, developing stable Pd-based thin films to investigate correlations between microstructural features and mechanical properties of a material is of great importance for many hydrogen-related technologies.
In this thesis, particular attention has been focused on the stability of a series of magnetron-sputtered Pd thin films with different nanostructures, i.e., non-voided compact and nano-voided open columnar morphologies. X-ray diffraction (XRD) analysis methods are advanced, utilizing the tailored microstructures of the Pd films to investigate the interplay between microstructure and hydrogenation properties of Pd-based thin films. The stress state and microstructural changes during hydrogen cycling are studied utilizing XRD line-profile analysis, and the deformation mechanisms are systematically discussed. The change in dislocation density by the generation and annihilation of dislocations at interfaces reflects the difference in film-substrate interaction. The insertion of an intermediate layer between the Pd film and a rigid substrate can prevent the buckle-delamination caused by the large volume expansion due to hydrogen absorption, but it also changes the hydrogen absorption performance. The different effects on the absorption properties in the case of compliant (polyimide) and rigid (titanium) intermediate layers are illustrated. The results of this work show that strong clamping usually suppresses or reduces hydrogen absorption, whereas the flexible layer enhances the lifetime of Pd thin films during prolonged hydrogen cycling. The research in this thesis deepens the understanding of an appropriate combination of film microstructure and choice of the intermediate layer to strengthen Pd-based thin films.
This dissertation investigates to what extent a visual language could support professional spreadsheet users in interacting with complex formulas. We divided our research into two phases. In the first phase, we try to understand better how spreadsheets are used in three ways...","End-User Programming; Spreadsheets; Block-based languages","en","doctoral thesis","","","","","","","","","","","Software Engineering","","",""
"uuid:e515547e-62bc-4893-b299-87c1286b5d55","http://resolver.tudelft.nl/uuid:e515547e-62bc-4893-b299-87c1286b5d55","Spin-in of RISC-V Processors in Space Embedded Systems","Di Mascio, S. (TU Delft Space Systems Egineering)","Gill, E.K.A. (promotor); Menicucci, A. (copromotor); Delft University of Technology (degree granting institution)","2022","The usage of terrestrial processors in space applications is not straightforward, as processors in space face unique challenges due to the effects of the space environment, like ionizing radiation causing Single Event Effects (SEEs). In the nineties, the European Space Agency chose the Scalable Processor ARChitecture (SPARC) Instruction Set Architectures (ISA) for its processors, as it was the only solution available at that time providing both openness and available software support in terrestrial applications. Currently, a large part of the worldwide space community is using SPARC-based radiation-hardened (rad-hard) or radiation-tolerant (rad-tol) LEON processors in ongoing and planned missions, although SPARC processors virtually disappeared from terrestrial applications. Rad-hard and rad-tol processors for space applications typically lag more than a decade behind their commercial counterparts in terms of performance and the gap is widening every year. This is mainly due to the use of Rad-Hard-By-Design (RHBD) cells and older technology nodes. The larger vulnerability to SEEs of complex microarchitectures is not the only reason why simple microarchitectures with low parallelism are still the vast majority of processors employed in space. As a matter of fact, most of the tasks executed by processors in space data systems are non-compute-intensive workloads. The reason is that they are mainly employed for non-demanding control and housekeeping operations. 
Therefore, enabling demanding tasks, such as the execution of Artificial Intelligence (AI) algorithms in space embedded systems, requires a large leap in space-grade processors, especially because space data systems in satellites are typically power-constrained. Recently, RISC-V, a novel free and open ISA, has risen in popularity in terrestrial applications, drawing the attention of several universities and companies. Given the similarity between SPARC and RISC-V, this dissertation starts by analyzing the advantages of using RISC-V in space applications. The openness of RISC-V has already enabled a vast field of research activities for terrestrial applications, with many tools and models at different levels of abstraction already available. Therefore, the space industry can spin-in developments from academia and industry, focusing efforts mainly on improvements concerning specific needs in space applications and without wasting effort on other activities. In order to fully exploit this modularity, the need to define the types of processors required in space applications was identified in this dissertation. The modularity of RISC-V was employed to identify several applications in space data systems and RISC-V processor profiles to address them. They were defined in this work by the ISA subset, Instruction-Level Parallelism (ILP), Data-Level Parallelism (DLP), Processor-Level Parallelism (PLP), reference implementation and expected performance. The processor profiles defined range from microcontrollers to general-purpose implementations to high-performance processors for AI. Finally, a roadmap to bring RISC-V IP cores for terrestrial applications to space level was defined, identifying the steps and models required.
After the thorough analysis of the state-of-the-art of RISC-V processors was completed, two different sets of activities were identified…","Satellite Data Systems; Processors; Fault Tolerance; Space Systems; Artificial Intelligence; RISC-V; Application Specific Integrated Circuits; Small Satellites","en","doctoral thesis","","978-94-6421-851-0","","","","","","","","","Space Systems Engineering","","",""
"uuid:a1908c73-2bc4-4947-944e-2bd3a177bfe6","http://resolver.tudelft.nl/uuid:a1908c73-2bc4-4947-944e-2bd3a177bfe6","Urn models and other approaches to risk and tails, with applications in risk management and climatology","Cheng, D. (TU Delft Applied Probability)","Redig, F.H.J. (promotor); Cirillo, P. (promotor); Delft University of Technology (degree granting institution)","2022","This dissertation collects three scientific contributions, already published in international peer-reviewed journals, plus some extra considerations and work-in-progress. First, we present a model based on reinforced urn processes, which conjugates to the right-censored recovery process, and empirically apply it to the time series of recovery rates. We perform a very thorough empirical study, including how different priors affect the posterior predictive distribution, how our model is updated with the empirical data during the global financial crisis, and we make predictions. Second, we apply a bivariate reinforced process derived from a Generalized Polya Urn scheme to model the linear dependence between the probability of default and the loss given default. Third, we offer a new perspective with Stochastic Poisson equation to deal with Spatio-temporal extremes. As it will be clear, the leit motiv of this thesis is the analysis of risk using different tools, from urn models to extreme value theory. In particular, we have focused on two risk applications: the modelling of credit risk in some of its declinations, and the prediction of the joint tail behavior of extreme sea surface temperature (SST) anomalies for the Red Sea. Almost every financial contract is affected by credit risk, that is the risk of changes in the creditworthiness of a counterparty. Financial economists, market participants, bank supervisors, and regulators have all paid close attention to credit risk measurement, pricing, and management. 
The probability of default, the recovery rate, and their dependence are fundamental aspects of credit risk. Measuring credit risk accurately is pivotal for four reasons. First, for financial economists, credit risk measures are very important for pricing credit risk portfolios, credit derivatives, etc. The importance of credit risk in the pricing of financial contracts has been underlined by the global financial crisis. Second, during the credit risk management process of companies, accurate credit risk measures can help the management team better determine their risk appetite. Third, the well-known Basel capital requirements are calculated using credit risk measures. Fourth, accurate estimation of credit risk can help a manager improve decisions. For example, in the recovery activities after default, more effort will be put on individuals with a high estimated LGD to reduce large losses. During my PhD studies I also took part in several conferences, among which the 11th international conference on Extreme Value Analysis (EVA 2019). In attending this conference, I decided to participate in one of the proposed challenges for young scholars, something that led to the writing of one of the contributions of this work, which also won the first prize in the competition.","Reinforced Urn Process; Credit Risk; Probability of Default; Loss Given Default; Extreme Value Theory; Stochastic Poisson Equation; Spatiotemporal Data","en","doctoral thesis","","","","","","","","","","","Applied Probability","","",""
"uuid:4fd952d3-f892-4567-8e51-e6623c7a6650","http://resolver.tudelft.nl/uuid:4fd952d3-f892-4567-8e51-e6623c7a6650","Experimental and numerical study of fatigue behaviour at the microscale of cementitious materials","Gan, Y. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Schlangen, E. (promotor); Šavija, B. (copromotor); Delft University of Technology (degree granting institution)","2022","Ageing is an inherent feature of cementitious materials. Ageing of material leads to gradual loss of the function of a concrete structure with increasing likelihood of failure. As one of the typical ageing phenomena, fatigue of concrete, has recently received considerable research attention. The phenomenon of concrete fatigue is complicated as it inherently involves multiple spatial scales owing to the multiscale heterogeneous nature of concrete. Over the past century, tremendous efforts have been devoted to concrete fatigue. However, many important scientific problems still remain unsolved. These problems include at least: how does the multiscale heterogeneous material structure of concrete affect the fatigue behaviour; how does fatigue damage evolve inside the concrete; how to properly simulate and predict the fatigue behaviour. The main o bjective of this thesis is to improve the knowledge of fatigue behaviour of cementitious materials at the microscale and to develop a multiscale modelling scheme from micro- to meso-scale to estimate the fatigue properties. Firstly, experimental techniques for characterization of mechanica! and fatigue properties of cementitious materials at the microscale are developed. These techniques include the preparation and testing of micrometre sized sample. For sample preparation, a precision micro-dicing machine is used. With the help of a nanoindentation measurement device, micro-bending tests are performed to study the flexural fatigue behaviour of two major components in the mortar, i.e. 
the cement paste and the interfacial transition zone (ITZ). The fatigue fracture surface and damage evolution are assessed using an Environmental Scanning Electron Microscope (ESEM) and X-ray computed tomography (XCT). The mechanical properties, including the strength and elastic modulus, as well as the fatigue properties, including the relationship between fatigue life and stress level, the stiffness degradation and residual deformation evolution, can be obtained using the developed microscale testing approach. Secondly, a numerical model using a 2D lattice network is developed to simulate the fatigue behaviour of cementitious material at the microscale. Images of 2D microstructures of cement pastes and ITZ obtained from XCT tests are used as inputs and mapped to the lattice model. Different local mechanical and fatigue properties are assigned to different phases of the cement paste and interfacial transition zone. A constitutive law for cyclic loading is proposed to consider the fatigue damage evolution. Experimental results obtained at the microscale are used to calibrate and validate the model. The developed model is able to investigate the effect of microstructure heterogeneities on fatigue damage evolution in a very efficient way and can establish a quantitative relationship between the material structure and the global fatigue performance. Moreover, it can also provide valuable insight into the fatigue damage evolution and fatigue fracture phenomena under different stress levels. Finally, a parameter-passing scheme is adopted for upscaling of the mechanical and fatigue properties. Via this approach, the global fatigue fracture behaviour (i.e. stress-strain response and S-N relation) simulated at the smaller scale can be used as input for the fatigue fracture modelling at the larger scale. The model can satisfactorily predict crack patterns, mechanical and fatigue properties under fatigue loading. The model has fully predictive capabilities at the mesoscale.
Hence, the model offers an opportunity to investigate in more detail the influence of material structures at different scales on the macroscopic fatigue performance.","Cementitious materials; cement paste; fatigue; Lattice Fracture Model; X-ray computed tomography; nanoindenter","en","doctoral thesis","","978-94-6421-840-4","","","","","","","","","Materials and Environment","","",""
"uuid:bed383ed-c091-4c2a-a530-aada366283d8","http://resolver.tudelft.nl/uuid:bed383ed-c091-4c2a-a530-aada366283d8","Safety assessment of automated vehicles using real-world driving scenarios","de Gelder, E. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Paardekooper, Jan-Pieter (copromotor); Delft University of Technology (degree granting institution)","2022","Automated Vehicles (AVs) have a great potential to change transport fundamentally by making it safer, by reducing travel time, and by increasing mobility and accessibility for all. The level of automation of these vehicles determines the extent to which the driver’s task is accomplished by the AV. With the increasing number of AVs entering the market, the level of automation of these vehicles is increasing. The increasing level of automation will cause a paradigm shift: traditionally, human drivers are responsible for the behavior of the vehicle, even if the vehicle is momentarily controlled by an Automated Driving System (ADS), but with increasing levels of automation, the human driver will no longer be solely responsible. So, the accountability and liability shift from the driver to the vehicle manufacturer, the operator of the vehicle (fleet), and/or the (vehicle) authorities. Due to this paradigm shift, for higher levels of automation, it can no longer be assumed that the human driver intervenes whenever the ADS does not respond appropriately. To guarantee that these ADSs respond appropriately in nearly all situations, new methods for assessing ADSs are required.
Scenario-based assessment is an approach for assessing AVs that is broadly supported by the automotive field. With a scenario-based assessment, the AV under test is subjected to many different test scenarios. These test scenarios resemble situations that may be encountered in real-world traffic, to see whether the AV responds appropriately to these scenarios. One of the main challenges with scenario-based assessment of an AV with a high level of automation is to come up with a set of test scenarios that provides enough confidence that the AV responds appropriately in nearly all situations. One popular approach is to use real-world data that contain scenarios from real-world traffic as a source to automatically generate test scenarios. This dissertation describes new methods for improving this data-driven scenario-based assessment of AVs.
The first contribution of this dissertation is a comprehensive and operable definition of the term scenario in the context of scenario-based assessment of AVs. We define a scenario as a quantitative description of the relevant characteristics and activities and/or goals of the ego vehicle(s), the static environment, the dynamic environment, and all events that are relevant to the ego vehicle(s) within the time interval between the first and the last relevant event. A scenario category is defined as the qualitative counterpart of a scenario and can be regarded as an abstraction of a scenario. To enable a computer to store, communicate, interact with, and interpret scenarios, an Object-Oriented Framework (OOF) is proposed in which scenarios, scenario categories, and their building blocks are defined as classes of objects having attributes, methods, and relationships. The advantage of the OOF is that it promotes clarity, modularity, and reusability of the objects that constitute a scenario.
The second contribution is a novel metric for quantifying the degree of completeness of the collected data that are used for the data-driven scenario-based assessment of AVs. The data are used to estimate unknown probability density functions (pdfs) of the important parameters that are used to describe scenarios. The proposed completeness metric is based on the expected approximation error, which is the discrepancy between the real pdf and the estimated pdf: a lower approximation error indicates a higher degree of completeness.
The third contribution is a novel method for capturing scenarios of a specific scenario category from a data set. For example, the provided method can capture all cut-in scenarios from a data set. One of the benefits of the method is that characteristics of a scenario that are shared among different scenario categories need to be identified only once. As a result, the provided method is easily applied to a wide range of scenario categories, such that a wide variety of scenarios can be obtained from the data.
The fourth contribution is the proposal of two complementary methods for generating test scenarios for AVs. The first method automatically determines the parameters that best describe the scenarios of a specific scenario category. The underlying, unknown pdf of the parameters is estimated and scenarios are generated by sampling parameter values from the estimated pdf. The second method enables the conditional sampling of parameter values, which can be used to, e.g., generate scenarios with predefined starting conditions. The benefits of the presented methods are that the generated scenarios are representative of real-world scenarios, they cover the actual variety found in real-world traffic, and they extend the variety found in the collected data. To measure the extent to which the generated scenarios indeed represent real-world scenarios while covering the actual variety found in real-world traffic, the novel Scenario Representativeness metric is proposed.
The fifth contribution is the proposal of two novel methods for quantifying the risk of an AV. Both methods calculate the risk by combining the outcome of virtual simulations of scenarios generated using the aforementioned methods and the estimated likelihood of these scenarios. The first method quantifies the risk prospectively, i.e., before the actual deployment of the AV on public roads. The quantified risk supports the risk assessment activities of ISO 26262 and ISO 21448, the leading standards in automotive safety. These standards decompose the risk into three aspects: exposure, severity, and controllability. Whereas safety experts’ opinions are traditionally used to provide qualitative, subjective ratings for each of these three aspects, our proposed method computes these aspects in a data-driven, quantitative manner. The second method is the novel data-driven Probabilistic RISk Measure derivAtion (PRISMA) method, which is used to derive Surrogate Safety Measures (SSMs) that estimate the probability of a specific event (e.g., a crash) in real time. As opposed to existing SSMs, which are only applicable in specific types of scenarios, the PRISMA method can be used to derive multiple SSMs for different types of scenarios.
The work presented in this dissertation thus makes a substantial contribution to the full integration of a scenario-based assessment for the type approval of AVs. This, in turn, brings us closer to the large-scale deployment of AVs on public roads.","","en","doctoral thesis","","978-94-6384-362-1","","","","","","","","","Team Bart De Schutter","","",""
"uuid:31e29e24-839a-4f14-b558-7e8ddc3e8269","http://resolver.tudelft.nl/uuid:31e29e24-839a-4f14-b558-7e8ddc3e8269","Increasing the sustainable consumption of mainstream consumers: through design and communication","Visser, Mirjam (TU Delft Marketing and Consumer Research)","Schoormans, J.P.L. (promotor); Hultink, H.J. (copromotor); Delft University of Technology (degree granting institution)","2022","This thesis explores why some consumers buy sustainable options and others do not. As well as how this can be altered through targeted marketing communication and design. Sustainable intent is no guarantee for sustainable behaviour, but sustainable intent is also not a necessity for sustainable behaviour. It is the sustainable behaviour that counts. The reasons consumers buy energy-efficient vacuum cleaners makes this clear. Three out of four buyers of energy-efficient vacuum models did buy an energy-efficient vacuum cleaner for other reasons than environmental friendliness. They bought their energy efficient vacuum cleaner for the exact same reasons as those who bought an inefficient model. For neither shoes nor vacuum cleaners, sustainability is a primary buying criteria. On the contrary, there is a bias that sustainability comes at the cost of perceived quality, fashion image or performance. Only when all the main buying criteria are met, sustainability adds differentiation and value. This counts for both “feel” products (such as shoes and clothing) or “think” utilities (such as household appliances and utilities). The highest willingness to buy the sustainable shoe has been reported when the communicated benefit was on personal relevance combined with a green design.
Sustainability and the environmental impact of a product are, for most consumers, abstract and distal, more abstract than the present need that will be solved with the new acquisition. It is also hard, if not impossible, for a layman to compare the environmental costs of product alternatives. Results of comparisons are often context-dependent and counterintuitive, which may reduce green trust. To make sustainable products attractive to mainstream consumers, it is necessary, as in mainstream marketing, to focus communication and design on the consumers’ main buying criteria. Deliver sustainability, but focus the product’s message and design on the general relevance and needs of the customer or user. Communicating sustainable products is most effective when personal benefits are combined with a linked sustainable benefit, such as a health benefit or an energy cost reduction. Presenting the energy efficiency of appliances as a result of broader technological advantages is more effective in creating sustainable purchases than focusing the communication on the products’ environmental friendliness.
Design can and should counter the bias and negative performance perceptions of sustainability. Consumers often perceive the smaller energy-efficient motors in appliances as less robust and powerful than energy-inefficient ones. Design can counter this perceived underperformance of sustainability with additional volume and weight, both of which have only a minor effect on the environmental cost. Sustainable utilities should perform as well as, and still look as robust and powerful as, less sustainable variants. Sustainable shoes without leather should likewise be just as comfortable, breathable and fashionable.
Unfortunately, the study on recommendations by buyers of sustainable vacuum cleaners showed that sustainable buyers are less positive in their recommendations than those who bought unsustainable versions. This makes owners of energy-efficient appliances ineffective in promoting sustainable alternatives, increasing green trust or changing social norms. Differences in satisfaction ratings are not caused by differences in the energy efficiency of the products but by differences in the products’ perceived performance, ease of use and value for money, all of which are independent of the input power of vacuum cleaners. Additionally, irrespective of the energy efficiency of the vacuum cleaners, higher suction power and increased weight positively mediate the recommendations. Focusing design and communication on these aspects rather than on energy efficiency alone can reduce the perceived green risk and increase green trust in sustainable products.
For energy-consuming durables, the largest part of their environmental cost is often realised during the use phase. Eco-design legislation to increase the energy efficiency of appliances and cars prescribes the use of eco-settings to reduce energy consumption. Most eco-settings are optional and, in most cases, default back to the unsustainable settings after they are switched off. The washing machine study shows that only a few percent of the theoretical energy savings from the eco-setting are realised. The focus of legislators has not been on user behaviour and the effectiveness of these energy efficiency measures. The washing machine study shows that energy-inefficient users consume three times as much energy as energy-efficient users (Chapter 5). A comparison of different design-for-sustainable-behaviour interventions showed that eliminating the unsustainable settings, combined with feedback on energy consumption, is far more effective in reducing energy consumption. Design interventions are cost-efficient to implement and an effective addition to the technological innovations in motor adaptations and insulation. Feedback also teaches new behaviour.
Sustainability should be implicit rather than explicit if it is not relevant to the product’s performance or image. By focusing design and communication on consumer relevance and behaviour, this thesis highlights that it is possible to increase sustainable consumption among mainstream consumers.
The nonlinear Fourier transform for signals that decay sufficiently fast is currently the most commonly used transform in nonlinear Fourier transform based communication systems. We developed new algorithms for computing the continuous nonlinear Fourier spectrum, which is one part of the nonlinear Fourier spectrum of decaying signals. We demonstrated significant improvements over existing algorithms in multiple numerical benchmarks, and implemented the algorithms in the open-source software library FNFT. We also developed NFDMLab, a Python-based open-source simulation environment for nonlinear Fourier transform based communication systems that relies on FNFT. The developed forward nonlinear Fourier transform algorithms are fast higher-order methods with a complexity of O(D log² D) for computing the continuous nonlinear Fourier spectrum from D samples of a decaying signal. In the numerical benchmarks, we introduced the trade-off between accuracy and computation time as a new way to compare nonlinear Fourier transform algorithms and found that the newly proposed algorithms perform significantly better than prior work in this regard. We also provided the first counting analysis of a fast nonlinear Fourier transform algorithm.
There is also interest in using the nonlinear Fourier transform for periodic signals, as it is closer to the method used in conventional orthogonal frequency division multiplexing communication systems. The definition of the nonlinear Fourier transform for periodic signals differs from that for decaying signals. Communication systems based on nonlinear Fourier transforms for periodic signals make use of so-called finite-genus solutions of the nonlinear Schrödinger equation. Riemann theta functions are the traditional way to realize the inverse nonlinear Fourier transforms used to synthesize finite-genus solutions. They are multi-dimensional Fourier series, and their numerical computation suffers from the curse of dimensionality. This limits the genus of the signals used in communication systems and is seen as a major bottleneck. We derived new bounds on the series truncation error and proposed two tensor-train-based algorithms and a hyperbolic-cross-index-set-based algorithm for computing high-dimensional Riemann theta functions. We compared them to existing algorithms in multiple numerical benchmarks. The bounds we derived on the truncation error of the Riemann theta functions allowed us to rule out several of the existing approaches for the high-dimension regime. We demonstrated that the algorithm based on the hyperbolic cross can compute Riemann theta functions up to 60 dimensions with moderate accuracy, which is significantly higher than what was previously feasible.
We also tried to improve the performance of nonlinear Fourier transform based communication systems known as b-modulators in the highly nonlinear regime using improved numerical algorithms. When this did not yield improvements, we conducted a theoretical analysis of b-modulation systems. The analysis allowed us to prove theoretically that nonlinear bandwidth, signal duration and power are coupled when singularities in the nonlinear spectrum are avoided. When the nonlinear bandwidth is fixed, the coupling results in an upper bound on the transmit power. The power bound decreases with increasing signal duration, which consequently decreases the signal-to-noise ratios for long signals. This explains the observed performance degradation in this regime without resorting to numerical difficulties. The result is the first of its kind, as such behaviour is not known from conventional linear systems. We also demonstrated numerically that the transmit powers achieved by an exemplary b-modulated system are close to its theoretical limits.
Fiber-optic communication systems based on nonlinear Fourier transforms have been proposed to potentially tackle fiber nonlinearity, which is a major factor currently limiting transmission capacity. Efficient numerical algorithms are essential for real-time operation as well as for efficient simulations of nonlinear Fourier transform based fiber-optic communication systems. The algorithms presented in this dissertation potentially make already published nonlinear Fourier transform based communication systems more practical and also allow for the development of new designs that were previously infeasible. Furthermore, this dissertation identifies a limitation on communication system design imposed by the structure of the nonlinear Fourier transform. It can be used to explain the inability to perform efficient communication with long-duration signals, which was previously attributed to numerical problems, and to guide the design of future systems.
Wind energy is projected to produce a significant share of electricity in the coming decades. Wind turbines have a small footprint during operation, but a turbine with its foundation is a massive structure with a significant material footprint. Airborne wind energy uses tethered devices to harness high-altitude wind, substantially reducing bulk material use. However, better models are required to make these systems reliable and efficient.
This thesis focuses on membrane traction kites that harness wind energy by flying fast crosswind maneuvers. A high-fidelity aeroelastic model for the kites is developed to predict the aerodynamic loads and structural deformations of real systems. The aeroelastic model assumes that membrane kite flight can be modeled as a sequence of steady states without memory of the past. The steady-state aerodynamics are simulated by numerically solving the incompressible Reynolds-averaged Navier-Stokes equations. High-quality numerical grid generation strategies are developed for the unconventional wing shape of the membrane kites.
The membrane kites are tensile structures, and therefore a finite element model with cable and membrane elements without rotational degrees of freedom is used to calculate the deformed shape. The solver calculates the average surface without wrinkles and applies an additional model when an element is under compression. The steady-state response of the structure is calculated with a dynamic relaxation technique. The two solvers are coupled in a partitioned manner, and during each iteration both solvers compute a steady state. The staggered approach requires several coupling iterations to converge. The fluid mesh needs to be adapted to the deformed geometry during each iteration, and therefore the mesh is deformed with radial basis function interpolation with greedy point selection.
This thesis presents three computational studies with the framework. The first two studies focus on the aerodynamics of a rigidized leading-edge inflatable (LEI) kite airfoil and wing. The aerodynamic model is validated against an existing wind tunnel experiment on a similar airfoil. Since the mesh is generally the largest source of model uncertainty in CFD, the uncertainty is assessed by mesh refinement studies. A range of flight conditions is simulated by varying the inflow angle of attack, sideslip angle and Reynolds number. The flow around the wing is characterized by a recirculation zone behind the leading edge tube due to the lack of a second skin. The zone is highly influenced by the inflow conditions. The effect of the chordwise inflatable tubes on the aerodynamics is assessed by creating a model with and without them. The results show that the chordwise tubes have an almost negligible impact on the aerodynamic forces, which suggests they could be left out of the aerodynamic model in future work, simplifying mesh generation and mesh deformation.
The third study examines the aeroelasticity of a ram-air kite for several power configurations obtained by changing the trim of the bridle lines. The kite forms a typical ram-air shape with ballooning in between ribs, and the nose of the wing is flattened at the stagnation region. The aerodynamics of the flexible kite is compared to a rigidized version of it, fixed at the symmetry plane and held at the pre-inflated shape with stagnation pressure. The results show that the flexible kite is aerodynamically more efficient than the rigidized version. The morphing wing adapts itself to the incoming flow in a way that extends the range of feasible flight conditions and improves efficiency. The aeroelastic framework converges satisfactorily for all the power setups, and it is computationally relatively inexpensive for its fidelity. Consequently, the framework could be integrated into a membrane kite design process and could be a valuable asset in evaluating kite designs.
There is a need for a systematic approach that acknowledges the socio-technical complexity of clusters for industrial symbiosis (IS) implementation. This Ph.D. dissertation aimed to understand how IS takes shape within the complex socio-technical structure of industrial clusters, in order to improve their environmental and economic performance in the long term. The first requirement for IS emergence is the existence of technical and collaborative potential due to geographic proximity. Moreover, external factors also influence actors' behaviors in the cluster and, consequently, IS formation. Rules and regulations on the one hand, and economic conditions on the other, steer actors' decisions toward IS implementation. This research combines engineering, social science, and economic assessment methods to study IS emergence as part of a larger system.
To this end, a stepwise approach was taken, starting with an assessment of the technical potential for IS in an emerging industrial cluster (Chapter 2). We then studied the structure of previous collaborations in the cluster by analyzing the regional and national institutions governing actors' behavior (Chapter 3). After assessing the technical, collaborative, and institutional aspects of IS emergence, these aspects were combined with financial requirements in a MILP optimization model to study the behavior of the system as a whole (Chapter 4). We investigated the formation of IS collaboration under different external conditions and evaluated the contribution of the formed IS collaborations to cluster performance improvement. The research further examined the interplay between IS and carbon capture and storage toward a more sustainable cluster development (Chapter 5).
To examine the feasibility and functionality of the proposed methods, we used the “Persian Gulf mines and metals special economic zone” (PGSEZ), an iron- and steel-based cluster in Iran, as a real case study. The steel industry is critical for economic modernization and is one of the most energy-intensive and polluting industries: iron and steel production accounts for 23% of final energy demand and 28% of direct CO2 emissions in the industrial sector. This dissertation extends our understanding of the formation of IS as an integrated component of industrial clusters through several conceptual and methodological contributions, while the case study contributes to filling the gap in regional IS studies in developing oil-rich countries, where the governing institutional and economic conditions differ from those in developed economies.
To design quieter and more efficient propellers, an optimal blade loading solution is required. For a rigid propeller, the blade loading distribution is optimized by modifying the geometry, so that the loading maximizes efficiency and minimizes acoustic emissions. In addition to efficiency and noise considerations, propeller optimization must consider thrust, ship speed and fairing constraints, as well as the unsteady wake of the vessel....","Explainable Machine Learning; Propeller Design; ILES; Barotropic model","en","doctoral thesis","","978-94-6419-576-7","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:5b842771-afc6-4ccb-91d3-bf0c3afac06b","http://resolver.tudelft.nl/uuid:5b842771-afc6-4ccb-91d3-bf0c3afac06b","Developing an Aerosol Layer Height Retrieval Algorithm for Passive Space-Based Sensors","Nanda, S. (TU Delft Atmospheric Remote Sensing)","Levelt, Pieternel Felicitas (promotor); Veefkind, j. Pepijn (copromotor); de Graaf, M. (promotor); Delft University of Technology (degree granting institution)","2022","Aerosols are the source of the largest uncertainties in our climate models, blurring our outlook of the future. This has been attributed to the complexity of measuring their properties, which vary over time and space. Atmospheric circulation spreads aerosols across the globe from a point source, which makes satellite-based observations lucrative. At present, there are several aerosol observing missions that deliver aerosol data products in a consistent and operational manner; these missions report several aerosol properties that are important for reducing the contribution of uncertainties to our climate models. What is missing, however, is an operational data product that measures the height of these aerosols at a global scale. Earlier attempts at this use data derived from lidar instruments in space; an example being the Cloud-Aerosol Lidar with Orthogonal Polarisation (CALIOP) instrument, which uses lasers to measure atmospheric composition. In the case of aerosols, the amount of backscattered electromagnetic radiation at each atmospheric layer gives an idea of the amount and height of aerosols. The mobility afforded by space-based instruments gives space lidars a leg up over ground-based lidars. However, the coverage of such lidar instruments is merely near-global. This has to do with the fact that while lidars in space can circle the entire globe, their footprint on the ground is very narrow, in the order of several hundred meters to a few kilometers: this is an inherent limitation of the measurement principle. 
Consequently, a specific patch on Earth is revisited at intervals that can span several days. An alternative to space-based atmospheric lidars are space-based spectral imagers. These are essentially cameras that take snapshots of the Earth, capturing the light and splitting its different electromagnetic frequencies to a resolution of nanometers using very precise prisms and detection techniques. The advantage of these instruments over lidars is that they have a very large footprint, covering several thousand kilometers of area in a single flyby. This allows for daily to even sub-daily coverage of the Earth, as each snapshot covers larger and sometimes overlapping areas. The challenge is to estimate aerosol height using spectral signatures of the Earth’s atmosphere in an operational environment that can handle data coming in from the satellite at a rate of several million pixels every few minutes. This dissertation focuses on delivering the aerosol height data product operationally using computer algorithms. The logic of aerosol height estimation using these so-called spectral snapshots of the atmosphere differs from that using lidars; the instrument does not provide data for different atmospheric layers. The height has to be inferred using the absorption spectrum of the oxygen molecule. O2, the second most abundant gas in our atmosphere, has a unique spectral signature in the near-infrared region, around 765 nm. The structure of the oxygen molecule allows it to absorb some of this radiation, creating a structure of absorption bands. This spectral signature deepens as more light is absorbed by the oxygen: this happens as photons penetrate deeper and deeper into the Earth’s atmosphere, unless they hit a barrier. If the photons bounce back from an aerosol layer at a very high altitude, the amount of absorption by oxygen will be low. This ‘depth’ of absorption gives clues about how high an aerosol layer might be.
Computer models can reconstruct this oxygen absorption structure in a simulated spectrum. One of the control parameters within the model is the height of an aerosol layer. The generated spectral signature of a simulated atmosphere, resembling the atmosphere of a pixel in the snapshot from space-based hyperspectral imagers, is then compared to the measured spectral signature. This usually results in a non-zero difference, caused by errors in the model. These errors can be minimised with computer algorithms and mathematical information retrieval techniques: by changing the height of the aerosol layer, the modeled atmosphere is brought closer to the measurement, yielding an aerosol height estimate. In this dissertation, computer algorithms inspired by mathematical models of brain neural networks, as well as information retrieval techniques such as least squares, are used…","aerosol; Earth observation; retrieval; neural networks","en","doctoral thesis","","","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:f686a46e-d470-4fb5-9a6d-997501188dfa","http://resolver.tudelft.nl/uuid:f686a46e-d470-4fb5-9a6d-997501188dfa","Empirical Essays in Artificial Intelligence Ethics","Martins Martinho Bessa, A.C. (TU Delft Transport and Logistics)","Chorus, C.G. (promotor); Kroesen, M. (copromotor); Delft University of Technology (degree granting institution)","2022","As Artificial Intelligence (AI) becomes increasingly important in modern society, there is a pressing need to address the ethical issues associated with these technologies. AI Ethics is a necessary endeavor to capitalize on the benefits of AI while minimizing its risks. However, it faces important challenges related to normative urgency, multi-purpose nature of AI, and multitude of stakeholders operating in the AI space. This doctoral dissertation builds on the premise that empirical information is valuable for AI Ethics to address these challenges and realize its normativemandate. The main ambition is to make an empirical contribution that facilitates the reflective development of AI, which assists the communities operating in the AI space to engage in a critical reflection on AI.","Artificial Intelligence; Ethics; Morality; Empirical Research","en","doctoral thesis","","978-94-6384-354-6","","","","","","","","","Transport and Logistics","","",""
"uuid:851366e2-88e2-4d9e-8831-35a8dfe450df","http://resolver.tudelft.nl/uuid:851366e2-88e2-4d9e-8831-35a8dfe450df","Steering Product Formation in High-Pressure Anaerobic Digestion Systems: The role of elevated partial pressure of carbon dioxide (pCO2)","Ceron Chafla, P.S. (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); Rabaey, Korneel (promotor); Lindeboom, R.E.F. (copromotor); Delft University of Technology (degree granting institution)","2022","Anaerobic processes such as Anaerobic Digestion (AD) and mixed culture fermentation (MCF) are important technologies in the bioeconomy context since they can be used to convert (waste) biomass feedstock into gaseous energy carriers and chemical commodities, theoretically without the use of any additional energy source. AD is a multi-step bioconversion process pursuing organic matter stabilization whose final product, i.e., biogas, can be used as an energy source. On the other hand, MCF employs open mixed cultures under non-sterilized conditions to produce carboxylates, i.e., short and medium-chain organic acids, which will serve as chemical building blocks after downstream processing. Limitations of biogas production are associated with the low CH4 content (≈50-60%), presence of impurities (like H2S) and unsuitable final pressure for direct connection to national grids. Thus, in recent years, the topic of biogas upgrading to biomethane (i.e., CH4>90%) has gained momentum and in-situ and ex-situ alternatives have been proposed with differences in financial and technical viability as well as achieved final CH4 content. While for the carboxylate production, major limitations are associated with process selectivity, presence of trace pollutants and too low broth concentrations for direct application inducing a need for “wet” and energy-intensive downstream processing. High-Pressure Anaerobic digestion (HPAD) is an innovative technology designed for simultaneous digestion and biogas upgrading. 
HPAD takes advantage of the large differences in solubility between biogas constituents, i.e., CH4 and CO2. Consequently, CH4 will predominantly remain in the gas phase after a pressure increase, whereas ionisable gases like CO2 and H2S will increasingly dissolve in the liquid. Thus, from a biogas production perspective, the proposed technology accomplishes a higher CH4 content in the gas phase at the cost of increased dissolved CO2 levels. The overall performance of the process under elevated pCO2 has not been adequately addressed. Mechanistic explanations for the role of increased dissolved CO2 in the fermentation process remain speculative, partially due to the limited amount of published experimental work on high-pressure fermentation with open cultures. Since CO2 exerts multiple roles in biological systems, increased dissolved CO2 could impact the kinetic and energetic feasibility of the reaction chain in AD and MCF, as well as the microbial community dynamics. These effects constitute a notable knowledge gap that requires urgent attention…","","en","doctoral thesis","","978-94-93270-78-7","","","","","","","","","Sanitary Engineering","","",""
"uuid:1cd3a854-bd68-46ac-98f6-09db88bc0766","http://resolver.tudelft.nl/uuid:1cd3a854-bd68-46ac-98f6-09db88bc0766","Understanding radar backscatter sensitivity to vegetation water dynamics: Sub-daily variations in ground-based experiments","Vermunt, P.C. (TU Delft Water Resources)","Steele-Dunne, S.C. (promotor); van de Giesen, N.C. (promotor); Delft University of Technology (degree granting institution)","2022","Observing vegetation water dynamics from space offers insights into plant-water relations and water and carbon fluxes across ecosystems at local to global scales. A promising technique to observe water in the vegetation layer is radar, an active form of microwave remote sensing. Interactions between microwaves and vegetation material depend on dielectric properties of the vegetation tissue, which are a function of water content. The research presented within this thesis aims to extend our physical understanding of the relationship between vegetation water dynamics and radar backscatter. The particular focus was on sub-daily dynamics, motivated by the dynamic nature of plantwater interactions and developments in the availability of sub-daily spaceborne radar observations. Moreover, we examined the effect of vertical water dynamics inside the vegetation layer on backscatter, which is relevant for better understandingwhich parts of the vegetation layer control the signal. To limit complexity, we focused on homogeneous corn fields. During ground-based experimental campaigns, we collected scatterometer data in vertical (VV), horizontal (HH) and cross (VH and HV) polarizations, and extensive measurements of water dynamics from these fields. 
These datasets were analyzed using statistical methods and electromagnetic models.","Microwave remote sensing; vegetation water content; diurnal cycle; leaf surface water; soil moisture; corn field; L-band; tower-based scatterometer","en","doctoral thesis","","978-94-6421-791-9","","","","","","","","","Water Resources","","",""
"uuid:f43a32d7-cd35-4009-ac13-cfd194f35a86","http://resolver.tudelft.nl/uuid:f43a32d7-cd35-4009-ac13-cfd194f35a86","Associating properties of dissolved organic matter to competitiveness against organic micropollutant adsorption onto activated carbon","Wang, Q. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); Zietzschmann, F.E. (copromotor); Delft University of Technology (degree granting institution)","2022","To eliminate organic micropollutants (OMPs) from (surface) water, activated carbon adsorption is a cost-effective technology to remove a broad range of OMPs without producing any byproducts. However, co-existing dissolved organic matter (DOM), at much higher concentrations (mg C/L) than OMPs (ng/L-μg/L), inducing adsorption competition, can interfere with OMP removal. Direct site competition and pore blocking are two DOM competition mechanisms, and low molecular weight (LMW) DOM has been recognized as the major competitor in the site competition against OMPs. However, the insights into DOM molecular properties are limited with regard to DOM competition. Therefore, the objective of this research was to relate (LMW) DOM properties to the competitiveness against OMPs, clarify the mechanism of direct site competition, and explore a useful DOM surrogate to predict DOM competitiveness.
Model DOM compounds (mDOMs) can be described individually and more accurately with molecular properties than a complex, real DOM matrix in water. To elucidate the impact of LMW DOM characteristics (hydrophobicity/polarity and aromaticity) on DOM competitiveness, fifteen model compounds (mDOM), differing in functional groups (hydroxyl, phenol, carboxyl groups, etc.), were used to represent several elemental structures of LMW DOM. By temporarily occupying adsorption sites prior to OMPs, LMW mDOM was found to be more competitive in inhibiting OMP adsorption kinetics than OMP adsorption equilibrium. Although OMPs were preferentially adsorbed onto activated carbon over mDOM, the large concentration asymmetry (~500 μg DOC/μg OMP) made a few mDOM compounds exert strong competition against OMPs. The mDOM competitiveness increased when compounds were more hydrophobic and more aromatic, whereas π-π interactions were more important than hydrophobic interactions in determining the competitiveness of LMW mDOM compounds. As an integrated indicator, mDOM adsorbability, defined by mDOM adsorption capacity, was better associated with mDOM competitiveness than hydrophobicity or aromaticity individually. The competition was found to be strong between strongly adsorbable mDOM and weakly adsorbable OMPs, whereas weakly adsorbable mDOM could even co-adsorb with strongly adsorbable OMPs with little to no competition.
To relate DOM adsorbability to competitiveness in natural waters, a two-stage adsorption procedure was designed to differentiate the adsorption of DOM fractions and OMPs, by removing variously adsorbable DOM fractions with an activated carbon pretreatment and analyzing the competitiveness of the remaining DOM fractions. Our results demonstrated that adsorbable (LMW) DOM was not necessarily competitive against OMPs. In addition, an increasing number of DOM competitors was observed against the weaker adsorbable OMPs, compared to their stronger adsorbable counterparts. Similarly, more DOM competitors were identified at high initial OMP concentrations, due to the increased loading of OMPs on activated carbon, highlighting the variable roles (varying competitiveness/complementary adsorption) of differently adsorbable DOM fractions in competition.
To elucidate the role of molecular weight (MW), polarity and aromaticity in DOM competition for a natural DOM with a complex molecular composition, activated carbon and anion exchange resin (AER) pretreatments served to differentiate competitive DOM from natural DOM. Ultrahigh-resolution Fourier transform mass spectrometry was employed for DOM analysis at the molecular level. A large percentage of molecular formulas in untreated DOM was PAC-adsorbable (97.8% for 40 mg PAC/L), while ~75% of PAC-adsorbable formulas were considered poorly competitive, since these molecular formulas were not detected in the highly competitive DOM remaining after AER pretreatment. The semi-quantitative analysis revealed that aromaticity was the dominant factor for LMW DOM adsorbability and competitiveness. In contrast, at higher MW, the competitiveness of an increasing number of aromatic DOM compounds was diminished due to strong dissociation induced by relatively high polarity.
Finally, the interference of ozone-modified NOM with the adsorption of 2-methylisoborneol (MIB, an odorous OMP) was studied in three natural waters and one standard humics solution, in order to study how ozonation influences the competitiveness of DOM with different MW distributions. In the three natural waters, reduced NOM competition against MIB was found to coincide with increasing ozone consumption. The cleavage of the macromolecules in a standard humics solution, with larger molecular weight and higher aromaticity than the humics in natural waters, only induced a slightly stronger competition at low/moderate ozone consumption. Overall, the declined aromaticity outweighed the produced LMW DOM in the competitiveness of DOM against MIB in ozonated natural waters. The UV absorbance of the LMW DOM correlated better with the competitiveness of ozonated/non-ozonated waters than the LMW DOM concentration itself, underlining the role of LMW hydrophobic aromatics in predicting competitive adsorption.
From this thesis, it can thus be concluded that DOM competition against OMPs is not ubiquitous across all (adsorbable, LMW) DOM fractions. The amount of DOM competitors, as well as their competitiveness, varies strongly with OMP adsorbability and the initial OMP concentration (i.e., the concentration asymmetry). For LMW DOM, aromaticity was a key characteristic promoting DOM competitiveness, while high polarity reduced DOM competitiveness through dissociation (and thus high hydrophilicity). To project the competitiveness of ozonated DOM, in which hydrophobicity and aromaticity are simultaneously diminished, LMWUV (the UV absorbance of the LMW fraction) can serve as a handy DOM surrogate instead of the LMW DOM concentration itself.
Use case 1 involves 'quantum-accelerated genome sequence reconstruction'. A faster sequencing pipeline would enable novel downstream applications such as personalized medical treatment. Two different reconstruction methods, ab initio reference alignment and de novo read assembly, are studied to identify the computational bottleneck. Corresponding quantum techniques are formulated, based on quantum search and heuristic quantum optimization, respectively. A new algorithm, quantum indexed bidirectional associative memory (QiBAM), is explicitly designed to address the requirements of approximate alignment of DNA sequences. We also propose the quantum accelerated sequence reconstruction (QuASeR) strategy to perform de novo assembly. This is formulated as a QUBO and solved using QAOA on a gate-model simulator, as well as on a quantum annealer.
Use case 2 involves 'quantum automata for algorithmic information'. A framework for causal inference based on algorithmic generative models is developed. This technique of quantum-accelerated experimental algorithmic information theory (QEAIT) can be ubiquitously applied to diverse domains. Specifically for genome analysis, the problem of identifying bit strings capable of self-replication is presented. We developed a new quantum circuit design of a quantum parallel universal linear bounded automata (QPULBA) model. This enables a superposition of classical models/programs to be executed, and their properties can be explored. The automaton prepares the universal distribution as a quantum superposition state which can be queried to estimate algorithmic properties of the causal model.
Use case 3 involves 'universal reinforcement learning in quantum environments'. This theoretical framework can be applied for automated scientific modeling. A universal artificial general intelligence formalism is presented that can model quantum processes. The developed quantum knowledge seeking agent (QKSA) is an evolutionary general reinforcement learning model for recursive self-improvement. It uses resource-bounded algorithmic complexity of quantum process tomography algorithms. The cost function determining the optimal strategy is implemented as a mutating gene within a quine. The utility function for an individual agent is based on a selected quantum distance measure between the predicted and perceived environment.
This dissertation investigates foundational techniques and develops innovative applications of quantum computation and algorithmic information, applied specifically to causal modeling in genomics and reinforcement learning. Further exploration of the synergies among these interdisciplinary concepts would improve our understanding of scientific disciplines such as computation, intelligence, life, and cosmology.","quantum algorithms; genomics; algorithmic information theory; reinforcement learning","en","doctoral thesis","","978-94-6366-530-8","","","","","","","","","Computer Engineering","","",""
"uuid:d5c26a98-199f-43d6-8997-3e38dd5d4bd5","http://resolver.tudelft.nl/uuid:d5c26a98-199f-43d6-8997-3e38dd5d4bd5","A data-driven and machine-learning study on microstructure-property relations in steel","Li, W. (TU Delft Team Kevin Rossi)","Sietsma, J. (promotor); Jongbloed, G. (promotor); Delft University of Technology (degree granting institution)","2022","Multi-phase metallic materials such as Advanced High-Strength Steels (AHSS) are of great importance in a wide variety of high-tech industries due to their higher strength compared to conventional (mild) forming steels. The higher strength leads to various advantages in weight, safety and environmental friendliness. In order to develop new AHHS steels, the steel industries make use of multi-scale microstructure modelling to predict mechanical properties from the microstructure features.
This thesis aims at the development of relations between the features of multi-phase metallic microstructures of steels and the mechanical properties of the material. The quantitative characterization of the microstructure is more detailed than what is currently used for estimating mechanical properties, a necessity given the complexity of multi-phase microstructures. Moreover, the prediction of mechanical properties on the basis of microstructural features is extended beyond the usual limitation of the yield stress to properties like hole expansion capacity and impact energy. Statistical approaches combined with machine learning algorithms are used to find relations between microstructure features and mechanical properties. Interpretations of the machine learning algorithms are also discussed, and the possible deeply embedded relations among mechanical properties are systematically studied.
The research in this thesis deepens the insight into the mechanical behaviour of the microstructure in multi-phase steels and strongly improves property predictions, not only based on microstructure features, but also on deformation properties. Results of this thesis can be directly implemented in microstructure modelling and are directly available for researchers within the steel industry for developing new materials.","Microstructures; mechanical properties; Data driven; Machine learning; Statistics","en","doctoral thesis","","978-94-6384-351-5","","","","","","","","","Team Kevin Rossi","","",""
"uuid:5c4d7f24-4df6-40b2-b58f-8f673f17ae4c","http://resolver.tudelft.nl/uuid:5c4d7f24-4df6-40b2-b58f-8f673f17ae4c","The energy dissipation during fatigue crack growth","Quan, H. (TU Delft Structural Integrity & Composites)","Alderliesten, R.C. (promotor); Benedictus, R. (promotor); Delft University of Technology (degree granting institution)","2022","Fatigue failures are a major concern in aeronautics. The growth of fatigue cracks in metallic structures and adhesively bonded joints has been studied for decades. Nowadays, most researchers evaluate and predict the fatigue damage growth rate (da/dN) with empirical and phenomenon based approaches, using Linear Elastic Fracture Mechanics with various fracture mechanics parameters (ΔK and ΔG for example). However, it does not reveal the scientific nature of fatigue crack growth from a physics perspective, which results in the need of additional corrections to fit the phenomenal observation. Therefore, the purpose of this thesis is to provide a better understanding on the nature of fatigue crack growth at a fundamental level rather than providing a da/dN prediction model.
The fatigue crack growth in metallic structures and adhesively bonded joints is considered in this thesis to be the same phenomenon, which should be described with a single theory containing similar parameters. An energy equation is proposed in this thesis to describe the phenomenon in both materials, similar to the work on static crack growth by Griffith and Irwin. This equation states that the external work done over a full cycle is equal to the sum of the surface energy dissipated by the formation of new fatigue crack surface, the plastic dissipation, and the elastic strain energy stored throughout one full cycle. It is assumed that the surface energy is uniquely related to the fatigue crack propagation, so the relation between surface energy and da/dN is assumed to be one-to-one, independent of external loading conditions. In contrast, plastic dissipation and stored elastic strain energy are consequences accompanying fatigue crack growth, and their relation with da/dN is assumed to depend on loading conditions such as the stress ratio. To verify this assumption, fatigue crack growth tests on 7075-T6 and FM94 adhesive joints were performed to obtain the relationship between the energy components involved and da/dN.","fatigue crack growth; energy dissipation; aluminium alloy; adhesive joints","en","doctoral thesis","","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:71e1cbf8-ce02-4ce8-a09b-883e6f84e996","http://resolver.tudelft.nl/uuid:71e1cbf8-ce02-4ce8-a09b-883e6f84e996","Testing RRAM and Computation-in-Memory Devices: Defects, Fault Models, and Test Solutions","Fieback, M. (TU Delft Computer Engineering)","Hamdioui, S. (promotor); Taouil, M. (copromotor); Delft University of Technology (degree granting institution)","2022","Resistive random access memory (RRAM) is a promising emerging memory technology that offers dense, non-volatile memories that do not consume any static power. Furthermore, RRAMdevices can be written and read out in nanoseconds, and it is possible to use them to performcomputation-in-memory (CIM). These benefits make this technology a potential replacement for Flash or even dynamic random access memory (DRAM). This is also clearly seen by the community; both universities and companies are prototyping RRAMs, and there are already some commercial RRAMs available. In order to deliver high-quality products, the RRAMs need to be tested properly so that a manufacturer can guarantee the quality. This dissertation focuses on test development for RRAMs. Traditionally, production defects in memories, such as DRAM, are modeled as linear resistors in or between two nodes of the circuit. In literature, many researchers have applied a similar approach for RRAMs. However, we demonstrate that this method of modeling defects is inappropriate, because the models fail to describe the defective behavior of the RRAM device. Instead, those models describe defects in the interconnections that surround the RRAM device. To overcome this, we propose the Device-Aware Test (DAT) approach that consists of three steps. First, the approach models the actual physics of defective devices and thus leads to realistic defectmodels. Second, the defect models are used to performaccurate faultmodeling and analysis. Third, the results from this step are used to develop high-quality RRAM tests. 
We do this by first characterizing the defect. We analyze the complete production process of a RRAM. During this analysis, we identify what can go wrong in every step, and in what kind of defects this may result. All identified defects need to be properly modeled, so that a high-quality test can be developed. Next, we analyze how the defect affects the performance of a defect-free device, and incorporate the resulting defective behavior in a compact defect model. This model is calibrated and accurately describes the effects of the defect. Second, we apply this defect model in a RRAM circuit to perform fault modeling and analysis. We systematically define the complete space of all faults that could occur, and then apply an analysis methodology to validate which faults actually occur in the circuit. Third, we develop a test for the validated faults. This test only needs to detect faults that are actually sensitized, and thus is shorter than generic tests, while it also has a better fault coverage. We apply the DAT approach to RRAM forming defects and RRAM intermittent undefined state faults. The results show that these two defect models sensitize different faults than the traditional defect models do. Since the DAT defect models describe the actual physics of the defects, we can conclude that the traditional approach will lead to low-quality tests that generate test escapes and reduce the production yield. Furthermore, we demonstrate that the faults cannot easily be detected by existing test algorithms, and that special tests need to be developed to detect them. We also apply the DAT approach to a RRAM-based computation-in-memory (CIM) architecture to develop a test for it. We show that a CIM device needs to be tested both in its memory and computation configurations, as there are unique faults in both.
Subsequently, we develop a test that detects the faults in both configurations. Furthermore, we study how process, voltage and temperature variations affect the performance of the CIM architecture. We demonstrate that certain operations are more susceptible to these variations than others.","resistive RAM; RRAM; memory test; device-aware test; defect modeling; fault modeling; test development; computation-in-memory; CIM; reliability","en","doctoral thesis","","","","","","","","","","","Computer Engineering","","",""
"uuid:666aa030-557f-4a68-bf6e-8a464a3f0b9c","http://resolver.tudelft.nl/uuid:666aa030-557f-4a68-bf6e-8a464a3f0b9c","Being prepared for the drinking water contaminants of tomorrow: An interdisciplinary approach for the proactive risk governance of emerging chemical and microbial drinking water contaminants","Hartmann, J. (TU Delft Sanitary Engineering)","van der Hoek, J.P. (promotor); de Roda Husman, Ana Maria (promotor); Wuijts, Susanne (copromotor); Delft University of Technology (degree granting institution)","2022","“Access to safe drinking water is a fundamental human need and, therefore, a basic human right. Contaminated water jeopardizes both the physical and social health of all people”: such is the importance of safe drinking water, as stated by Kofi Annan, former Secretary-General of the United Nations, on World Water Day 2001. While some countries are still struggling to protect their citizens from well-known drinking water contaminants, potential new drinking water risks from newly-identified chemical and microbial aquatic contaminants are appearing globally. The increasing detection of these emerging contaminants has been advanced by a combination of social, technological, regulatory, climatological and demographic developments. Recent examples of emerging contaminants are perfluoroalkyl and polyfluoroalkyl substances (PFAS), sapoviruses, pharmaceuticals and colistin resistant bacteria. Whether emerging aquatic contaminants are a concern for drinking water safety depends on their exposure and hazard potential, which is influenced by a range of various determinants, including their mobility, toxicity and persistence in the environment, the severity and duration of the health effects caused by the contaminant, and the possibility for, and efficacy of, protective measures. Evidence, however, of these determinants is often limited. 
The challenge of protecting public health from emerging drinking water contaminants, therefore, relates not only to identifying emerging contaminants as soon as possible, but also to prioritising these contaminants according to their potential impact on human health when evidence on their exposure and hazard potential is limited. Once contaminants are identified and assessed, the challenge of effective risk communication under uncertainty needs to be dealt with as well. In this dissertation, an integrated approach to facilitate the early warning of, and communication on, emerging chemical and microbial drinking water contaminants has been developed...","drinking water; emerging contaminants; pathogens; chemical; microbial; early warning","en","doctoral thesis","","978-94-6384-335-5","","","","","","","","","Sanitary Engineering","","",""
"uuid:ad44006f-b50d-49b9-bf64-ee5cb7a55980","http://resolver.tudelft.nl/uuid:ad44006f-b50d-49b9-bf64-ee5cb7a55980","Understanding Sea-Level Change Using Global and Regional Models","Hermans, T.H.J. (TU Delft Physical and Space Geodesy)","Vermeersen, L.L.A. (promotor); Katsman, C.A. (promotor); Delft University of Technology (degree granting institution)","2022","The sea level is changing around the world due to a combination of complex processes, such as changes in ocean density and circulation, the melt of ice sheets and glaciers, terrestrial water storage and vertical land motion. Projections of how much and how fast sea level will change are crucial information for adaptation planning. At the basis of most sea-level projections are global climate models, which can be used to simulate how different components of the Earth’s system, such as the ocean and the atmosphere, evolve as the greenhouse gas concentration in the atmosphere increases. However, differences between global climate models introduce uncertainties in sea-level projections. Additionally, due to their typically low grid resolution, such models poorly capture sea-level change in coastal regions in which small-scale bathymetric features and oceanic processes are important. Another uncertainty is natural sea-level variability, which can obscure long-term sea-level change in model simulations and observational records. In this thesis, the sea-level projections of two generations of global climate models (CMIP5 & CMIP6) are compared to understand how the increased climate sensitivity in CMIP6 affects sea-level projections. Additionally, regional ocean models are used to refine the simulations of two global climate models on the Northwestern European Shelf (dynamical downscaling) and to better understand the drivers of interannual sea-level variability in this region. 
Finally, global climate model simulations of future changes in the seasonal sea-level cycle on the Northwestern European Shelf are analyzed and explained using sensitivity tests performed with a regional climate model. Based on this research, this thesis concludes that embedding regional ocean models in sea-level science will help to improve the simulations of global climate models, to better understand the mechanisms behind sea-level change and variability and to provide stakeholders with the local sea-level information they need.","sea-level change; sea-level variability; projections; modeling; dynamical downscaling","en","doctoral thesis","","978-94-6419-524-8","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:d5a0ba4c-4816-4603-bf68-3ba6164a4173","http://resolver.tudelft.nl/uuid:d5a0ba4c-4816-4603-bf68-3ba6164a4173","Het handschrift van L.P. Roodbaard: Ontwerpprincipes van Noord-Nederlandse landschapsparken in de eerste helft van de 19e eeuw","van der Laan, A.T.E. (TU Delft Landscape Architecture)","Luiten, E.A.J. (promotor); Renes, J (promotor); van Thoor, M.T.A. (promotor); Delft University of Technology (degree granting institution)","2022","Het oeuvre van de tuin- en landschapsarchitect Lucas Pieters Roodbaard (1782-1851) is toonaangevend voor de Noord-Nederlandse landschapsparken uit de eerste helft van de 19de eeuw. Zijn oeuvre is te verdelen in drie categorieën: openbare wandelparken, landschapsparken bij buitenplaatsen en landschapstuinen bij (stads) villa’s. Aan de hand van ruimtelijk architectonisch onderzoek is een schat aan oorspronkelijke plantekeningen uitgebreid gedocumenteerd en geanalyseerd om zo tot de kern van de ontwerpmethode van Roodbaard te komen. Daarmee is zijn vorminstrumentarium, ook wel de ‘meetkundige gereedschapskist’ ontrafeld. Het heeft geresulteerd in zes compositorische ontwerpprincipes. Dankzij dit onderzoek en de nieuwe inzichten in Roodbaards handschrift kan een aantal landschapsparken worden geïdentificeerd en aan zijn oeuvre worden toegevoegd. Het gaat daarbij niet alleen om Roodbaards collectie van landschapsparken, maar ook om de samenhang van deze collectie met het omliggende historische cultuurlandschap. Dit overzicht biedt handvatten om het oeuvre van Roodbaard aan te duiden als een buitenplaatsenlandschap dat een samenhangend tuinen landschapsarchitectonisch ensemble vormt in het Noord-Nederlandse cultuurlandschap. Dit ‘parkachtige’ landschap, ook bekend als de Noordelijke Lustwarande, is grotendeels gelegen in het kustlandschap dat grenst aan het Unesco Werelderfgoed Waddenzee. In de epiloog is een proeve van een reconstructie van een ontbrekende ontwerptekening gemaakt voor het landschapspark De Braak te Paterswolde. 
Het vormt de opmaat voor de integrale reconstructie van deze bijzondere collectie groen erfgoed.","","en","doctoral thesis","","978-94-6366-582-7","","","","","","","","","Landscape Architecture","","",""
"uuid:84ba9f5e-2c5c-4280-9789-35fd650fc617","http://resolver.tudelft.nl/uuid:84ba9f5e-2c5c-4280-9789-35fd650fc617","Selections of vine structures and their applications","Zhu, K. (TU Delft Applied Probability)","Kurowicka, D. (promotor); Nane, G.F. (copromotor); Delft University of Technology (degree granting institution)","2022","Copulas are important models that allow to capture the dependence among variables. There are many types of bivariate parametric copula families, which allow to model data sets with different properties: symmetric and asymmetric dependence, upper (lower) tail dependence. In higher dimensions popular families of copulas, e.g., Gaussian, Student-t and canonical Archimedean are not sufficiently flexible in representing different types of dependence that they can realize. By decomposing the multivariate copula into a sequence of bivariate (conditional) copulas, based on a graph called vine (which is a nested set of trees), one is able to construct a n dimensional copula with the bivariate copulas that can have different types of dependence (e.g., tail behavior and asymmetries). The model constructed this way is called the vine copulamodel...","","en","doctoral thesis","","","","","","","","","","","Applied Probability","","",""
"uuid:f4d57842-9603-41aa-950b-1009ab3c3fe3","http://resolver.tudelft.nl/uuid:f4d57842-9603-41aa-950b-1009ab3c3fe3","Turnover of Suspended and Settled Organic Matter in Ports and Waterways","Zander, F. (TU Delft Geo-engineering)","Gebert, J. (promotor); Heimovaara, T.J. (promotor); Delft University of Technology (degree granting institution)","2022","Organic matter plays a major role in global ecosystems and has several functions in terrestrial and marine environments. As organic matter impacts, among others, the rheological behaviour and settling rates of mineral sediment particles, it is of great relevance to the definition and maintenance of the nautical depth in ports and waterways. The microbial decay of organic matter leads to the emission of climate forcing gases like CO2 and CH4. In this theses it will be provided fundamental insight in the behaviour of sediment organic matter in the aquatic river system. It presents analyses of field and laboratory experiments using sediment samples taken during 21 sampling campaigns between 2018 and 2020 in the Port of Hamburg, Germany. The focus lay on sampling locations with high sedimentation rates. It is investigated chemical, physical and biological parameters and their variability in space and over time. It quantifies the share of anaerobically and aerobically degradable sediment organic matter in a depth profile and along a transect of about 30 km within the tidal Elbe river. Sediment organic matter at upstream and downstream locations is mainly allochthonous as it comes from the catchment (upstream) or from North Sea (downstream). Young organic matter, entering the system from upstream, has predominantly biogenic sources. Upstream organic matter originates from the catchment, containing plankton-derived and more easily degradable components. It was shown that the most upstream location was nourished primarily by upstream fluviatile sediments. 
This location was characterised by the highest concentrations of chlorophyll a, microbial biomass, silicic acid, EPS, humic acids and hydrophilic organic matter, the most negative δ13C signature and the highest oxygen consumption rate, with decreasing trends towards downstream locations. At downstream locations, organic matter is mainly of allochthonous origin, entering the harbour mainly with the tidal flood current from the direction of the North Sea. Organic matter degradability was lowest at downstream locations, where organic matter was stabilised in organo-mineral associations. The spatial patterns of organic matter degradability can thus be explained by a source gradient. It was found that sediment organic matter lability is inversely linked to its stabilisation in organo-mineral complexes. The degradability gradient could be explained by differences in organic matter quality in relation to its origin. Fast, medium, slowly and non-degradable pools (pools 1 to 4) were identified based on the measured organic matter lability. Temporal and spatial variabilities (gradient and depth) were observed, as well as seasonal changes of the degradable organic matter pools. An age gradient was found, with easily degradable material in the top layers and increasing stabilisation of organic matter in organo-mineral compounds with depth. The degradability was larger in upper sediment layers. It was also larger under aerobic conditions, but the differences between aerobic and anaerobic decay decreased from upstream to downstream. The investigation area mostly comprised stabilised organic matter. On average, around 20% of TOC was anaerobically degradable and around 30% of TOC was aerobically degradable. Thermometric pyrolysis was shown to serve as a useful proxy to predict organic matter degradability in river sediments, with the Hydrogen Index (HI) correlating well with degradability.
Further, this thesis demonstrates that sediment organic matter decay has a biological, chemical and physical effect on the shear strength. Degradation of organic matter significantly affects sediment strength, especially under anaerobic conditions. The formation of gas bubbles under anaerobic conditions added a physical component to the effect of biological organic matter decay. The susceptibility of the sediment to yield stress changes might depend on the availability of easily degradable organic matter. Pronounced spatial trends were found, with larger changes in yield stress at upstream locations and smaller yield stress changes at downstream locations. Finally, this thesis demonstrates the metamorphosis of sediment properties and sediment organic matter from its state in suspension to being part of the settled and consolidated sediment, as well as from upstream to downstream. Temporal and spatial gradients were found for aerobic and anaerobic carbon fluxes, as well as for potentially degradable organic carbon. A first draft of a carbon flux estimate originating from the microbial decay of organic matter in the investigation area is presented, which can be used for future carbon footprint assessments, for example for port maintenance activities.","Sediment organic matter; river sediments; spatial variability; DOM fractions; organic matter decay rates; yield stress; organo-mineral complexes; carbon flux; temporal variability","en","doctoral thesis","","","","","","","","","","","Geo-engineering","","",""
"uuid:5cd38c3b-f0f5-405f-9f9f-02512ebb77dd","http://resolver.tudelft.nl/uuid:5cd38c3b-f0f5-405f-9f9f-02512ebb77dd","The Changjiang Estuary: A highly turbid estuary in transition","Lin, J. (TU Delft Coastal Engineering)","Wang, Zhengbing (promotor); He, Qing (promotor); van Prooijen, Bram (promotor); Delft University of Technology (degree granting institution)","2022","Estuaries are the core area of land-sea interactions and have significant ecological and economic value. In estuaries, hydrodynamics and sediment dynamics are the crucial processes governing geomorphology, navigability, and primary production. Since the turn of the 20th century, increased human activities (e.g., damming, dredging, and reclamation) have subjected estuaries to significant pressure and prompted changes in hydrodynamics and sediment dynamics. Some estuaries, for example, experienced a transition from low- to hyper-turbidity after channel deepening. This transition is particularly common in tide-dominated estuaries such as the Ems and Loire estuaries. However, the reactions of estuaries with a high runoff (such as the Changjiang/Yangtze Estuary) are unclear, and the transition and underlying processes of such estuaries are further complicated by declined fluvial sediment supply. By integrating acoustic and optical sensors, this PhD dissertation developed a wide-range and high-precision sediment concentration observation system to monitor high sediment concentrations in the Changjiang Estuary. Over the past three decades, observations revealed a transition in suspended sediment concentrations, with increasing concentrations near the bed and decreasing concentrations near the water surface. The drag reduction induced by suspended sediment was assessed by a bottom-mounted tripod system. Moreover, this dissertation clarified the mechanism controlling the formation of concentrated benthic suspensions, namely the positive feedback between stratification, turbulence damping, and hindered setting. 
The comparison between sediment transport processes before and after the Deep Waterway Project reveals that estuarine circulation is the primary force driving sediment import from the sea, whereas tidal pumping results in the along-estuary extension of the estuarine turbidity maximum zone. These findings enhance our knowledge of the response of estuarine hydrodynamics and sediment dynamics to human interventions and provide a theoretical basis for the effective management of estuarine systems.","Sediment concentration; Human interventions; Drag reduction; Turbulence damping; Density stratification; Estuarine circulation; Tidal pumping","en","doctoral thesis","","978-94-6458-315-1","","","","","","","","","Coastal Engineering","","",""
"uuid:5c7e535b-0110-488a-b9d5-073ee3c39059","http://resolver.tudelft.nl/uuid:5c7e535b-0110-488a-b9d5-073ee3c39059","Pilot-Induced Oscillations and Control Surface Rate Limiting: Comprehension, Analysis, Mitigation, and Detection","Klyde, D.H. (TU Delft Control & Simulation)","Mulder, Max (promotor); van Paassen, M.M. (promotor); Delft University of Technology (degree granting institution)","2022","From the Wright Flyer to fly-by-wire, the phenomenon of pilot-induced oscillations (PIO) has persisted, evolving with the complexity of the airframes and their associated flight control systems. Though airframe designers have long recognized the threat posed by PIO, each generation has been forced to address the issue whether identified in developmental flight test, operational flight test, or mission operations. A desired outcome of the research presented in this thesis is that these occurrences may be minimized in the second century of powered flight through enhanced comprehension and mitigation methods. To begin, it is recognized that the most significant threat of PIO in fly-by-wire aircraft comes from pilot interactions with a nonlinear flight control system response characterized by control surface actuator rate limiting, the so-called Category II PIO, and as such is the focus of this thesis. As this work was carried out over three decades, the thesis is separated into three distinct parts that address Comprehension and Analysis Methods, Category II PIO Mitigation Methods, and PIO Detection…","Aircraft handling qualities; pilot-induced oscillations; human control","en","doctoral thesis","","978-94-6421-792-6","","","","","","","","","Control & Simulation","","",""
"uuid:3d15049b-f695-42d8-b8d1-8d4bac1c8abd","http://resolver.tudelft.nl/uuid:3d15049b-f695-42d8-b8d1-8d4bac1c8abd","Hover and fast flight of minimum-mass mission-capable flying robots","de Wagter, C. (TU Delft Control & Simulation)","Hoekstra, J.M. (promotor); de Croon, G.C.H.E. (promotor); Delft University of Technology (degree granting institution)","2022","Highly automated Unmanned Aerial Vehicles (UAVs) or ""flying robots"" are rapidly becoming an important asset to society. The last decade has seen the advent of an impressive number of new UAV types and applications. For many applications, the UAVs need to be safe, highly automated, and versatile. Safety is a prerequisite to allowing their use in society. While flight safety comprises many aspects, one important safety factor is the total system mass. The common thread through this research is therefore to minimize the system mass while maintaining mission capabilities to increase safety. Flight automation is required to reach many applications' full potential by addressing operational labor costs and scalability. But despite great advances in ground-based robotics, the weight and power constraints of flying robots still constitute important challenges. Last but not least, many applications also require versatile aircraft that combine the ability to hover and fly fast efficiently. Hover is required for precision take-off & landing in confined areas at a growing number of locations and for the close-up inspection of assets. Fast and efficient flight is needed to reach distant locations, perform large surveys, cope with high headwind conditions, or simply reach destinations quickly. 
Unfortunately, the requirements for hover and fast flight are conflicting, and this drives the search for solutions to ""combine hover with fast flight in mission-capable flying robots while cost-effectively minimizing their size and maximizing their safety."" To investigate the minimal feasible mass of mission-capable robots, in this thesis, a novel 20 g tailed flapping-wing robot called DelFly Explorer is presented that can autonomously explore unknown unprepared rooms. It was equipped with a 4 g micro stereo-vision system which necessitated algorithms that were optimized for tiny microcontrollers with low memory. Combined with a navigation strategy that keeps the area in front of the robot free of obstacles, a 0.9 g autopilot, and DelFly's novel stable slow hovering flight regime, this led to the lightest flying indoor exploration robot that could navigate in unknown environments. But to combine passive dynamic longitudinal stability at slow hover and fast flight in tailed ornithopters, a shift in the center of gravity location was shown to be needed. Moreover, the aerodynamically stabilizing tail also causes sensitivity to turbulence. Therefore, by using four pairs of flapping wings, a new tail-less flapping-wing concept called Quad-thopter was created which can hover precisely and transition to fast forward flight. The cranked-rocker-based mechanism contains no expensive parts and by re-using the main propulsion motors for attitude control, powerful control moments can be created which are very important in disturbance rejection. This design represents one of the first tailless flapping wing designs that was sufficiently light and agile for performing real missions while featuring a mechanism simple enough to permit large-scale production. Versatility of flight is also an asset for outdoor flight. 
Theory predicts that the most efficient hover is achieved by using a single large rotor while the most efficient forward flight is performed by using high aspect ratio fixed wings to generate lift. The combination of both has led to a novel helicopter-with-wings concept called DelftaCopter. The control of this platform poses unique challenges, such as the inertia of the large fixed wing interfering with the dynamics of the helicopter rotor. A controller was derived, and the dynamics were identified in hover and forward flight. The real-world performance of this flying robot is presented by analyzing the results of its participation in the Outback Medical Challenge, showing that large single-rotor-equipped fixed-wing aircraft combine powerful attitude control, efficient hover, and efficient forward flight. Since efficiency in forward flight is not sufficient to achieve a very long endurance in electrically powered flying robots, a novel platform was developed around a hydrogen pressure cylinder and a fuel cell. The concept focuses on versatility, minimal weight, good control, and redundancy. A 12-motor tail-sitter is presented that re-uses all its motors for attitude control, hover, and forward flight and uses the wing structure to carry the propulsion. A dual automotive CAN-bus control network and dual flight modes remove the most critical single points of failure. The platform is called the NederDrone and is shown to fly for 3 h 38 min in a test departing from a moving ship at sea in 5 Beaufort wind conditions. While reaching fast flight in large free blocks of air is mainly a challenge for the design of the airframe and its energy source, as soon as obstacles are introduced, new bottlenecks appear and the weight and power consumption of sensing and processing become driving design considerations. 
Increasing the flight speed of flying robots in obstacle-packed or GPS-denied environments highlights the need for very lightweight fast but intelligent systems, as the processing weight and power not only reduce the flight times but also reduce the maneuvering capabilities and the maximum speed. Therefore, in this work, an extreme example is studied in the context of autonomous drone racing. A computationally light Artificial Intelligence (AI) based monocular navigation system is presented for indoor flight through obstacles. It enabled the flying robot to fly at higher speeds than was possible with state-of-the-art visual-inertial odometry solutions. Overall, aerospace platforms require extreme optimization as every gram kept in the air requires constant energy. The consequence is that different missions will require vastly different platforms, while traditionally many flying robot applications are still performed by multicopters. This thesis contributes to the design of intelligent flying robots that can both hover and fly fast, by solving several fundamental problems in novel concepts optimized around the five key requirements of mass, agility, efficiency, range, and speed-near-obstacles. These concepts are expected to contribute to the improvement of the mission capabilities of minimal-size flying robots to address the needs of society.","Micro Air Vehicle; UAV; Flying robot; Flapping wing; Helicopter; Hybrid UAV; Tailsitter; Hydrogen; Fuel-cell; Autonomous Drone Racing; Deep Neural Networks","en","doctoral thesis","","978-94-6384-333-1","","","","","","","","","Control & Simulation","","",""
"uuid:b4b02f3b-b622-4226-8f8c-be26e2c52428","http://resolver.tudelft.nl/uuid:b4b02f3b-b622-4226-8f8c-be26e2c52428","On the Methods for Explaining Polarization of Private and Unobservable Opinions: An opinion-behavior co-evolutionary approach","Tang, T. (TU Delft Transport and Logistics)","Chorus, C.G. (promotor); Ghorbani, Amineh (copromotor); Delft University of Technology (degree granting institution)","2022","Polarized opinions are everywhere. From opposite attitudes towards Hawaiian pizza to the partisan divide in the United States, we have experienced enough opinion polarization in recent years. Sadly, it is usually a sign of follow-up criticism when people start to talk about ""opinion polarization"". The term, which should neutrally describe a widespread social phenomenon, has been shown to be associated with various dismaying outcomes, ranging from hostility to civil wars. Given its harmful consequences, few would doubt the urgent need for a solution to this long-lasting issue, and such a solution requires a deep understanding of opinion polarization in real-life situations. The urgent need has motivated remarkable research efforts in the past few decades. Especially in the domain of computational sociology, a considerable number of opinion dynamics models have been proposed to explain opinion polarization from microscopic mechanisms that govern interactions between individuals. A common feature of these models, which probably results from their roots in statistical physics, is that opinions are observable and can be directly affected by other opinions just like a ""spin"" in the famous Ising model. 
However, opinions in real life are fundamentally different from ""spin"" in the sense that they are by nature private and unobservable, and their expression, transmission, and inference largely depend on observable behaviors: even if people are allowed to verbally exchange opinions, how these opinions are translated into words and how these words are interpreted by both parties still plays a critical role in the dynamics of opinions. Therefore, we could put forward a thesis (which we did, literally) that there is a fundamental discrepancy between opinion polarization in the literature and opinion polarization in real-life situations, which would undermine our trust in these models, let alone the solutions generated accordingly.","","en","doctoral thesis","","978-94-6384-352-2","","","","","","","","","Transport and Logistics","","",""
"uuid:e0eb1883-47c2-402b-b736-4f8e00ebb45f","http://resolver.tudelft.nl/uuid:e0eb1883-47c2-402b-b736-4f8e00ebb45f","High-Performance Cluster-Scalable Computational Methods for Genomics Applications","Ahmad, T. (TU Delft Computer Engineering)","Al-Ars, Z. (promotor); Hofstee, H.P. (promotor); Delft University of Technology (degree granting institution)","2022","The ever-increasing pace of advancements in sequencing technologies has enabled rapid DNA/genome sequencing to become much more accessible. In particular, next (second) and third generation sequencing technologies offer high throughput, massively parallel and cost effective sequencing solutions. Individual sample sequencing data volumes as well as the number of assembled genomes are also growing quickly. These advances in high throughput sequencing technologies and the demand for fast computational processing and downstream analysis of sequencing data in clinical settings are widening the gap between the time spent in sample collection and sequencing versus computational analysis.
To improve the scalability and performance of genome variant calling analysis workflows on modern computing systems, in this dissertation four potential research directions have been selected for further exploration. First, to exploit the performance of modern processors' hardware features, like multi-core and vector units, in the GATK best practices variant calling pipelines, we introduce ArrowSAM, a columnar in-memory data format to place and process genomics data in memory, thus removing the need for repeated file storage accesses in intermediate variant calling pipeline applications. Our second contribution focuses on integration of the Apache Arrow based columnar in-memory data format in the PySpark API to enable exploiting the benefits of vectorized operations in the Python language using user-defined functions on Spark dataframes. For our third research contribution, we tested and benchmarked both the scalability and performance of Arrow Flight for client-server as well as cluster-scaled communication. For our final research contribution reported in this dissertation, we implemented an orthogonal approach that is even more scalable than the Apache Spark and Arrow Flight based solutions and offers the flexibility to use many different variant callers.","Genomics; Variant Calling; Apache Arrow; Apache Spark; MPI","en","doctoral thesis","","","","","","","","","","","Computer Engineering","","",""
"uuid:4d26359a-89ad-4a5a-9fac-cd8e9e59c743","http://resolver.tudelft.nl/uuid:4d26359a-89ad-4a5a-9fac-cd8e9e59c743","Iterative methods for time-harmonic waves: Towards accuracy and scalability","Dwarka, V.N.S.R. (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); van Gijzen, M.B. (promotor); Delft University of Technology (degree granting institution)","2022","The bottleneck in designing iterative solvers for the Helmholtz equation lies in balancing the trade-off between accuracy and scalability. Both the accuracy of the numerical solution and the number of iterations to reach convergence deteriorate in higher dimensions and increase with the wavenumber. To address these issues in this dissertation, we formulated three research pillars: accuracy, wavenumber independent convergence and linear complexity. Below, we summarize the core findings of this dissertation:
WAVENUMBER INDEPENDENT CONVERGENCE
We develop the first preconditioning technique which leads to close to wavenumber independent convergence for very large wavenumbers in 1D, 2D and 3D. Building on a two-level deflation projection method, we incorporated Quadratic Rational Bézier curves to construct the deflation space and vectors (Chapter 7). As a result, the near-zero eigenvalues of the coarse-grid operator remain aligned with the fine-grid operator, keeping the spectrum of the preconditioned system clustered, leading to superior convergence properties compared to previous methods.
LINEAR COMPLEXITY
For over 30 years, applied mathematicians have tried to construct convergent (standard) multigrid solvers for the Helmholtz equation. Multigrid solvers use sequences of smaller problem sizes and are computationally cheap and easy to implement. Unfortunately, multigrid methods diverge for the Helmholtz equation, and solving this issue remained an open problem. Using standard smoothing techniques, combined with similar higher-order coarse spaces, we constructed a fully convergent V- and W-cycle algorithm (Chapter 9). The key features of the algorithm are the use of higher-order transfer operators (instead of deflation vectors in the previous application) and a complex shift in the smoothing operator. While the method converges and preliminary results have been proven, much research can still be conducted in this area, as this could support a paradigm shift in solving the complexity issue for very large wavenumbers in 2D and 3D. In light of this, we extended the two-level deflation solver to a multi-level deflation solver to address both the issue of wavenumber and problem size dependence (Chapter 8). In this part, we show better convergence properties and provide numerical experiments on challenging 2D and 3D test problems to corroborate the theoretical results.
ACCURACY
Finally, we developed an unprecedented way to study the accuracy of the numerical solutions by studying the eigenvalues of systems where the analytical solution is known (Chapter 5). Expressing the pollution error in terms of these eigenmodes enabled theoretical accuracy studies and dispersion corrections in higher dimensions, irrespective of the wave propagation angles, something which was previously impossible. We also studied the application of Isogeometric Analysis (IgA) to improve the accuracy and reduce the pollution error (Chapter 6). Our results showed that the use of IgA was able to significantly suppress the pollution error compared to Finite Element discretizations of the same order.","Helmholtz; Deflation; Multigrid; Pollution Error; Isogeometric Analysis","en","doctoral thesis","","978-94-6458-348-9","","","","","","","","","Numerical Analysis","","",""
"uuid:d4c0062c-9191-40e4-805a-72fe7afde7bd","http://resolver.tudelft.nl/uuid:d4c0062c-9191-40e4-805a-72fe7afde7bd","Chenier Dynamics","Tas, S.A.J. (TU Delft Environmental Fluid Mechanics)","Reniers, A.J.H.M. (promotor); Delft University of Technology (degree granting institution)","2022","Over the last decades, mangrove forests have suffered immense and rapid losses worldwide. In recognition of their important socio-economic and environmental functions, many attempts have been made to both protect the remaining mangrove coastlines and restore eroding sites. Unfortunately, many rehabilitation attempts have failed, lacking a thorough system understanding of mangrove-mud coasts.
Some mangrove-mud coasts are protected on their seaward side by sandy ridges (called 'cheniers'). These ridges buffer wave attack and can help to protect vulnerable mangrove-mud coastlines. In order to sustainably restore mangrove coasts, chenier dynamics need to be understood at the temporal and spatial scales relevant for mangrove establishment (daily to yearly variability driven by waves and tides). This dissertation aims to advance our understanding of chenier dynamics within the context of an eroding mangrove-mud coast. The severely eroded coastline of Demak, Indonesia, is used as a case study.
We started with a field campaign in Demak, observing the cross-shore dynamics of a single chenier. The observations revealed that cheniers can be very dynamic in relatively calm conditions. Using velocity moments as a proxy for the sediment transport, we have explored the role of tides and waves in the observed chenier dynamics. Tides drive the chenier landward, especially when the water depth over the chenier crest is low (high crest level relative to mean sea level). Waves only generate substantial sediment transport when the chenier is submerged. Overall, the cross-shore chenier dynamics are very sensitive to the timing of tides and waves: most transport takes place when high water levels coincide with (relatively) high waves.
While our observations showed the chenier to be highly dynamic in the short term, satellite images reveal that over longer timescales the position of the chenier remains more or less stable within the intertidal zone. This is in contrast to cheniers described in the literature, which only migrate landward until they reach a stable position above tidal influences. We have developed an idealised chenier model to explore this dynamically stable position. The model simulates cross-shore chenier dynamics under daily wave and tidal influences and is able to predict both onshore and offshore migration. Onshore migration is mainly driven by wave action, while offshore migration is induced by a tidal phase lag or storms. This phase lag is caused by drowning of the coastal plain due to subsidence. For certain combinations of waves and tides, the model predicts a dynamically stable chenier. In the absence of a phase lag and storm season effect, the model yields a 'classic' stable chenier that welds onto the shoreline by onshore migration.
We used Delft3D to explore the formation of cheniers through wave winnowing (the sorting of sand and mud by waves). We have identified three phases of chenier development: (1) a winnowing phase, during which mud is washed out of the seabed initially consisting of a mixture of sand and mud, (2) a sand transport phase, when the sand in the upper layer is transported onshore, and (3) a crest formation phase, during which a chenier crest rapidly develops at the landward limit of onshore sediment transport. The main mechanism driving onshore sand transport is wave asymmetry. During calm conditions, sand transport takes place within a narrow band limiting the volume of sand delivered nearshore, and therefore no chenier develops. In contrast, average storm conditions mobilise sufficient sand for a crest to develop. Our results thus reveal that chenier formation through wave winnowing does not require extreme storm conditions. Our study also shows that chenier formation through wave winnowing is a relatively slow process, with the largest time scales associated with the first two phases of chenier development: winnowing and sand transport.
Overall, this dissertation contributes to our understanding of cross-shore chenier dynamics. While very dynamic in the short term, cheniers can maintain a stable position in the intertidal zone for certain combinations of waves and tides. As such, they can contribute to mangrove rehabilitation by creating windows of opportunity for mangrove establishment. Due to its rapid subsidence rates, the coast of Demak provides an analogue for a global drowning of coastlines under anticipated accelerated sea level rise. In fact, cheniers may form a natural defense mechanism of drowning coastal plains. As a result, small changes to the coastal plain (e.g. constructing a dike) could have a significant impact, disturbing the chenier dynamics and interrupting their negative feedback on coastal erosion. This work has illustrated the complexity and interconnectedness of coastal systems, a crucial notion in designing successful protection strategies for mangrove-mud coasts.","chenier; morphodynamics; modelling; sediment transport; mangroves","en","doctoral thesis","","978-94-6366-576-6","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:e92cbc11-2a04-4b66-bff8-c021a4260347","http://resolver.tudelft.nl/uuid:e92cbc11-2a04-4b66-bff8-c021a4260347","Healthy computer working","Dekker, M.C. (TU Delft Applied Ergonomics and Design)","Vink, P. (promotor); Molenbroek, J.F.M. (copromotor); Delft University of Technology (degree granting institution)","2022","Repetitive Strain Injuries (RSI), also known as Work-Related Upper Limb Disorders (WRULD), surged in the early years of this millennium due to computer work. In this thesis, the magnitude, causes and consequences of this phenomenon for the student population of the Faculty of Industrial Design Engineering (IDE) at the Delft University of Technology (TU Delft) are investigated and described. Longitudinal surveys on RSI amongst IDE students over a 15-year period (2000-2014) show the trend in prevalence and severity of the complaints. From the year 2000 to the present, a multidisciplinary RSI prevention group has been active to create awareness and to provide information and practical sessions. The organised prevention activities and their scientific basis are introduced and discussed. Furthermore, ideas for products and product-service systems, aimed at preventing or reducing RSI and based on medical insights and understanding of RSI risk factors, are presented. These ideas, developed in IDE master graduation projects and one from industry, are evaluated in user tests and physiological experiments with potential users.
The knowledge and insights gained in this thesis are not only valuable for design students to realise healthy computer working, but also for other educational and professional computer workers.","RSI; WRULD; students; design; prevention; Intervention","en","doctoral thesis","","978-94-6421-758-2","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:6eb484cf-cb06-4cb6-bb6e-acf3184ebfe4","http://resolver.tudelft.nl/uuid:6eb484cf-cb06-4cb6-bb6e-acf3184ebfe4","High Voltage Photovoltaic Devices for Autonomous Solar-to-Fuel Applications","de Vrijer, T. (TU Delft Photovoltaic Materials and Devices)","Smets, A.H.M. (promotor); Delft University of Technology (degree granting institution)","2022","In this dissertation, a framework is presented for the development of high voltage multijunction photovoltaic (PV) devices. Specifically, wireless silicon-based monolithically integrated 2-terminal multijunction PV devices are investigated. Such devices can be used in autonomous solar-to-fuel synthesis systems, as well as other innovative approaches in which the multijunction solar cell is used not only as a photovoltaic current-voltage generator, but also as an ion-exchange membrane, electrochemical catalyst and/or optical transmittance filter. The framework presented in this dissertation encompasses all investigations performed in answering this thesis’ central question: To what extent can fundamental insight and device engineering reduce the opto-electrical losses in a hybrid wafer-based and thin film photovoltaic multijunction device, based on group IV elements? The answer to this central question is provided in three parts, focusing on I. textures, photovoltaic materials and single junction solar cells, II. a low bandgap-energy hydrogenated (:H) germanium(tin) (Ge(Sn)) absorber and III. multijunction PV and photoelectrochemical (PEC) devices...","","en","doctoral thesis","","978-94-6423-786-3","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:9f4f569c-1dd5-4ef1-9e5e-85fb479f069d","http://resolver.tudelft.nl/uuid:9f4f569c-1dd5-4ef1-9e5e-85fb479f069d","Simulating industrial symbiosis: Understanding and shaping circular business models for viable and robust industrial symbiosis networks through collaborative modelling and simulation","Lange, K.P.H. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Korevaar, G. (copromotor); Delft University of Technology (degree granting institution)","2022","The European Commission aims for a full circular economy (CE), an economy in which all resources are reused, by 2050. CE is a promising way to increase welfare and wellbeing while decreasing environmental footprints. Industrial symbiosis, in which companies exchange residuals for resource efficiency, is essential to the circular transition. However, many companies are hesitant to implement business models for industrial symbiosis because of the various roles, stakes, opinions, and resulting uncertainties for business continuity.
This dissertation supports researchers, professionals, and students in understanding and shaping circular business models for industrial symbiosis networks through collaborative modelling and simulation methods. Three theoretical perspectives, design science research, complex adaptive socio-technical systems, and circular business model innovation, shed light on designing business models for industrial symbiosis. A serious game and agent-based models were developed in multiple case studies with researchers, practitioners, and students. These were then used to design circular business models and explore their efficacy under uncertain conditions, such as various behavioural intentions of potential partners in diverse natural and societal contexts.
This thesis advances business model design and experimentation by integrating simulation of the social and technical aspects of industrial symbiosis. Furthermore, the research shows how simulations facilitate learning processes in designing circular business models. Ultimately, the thesis equips researchers, practitioners, and students with knowledge, tools, and methods to shape a circular economy.","","en","doctoral thesis","","","","","","","","","","Energie and Industrie","","",""
"","","","","","","To solve these problems, we first model the timing behavior of ETC/STC systems, obtaining what we call traffic models. The states of a traffic model are the transmitted samples and its output is the elapsed time between consecutive transmissions, the inter-sample time (IST). These models are infinite-state systems that can exhibit very complex—even chaotic—behavior, as we demonstrate. To solve synthesis problems such as scheduling and optimal STC sampling strategies, we augment the models with early-sampling choices, which are guaranteed to preserve control stability and performance. The models are then abstracted into finite-state systems or timed automata, on which many of our problems can be computationally solved. Using these abstractions, the obtained schedulers are always valid for the real systems, and the obtained metrics are always formal bounds on the real system's performance.
Our abstraction method is based on quotient and l-complete systems. That is, we partition the state-space into regions, each region comprising all states whose next IST, or next sequence of l ISTs, is the same. This is made possible by observing that periodic ETC (PETC)—a practical version of ETC where events are checked periodically—has a finite output set, and that each obtained region is described by an intersection of finitely many quadratic cones. The abstraction transitions, which enable predicting how samples and their corresponding ISTs evolve over time, can be computed exactly using nonlinear satisfiability-modulo-theories solvers, or approximately through convex semi-definite relaxations. Infinite periodic IST patterns arising from these abstractions can be verified to exist in the real traffic model via an eigenvector problem, which is central for solving problem (ii) exactly.
Our methodology comprises a comprehensive framework for solving qualitative (scheduling) and quantitative (sampling performance) problems for ETC and STC, as well as a computational machinery that automates these processes, ultimately consolidated in the open-source tool ETCetera. With the developed methods, we can show cases where ETC significantly outperforms periodic sampling in terms of average inter-sample time, and how to increase this performance further using look-ahead. We also manage to solve the ETC scheduling problem efficiently, which is helped by an abstraction minimization algorithm that we propose. In summary, this dissertation provides new tools to understand and manipulate ETC traffic, and ultimately casts new light on the practical relevance of ETC and STC.","event-triggered control; self-triggered control; formal methods; abstraction; scheduling; networked control systems; cyber-physical systems; hybrid systems; sampled-data control; quantitative analysis and synthesis; chaos; combinatorial games; automata theory; bounded disturbances","en","doctoral thesis","","978-94-6384-347-8","","","","","","","","","Team Manuel Mazo Jr","","",""
"uuid:962f6655-a1b8-4c38-8467-0b2b651ab629","http://resolver.tudelft.nl/uuid:962f6655-a1b8-4c38-8467-0b2b651ab629","Investigations of the early stages of recrystallization in interstitial-free and low-carbon steel sheets","Traka, K. (TU Delft Team Maria Santofimia Navarro)","Sietsma, J. (promotor); Raabe, Dierk (promotor); Bos, C. (copromotor); Delft University of Technology (degree granting institution)","2022","The present thesis investigates recrystallization and related phenomena in interstitial free (IF) and low carbon (LC) microstructures. Emphasis is placed mostly on the early stages of recrystallization, i.e. the nucleation stage. The investigations are performed with experimental measurements and computer simulations. In all studies, recrystallization is observed with close coupling to the deformation substructure. Crystallographic texture analysis is used as a means to: (a) confirm trends between the simulated and experimental microstructure and (b) interpret the evolution of recrystallization in terms of selective subgrain growth. The goal of this thesis is to obtain insight into recrystallization initiation and evolution in low alloyed cold rolled steel sheets.","Recrystallization nucleation; Selective subgrain growth; Orientation selection; Full-field modelling; Cellular-automata","en","doctoral thesis","","","","","","","","","","","Team Maria Santofimia Navarro","","",""
"uuid:6592cc11-fb2b-4d9c-a8f2-6dd2bb4584be","http://resolver.tudelft.nl/uuid:6592cc11-fb2b-4d9c-a8f2-6dd2bb4584be","Grasping the Sampling Behaviour of Event-Triggered Control: Self-Triggered Control, Abstractions and Formal Analysis","Delimpaltadakis, Giannis (TU Delft Team Manuel Mazo Jr)","Mazo, M. (promotor); Mohajerin Esfahani, P. (copromotor); Delft University of Technology (degree granting institution)","2022","A fundamental challenge in networked control systems is reducing the amount of communications of each system in the network, so that bandwidth and energy are used efficiently. To address the challenge, the research community has shifted its focus to Event-Triggered Control (ETC), in which communication between the control system's different components takes place only when a state-dependent condition is satisfied. However, although ETC indeed often reduces communication, its communication times (or sampling times) are unknown beforehand and predictions thereof require intricate mathematical analysis of the system's (perturbed) dynamics. Nonetheless, predicting ETC's sampling is of paramount importance, as it enables:
• Self-Triggered Control (STC), which is a more economical implementation of ETC. In STC, at each sampling time, the controller decides the next sampling time by employing 1-step predictions of ETC's sampling: given a state measurement, it predicts ETC's next sampling time.
• Traffic scheduling, which is planning bandwidth allocation to each entity using the network and requires multi-step or infinite-step predictions of ETC's communication times. Without scheduling, many systems may access the network at the same time, resulting in network overflow and hindering the systems' stability.
• Formal assessment of an ETC-design's performance in terms of sampling and control, e.g. by computing associated long-term metrics such as the expected average intersampling time, which again requires multi-/infinite-step predictions of ETC's sampling.
This dissertation studies ETC's sampling behaviour and derives predictions thereof in all three aforementioned contexts.
First, we propose a novel STC scheme, termed region-based STC, for nonlinear systems with bounded disturbances and uncertainties. The system's state-space is partitioned into a finite number of regions, and to each region a uniform STC intersampling time is assigned. To decide the next sampling time, at each sampling time the controller simply checks to which region the measured state belongs. To derive the partition and corresponding intersampling times, we use approximations of so-called isochronous manifolds. To derive the approximations, we address theoretical issues of prior works and propose a computational algorithm, and, to account for disturbances/uncertainties, we employ differential inclusions.
Regarding traffic scheduling, our work follows the recently proposed abstraction-based approach. The sampling behaviour of a given ETC system is modeled by a finite-state system (the abstraction), offering an infinite-horizon prediction on ETC's sampling. In this work, we construct abstractions of (perturbed) nonlinear ETC systems. The system's state-space is partitioned into finitely many regions, representing the abstraction's states. For each region, a timing interval is determined, containing all intersampling times corresponding to states in the region. These intervals serve as the abstraction's output. Finally, the abstraction's transitions, given a starting region, indicate where the system's trajectories end up after an elapsed intersampling time. To determine the timing intervals and the transitions, we propose algorithms based on reachability analysis. Regarding state-space partitioning, we propose a partition similar to that of region-based STC, aiming at providing control over the timing intervals and improving their tightness.
Finally, on the formal-assessment front, we formally analyze the sampling behaviour of stochastic linear periodic ETC (PETC) systems by computing bounds on associated metrics. Specifically, we consider functions over sequences of state measurements and intersampling times that can be expressed as average, multiplicative or cumulative rewards, and introduce their expectations as metrics on PETC's sampling behaviour. We compute bounds on these expectations, by constructing appropriate Interval Markov Chains (IMCs) equipped with suitable reward structures, that abstract stochastic PETC's sampling behaviour, and employing value iteration over these IMCs.","event-triggered control; self-triggered control; abstraction; nonlinear; stochastic; sampling behaviour; networked control systems","en","doctoral thesis","","978-94-6384-345-4","","","","","","","","","Team Manuel Mazo Jr","","",""
"uuid:ca37004a-ef93-4d31-b451-35b7079f0ec9","http://resolver.tudelft.nl/uuid:ca37004a-ef93-4d31-b451-35b7079f0ec9","Wave-induced vibrations of flood gates: modelling, experimentation and design","Tieleman, O.C. (TU Delft Hydraulic Structures and Flood Risk)","Jonkman, Sebastiaan N. (promotor); Hofland, Bas (promotor); Tsouvalas, A. (copromotor); Delft University of Technology (degree granting institution)","2022","Flood gates form an essential part of many flood defence systems in coastal areas. During storm events, these gates are subjected to extreme loads from various sources. This dissertation addresses the dynamic behaviour of flood gates induced by wave impacts. A semi-analytical fluid-structure interaction model is developed to predict vertical flood gate vibrations. This model aims to overcome both the lack of accuracy involved with existing engineering methods and the high computational costs of numerical finite element methods. Scale experiments have been performed of wave impacts on flexible gate structures with an overhang to validate this model. The resulting dataset is made available for further research on the involved physical phenomena. Methods are then presented to design and assess the safety of flood gates subjected to wave impacts. At the Afsluitdijk in the Netherlands, wave impacts on the flood gates have played a major role in the design. Several case studies are performed in this dissertation based on that situation.","Dynamics; Fluid-structure interaction; Wave impacts; Flood gates; Modelling; Experimentation; Design; Vibrations; Semi-analytical; Mode matching; Fatigue; Overhang","en","doctoral thesis","","978-94-6384-349-2","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:6b3693c8-0764-4b72-8091-b082a7227d44","http://resolver.tudelft.nl/uuid:6b3693c8-0764-4b72-8091-b082a7227d44","Rheological Analysis of Mud: Towards an Implementation of the Nautical Bottom Concept in the Port of Hamburg","Shakeel, A.","Pietrzak, J.D. (promotor); Chassagne, C. (promotor); Kirichek, Alex (copromotor); Delft University of Technology (degree granting institution)","2022","The nautical bottom (the level at which contact with a ship’s keel causes either damage or unacceptable effects on the controllability and manoeuvrability of a ship) should be associated with a measurable physical characteristic. Bulk density is typically used as a criterion for the nautical bottom by many ports worldwide. However, the rheological properties of mud, particularly the yield stress, would be better parameters for defining a nautical bottom criterion due to their strong correlation with the flow properties of mud and with navigability. The density-yield stress correlation depends significantly on different parameters of mud, such as organic matter type and content, clay type and content, particle size distribution and salinity. Therefore, a single critical value of density cannot be chosen as the nautical bottom criterion in a port like the Port of Hamburg, where the above-mentioned parameters vary from one harbour location to another. This justifies the need for a study of the rheological properties (yield stress) of mud in the Port of Hamburg.","Mud; Rheology; Nautical bottom; Yield stress; Density; Organic matter","en","doctoral thesis","","978-94-6419-532-3","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:d3a84a3a-ba8d-4e1e-b6f7-4d3d2d7ac0c1","http://resolver.tudelft.nl/uuid:d3a84a3a-ba8d-4e1e-b6f7-4d3d2d7ac0c1","Light Dosage Optimization by Data-driven and Dynamic Modeling: in Blue Light Therapies","Wang, T. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Dong, J.F. (copromotor); Delft University of Technology (degree granting institution)","2022","The therapeutic properties of light have been known for thousands of years, but photodynamic therapy (PDT) was only developed in the last century. Currently, PDT is in clinical trials in oncology for the treatment of head and neck, brain, lung, pancreas, abdominal cavity, breast, prostate, and skin cancer. Advantages of light-based therapies include rapid action and avoidance of drug resistance. The underlying mechanism of PDT is that photosensitizers (PS) transform from their ground state (singlet state) into a relatively long-lived electronically excited state (triplet state) by absorbing photon energy, which, in turn, produces highly toxic reactive oxygen species (ROS) in cells. One difficulty of PDT is the administration of the PS. Since the PS does not occur naturally, PDT relies on exogenous PS administered by intravenous injection or topical application to the skin. This brings three disadvantages: first, the PS needs to be approved before it can be applied to patients; second, adverse reactions to the exogenous PS cannot be eliminated even if it is approved; and third, for tumors under deep tissue, it is hard to deliver the PS to the incidence area. Similar to PDT, antimicrobial blue light (ABL) relies only on endogenous PS (flavin and porphyrin molecules) to inactivate microbes, which makes it safer to use. However, as its name implies, ABL can only be used for treating diseases whose pathogens are microbes, not tumors.
The most common application of ABL is treating various microbial superficial infections, e.g., of the skin or membranes. Traditionally, topical antimycotic/antibiotic drugs and more convenient oral azole agents are the main treatments for microbial infections. However, most pathogens have shown increased resistance to these drugs. In particular, the most famous one, methicillin-resistant Staphylococcus aureus (MRSA), called a ‘superbug’, has evolved resistance to most antibiotic drugs. Fortunately, ABL has proved to be effective in the inactivation of most pathogenic microbes, including MRSA, Candida albicans and Escherichia coli. Further studies show that the inactivation effect did not significantly decrease after repeated ABL irradiation, which demonstrates the avoidance of resistance to ABL...","Low-Level light therapy; Nonlinear dynamics; Mathematical model; Reactive oxygen species; Optimization algorithm","en","doctoral thesis","","978-94-6421-784-1","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:e6e7ddf2-96cf-4d43-aa94-243c52acc4e1","http://resolver.tudelft.nl/uuid:e6e7ddf2-96cf-4d43-aa94-243c52acc4e1","Quasi-vertical Gallium Nitride Diodes for Microwave Power Applications","Sun, Y. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); van Driel, W.D. (promotor); Delft University of Technology (degree granting institution)","2022","The deployment of fifth-generation (5G) networks requires more closely spaced wireless infrastructures with a high output power to deal with high-frequency signal attenuation issues. Microwave power limiters have been widely used in the RF front-end in various wireless communication systems. A diode limiter circuit prevents the damage of sensitive receiver components by allowing RF signals below a certain threshold to pass through, while larger signals exceeding the threshold are attenuated. Many studies have been carried out on Si-based diode limiters in recent years; however, they have shown scant room for further improvement as silicon reaches its theoretical limitations. From this perspective, there is a need for new semiconductor materials to satisfy the requirements of devices. Wide-bandgap materials (e.g., gallium nitride) have recently attracted a great deal of interest due to their superior material properties such as wide band-gap, high electron saturation velocity, and high critical electric field.
Although lateral-structure GaN devices lead the pace of industrialization, they still face several constraints and do not reach the GaN material limit, as they require high epitaxial layer quality and precise processing. A vertical structure is a proven solution in Si- and SiC-based devices and is also an attractive alternative for GaN devices. Quasi-vertical GaN devices have the freedom to select substrates (such as silicon, sapphire, and SiC) by using hetero-epitaxial growth technology. A planar structure design is easy to integrate with other RF components. This dissertation aimed to develop a quasi-vertical GaN diode for high-power RF and microwave applications that could operate in a wide frequency band and at high input power levels, with easy integration and low cost. The scope of this dissertation involved three aspects: design and fabrication of a quasi-vertical GaN device with mesa etching optimization; suppression of reverse leakage with an enhanced breakdown voltage; demonstration of microwave power applications (limiters and detectors) based on the developed GaN diodes.
First, a literature summary of the state-of-the-art vertical GaN SBDs is presented in Chapter 2. A trade-off between the specific on-resistance (Ron,sp) and breakdown voltage (BV) of a diode is analyzed to characterize the performance of diodes. We discuss the benchmark of Ron,sp and BV for vertical GaN SBDs with different substrates (Si, sapphire, and GaN) and various edge termination techniques. The equivalent circuit model of a diode for studying the high-frequency properties is introduced. Second, the optimization of mesa etching for a quasi-vertical GaN SBD by inductively coupled plasma (ICP) etching is comprehensively investigated. In particular, the microtrench at the bottom corner of the mesa is eliminated by optimizing etch recipes. For the photoresist (PR)-masked GaN samples, high source power is the cause of deteriorated mesa sidewall morphology. Although high-temperature (>140 ℃) hard baking prior to etching can produce a smooth sidewall, the drawbacks are significant and include oblique sidewall profile formation and hard stripping. For the SiO2-masked GaN samples, the microtrench problem at the bottom corner of the mesa can be reduced or eliminated by reducing the source power or by adding BCl3 into the Cl2 plasma. After ICP etching, a TMAH wet treatment yields a near-90° steep mesa sidewall that is microtrench-free and has a smooth surface. The proposed etching technique can be extended to other GaN nanostructures, such as hexagonal pyramids and nanowire arrays, which is promising for sensors, vertical transistors, optoelectronics, and photovoltaics. Third, a quasi-vertical GaN SBD is developed from the perspective of epilayer design, device layout, device modeling, fabrication, and leakage suppression. The design flow and fabrication process of quasi-vertical GaN diodes for microwave power applications are presented. Three solutions are developed to suppress the leakage current, namely, mesa optimization, argon ion terminations, and post-mesa nitridation.
The experimental results show that our diode has the lowest leakage current density at 80% of the BV among the reported vertical GaN SBDs for a BV between 120 and 250 V. Combining mesa optimization and post-mesa nitridation technology effectively enhances the breakdown voltage and achieves excellent conduction characteristics. Fourth, a high-performance quasi-vertical GaN Schottky diode on a sapphire substrate and its application in high-power microwave circuits are investigated. We experimentally demonstrate the use of a vertical GaN SBD for L-band microwave power limiters for the first time. The GaN SBD limiter can handle at least 40 dBm of CW input power at 2 GHz without failure, which is comparable to a commercial Si-based diode limiter. Then, we experimentally demonstrate a quasi-vertical GaN SBD with post-mesa nitridation for high-power and broadband microwave detection. The fabricated quasi-vertical GaN diode reaches a high forward current density of 9.19 kA/cm² at 3 V and a BV of 106 V. An extremely high output current of 400 mA is obtained when the detected power reaches 38.4 dBm at 3 GHz in pulsed-wave mode. Finally, all of the research in this thesis is summarized, and directions for further investigation are indicated.","Gallium Nitride (GaN); Schottky Barrier Diodes (SBD); microwave power limiter; microwave power detector; quasi-vertical","en","doctoral thesis","","978-94-6421-787-2","","","","","","2023-06-27","","","Electronic Components, Technology and Materials","","",""
"uuid:07b09c12-f60d-41c1-a8ce-2e4c7ee36893","http://resolver.tudelft.nl/uuid:07b09c12-f60d-41c1-a8ce-2e4c7ee36893","Towards Self-Sufficient High-Rises: Performance Optimisation using Artificial Intelligence","Ekici, B. (TU Delft Design Informatics)","Sariyildiz, I.S. (promotor); Tasgetiren, M. Fatih (promotor); Turrin, M. (promotor); Delft University of Technology (degree granting institution)","2022","Population growth and urbanisation trends bring many consequences, including an increase in global energy consumption and CO2 emissions and a decrease in arable land per person. High-rises have been an inevitable building type in metropoles, providing extra floor space since the early examples in the 19th century. Optimisation of high-rise buildings has therefore been a focus of researchers because of the significant performance enhancements it offers, mainly in energy consumption and generation. Based on the facts of the 21st century, optimising high-rise buildings for multiple vital resources (such as energy, food, and water) is necessary for a sustainable future.
This research proposes “self-sufficient high-rise buildings” that can generate and efficiently consume vital resources, in addition to providing dense habitation for sustainable living in metropoles. Optimising self-sufficient high-rise buildings is more challenging than optimising regular high-rises and has not yet been addressed in the literature. The main challenge behind the research is the integration of multiple performance aspects of self-sufficiency related to the vital resources of human beings (energy, food, and water) and the consideration of large numbers of design parameters related to these performance aspects. Therefore, the dissertation presents a framework for performance optimisation of self-sufficient high-rise buildings using artificial intelligence, focusing on the conceptual phase of the design process. The output of this dissertation supports decision-makers in proposing well-performing high-rise buildings that address the aspects of self-sufficiency within a reasonable timeframe.","High-rise buildings; Self-sufficiency; Energy; Food; Daylight; Performance-based design; Machine learning; Optimisation","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-562-9","","","","","","","","","Design Informatics","","",""
"uuid:f9a1a88c-04dc-4ae7-babf-e42fd4845022","http://resolver.tudelft.nl/uuid:f9a1a88c-04dc-4ae7-babf-e42fd4845022","Model-based Control of Large-scale Baggage Handling Systems: Leveraging the Theory of Linear Positive Systems for Robust Scalable Control Design","Zeinaly, Y. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Hellendoorn, J. (promotor); Delft University of Technology (degree granting institution)","2022","Large-scale baggage handling systems, or large-scale logistic networks, for that matter, pose interesting challenges to model-based control design. These challenges concern computational complexity, scalability, and robustness of the proposed solutions. This thesis tackles these issues in a collection of papers organized in two overlapping parts. The first part concerns modeling and Model Predictive Control (MPC) design of large-scale baggage handling systems (BHSs), where a modeling framework for BHSs is proposed that is subsequently used to develop an MPC scheme for control of large-scale BHSs. The MPC controller optimizes for the timely arrival of pieces of baggage at their destination within the BHS network under capacity constraints while minimizing the overall cost of transporting pieces of baggage. Several formulations for the resulting constrained optimization problem are proposed, and they are compared with each other in terms of closed-loop performance and computational complexity. It is shown, via simulation studies, that the proposed solutions can outperform a heuristics-based approach commonly used for control of BHSs while scaling well to larger BHS network instances.
In its second part, the thesis focuses on robustness of control design in the face of a partially known disturbance input (i.e., input baggage demand), and especially on developing a scalable tube-based MPC scheme. For this purpose, considering the BHS model essentially as a linear positive system, a linear-programming-based approach is proposed for the joint calculation of a robustly positively invariant subset and a constrained state feedback controller that minimizes the disturbance-driven L∞ norm of the output over this set. A tube-based MPC control scheme is finally developed by coupling the state feedback controller with a nominal MPC controller, guaranteeing recursive feasibility and asymptotic stability. It is shown via simulation studies that the proposed tube-based approach is effective against unpredictable disturbances. In addition, since the design of both the nominal MPC controller and the state feedback controller involves only linear programs, the proposed tube-based approach scales well to BHS networks of larger size.
Linear positive systems are of interest in several branches of engineering, logistics, biochemistry, and economics. As a spin-off topic and inspired by the applications of the theory of linear positive systems to modeling and control design of systems in the mentioned domains, the third part of the thesis focuses on the reachability analysis of discrete-time linear positive systems. More specifically, we revisit the problem of characterizing the subset of the state space that is reachable from the origin for discrete-time linear positive systems. This problem is of interest in topics such as optimal control of linear positive systems and realization theory of linear positive systems. It is established in this thesis that the reachable subset can be either a polyhedral or a nonpolyhedral cone. For the single-input case, a characterization is provided of when the infinite-time and the finite-time reachable subsets are polyhedral. Finally, for the case of polyhedral reachable subsets, a method, based on solving a set of linear equations, is provided to verify whether a target set can be reached from the origin using positive inputs.","Baggage Handling Systems; Polyhedral Cones; Reachable Subsets; Linear positive systems; Baggage Handling Systems, Model Predictive Control","en","doctoral thesis","","978-90-5584-311-4","","","","TRAIL Thesis Series no. T2022/8, the Netherlands TRAIL Research School","","","","","Team Bart De Schutter","","",""
"uuid:bca1b67e-8e9b-4f48-b21a-8b3136f1d4dc","http://resolver.tudelft.nl/uuid:bca1b67e-8e9b-4f48-b21a-8b3136f1d4dc","Hydroxynitrile Lyases, Immobilisation and Flow Chemistry","Coloma Hurel, J.L. (TU Delft BT/Biocatalysis)","Hanefeld, U. (promotor); Hagedoorn, P.L. (promotor); Delft University of Technology (degree granting institution)","2022","The utilisation of flow chemistry and immobilisation in biocatalysis is gaining attention as an attractive way to overcome some of the limitations commonly reported in traditional batch systems such as mass transfer restrictions, low productivities, substrate/product inhibition and safety, among others....","Hydroxynitrile lyases; Granulicella tundricola; Arabidopsis thaliana; cyanohydrins; styrenes; oxidative cleavage; nitroaldol; immobilisation; Celite","en","doctoral thesis","","978-94-6366-560-5","","","","","","","","","BT/Biocatalysis","","",""
"uuid:035865fb-2ba6-4088-9c3e-cb2e2e5fd2c1","http://resolver.tudelft.nl/uuid:035865fb-2ba6-4088-9c3e-cb2e2e5fd2c1","Energy coupling of metabolite transport in Saccharomyces cerevisiae","de Valk, S.C. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Mans, R. (copromotor); Delft University of Technology (degree granting institution)","2022","In living cells, transport proteins allow for the translocation of molecules across biological membranes that are otherwise impermeable to charged and polar solutes. These membrane-associated proteins play an essential role in the uptake of substrate molecules and the export of metabolic products, as well as in the maintenance of electrochemical gradients across membranes, which are exploited as energy stores by all living cells. Over the course of evolution, a variety of transport proteins have emerged, with diverse substrate specificities, kinetics and mechanisms. To study the function of transport proteins, the yeast Saccharomyces cerevisiae has been widely used as a model organism for the expression and functional characterization of heterologous genes encoding these proteins. Aside from numerous advantageous traits that facilitate its cultivation and genetic modification, another benefit of using this yeast as a model is the vast collection of knowledge already available in the scientific literature on its native complement of transport proteins. Besides studying the diversity of transporters already present in nature, (targeted) changes to transport proteins can be introduced that can alter their biological function...","","en","doctoral thesis","","978-94-6384-338-6","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:890a1ddd-1957-4732-988a-eb4204e454ce","http://resolver.tudelft.nl/uuid:890a1ddd-1957-4732-988a-eb4204e454ce","Bio-Engineered Earth Retaining Structure (BEERS): A timber sheet pile-vegetation system for stream bank protection","Kamath, A.C. (TU Delft Bio-based Structures & Materials)","van de Kuilen, J.W.G. (promotor); Jommi, C. (promotor); Delft University of Technology (degree granting institution)","2022","The Netherlands has an extensive network of river and canal systems serving purposes like irrigation, transportation and water removal. The banks along the canals are either protected by earth retaining structures such as sheet pile walls or left unprotected. The bulk of the engineered sheet piles used to protect the canal banks in the Netherlands are made of timber. Tropical hardwoods such as Azobé (Lophira alata) are used to make these timber sheet piles durable, owing to their high biological resistance to decay. Pine from North-west Europe is also used, but needs to be treated chemically for sufficient durability. Even though roots of vegetation are known to increase the shear strength of soil, the positive effects of vegetation have not been quantified in depth. Vegetation roots in a root-soil composite primarily act in tension when subjected to load, thus acting similarly to steel in reinforced concrete. This thesis summarizes the efforts to study a bio-engineered earth retaining structure made of non-durable, locally available wood species and vegetation to protect the canal banks, as an alternative to the currently used bank protection structures. Such a retaining structure would not only reduce the need for durable hardwoods, but would also be more environmentally friendly than the ‘hard’ retaining structures currently in use.
Two vegetation types, Humulus lupulus L. and Salix fragilis L., were chosen for investigation based on their potential to reinforce canal banks, nativity, root characteristics and growth conditions such as the presence of a high groundwater table. An extensive laboratory campaign was planned and conducted to characterize the strength of roots and the root-soil composite, and to study the behavior of the timber sheet pile-vegetation combination as a system. The experimental results were further used to develop design approaches for the timber sheet pile-vegetation system.
To study the root-soil composite behavior in shear, a large-scale direct shear apparatus was built. The apparatus was designed to conduct tests in two loading modes: displacement-controlled and load-controlled shear. To simulate the canal bank conditions as closely as possible, the samples were tested in saturated conditions at low confining pressure. Bare soil and rooted samples of Humulus lupulus L. and Salix fragilis L. were tested under both displacement-controlled and load-controlled conditions. The roots were excavated after each test, and an analysis of root orientation, diameter and root biomass was conducted. Rooted samples showed a higher friction angle compared to bare soil. Rooted samples showed contractive behavior, and their peak stress ratio vs. displacement trend was seen to diverge from that of bare soil. The Burgers model was able to capture the time-dependent behavior under the load-controlled mode when fitted to the experimental results.
A tensile testing program was devised to test roots in tension in displacement-controlled and load-controlled tests. Roots of Humulus lupulus L. and Salix fragilis L. were tested in both wet and dry conditions in displacement-controlled tests. Load-controlled tensile testing was conducted on samples of Humulus lupulus L. of two diameters. A power law was observed to capture the volume effect and fit all the tensile strength-diameter variations. Comparison of tensile strength in dry and wet conditions revealed a significant difference for Salix fragilis L., while no significant difference was observed for Humulus lupulus L. Further, the time to failure of roots was studied using a power law model.
A physical modelling approach was used to study the behavior of the timber sheet pile-vegetation system. A root system similar to that of Humulus lupulus L. was 3D printed in PLA material for use in the physical model. A comparison of an unreinforced bank and a bank reinforced with root analogues revealed that the presence of roots increases the volume of soil that needs to be mobilized for failure to occur. It was also observed that banks with root analogues placed in the most efficient spatial pattern, among the conducted tests, were able to sustain twice the drawdown pressure. Subsequently, finite element modelling was conducted by including the effect of roots as an increased cohesion parameter. The modelling results were able to capture the failure. Parametric analysis revealed that the influence of the spatial distribution of the roots on the forces acting on the sheet pile becomes larger after a threshold value of additional cohesion is reached; any additional cohesion beyond this threshold might not provide further benefit to stability. The results thus indicate that vegetation with more spatially distributed roots will be more suitable for use in a timber sheet pile-vegetation retaining system.
Finally, two different perspectives on the design approach of a timber sheet pile-vegetation system are investigated. The system approach is based on the concept that the mechanical reinforcement of the soil with the growth of vegetation could, over time, reduce the horizontal pressure against the sheet pile as well as the bending moments and shear stresses acting on it. This decreases the duration-of-load effect in the timber and counteracts the effects of slow biological degradation of wood in air-water-soil conditions. Timber sheet pile components below the water level are less prone to decay; however, components at the air-water-soil interface are more susceptible. In the discrete approach, the vegetation is perceived as supporting the top part of the stream bank (< 2 meters) after timber decay has occurred. The effect of vegetation is analyzed as both an increase in internal friction angle and an increase in cohesion, on stream banks with retaining heights of 2 m and 3 m. An increase in the service life of the timber sheet pile-vegetation system is achieved in the system approach compared to when only the timber sheet pile is present. In the discrete approach, it was observed that modification of the landscape by changing the slope angle of the bank might be necessary when the influence of vegetation is incorporated as an increase in the friction angle of the soil.
At laboratory scale, as in this study, timber sheet pile-vegetation earth retaining systems show promise for use in stream bank protection. Future studies need to focus on field-scale, system-level studies and on quantification of the reinforcement by vegetation in the presence of multiple vegetation species.","Sheetpile; vegetation; roots; canal","en","doctoral thesis","","978-94-6421-769-8","","","","","","","","","Bio-based Structures & Materials","","",""
"uuid:9d70c96d-7a7e-4b8d-98d6-d7b0ffaf6812","http://resolver.tudelft.nl/uuid:9d70c96d-7a7e-4b8d-98d6-d7b0ffaf6812","Application of advanced seismic techniques for archaeological investigations","Liu, J. (TU Delft Applied Geophysics and Petrophysics)","Ghose, R. (promotor); Draganov, D.S. (promotor); Delft University of Technology (degree granting institution)","2022","At different places in the world, the local climate conditions have helped the preservation of archaeological sites to a very high degree. This has helped us understand better our history. This situation, however, is quickly changing due to the climate change we are now facing. The condition at an increasing number of ancient sites around the world is now deteriorating due to the warming climate. Obtaining high-resolution images of the subsurface of the archaeological sites without excavation can help us make better strategies for conserving these sites. Such possibilities are provided by the application of geophysical exploration methods. Among all available geophysical approaches, high-resolution reflection seismic using transverse (S-) waves is one of the few options that can provide detailed information regarding the subsurface structure beneath archaeological sites for depths up to several meters. However, most unexcavated sites are covered by soil. Near-surface seismic data acquired in such soil-covered sites are dominated by source-generated, dispersive surface waves, and sometimes surface waves caused by other anthropogenic sources, e.g., traffic and human activities in the vicinity of the seismic line. Both of these strong events can camouflage the very shallow reflections. The conventional techniques for suppression of surface waves, e.g., muting or spatial filtering, are ineffective or even detrimental to the target reflections, especially at near offsets. 
This is especially challenging in surveys where the available source-receiver offset range is often quite limited, and the velocity and frequency content of the surface waves largely overlap with those of the target S-wave reflections. In chapter 2, we aim to develop a data-driven way to suppress surface-wave noise and thus reveal the very shallow reflections. We make use of seismic interferometry (SI) to retrieve both source-coherent and source-incoherent surface-wave parts of the data. The retrieved surface waves are then adaptively subtracted (AS) from the recorded data, thereby exposing the hidden reflections. We apply our schemes to both synthetic and field seismic data. We show that artifacts caused by stacking surface-wave noise are greatly reduced and that reflectors, especially at very shallow depth, can be much better imaged and interpreted. The dominance of surface waves also makes it impossible to identify weak diffraction signals, which are the seismic responses of buried objects of small size. The diffraction events can be used to detect and locate the distribution of shallow objects. Revealing the hidden diffraction signals from under the dominant surface waves and using them for locating objects constitute another goal of this thesis. In chapters 3 and 4, we introduce an interferometric workflow for imaging subsurface objects using masked diffractions. This workflow includes three main steps. We first reveal masked diffractions by suppression of the dominant surface waves through a combination of SI and nonstationary AS. The revealed weak diffraction signal is then enhanced by cross coherence-based super virtual interferometry (SVI). Finally, we produce a diffraction image by a multipath summation approach, which can be used to interpret the locations of subsurface diffractors. We apply our method to field data acquired at an archaeological site using two different active sources.
Two shallow anomalies were detected in our sections, whose locations agree well with buried burnt stones. These burnt stones have also been detected in an independent magnetic survey and in corings. The limitation of our workflow is that it can only be applied with the desired resolution to S-wave data when the seismic sources and receivers are polarized in the cross-line direction.","","en","doctoral thesis","","978-94-6423-855-6","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:5d618877-1001-4897-ac80-2f5bbb10489a","http://resolver.tudelft.nl/uuid:5d618877-1001-4897-ac80-2f5bbb10489a","Multiband Channel Estimation for Precise Localization in Wireless Networks: Algorithms, Simulations and Experiments","Kazaz, T. (TU Delft Signal Processing Systems)","van der Veen, A.J. (promotor); Janssen, G.J.M. (promotor); Delft University of Technology (degree granting institution)","2022","Over the last two decades, we have witnessed a tremendous evolution of wireless communication systems. For example, the data rates in mobile wireless systems have increased from a few tens of kilobits per second to 10 gigabits per second between the first and last, i.e., fifth generation (5G). The main enablers for this growth are signal processing and radio frequency (RF) hardware innovations, which led to more efficient modulation and coding schemes and high-performance RF transceivers. Following these trends, future wireless systems such as 6G and WiFi-7 aim for even higher data rates, requiring higher frequency ranges, wider bandwidths, and massive antenna arrays. These developments pave the way toward joint communication and sensing RF systems with very high range, Doppler, and angular resolutions. In particular, favorable signal and RF transceiver properties such as large bandwidth will enable precise RF localization in rich scattering environments such as indoor or urban canyons where multipath effects severely impair the performance of traditional localization systems like GNSS (Global Navigation Satellite Systems). At the same time, the wide range of emerging applications in areas of autonomous navigation, assisted living, and Internet-of-Things require precise localization, often to cm-level accuracy.
Therefore, it is evident that new localization approaches and signal processing algorithms that can exploit signal and transceiver properties of emerging wireless systems are needed to solve the problem of precise localization in multipath environments and lead the way to novel applications.
The goal of this thesis is to design signal processing algorithms and protocols that will enable precise ranging in multipath environments while using practical single-antenna RF transceivers. In the first part of this thesis, we introduce a multiband channel model to describe multipath channel measurements collected over multiple separate frequency bands using narrowband and wideband RF transceivers. This model shows that multiband channel measurements have a multiple shift-invariance property and that by increasing the frequency aperture of the multiband measurements, we can improve the resolution of multipath time-delay estimation. We use this property of the measurements to develop high-resolution time-delay estimation algorithms based on subspace estimation. To illustrate the performance of these algorithms, we perform extensive numerical experiments which demonstrate that the proposed algorithms are statistically efficient and that multiband time-delay estimation enables precise ranging in multipath environments.
However, the aforementioned results also show that the proposed algorithms are sensitive to errors introduced by hardware impairments of RF transceivers and imperfect calibration. In the second part of the thesis, we focus on the problem of joint RF transceiver calibration and high-resolution time-delay estimation. For example, in practical scenarios, the frequency response of RF transceivers might not be known nor calibrated, and performing time-delay estimation without calibrating these effects will lead to biased estimates. We show that the problem of joint RF transceiver calibration and time-delay estimation can be formulated as a particular case of covariance matching, which, after reformulation, can be solved using a simple group Lasso algorithm. Likewise, due to imperfections of the oscillators used in RF transceivers, the mobile and anchor nodes are usually not frequency synchronized. This frequency offset severely deteriorates the performance of multiband ranging methods. To solve this issue, we design a two-way protocol for collecting multiband channel measurements and a weighted least squares-based algorithm that enable joint clock synchronization and ranging.
Finally, in the last part of the thesis, we validate our modeling assumptions and illustrate the performance of the multiband time-delay estimation algorithms by considering practical scenarios of localization in future WiFi-7 networks. For these experiments, we use real indoor multipath channel measurements collected in a hospital and a university building environment. The results of the experiments show that using multiband channel measurements with a total bandwidth of 320 MHz, the absolute ranging error is smaller than 4 cm in 80% of the cases. Likewise, using the same scenario setup and three anchors to localize the mobile node, it is observed that the positioning error is below 24 cm in 95% of the cases. These results show that by using advanced signal processing techniques to design estimation algorithms and channel measurement protocols that can exploit the properties and degrees of freedom offered by future wireless systems and RF transceivers, decimeter-level accurate positioning is achievable.
The signal processing models presented in this thesis are common to the wide area of array signal processing applications, such as radar and ultrasound imaging. Therefore, the results presented in this thesis impact these application areas as well.","","en","doctoral thesis","","978-94-6366-559-9","","","","","","","","","Signal Processing Systems","","",""
"uuid:9f380f03-5842-45a0-87d4-4a8372e88dd5","http://resolver.tudelft.nl/uuid:9f380f03-5842-45a0-87d4-4a8372e88dd5","nD-PointCloud Data Management: continuous levels, adaptive histograms, and diverse query geometries","Liu, H. (TU Delft GIS Technologie)","van Oosterom, P.J.M. (promotor); Meijers, B.M. (copromotor); Delft University of Technology (degree granting institution)","2022","In the Geomatics domain, a point cloud refers to a data set which records the coordinates and other attributes of a huge number of points. Conceptually, each of these attributes can be regarded as a dimension, representing a specific type of information. Apart from the routinely used spatio-temporal dimensions for coordinates, other dimensions such as intensity and classification are also widely used in spatial applications. In fact, more dimensions can be involved. For instance, a point in a hydraulic modelling grid also records the flow direction, speed, sediment concentration, and other related attributes. As these point cloud data can be directly collected, computed, stored and analyzed, this thesis proposes the term nD-PointCloud as a general spatial data representation to cover them.
At present, the drastically increasing production of nD-PointCloud data raises an essential demand for smart and highly efficient data management and querying solutions. However, we lack effective tools. Prevalent software for nD-PointCloud processing, analysis and rendering is built on file-based systems, requiring substantial development of data structures and algorithms. To make matters worse, when other data types are involved, integrating multiple formats, libraries and systems requires enormous effort. Aimed at generic support for diverse applications, DataBase Management Systems (DBMSs), on the other hand, avoid these issues to a large extent. However, since they were initially developed to resolve 2D or 3D issues, they do not provide native support for nD data indexing and operations. Yet the 2D and 3D operators cannot be easily extended to nD.
This thesis aims at developing a generic yet efficient solution for managing and querying nD-PointCloud data. The work is based on an existing solution called PlainSFC, which maps nD data into 1D space. PlainSFC is implemented in the DBMS, adopting space filling curve based clustering and B+-tree indexing strategies. In addition, PlainSFC applies an advanced querying mechanism which recursively refines hypercubic nD spaces to 1D ranges to approach the query geometry for primary filtering. This achieves high querying efficiency. However, the solution still has drawbacks, and this research focuses on resolving them by developing and using novel methods:
• A continuous Level of Importance (cLoI) method for data organization to eliminate the visual artifacts of density shocks in point rendering, which are introduced by conventional tree structures such as the Quadtree or Octree. The cLoI method computes an importance value for every point according to an ideal distribution generalized from the discrete distributions of those tree structures. This forms an additional cLoI dimension, and each point actually represents a level. By integrating the cLoI dimension into PlainSFC, smooth and efficient rendering is realized.
• An nD-histogram approach to improve querying efficiency on non-uniformly distributed data. PlainSFC decomposes the nD space into sub-spaces recursively to approach the query geometry without considering point distribution. This is not optimal when the distribution of points is severely skewed. To improve this, an nD-histogram which records the number of points inside each nD sub-space is established as a representation of data distribution. The developed solution called HistSFC decomposes and refines the nD space more smartly, which improves the accuracy and efficiency of primary filtering.
• A convex polytope querying function. Besides orthogonal window queries, the polytope query, which is the extension of the widely adopted polygonal query in 2D, also plays a critical role in many nD spatial applications. To address this type of query, an easy-to-use polytope formulation for querying is firstly proposed. Then, based on PlainSFC and HistSFC, efficient intersection algorithms are developed for convex polytope querying on nD point clouds. These algorithms are tested through experiments with up to 10D point data. Using this newly developed function, applications including perspective view selections and flood risk queries are resolved more efficiently, achieving sub-second performance.
Additionally, other optimization techniques such as parallelization are developed and experimented with, which also bring performance gains. To verify the whole framework, several benchmark tests devised by considering real applications are conducted, and comparisons with different state-of-the-art solutions are performed. The results show that the newly developed solution outperforms the others overall. In certain cases, the solution can be applied without further optimizations. However, this is not the end: rapidly emerging technologies such as cloud computing platforms can boost the solution further to incorporate more data and users. Potential nD-PointCloud based applications still need to be explored, prototyped and tested to serve society in practice.
In the second part of the thesis, we focus on the value iteration algorithm for solving optimal control problems. We propose two novel numerical schemes for approximate implementation of the dynamic programming (DP) operation concerned with finite-horizon, optimal control of deterministic, discrete-time systems with input-affine dynamics. The proposed algorithms involve discretization of the state and input spaces and are based on an alternative path that solves the dual problem corresponding to the DP operation. We provide error bounds for the proposed algorithms, along with detailed analyses of their computational complexity. In particular, for a specific class of problems with separable data in the state and input variables, the proposed approach can reduce the typical time complexity of the DP operation from O(XU) to O(X + U), where X and U denote the size of the discrete state and input spaces, respectively. We next discuss the extensions of the proposed conjugate value iteration algorithm for problems with separable data. The extensions are three-fold: we consider (i) infinite-horizon, discounted cost problems with (ii) stochastic dynamics, while (iii) computing the conjugate of the input cost numerically. In particular, we analyze the convergence, complexity, and error of the proposed algorithm under these extensions. The theoretical results are validated through multiple numerical examples.","Opinion Dynamics; Dynamic Programming; Duality","en","doctoral thesis","","978-94-6366-526-1","","","","","","","","","Team Peyman Mohajerin Esfahani","","",""
"uuid:17d027d9-9667-4afa-89b9-aacd557a41ac","http://resolver.tudelft.nl/uuid:17d027d9-9667-4afa-89b9-aacd557a41ac","Suspended Particulate Matter Formation And Accumulation In The Delta: From Monitoring To Modelling","Safar, Z. (TU Delft Environmental Fluid Mechanics)","Chassagne, C. (promotor); Pietrzak, J.D. (promotor); Delft University of Technology (degree granting institution)","2022","Coastal areas are subjected to major anthropogenic influences, as they are traditionally economically important regions, which is reflected by the presence of harbours, especially at river mouths. The Dutch coastal area is influenced by the discharge of fresh water from the Rhine river that creates the Rhine Region Of Freshwater Influence (Rhine-ROFI). There is also an additional sediment supply by alongshore transport resulting from seabed or coastal erosion. The transport of sediment is primarily driven by hydrodynamics. River plumes that pass the estuaries reaching the coastal areas play an important role in terms of Suspended Particulate Matter (SPM) formation and transport, especially in ROFI regions. SPM is defined as a suspension of microscopic particles consisting of clay minerals (sediment) aggregated or not with organic matter. Aggregated particles (flocs) are composed of different fractions of inorganic and organic parts. Flocculation and aggregation are greatly promoted in saline environments, and sediment particles are thereby more prone to flocculate in coastal regions, leading to different transport, settling, deposition and erosion dynamics as compared to freshwater conditions. The general aim of this thesis is to present a flocculation model that properly predicts SPM formation by flocculation in space and time and that can easily be implemented in numerical sediment transport models.","flocculation; Sediment; Organic matter","en","doctoral thesis","","978-94-6366-57907","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:4eaf8bdb-53e2-49e5-9730-2550945b267f","http://resolver.tudelft.nl/uuid:4eaf8bdb-53e2-49e5-9730-2550945b267f","Design strategies for large range flexure mechanisms","Rommers, J. (TU Delft Mechatronic Systems Design)","Herder, J.L. (promotor); van der Wijk, V. (copromotor); Delft University of Technology (degree granting institution)","2022","The moving parts in a mechanical device often rely on rolling or sliding contacts such as in ball bearings to gain motion. These suffice in many applications, but the friction inherent to their working principle limits their motion repeatability and thereby their precision. Flexure mechanisms are a popular alternative in the field of precision engineering because they gain motion by elastic deformation of slender segments such as thin spring steel plates, resulting in a highly repeatable motion due to the absence of friction and play. Furthermore, they are lubricant-free and do not generate particles, which makes them suitable for applications in space, astronomy, the semiconductor industry, and healthcare. A drawback of flexure mechanisms is their limited range of motion compared to their build volume, which results in voluminous designs and limits their application field. Their range is limited not only by material stress, but also because at large deflections the stiffness in their support directions decreases significantly and the actuation effort increases, resulting in high energy consumption and heat generation. Increasing the motion range would highly benefit the field of precision engineering and could also lead to innovations in healthcare or space. The motivation for this thesis is the observation that the vast majority of flexure mechanisms consist of initially straight and stress-free flexures.
Recent developments in fabrication methods such as the additive manufacturing of steel are providing the possibility to create more complex shapes, which could improve the range of motion of flexure mechanisms. The objective of this thesis is to provide design strategies to increase the motion range of flexure mechanisms. The thesis consists of two parts, of which the first (chapters 2-4) focuses on a new method to design stressed and curved flexures. The second part (chapters 5 and 6) further develops a recent strategy to increase the range of motion using torsion reinforcement structures.","Flexure mechanisms; Compliant mechanisms; Precision; Motion guiding","en","doctoral thesis","","978-94-6384-353-9","","","","","","2024-06-15","","","Mechatronic Systems Design","","",""
"uuid:e4d3be75-c184-4f24-a127-6d8af9b30550","http://resolver.tudelft.nl/uuid:e4d3be75-c184-4f24-a127-6d8af9b30550","Automated Abstraction of Discrete-Event Simulation Models using State-Trace Data","Tekinay, C. (TU Delft Policy Analysis)","Verbraeck, A. (promotor); Delft University of Technology (degree granting institution)","2022","Large-scale complex systems are characterized by a large number of interconnected variables and a diverse set of interactions. As the demand for the development and optimization of large-scale systems is growing, so does the need for better techniques to understand their underlying dynamic behavior and predict and manage their long-term performance. With the increased capabilities of computer technology, we have been able to run simulation models for these systems that are larger in scale and higher in complexity. While these advancements have enabled more accurate representations of real-world systems, the ever-increasing scale and complexity of simulation models may eventually result in models that are too complex to work with – giving rise to large-scale complex simulation models. In this dissertation, we aim to investigate to what extent the abstraction of large-scale complex simulation models, specifically discrete-event simulation models expressed in the DEVS formalism, can be automated using their state-trace data. In order to achieve this objective, we designed a method that integrates the fields of modeling and simulation and temporal data mining by utilizing state-trace data and applying frequent episode mining techniques to discover behavioral patterns.","","en","doctoral thesis","","978-94-6458-300-7","","","","","","","","","Policy Analysis","","",""
"uuid:78df5e2b-740e-4268-a821-ed0ccaae93e5","http://resolver.tudelft.nl/uuid:78df5e2b-740e-4268-a821-ed0ccaae93e5","Testing and modeling of sheet pile reinforced dikes on organic soils: Insights from the Eemdijk full-scale failure test","Lengkeek, H.J. (TU Delft Hydraulic Structures and Flood Risk)","Jonkman, Sebastiaan N. (promotor); Brinkgreve, R.B.J. (copromotor); Delft University of Technology (degree granting institution)","2022","The Netherlands is inherently challenged by water, as a large part of the country lies below sea level and several major rivers in North-western Europe cross the country. The most prevalent form of protection from coastal and river floods in the Netherlands includes approximately 22,000 km of earthen dikes, of which 3800 km are primary flood defenses (i.e., the first line of defense against high water). Subsidence, sea level rise, and the increase of rain intensity and river discharge due to climate change further challenge existing flood defenses to maintain required levels of safety. To do so, the top elevation of existing earthen dikes is often incrementally raised over time. However, raising a dike requires an extension of its base, which is frequently restricted by the presence of existing buildings and other spatial constraints. These dikes can be reinforced by alternative means such as a sheet pile wall.
Another specific challenge regarding dike reinforcement in the Netherlands includes the presence of soft subsoil conditions at many dikes. These soft soils often consist of organic clays and peats. Such soils have a low stiffness and continue to deform over time; their strength is not well understood and often underestimated. Furthermore, organic soils are often not properly identified from Cone Penetration Tests (CPTs) using common interpretation methods, while CPT is the main testing method in the Netherlands.
This research focuses on improving two aspects of the global stability assessment of dikes in the Netherlands: the modeling challenges of organic soil and dike reinforcement using sheet piles. Chapter 2 of this dissertation addresses the empirical relations for organic soils. The CPT-based correlation to derive the soil unit weight (Lengkeek et al. 2018) is validated and improved. Furthermore, new CPT-based correlations for organic soils are obtained by relating the soil state parameters to the cone resistance and the unique soil type properties to the friction ratio. An adjustment to the Robertson (2010) CPT-based classification is proposed. In the improved SBT classification, organic soils (SBT=2) are redefined and subdivided into peat, organic clay, and mineral clay with organic matter.
In chapter 3 the new Critical Stress Ratio (CSR) model is presented, which can be classified as ‘Simplified Critical State Soil Mechanics’. The CSR model can be seen as a theoretical version of the SHANSEP equation, providing a link between effective stress parameters, obtained from common laboratory tests, and the undrained shear strength. The model can be implemented in limit equilibrium methods (LEM) for ultimate limit state stability analysis.
The CSR model provides the state-dependent undrained shear strength for each stress point. Unlike a constitutive FEM model, the CSR model does not require the exact yield contour to be determined; this is instead taken into account by a variable spacing ratio, called the ‘Critical Stress Ratio’. This parameter of the CSR model can be regarded as the over-consolidation ratio at which no net excess pore pressures occur, and it can be fitted based on a few CAUC tests. Furthermore, the CSR model contains methods to obtain other model parameters for existing constitutive models used in the finite element method, such as Poisson’s ratio, which determines the horizontal and isotropic stress in unloading.
Chapter 4 presents the set-up, results, and evaluation of the full-scale failure test (in Dutch: ‘Eemdijk damwandproef’), initiated by the Dutch Flood Protection Programme. The Eemdijk full-scale failure test involves separate tests on (1) sheet pile panels and (2) a ground dike, as well as a combined test (3) on a ground dike with sheet pile reinforcement. The test provides valuable insights through a detailed analysis of the deformations of dikes leading up to and beyond failure. Furthermore, the soil investigation is re-examined and parameters are determined for multiple constitutive models applied in FEM back-analyses. Finally, the CPT-based methods, the CSR model and the SHANSEP-NGI-ADP model are all validated at the Eemdijk test site.
The back-analysis of the pull-over tests (PO-test) confirmed that the cross-section class 2 sheet piles (AZ26) reached the full plastic bending moment capacity and the cross-section class 3 sheet piles (AZ13-700 and GU8N) reached at least the elastic bending moment capacity. Furthermore, from the analysis of the SAAF measurements it is concluded that the stiffness of only the side sheet piles of panels should be reduced due to edge effects.
The ground dike test (GD-test) and the sheet pile reinforced dike test (SPD-test) allowed for a unique comparison and provided insight into the critical deformation rate prior to progressive failure. This criterion is useful in the assessment of unstable slopes. The GD-test illustrates the importance of a high density of soil investigations, of high-quality CPTs (ISO class 1), and of proper CPT-based classification.
The sheet pile reinforced dike test (SPD-test) shows that a continuous sheet pile, with sufficient length and embedment, makes an important contribution to the robustness of the dike after failure. Even after structural failure due to a plastic hinge, all sheet piles remained intact and interlocked. The failed sheet piles functioned as a weir and ultimately prevented breaching.
Based on a careful examination of the triaxial (CAUC) tests, it is recommended to use the 15% axial strain value as a basis for the ultimate value, to apply additional criteria to prevent unrealistically high or low values of the undrained shear strength, and to re-examine the applied geometrical corrections.
Based on the performed variation analysis, it is recommended to use average stiffness parameters in an SLS or ULS dike design analysis when performed with advanced constitutive models in FEM.
The alternative approaches to dike assessment presented in this research are expected to result in a more economic and better understood dike design and assessment, based on improved field data interpretation (chapter 2), undrained shear strength and modeling procedures (chapter 3), and takeaways from the full-scale tests and back-analyses (chapter 4).","Full-scale test; CPT; organic soils; peat; critical state soil mechanics; SHANSEP; slope failure; Sheet pile; finite element method; constitutive model; CSR; embankment; Eemdijk damwandproef; Dyke; levee; reinforced dike","en","doctoral thesis","","978-94-6384-340-9","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:34dee93a-fcb3-4e37-81f4-ab0ec652209c","http://resolver.tudelft.nl/uuid:34dee93a-fcb3-4e37-81f4-ab0ec652209c","Parameter identification of layered systems using moving loads","Sun, Z. (TU Delft Pavement Engineering)","Scarpas, Athanasios (promotor); Erkens, S. (promotor); van Dalen, K.N. (promotor); Delft University of Technology (degree granting institution)","2022","An elegant approach to evaluate the quality of engineering structures is Non-Destructive Testing (NDT). In the field of pavement engineering, a promising NDT method for pavement structural evaluation at the network level is the Traffic Speed Deflectometer (TSD) test, which can continuously measure the surface response of pavements caused by moving loads at normal driving speeds. However, the wide application of the TSD test has been hindered by the lack of a commonly accepted parameter identification technique to process TSD measurements. To tackle this problem, this dissertation aims to formulate a mechanically correct, numerically accurate, and computationally efficient parameter identification technique specifically for the TSD test of pavements. The developed parameter identification technique is the combination of a theoretical model of the TSD test and a minimisation algorithm. The unknown parameters can be identified by minimising the differences between the modelled and measured response of pavements...","","en","doctoral thesis","","978-94-6366-553-7","","","","","","2023-06-19","","","Pavement Engineering","","",""
"uuid:1e5679a7-e9d1-40d0-8857-d3f0eb290510","http://resolver.tudelft.nl/uuid:1e5679a7-e9d1-40d0-8857-d3f0eb290510","Urban airspace design for autonomous drone delivery","Doole, M.M. (TU Delft Control & Simulation)","Hoekstra, J.M. (promotor); Ellerbroek, Joost (copromotor); Delft University of Technology (degree granting institution)","2022","The paradigm of large-scale adoption of autonomous drone delivery promises to provide commercial and societal benefits. Over the past years, several companies have investigated the use-case of drones to transport small express packages of fast-food meals and time-sensitive medical supplies. The latter has been shown to be highly beneficial in many parts of the world where traditional transport infrastructure remains largely non-existent. However, it is assumed that the true value of autonomous delivery drones can only be demonstrated when it is applied to urban environments. For example, the use of a large-scale fleet of autonomous drones to transport packages within the last-mile segment could potentially improve the economics of package delivery, reduce traffic congestion and help decrease the total anthropogenic carbon dioxide emissions in cities. In addition, supplementing the existing last-mile delivery system with this new technology could also help accelerate the European Union’s 2050 vision of de-carbonising the transport sector. Autonomous drone delivery is obviously not a panacea to the above problems. It could, however, offer a path to mitigate such societal problems. Yet, even though there is a compelling case for autonomous drone delivery, it still remains to be deployed in cities. The reasons for this slow adoption include a large number of complex regulatory hurdles that vary between countries and cities. However, the biggest challenge is how to safely harbour large traffic volumes of drones in a constrained urban environment.
This thesis frames the scientific problem and outlines two main past research areas: unconstrained airspace design and road-based design, which served as a rich source of inspiration for this research. In a past study, known as the Metropolis project, it was demonstrated that layering the airspace and allocating flights to different altitude layers with respect to travel directions helped to mitigate the conflict probability in an unconstrained airspace setting. The study revealed two factors that were largely responsible for increasing the level of airspace safety, namely, segmentation of traffic and reduction of the relative speed, by traffic alignment, between cruising traffic at the same altitude. Furthermore, existing road vehicles, especially automated cars, provide an informative comparison with autonomous drones. Both emerging transportation modes are expected to navigate in constrained urban settings and operate in high traffic density scenarios. Of course, there are notable differences, for example, drones will operate in a three-dimensional space and the current performance limits of drones imply that it would not be optimal for drones to come to a sudden halt at intersections, unlike cars, and thus separating opposing traffic flows at intersections will be a difficult task. Yet, road design and research has evolved alongside road vehicles to include a host of safety measures in an effort to make roads and streets safer for all their users. They make use of various conflict prevention measures to structure and organise traffic flows. Current roads and streets have channelisation planes, which help separate opposite flows of traffic using road markings, islands and raised medians to distinguish and support one-way and two-way streets. These forms of structuring have been shown to reduce the risks of conflicts and, to an extent, are able to safely harbour high traffic densities in highly constrained urban environments.
The work in this thesis therefore aimed to investigate what design paradigms and methodologies from unconstrained airspace research and road infrastructure design can be translated to a constrained urban airspace for high-density drone traffic operations...","Urban airspace design; Constrained airspace; Conflict prevention; U-Space; Unmanned Traffic Management; Advanced air mobility; Drone delivery; Flying taxis; eVTOL","en","doctoral thesis","","978-94-6366-555-1","","","","","","","","","Control & Simulation","","",""
"uuid:be761a38-a87a-4e87-9343-7f838ccf6c89","http://resolver.tudelft.nl/uuid:be761a38-a87a-4e87-9343-7f838ccf6c89","Modelling and Analysis of Atrial Epicardial Electrograms: An approach based on graph signal processing and confirmatory factor analysis","Sun, M. (TU Delft Signal Processing Systems)","van der Veen, A.J. (promotor); Hendriks, R.C. (promotor); de Groot, N.M.S. (promotor); Delft University of Technology (degree granting institution)","2022","Atrial fibrillation (AF) is a frequently encountered cardiac arrhythmia characterized by rapid and irregular atrial activity, which increases the risk of strokes, heart failure and other heart-related complications. The mechanisms of AF are complicated. Although various mechanisms have been proposed in previous research, the precise mechanisms of AF are not yet clear and the optimal therapy for AF patients is still under debate. A higher success rate of AF treatments requires a deeper understanding of the problem of AF and potentially a better screening of the patients.
In order to study AF, instead of using human body surface ECGs, we use the epicardial electrograms (EGMs) obtained directly from the epicardial sites of the human atria during open heart surgery. This data is measured using a high-resolution mapping array and exhibits irregular properties during AF. Although different studies have analyzed electrograms in time and frequency domain, there remain many open questions that require alternative and novel tools to investigate AF.
Experience in signal processing suggests that incorporating the spatial dimension into the time-frequency analysis of the multi-electrode electrograms may provide improved insights into the atrial activity. However, the electrophysiological models for describing spatial propagation are relatively complex and non-linear, such that conventional signal processing methods are less suitable for a joint space, time, and frequency domain analysis. It is also difficult to use very detailed electrophysiological models to extract tissue parameters related to AF from the high-dimensional data.
In this dissertation, we wish to propose a radically different approach to study and analyze the EGMs from a higher abstraction level and from different perspectives to get more understanding of the characteristics of AF. We also aim to develop a simplified electrophysiological model that can capture the spatial structure of the data and propose an efficient method to estimate the tissue parameters, which are helpful to analyze the electropathology of the tissue, e.g., cell activation time or conductivity.
In the first part of this study, we put forward a graph-time spectral analysis framework to analyze EGMs during normal heart rhythm and AF with a higher-level model. To capture the frequency content in both the time domain and the graph domain, we propose the joint graph and short-time Fourier transform, which allows us to evaluate the temporal and spatial variation of EGMs and capture the interaction between space and time. The spectral analysis of the EGMs helps us to recognize the impact of atrial fibrillation on the atrial activity and identify the differences between the atrial activity and the ventricular activity. We find that the difference in graph smoothness between the atrial and ventricular activities enables us to better extract the atrial activity from the noisy measurements.
The second part of this study is to find a simplified but sufficiently accurate electrophysiological model for the high dimensional EGMs and to make more efficient use of the data to detect the arrhythmogenic substrate that causes abnormalities in atrial tissue. In this dissertation, we develop the cross power spectral density matrix (CPSDM) model of the multi-electrode EGMs and make use of an effective method called confirmatory factor analysis (CFA) to jointly estimate the model parameters. The conductivity, the activation time, and the anisotropy ratio are useful parameters to determine abnormalities in cardiac tissue and are therefore the target parameters to be estimated. With the reasonable assumptions that the conductivity parameters and the anisotropy parameters are constant across different frequencies and heart beats, and the activation times of cells are constant across different frequencies, we propose simultaneous CFA (SCFA) to jointly estimate these parameters using multiple frequencies and multiple heart beats. The identifiability conditions which need to be satisfied in the CFA problem are used to find the relationship between the desired resolution and the required amount of data. Evaluations on the simulated data and the clinical data demonstrate that the proposed method can localize the conduction blocks in the tissue and reconstruct the clinical EGMs well using the estimated parameters.","Atrial fibrillation; epicardial electrograms; spectral analysis; graph-time signal processing; electrophysiological model; cross-power spectral density matrix model; conductivity estimation; activation time estimation; anisotropy ratio estimation; confirmatory factor analysis","en","doctoral thesis","","978-94-6366-545-2","","","","","","","","","Signal Processing Systems","","",""
"uuid:fe290b5f-40ba-4678-8e8c-e4a9a2cf292f","http://resolver.tudelft.nl/uuid:fe290b5f-40ba-4678-8e8c-e4a9a2cf292f","Advancing Flood Risk Screening","van Berchum, E.C. (TU Delft Hydraulic Structures and Flood Risk)","Jonkman, Sebastiaan N. (promotor); Kok, M. (promotor); Timmermans, Jos (copromotor); Delft University of Technology (degree granting institution)","2022","Many coastal cities are struggling with a rapidly growing risk of flooding. The size and complexity of these cities often demand a coordinated strategy, consisting of a combination of flood risk reduction measures. A crucial part of the design process is the identification of effective flood risk management strategies. However, data and resources are often limited in these early stages of design, which are characterized by the many different options and measures that can be considered. The focus of this study is to identify the needs and challenges of this ‘flood risk screening’ phase and to develop and implement a model framework to support decision making in this stage. At the centre of the study is the development and application of such a model: the Flood Risk Reduction Evaluation and Screening, or FLORES, model. This dissertation includes two real-life case studies which explain the structure and development of the FLORES model, as well as two new applications in conceptual design – flood risk analysis based on low-resolution data, and robust decision making – that are easier to implement in flood risk management when combined with flood risk screening models. The FLORES model has been implemented in two case studies, one in the USA and one in Mozambique. In the Houston-Galveston Bay Area, USA, the model showed the reliance of the entire region’s flood risk on the choices made at the coastal barriers. In particular, the effectiveness of inland Nature-based Solutions heavily relied on the placement and elevation of coastal structures.
In Beira, Mozambique, coastal structures are combined and compared with other measures, such as drainage systems, retention, and early-warning systems. The use of flood risk screening provided insight into the effectiveness of individual measures, as well as combinations of measures, and prioritized strategies based on predetermined goals. In both cities, these insights, combined with a better understanding of the local flood risk and how it is influenced by risk reducing measures and future scenarios, can be used to support decision makers in finding the most effective strategy going forward.","Flood risk; flood simulation; Adaptive design; Robust decision making; Flood risk management","en","doctoral thesis","","978-94-6421-779-7","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:78515cfa-e83a-420a-8aee-b3146d96d331","http://resolver.tudelft.nl/uuid:78515cfa-e83a-420a-8aee-b3146d96d331","What has Athens to do with Jerusalem?: the potential of spatial-temporal analysis methods to interpret early Christian literature","van Altena, V.P. (TU Delft Urban Data Science)","Stoter, J.E. (promotor); Bakker, H.A. (promotor); Krans, J.L.H. (copromotor); Delft University of Technology (degree granting institution)","2022","The early Christian apologist Tertullian (ca. 160 - ca. 230 CE) queries in his De
praescriptione haereticorum: “What indeed has Athens to do with Jerusalem? What concord is there between the Academy and the Church? What between heretics and Christians?” As the question raised by Tertullian is about the relation between different disciplines and possible mutual relevance, it shows a resemblance to this research: what does spatial-temporal analysis have to do with the interpretation of early Christian literature? Are these two disciplines in some way compatible with each other?
This research hypothesizes that spatial-temporal analyses could bring
additional and new insights to the interpretation of early Christian literature. The main question in this research is: in which way can spatial-temporal analysis methods contribute to the interpretation of early Christian literature?
To answer this question, an inventory of relevant work in related disciplines is made and a case-study approach is applied to demonstrate the application of
spatial-temporal analysis methods for the interpretation of early Christian literature. Furthermore, the potential and limitations of developed methods and data solutions are assessed. The study concludes by suggesting improvements and further developments to advance the use of spatial-temporal analysis in the interpretation of texts.","spatial-temporal analysis; Historical GIS; New Testament exegesis; textual criticism; Interpretation; Narrative analysis","en","doctoral thesis","","978-94-6458-168-3","","","","","","","","","Urban Data Science","","",""
"uuid:0c6cf4ab-f297-426f-9a4f-455f07f1f643","http://resolver.tudelft.nl/uuid:0c6cf4ab-f297-426f-9a4f-455f07f1f643","Spin Wave Circuit Design","Mahmoud, A.N.N. (TU Delft Computer Engineering)","Hamdioui, S. (promotor); Cotofana, S.D. (promotor); Delft University of Technology (degree granting institution)","2022","CMOS downscaling has provided the means to efficiently process the huge amounts of raw data resulting from the information technology revolution. However, this becomes more difficult because of leakage, reliability, and cost walls. To keep pace with the exploding market needs at affordable cost, novel alternative technologies are under investigation; one of them is the Spin Wave (SW), which is the collective excitation of the electron spins in ferromagnetic materials. SW stands apart as one of the most promising avenues because of its ultra-low energy consumption and high scalability. This thesis: a) develops and designs spin wave based logic gates and circuits, and b) investigates the requirements for spin wave technology to outperform CMOS technology from an energy efficiency point of view.
Logic gates: SW circuit design requires the availability of SW logic gates that possess fan-out capabilities. Therefore, we propose and validate novel fan-out enabled spin wave logic gates including (N)AND, (N)OR, X(N)OR, and majority gates. In addition, we present and validate novel n-bit multi-frequency data parallel spin wave logic gates, i.e., SWs with different frequencies propagate in the same waveguide while interfering with similar frequency SWs only. Moreover, we examine a SW 3-input Majority gate working under continuous and pulse mode operation regimes. Furthermore, we present and validate how pulse mode operation enables Wave Pipelining (WP) within SW circuits.
Circuits: We develop, design, and validate three major circuits, namely an adder, a multiplier, and a compressor. These make use of SW gate cascading. Firstly, we introduce and validate accurate and approximate SW full adders; the approximate full adder consumes 55% less energy than the accurate full adder but has a 25% error rate, making it suitable for error tolerant applications. We also propose a non-binary SW computing paradigm which we use to build a non-binary SW adder. Then we develop accurate and approximate SW 4:2 compressors; the approximate compressor consumes 46% less energy than the accurate compressor but has a 31% error rate. Finally, we design accurate and approximate 2-bit input multipliers; the approximate multiplier consumes 64% less energy than the accurate multiplier but has a 25% error rate.
SW Technology Requirements: We are interested in assessing the technological development horizon that needs to be reached to make SW circuits outperform CMOS counterparts in terms of energy efficiency. We perform a reverse-engineering-like analysis to determine the transducer delay and power consumption upper bounds that can place SW circuits in the leading position. To this end, we compute the maximum transducer delay and power consumption of a 32-bit Brent-Kung adder that could potentially enable a SW implementation able to outperform its 7 nm CMOS counterpart. Our evaluations indicate that 31 nW is the maximum transducer power consumption for which a 32-bit Brent-Kung SW implementation can outperform its 7 nm CMOS counterpart in terms of energy efficiency.
In the next step, high-resolution three-dimensional crystal plasticity simulations are used to investigate deformation heterogeneity and microstructure evolution during the cold rolling of interstitial free (IF-) steel. A Fast Fourier Transform (FFT)-based spectral solver is used to conduct crystal plasticity simulations using a dislocation-density-based crystal plasticity model. The in-grain texture evolution and misorientation spread are consistent with experimental results obtained using electron backscatter diffraction (EBSD) experiments. Crystal plasticity simulation shows that two types of strain localization develop during the large deformation of IF-steel. The first type forms band-like areas with large strain accumulation that appear as river patterns extending across the specimen. In addition to these river-like patterns, a second type of strain localization with rather sharp and highly localized in-grain shear bands is identified. These localized features are dependent on the crystallographic orientation of the grain and extend within a single grain. In addition to the strain localization, the evolution of in-grain orientation gradients, dislocation density, kernel average misorientation, and stress in major texture components are discussed.","Crystal plasticity; Dislocation density; Parameter identification; Large deformation; Microstructure evolution","en","doctoral thesis","","978-94-6423-858-7","","","","","","","","","Team Kevin Rossi","","",""
"uuid:2804b274-ee21-4257-a052-a46408899c1f","http://resolver.tudelft.nl/uuid:2804b274-ee21-4257-a052-a46408899c1f","Development and validation of a three dimensional wave-current interaction formulation","Nguyen, T.D. (TU Delft Coastal Engineering)","Roelvink, D. (promotor); Reniers, A.J.H.M. (promotor); Delft University of Technology (degree granting institution)","2022","This study aims at developing a new set of equations of mean motion in the presence of surface waves, which is practically applicable from deep water to the coastal zone, estuaries, and outflow areas. The Generalized Lagrangian Mean method is employed to derive a set of Quasi-Eulerian mean three-dimensional equations of motion, where effects of surface waves are included through source terms. The obtained equations are expressed to the second order of wave amplitude. Whereas the classical Eulerian-mean equations of motion are only applicable below the wave trough, the new set of equations is valid up to the mean water surface even in the presence of finite-amplitude surface waves. Both conservative and non-conservative waves are under consideration, especially in the presence of a strong ambient current. A concept of three-dimensional wave radiation stress is introduced to express the effects of surface waves on the currents. It is an extension of the classical radiation stress concept. In particular, the relationship between three-dimensional wave radiation stress and vortex force representations is investigated in detail in conditions of both conservative and non-conservative waves. Through that relationship, comparisons between the new set of equations and other sets of equations implemented in recent well-known numerical models are given. This is useful for selecting a suitable numerical ocean model to simulate the mean current in a specific condition of waves combined with currents.
A two-dimensional numerical model (2DV model) is developed to validate the new set of equations of motion. The model passes the test of steady monochromatic waves propagating on a slope without dissipation (adiabatic condition). This is a primary test for equations of mean motion with a known analytical solution. In addition to this, experimental data for the interaction between random waves and currents in both non-breaking and breaking waves are employed to validate the 2DV model. As shown by this successful implementation and validation, the implementation of the new set of equations in any 3D model code is straightforward and may be expected to provide consistent results from deep water to the surfzone, in both conditions of weak and strong ambient currents.
In this thesis, the solubility, nucleation and growth of calcium oxalate (CaOx), the most common inorganic constituent of kidney stones, were studied under different conditions, such as ion concentration and pH value; the role of inhibitors in water or artificial urine was also investigated. The first step towards this work was obtaining the solubility curve of calcium oxalate monohydrate (COM) in solvents such as ultrapure water and different buffers, to elucidate the physicochemical conditions which can cause kidney stone formation (Chapter 2).
Besides the solubility study, advanced technology was needed to observe crystal formation at a small scale and within a very short time. The volume, structure and flow properties inside the kidney inspired us to use microfluidic technology with comparable volumes and flow rates. The developed microfluidic devices, which mimic pathways in the human kidney, were used to study the nucleation and growth of calcium oxalate crystals. These devices offered an alternative perspective on the study of kidney stone formation and showed that microfluidics can provide precise, simple and fast detection of stone formation under various experimental conditions.
Initially, the designed microfluidic device allowed us to build a testing platform for the study of the nucleation kinetics of CaOx inside the isolated environments provided by droplets. Preliminary experiments were performed by dissolving calcium chloride and sodium oxalate in ultrapure water. The aqueous solution containing the ions formed the droplet phase, while oil was used as the continuous phase. Altering the pH values, as well as increasing the concentration of additives such as magnesium and osteopontin (OPN), was shown to slow down the nucleation kinetics, or even inhibit nucleation (Chapter 3).
This thesis reports the findings of circular methods and strategies.","Circular Economy; waste recycling; Clean technology; Sustainability; Circular design","en","doctoral thesis","","978-94-6419-501-9","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:2e5c2d37-20af-48f9-a926-593e8a7aa5a6","http://resolver.tudelft.nl/uuid:2e5c2d37-20af-48f9-a926-593e8a7aa5a6","Diffusion in Aerobic Granular Sludge","van den Berg, L. (TU Delft Sanitary Engineering)","de Kreuk, M.K. (promotor); van Loosdrecht, Mark C.M. (promotor); Delft University of Technology (degree granting institution)","2022","A large part of the wastewater produced worldwide is discharged without any treatment. This has several negative consequences, including the spread of diseases and contamination of the environment. One method to treat wastewater is called aerobic granular sludge (AGS), an advanced, compact technology that uses granules. Granules are spherical aggregates of microorganisms and biopolymers. Different microorganisms break down different pollutants within the granules. The microorganisms rely on the mass transfer of pollutants into the granule. This occurs through diffusion, a passive mode of transport that is driven by a concentration difference. Diffusion is an essential aspect of AGS as well as other biofilm processes. Most previous research has shown that diffusion in granules and biofilms is a complex process. The diffusion behaviour varies between biofilms, within the biofilm, and between different molecules. At the same time, granule and biofilm models use a simple approach to describe diffusion. These models often use a single diffusion coefficient for the entire granule or biofilm. It is unclear how valid these simplifications are and how much they influence the accuracy of the model outcomes. In this dissertation, we studied different aspects of diffusion in granules to verify and extend previous research on the complexity of diffusion. The resulting information was then used to evaluate the impact on granule models.","","en","doctoral thesis","","978-94-6366-534-6","","","","","","","","","Sanitary Engineering","","",""
"uuid:35a16fd8-6cde-49c6-9455-042aa4ac4d0a","http://resolver.tudelft.nl/uuid:35a16fd8-6cde-49c6-9455-042aa4ac4d0a","Micromechanical Modelling of Porous Asphalt Mixtures","Zhang, H. (TU Delft Pavement Engineering)","Scarpas, Athanasios (promotor); Erkens, S. (promotor); Anupam, K. (copromotor); Delft University of Technology (degree granting institution)","2022","With the attempt to reduce traffic noise, porous asphalt (PA) mixture is widely used as a wearing course on the highways in the Netherlands. However, due to the open structure, PA mix pavement easily suffers from the loss of individual aggregates from its surface, which is named as ravelling. After the initial ravelling, the damage can rapidly develop into potholes which can significantly reduce the driving safety of the pavement....","micromechanical modelling; porous asphalt mixtures","en","doctoral thesis","","978-94-6384-336-2","","","","","","2022-12-07","","","Pavement Engineering","","",""
"uuid:436a74d5-5550-46fd-a399-e32a5ec7b834","http://resolver.tudelft.nl/uuid:436a74d5-5550-46fd-a399-e32a5ec7b834","The Making of the Modern Iranian Capital: On the Role of Iranian Planners in Tehran Master Planning at a Time of Urban Growth and Transnational Exchange (1930-2010)","Jafari, E. (TU Delft History, Form & Aesthetics)","Hein, C.M. (promotor); Thomas, A.R. (copromotor); Delft University of Technology (degree granting institution)","2022","This dissertation brings together planning documents and multiple archival sources to demonstrate how urban planning and the role of planners have evolved in the ever-changing transnational context of Iran. It challenges the prevailing approach in the literature of Tehran urban studies that simply flattens the complexity of local-foreign collaboration and labels transnational planning of Tehran a top-down “Westernization” project. To depict a more nuanced picture of Tehran master planning at the time of transnational exchange and rapid urban growth, this dissertation introduces a new engaging and argumentative periodization with four distinct phases which brings transnational planning of Tehran to the fore, while reflecting on diverse political and socio-economic upheavals between 1930 and 2010. Dissection of Tehran master plans in each period through the lens of multiple actors offers a unique opportunity for a renewed interpretation of transnational planning of Tehran and the way Iranian planners steered Tehran urban developments while engaging with foreign experts and their planning systems. It presents a detailed analysis of overarching ‘ideas’ behind each plan, their translation to urban ‘policies’ and later on their broader (un)wanted ‘impact’ on the city and its regions.
By recognizing a great diversity in the transnational approach to Tehran planning practices, the dissertation concludes with how transnationalism first gave rise to the formation of the modern planning system and how it later led to contestations against it, which revolutionized the role of urban planners and the political agenda for Tehran urban growth....","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-583-4","","","","","","","","","History, Form & Aesthetics","","",""
"uuid:20fce6ef-6bb3-42a1-bdd3-53b2a282f0ae","http://resolver.tudelft.nl/uuid:20fce6ef-6bb3-42a1-bdd3-53b2a282f0ae","Performance benchmarking of silicon quantum processors","Xue, X. (TU Delft QCD/Vandersypen Lab)","Vandersypen, L.M.K. (promotor); Sebastiano, F. (copromotor); Delft University of Technology (degree granting institution)","2022","Benchmarking the performance of a quantum computer is of key importance in identifying and reducing the error sources, and therefore in achieving fault-tolerant quantum computation. In the last decade, qubits made of electron spins in silicon emerged as promising candidates for practical quantum computers. To understand their physical properties and the engineering challenges behind them, a complete characterization of coupled spin qubits is in high demand. This dissertation presents extensive studies on performance benchmarking of silicon quantum processors, covering the aspects of quantum logic, quantum measurement, crosstalk and error correlations, and cryogenic quantum control.
The first experiment presented in this dissertation reports the complete characterization of a universal set of quantum gates for silicon spin qubits. As a powerful alternative to conventional Clifford-based single- and two-qubit randomized benchmarking, we introduce a new approach named character randomized benchmarking. We use it to extract a fidelity of 92% for a controlled-Z gate, and show that the crosstalk and error correlations between simultaneous single-qubit gates can be quantified in the same experiment.
The second experiment is focused on the spatial noise correlation between two idling spin qubits. Such correlated noise is critical in quantum error correction. We characterize such correlations by measuring the coherence times of two different two-qubit Bell states with parallel and anti-parallel spins respectively, and find only modest correlations. This is likely due to the existence of uncorrelated nuclear noise arising from ^29Si atoms.
In the third experiment, we observe strong nonlinear effects in electric-dipole spin resonance, which is the key mechanism for implementing single-qubit gates in all works presented in this dissertation. Importantly, this induces a novel crosstalk effect between simultaneously driven qubits. The valley-orbit hybridization is investigated and found to give a phenomenological explanation of such nonlinearity. Further studies in material properties and microwave heating effects are needed to explain the discrepancy.
The fourth experiment is about quantum nondemolition measurement of a spin qubit, which is an essential building block of quantum error correction codes. Helped by an ancilla qubit, we can significantly improve the readout fidelity of the data qubit from ∼75% to ∼95% after 15 repetitive measurements. We experimentally test an improved signal processing method, namely soft decoding, and showcase its advantage when Gaussian noise dominates the readout errors.
In the fifth experiment, we finally break the barrier of 99% for the fidelity of the two-qubit gate for semiconductor spin qubits. Combining isotopically purified silicon, careful Hamiltonian engineering of exchange interactions, and error diagnosis from gate set tomography, we achieve fidelities of all single- and two-qubit gates of over 99.5%, well above the popular surface code error threshold. Powered by the high-fidelity universal gate set, we are able to execute a variational quantum eigensolver routine for computing the dissociation energy of molecular hydrogen with the silicon quantum processor.
The last experiment steers the focus towards the interface between the quantum processor and classical control electronics, known as a major bottleneck in scaling. We propose to solve the problem by utilizing control circuits working at a few Kelvin. A control chip named ""Horse Ridge"" is therefore tested at 3 Kelvin and demonstrated to match the same control fidelities obtained using bulky commercial instruments working at room temperature, at a cost of only a few hundred milliwatts. The functionality of this control chip is further tested by operating universal two-qubit logic as well as executing a two-qubit Deutsch-Jozsa algorithm.
At the end of this dissertation, we propose a few possible next-steps to further explore the error sources in spin qubits and approaches for scalable operations.","quantum dots; spin qubits; quantum computation; quantum benchmarking","en","doctoral thesis","","978-90-8593-527-8","","","","","","","","","QCD/Vandersypen Lab","","",""
"uuid:6fd8a152-ab7a-4ecd-a817-61945d431bef","http://resolver.tudelft.nl/uuid:6fd8a152-ab7a-4ecd-a817-61945d431bef","Influence of a forward-facing step on crossflow instability and transition: An experimental study in a swept wing boundary-layer","Rius Vidales, A.F. (TU Delft Aerodynamics)","Kotsonis, M. (promotor); Scarano, F. (promotor); Delft University of Technology (degree granting institution)","2022","The market growth expected for commercial aviation in the coming decades and the increasing social awareness regarding the effects of global warming are driving significant technological developments necessary for emission reduction in future transport aircraft. From the aerodynamics perspective, a significant increase in aircraft efficiency can be obtained by applying Laminar Flow Control (LFC) techniques. The objective of LFC techniques is to reduce the skin-friction drag component by delaying the laminar-turbulent transition through the stabilisation of boundary-layer instabilities. Relevant to high-subsonic transport aircraft is the development of Crossflow (CF) instability, which manifests as a series of co-rotating vortices inside the boundary-layer flow on swept aerodynamic surfaces...","Swept wings; Boundary-layer transition; Crossflow Instability; Forward-Facing Step; Surface Irregularities","en","doctoral thesis","","978-94-6366-544-5","","","","","","","","","Aerodynamics","","",""
"uuid:0f2ee7c8-70d2-43b2-93e7-26a328ded3a9","http://resolver.tudelft.nl/uuid:0f2ee7c8-70d2-43b2-93e7-26a328ded3a9","Rational approaches to the design of magnetocaloric materials","Batashev, I. (TU Delft RST/Fundamental Aspects of Materials and Energy)","van Dijk, N.H. (promotor); Brück, E.H. (promotor); Delft University of Technology (degree granting institution)","2022","The magnetocaloric effect (MCE) is a thermal response of a magnetic material to a change in an external magnetic field. With the discovery of materials exhibiting a giant magnetocaloric effect in the vicinity of room temperature, several applications of this phenomenon have been proposed. The first is magnetic refrigeration, which can serve as a more eco-friendly alternative to conventional vapour-compression cooling systems. The second is magnetic energy conversion using thermomagnetic motors and generators. It allows waste heat, currently an untapped resource, to be converted into electricity, thereby increasing the energy efficiency of various types of industries. The development of devices for these applications created the need for an optimal material to fit all the practical requirements. To date, only a handful of materials are considered viable for commercial implementation, among which (Mn,Fe)2(P,Si) alloys, Ni-Mn-based Heusler alloys and La(Fe,Si)13 alloys are most prominent.
The goal of this thesis is to identify new promising magnetocaloric materials and improve known material systems using a combination of experimental techniques, ab initio modelling and database screening.
Embedded adaptive functions could be an opportunity to reduce these drawbacks. If embedded adaptivity is to work within a design, the particularities of geometry and material arrangements must be considered. Nature offers fascinating models for this approach, which frames the objectives of this doctoral dissertation. The dissertation examines criteria from both adaptive façades and biology that support a function-oriented transfer of thermo-adaptive principles in the early design stage. The research work discusses whether the technical complexity can be reduced by biomimetic designs and what role geometric design strategies play in thermo-adaptive processes.
The research work is divided into three phases, following the top-down process in the discipline of biomimetics, supplemented by methods from product design and semantic databases. The first phase is dedicated to the analysis of the contextual framework and criteria of façades aiming at thermal adaptation.
Further, transfer systematics are developed that guide the analysis and selection process. In the second phase, suitable analogies in biology are collected. Selected examples are examined to identify and systematically describe their functional principles. Two exemplary descriptions herald the third phase, in which functional façade models are created and evaluated.
The result of this research work provides a conceptual approach for generating function-imitating biomimetic façade designs, so-called physio-mimetic façade designs.","Biomimetics; function-oriented design; adaptive façades; thermal adaptation; energy efficient building skins; interdisciplinary transfer methods","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-506-3","","","","A+BE | Architecture and the Built Environment No 4 (2022)","","","","","Design of Construction","","",""
"uuid:8663c97e-945d-4a30-85c1-a30f269a10bc","http://resolver.tudelft.nl/uuid:8663c97e-945d-4a30-85c1-a30f269a10bc","Getting a grip on stress: Designing smart wearables as partners in stress management","Li, X. (TU Delft Human Information Communication Design)","Jansen, K.M.B. (promotor); Jonker, C.M. (promotor); Rozendaal, M.C. (copromotor); Delft University of Technology (degree granting institution)","2022","This thesis is motivated by the vision of designing smart wearables as partners for veterans with chronic posttraumatic stress disorder (PTSD). Everyday objects are becoming ‘smarter’ with the integration of computational and electronic technologies. It is now possible to start thinking of these objects as ‘intelligent agents’ that can form collaborative relationships to help us with issues that were hitherto impossible. Smart wearables show the potential to be designed as “partners” that are able to continuously monitor bodily and behavioural signals, to involve the human body as part of the interaction, and help the person whenever possible and in ways other products cannot. People with chronic PTSD, who face the challenge of constantly dealing with various everyday stressful situations, provide an interesting case to explore the concept of such partners.","","en","doctoral thesis","","978-94-6421-755-1","","","","","","","","","Human Information Communication Design","","",""
"uuid:4df455e4-3f65-497b-bcd4-fa9baf1351fa","http://resolver.tudelft.nl/uuid:4df455e4-3f65-497b-bcd4-fa9baf1351fa","Dealing with Uncertainty in Infrastructure Public-Private Partnership Projects","Demirel, H.C. (TU Delft Integral Design & Management)","Hertogh, M.J.C.M. (promotor); Leendertse, W.L. (promotor); Volker, L. (promotor); Delft University of Technology (degree granting institution)","2022","This research concerns dealing with uncertainty in infrastructure Public-Private Partnership (PPP) projects. PPPs comprise a network of actors or organizations in mutual relationships regulated by contracts. These contracts arrange a division of tasks and responsibilities between contracting parties, and they allocate risks and uncertainties. Where risk is a calculable event concerning probability and consequence, uncertainty is the unclear future state in which there is no possibility of placing a numerical probability or calculating the possible effect on an impactful event occurring. Uncertainties are inevitable in infrastructure PPPs because of the dynamic environment in which PPPs usually are implemented and the complex structure of their arrangements. Moreover, the long-term nature of the contracts increases exposure to uncertainty over the life-cycle of the project...","","en","doctoral thesis","","978-94-6421-739-1","","","","","","","","","Integral Design & Management","","",""
"uuid:aa5409dc-edb7-44f7-92ed-529e2cb575b3","http://resolver.tudelft.nl/uuid:aa5409dc-edb7-44f7-92ed-529e2cb575b3","Trick and treat: Cell-instructive and bactericidal nanopatterns for bone implants","Modaresifar, K. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Fratila-Apachitei, E.L. (copromotor); Delft University of Technology (degree granting institution)","2022","","nanopatterns; bactericidal; osteogenic; mechanotransduction","en","doctoral thesis","","978-94-6419-502-6","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:55f24f9e-7ce3-4864-89dd-2a4a514a642b","http://resolver.tudelft.nl/uuid:55f24f9e-7ce3-4864-89dd-2a4a514a642b","Electricity Markets for Direct Current Distribution Systems","Piao, L. (TU Delft Energie and Industrie)","De Vries, Laurens (promotor); de Weerdt, M.M. (promotor); Yorke-Smith, N. (copromotor); Delft University of Technology (degree granting institution)","2022","Direct current distribution systems (DCDS) are a promising alternative to alternating current (AC) systems because they remove AC--DC conversion between sources and loads that cause energy losses. Compared to AC systems, a DCDS has higher power capacity, energy efficiency and reliability, and no need for synchronisation---suitable where a large amount of renewable power is generated and consumed locally in DC.
A DCDS has unique features that affect its implementation: low system inertia, strict power limits and power-voltage coupling. Hence, simply applying markets designed for AC cannot guarantee a DCDS's supply security and voltage stability. This dissertation aims to identify DC-tailored local market designs that facilitate a DCDS's operational efficiency and reliability under uncertainty.
To identify promising DCDS market designs from all feasible options, we developed and applied a comprehensive design framework for local electricity markets. It is based on an engineering design process of identifying goals, determining design space, testing and evaluation. Whereas previous studies focused on individual commodities, we widened the scope to include the role of market architecture. Its main element is the choice of sub-markets for energy delivery, the provision of DC-substation capacity, and voltage regulation. For each selected sub-market, we analysed the design options for the general organisation, bid format, allocation and payment, and settlement. Considering the design complexity, we performed three rounds of market design according to the agile development principle: a qualitative assessment, a quantitative analysis without uncertainty, and a quantitative analysis under uncertainty.
In Step 1, we analysed the design options and identified three types of DCDS market designs according to the above framework, each featuring a unique architecture. First, the integrated market (IM) design explicitly links three sub-markets (for energy, substation capacity and voltage regulation) to incorporate all system costs into energy prices. It aims to create price signals that encourage prosumers to resolve congestion and voltage issues, but the challenges are privacy concerns and sophisticated market clearing. Second, the locational energy market (LEM) design relieves congestion with nodal prices, obtained by linking the energy and substation capacity markets, whereas a system operator regulates the voltage. Third, the wholesale energy price (WEP) market design passes such prices directly to local prosumers, whereas the system operator resolves all network issues.
In Step 2, we quantitatively analysed how the market design addresses DC technical characteristics, such as volatile energy prosumption that challenges DC-substations. We built a deterministic optimisation model to evaluate the three market designs, with a one-minute resolution to reflect the local prosumption volatility. Recognising that both total demand and demand flexibility may increase significantly in the future, we included a high share of electric vehicles (EVs) to test the market robustness. Simulations of a realistic urban DCDS demonstrated that the IM and LEM designs manage network congestion and voltage deviation even with a large share of EVs. We found that the main challenge to distribution-level market design is network congestion, mainly due to flexible prosumption at low-price hours. Voltage deviation and cable power capacity are not limiting factors of an urban DCDS market design. However, simply passing wholesale prices to local prosumers (as in the WEP design) is discouraged, as it may cause severe congestion and substantial flexibility investments.
In Step 3, we demonstrated the economic efficiency and reliability of the LEM design also under uncertainty. The performance of a local energy market is dominated by the uncertainty from stochastic local power prosumption, fluctuating wholesale energy prices, and unforeseen EV availability. We presented a novel agent-based model to evaluate the LEM design's performance in realistic scenarios. This model describes typical electric-vehicle user preferences and their bidding strategies with different levels of range anxiety. To stress-test the LEM design, we created challenging scenarios with a high share of solar generation and EVs. In simulations based on the high-resolution 2018 Pecan Street database and the IEEE European Low Voltage Distribution Test Feeder, it performed efficiently and reliably, even with a high share of EVs. We demonstrated that regardless of the bidding strategy, the LEM achieves efficient DCDS operation, as long as the network constraints are not too tight. Hence, we conclude that the simple LEM design, with only price-quantity bids and DC-substation capacity constraints, is the best feasible option among the three designs.
Although both DCDS technologies and the concept of local energy markets are still under development, we presented viable market solutions based on the best practices in the emerging DC technology, thereby clearing its market-side implementation barrier. The most economically efficient yet technically feasible market design, at least in urban DCDS applications, is the LEM design. It supports fast market clearing and real-time control over flexible devices to resolve DC substation congestion. The other market designs, namely the IM and WEP, were shown to have practical limitations.
In the future, we recommend testing, improving and verifying the LEM design in field tests with real prosumers and various flexibility sources. This dissertation made assumptions and simplifications on both the technical system and the market operation, thereby leaving room for further development. First, the optimisation model and the agent-based model could be improved to enable more realistic market simulations. Second, a simple, user-friendly yet efficient agent module should be developed to enable high-frequency energy transactions in a DCDS. Third, follow-up research should estimate the impact of additional price components (transmission and distribution system costs, national taxes and levies) on prosumers' bidding and investment incentives. Fourth, we should also evaluate the influence of prosumer values, including privacy, energy equality and energy self-sufficiency, on the local energy market design.","electricity market; direct current; distribution system; agent-based model; flexibility","en","doctoral thesis","","978-94-6384-341-6","","","","","","","","","Energie and Industrie","","",""
"uuid:d0b7e1e5-7836-4914-993d-ff83e446a43f","http://resolver.tudelft.nl/uuid:d0b7e1e5-7836-4914-993d-ff83e446a43f","Synthetic Cell Aspirations: A Toolbox for Building a Membrane Container from the Bottom Up","van Buren, L. (TU Delft BN/Gijsje Koenderink Lab)","Koenderink, G.H. (promotor); Aubin-Tam, M.E. (promotor); Delft University of Technology (degree granting institution)","2022","Everything that we consider alive, be it plants, dogs, bacteria or humans, is composed of the same microscopic building blocks: cells. While cells between and even within these organisms can look and behave very differently, they all share the same key functionalities: they can grow, they can divide to proliferate, they can eat and metabolise to fuel internal processes, and they carry a genetic blueprint which they can process to knowwhat to do and how to do it. Since the first notion that cells formthe universal building blocks of life [1], biologists have been intrigued by the fact that across the wide diversity of living organisms, all these constituent cells share the same fundamental characteristics. In the process of understanding how cells are capable of executing the key functionalities of growth, division, metabolism and information processing, biologists have identified the set of molecular components that constitute cellular life, and have broadly related components to specific cell functions. Despite the growing knowledge on what there is inside the cell, a crucial question remains largely unanswered: how do the molecular components of a cell give rise to these life-giving processes? Answering that question that is easier said than done. Cells consist of thousands of different components that are in continuous interaction, can take over each other’s function, or have multiple functions. Extracting a mechanistic understanding of how these components actually work together to make the cell alive, is almost impossible in such a complex soup. 
Even if we can list the minimal set of components that are vital to cellular life, we gain little understanding of how these ingredients function to make non-living matter alive. Instead, for this purpose it might be more insightful to try to rebuild the cell with a minimal set of components: to build a synthetic cell. In such a bottom-up reconstitution approach, cell components and their interactions can be studied in a well-controlled chemical environment. Starting with a small number of components, complexity can be increased step by step while maintaining a fundamental understanding throughout the construction process. To achieve this goal of building a minimal cell, multiple research initiatives have been founded worldwide. As a unique interdisciplinary effort, synthetic cell research combines the expertise required to understand, rebuild and integrate all vital cellular functions in vitro...","Bottom-up reconstitution; giant unilamellar vesicles; synthetic cell; encapsulation; membrane mechanics; membrane fusion; image analysis","en","doctoral thesis","","978-94-6366-529-2","","","","","","","","","BN/Gijsje Koenderink Lab","","",""
"uuid:469e35d4-7aa8-4301-94a0-17baceb3af1c","http://resolver.tudelft.nl/uuid:469e35d4-7aa8-4301-94a0-17baceb3af1c","On the Modelling and Characterisation of Photoconducting Antennas","Fiorellini Bernardis, A. (TU Delft Tera-Hertz Sensing)","Llombart, Nuria (promotor); Neto, A. (promotor); Delft University of Technology (degree granting institution)","2022","In recent years, the interest in terahertz technology witnessed a constant increasing interest and attention from the scientific community, as it allows to unlock the THz bandwidth, that can be used for a wide range of applications: it helps the study of biological samples and help medical diagnostics; it serves as a harmless non-destructive testing technique for medical and pharmaceutical products; it is used to inspect food and other packaging for quality control in a non-invasive fashion; it is adopted in body scanner systems for security screenings of people...","Photoconductive Antennas; Photoconductive Sources; Lens Antennas; Terahertz; time-domain sensing; time-domain spectroscopy; advanced antenna architectures","en","doctoral thesis","","","","","","","","","","","Tera-Hertz Sensing","","",""
"uuid:7bde9932-e775-431d-8dc0-34623b9eb5fc","http://resolver.tudelft.nl/uuid:7bde9932-e775-431d-8dc0-34623b9eb5fc","Andreev bound states in potentially topological setups","Barakov, H.S. (TU Delft QN/Nazarov Group)","Nazarov, Y.V. (promotor); Blanter, Y.M. (promotor); Delft University of Technology (degree granting institution)","2022","","Superconductivity; Andreev bound states; nanostructures; Random Matrix Theory; Quantum Circuit Theory","en","doctoral thesis","","978-90-8593-523-0","","","","","","","","","QN/Nazarov Group","","",""
"uuid:bd748c70-2cda-4094-945f-0f0577700367","http://resolver.tudelft.nl/uuid:bd748c70-2cda-4094-945f-0f0577700367","Engineering biotin synthesis; towards vitamin independency of Saccharomyces cerevisiae","Wronska, A.K. (TU Delft BT/Industriele Microbiologie)","Daran, J.G. (promotor); Pronk, J.T. (promotor); Delft University of Technology (degree granting institution)","2022","Every century brings its own challenges, but the 21st century is the first in which a global transition towards circularity is required to ensure human existence on this planet. Exhaustion of planetary resources, such as oil and rare elements, must be prevented and sustainable circular value chains introduced into our industry and economy. In addition to new challenges, every century also brings new and unique solutions. Today, biotechnology may provide some of the most relevant solutions by providing scientists with the ability to decipher the code of life represented by an organism’s DNA as well as with the tools to edit this code. Especially fast-reproducing microorganisms have a great potential to serve as cell factories, which can convert renewable raw materials into chemicals, materials and food ingredients and thereby support a circular bio-based economy. Recently developed biotechnological tools enable us to rewrite (‘edit’) the blueprint for these microbial cell factories with unprecedented precisions and at unprecedented rates. A myriad of life forms evolved over billions of years to adapt to an incredibly diverse number of habitats on our planet, which led to an immense diversity in survival strategies and metabolic capabilities. Recombining these naturally occurring DNA codes and ‘novel-to-nature’ DNA sequences generated in laboratories offers unique possibilities for development of novel cell factories to address challenges in our century and beyond. 
Baker’s yeast, Saccharomyces cerevisiae, is one of the most intensively studied microorganisms and, as a cell factory, has a long history of successful industrial application. Its story of success began thousands of years ago when processes for the production of wine, beer and bread were first invented and, over many centuries, improved. Application of yeasts probably started as a serendipitous discovery rather than as an invention, when yeast cells from the environment ‘contaminated’ sugar-containing food products and, by accident, turned sugars into ethanol and carbon dioxide, thus yielding the first alcoholic beverages and rising dough. All essential nutrients that yeast require for growth and fermentation were either present in the food product or generated by other microorganisms that inadvertently entered these early fermentation processes. Such a co-existence of multiple microbial species is a natural phenomenon that helps organisms thrive, but in man-made industrial settings such undefined mixed populations are often difficult to control and optimize. When researchers discovered that pure cultures of individual yeast strains were very efficient in producing transport fuels and other interesting chemicals, they therefore developed growth media that contained all essential and non-essential nutrients required for optimal yeast growth, to make these yeast cell factories as productive as possible. For over a century now, yeast cell factories have been under continual development. Classical strain improvement strategies to obtain high-producing strains, later combined with recombinant-DNA technology (genetic engineering), brought microbial production systems to the next level and helped pave the way towards a sustainable bio-based industry.
However, while studying and developing product pathways for yeast strains employed in these processes, the specific requirements of these hosts regarding essential nutrients (vitamins) did not always receive attention. Use of generic media, containing excess amounts of vitamins to ensure high productivity, increases overall production costs, complicates downstream processing and increases contamination risks. The research described in this thesis explores genetic engineering strategies in which heterologous DNA sequences are introduced to improve vitamin synthesis under industrially relevant conditions, with the goal of enabling the development of fully vitamin-independent (prototrophic) S. cerevisiae strains. The research focusses on a number of compounds that are routinely added to synthetic media for cultivation of S. cerevisiae and that, based on their role in human nutrition, are referred to as B-type vitamins. A special focus was placed on one of the more expensive B vitamins, biotin. The pathway by which some S. cerevisiae strains synthesize biotin is still not completely resolved. By a combination of laboratory evolution, genome analysis and genetic engineering, different strategies were designed and tested to obtain biotin-prototrophic and fully vitamin-independent S. cerevisiae strains…","","en","doctoral thesis","","978-94-6423-708-5","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:cc7874b6-62b8-4539-9cf6-d63c7bf9dce0","http://resolver.tudelft.nl/uuid:cc7874b6-62b8-4539-9cf6-d63c7bf9dce0","Coordination Strategies for Reducing Price Volatility in Local Electricity Markets","Chakraborty, S.T. (TU Delft Energie and Industrie)","Lukszo, Z. (promotor); Verzijlbergh, R.A. (copromotor); Delft University of Technology (degree granting institution)","2022","With increasing renewable energy and cross-sectoral electrification price volatility is increasing. Flexibility through demand-side management and electric storage has the potential for reducing price volatility. In this thesis, using duality theory the flexibility required for constraining price to a maximum limit is quantified.
Coordination Strategies for Reducing Price Volatility in Local Electricity Markets investigates three case studies that vary with respect to the type and degree of flexible resource aggregation for constraining prices. The insights generated are relevant for regulators, aggregators, energy communities, and scholars focusing on the engineering and economics of local energy systems.","coordination mechanisms; Price volatility; Duality theory; Flexibility; Local electricity markets","en","doctoral thesis","","978-94-6384-331-7","","","","","","","","","Energie and Industrie","","",""
"uuid:179c7c74-7a84-4622-9c7f-4bd56b32eb91","http://resolver.tudelft.nl/uuid:179c7c74-7a84-4622-9c7f-4bd56b32eb91","Assessing Reference Dependence in Travel Choice Behaviour","Huang, B. (TU Delft Transport and Logistics)","Chorus, C.G. (promotor); van Cranenburgh, S. (copromotor); Delft University of Technology (degree granting institution)","2022","People make all kinds of choices every day, such as driving to work rather than taking public transport. Many of these choices have a direct impact on demand for products, services or public infrastructures. Understanding people’s choice behaviour can not only infer people’s preferences for certain products or services but more importantly make future demand forecasts. Over the last fifty years, there has been a steadily growing interest in applying a quantitative statistical method, discrete choice modelling, to study individual and household choice behaviour. Discrete choice models provide a theoretically robust and tractable tool for modelling and analysing various choices across many fields such as transport, health, and marketing...","","en","doctoral thesis","","978-90-5584-309-1","","","","","","2023-05-19","","","Transport and Logistics","","",""
"uuid:d09dedd5-de2b-4da9-b17f-5a61bc439805","http://resolver.tudelft.nl/uuid:d09dedd5-de2b-4da9-b17f-5a61bc439805","Scaling spin qubits in quantum dots more - distant - industrial","Zwerver, A.M.J. (TU Delft QCD/Vandersypen Lab)","Vandersypen, L.M.K. (promotor); Veldhorst, M. (copromotor); Delft University of Technology (degree granting institution)","2022","The discovery of the counter-intuitive laws of quantum mechanics at the beginning of the 20th century revolutionized physics. Quantum-mechanical properties, such as superposition and entanglement, can be harnessed to create quantum technology that opens a computing power far beyond the computing power that we know today. A quantum computer would enable efficient simulations of chemical reactions and material properties, which is expected to greatly impact healthcare and the energy transition. Practical quantum computation requires millions of qubits, either with neighbour-to-neighbour connectivity, or connected via quantum links. Spin qubits in electrically-defined silicon quantum dots are promising qubit candidates due to their small footprint and relatively long coherence time. The last decade meant a leap for the understanding and control of spin qubit systems with devices up to three quantum dots. Yet building systems capable of performing useful quantum calculations has proven difficult due to low sample yield, as well as challenges in controlling and scaling these systems. In this thesis, we explore quantum-dot-based spin qubits and their suitability for scaling to larger systems. This quest was threefold and can be summarized as: More, Distant, Industrial.
- More: Increasing the number of quantum dots, and thus qubits, beyond three proved challenging, among other reasons due to the cross-capacitance imposed on quantum dots by the metallic gate electrodes of their neighbours. Here, we develop a material platform-independent method to individually control the chemical potential of each quantum dot and the number of electrons in it without affecting the quantum dots in their vicinity. We demonstrate the method by tuning up a linear array of eight GaAs quantum dots, containing exactly one electron each.
- Distant: Thereafter, we shift our focus to creating quantum links between distant quantum dots by shuttling electron spins across a chip. Given the superior spin coherence times, we moved to silicon quantum dots, which were not as far developed at the time. To improve our understanding of the material and allow for the fabrication of silicon arrays beyond two quantum dots, we formulate metrics for sample comparison across material platforms and gate geometries, allowing us to examine samples and detect disorder and flaws to improve (uniform) sample fabrication. This enables the fabrication of a sample that can host an array of up to five quantum dots and tune it with the method described above. To mimic a quantum link, we shuttle an electron back and forth through four quantum dots of the array up to 1000 times, corresponding to a total distance travelled of approximately 80 μm. We observe that the spin orientation was preserved, forming a promising basis for a quantum link.
- Industrial: Thirdly, in collaboration with Intel, we harness the experience of the semiconductor industry by industrially manufacturing quantum chips and controlling a qubit on these chips. By means of the metrics that we defined, we demonstrate that industrial manufacturing on 300-mm wafers allows for high yield and reasonable cross-wafer uniformity of the samples, while allowing for well-defined quantum dots and qubits with a performance that is comparable to state-of-the-art spin-qubit results. This high-yield fabrication without compromising qubit properties is crucial for scaling to the thousands of qubits that we need for practical quantum computation. The results in this dissertation provide perspective for scaling up silicon quantum dots and position the silicon spin qubit as a primary candidate for achieving quantum advantage with large-scale devices with millions of qubits.
This dissertation explores under what conditions the integration of so-called “negative emission technologies”, such as bioenergy with carbon capture and storage (bioCCS), could allow industries to achieve or exceed carbon neutrality within the system of production, rather than needing compensation elsewhere in society. To do so, this dissertation first defines the criteria necessary for negative emission technologies to result in a net reduction in atmospheric greenhouse gases, then provides an overview of existing research on bioCCS-in-industry and identifies the main trends in why bioCCS may be useful in specific sectors, and then investigates specific configurations of negative emission technologies in industry, including bioCCS in the steel, cement, and chemical sectors, as well as the potential of natural and accelerated mineralization in concrete production. The primary methodological focus of this thesis is the comparative modeling of possible technological configurations with life cycle accounting of carbon dioxide and other greenhouse gas emissions; the work also includes the review and synthesis of existing literature, as well as a technoeconomic case study.
Negative emission technologies such as bioCCS may be particularly useful in decarbonizing sectors where a substantial amount of carbon dioxide is unavoidably produced during industrial production, such as via the calcination of limestone in cement or the fermentation of ethanol; where the process is already biogenic, such as for paper and bioethanol; where it can be retrofitted into existing infrastructure that cannot be quickly replaced, such as for steel and cement; or where the product itself emits carbon dioxide in a difficult-to-capture way, such as in ethanol or urea production. However, using bioCCS to allow for “carbon neutral” or “carbon negative” production is non-trivial, as it requires ensuring that the greenhouse gas emissions in the supply chains of biomass production and logistics; industrial feedstocks, production and use; and carbon capture, transport, and permanent storage do not exceed the amount of carbon dioxide that is removed from the atmosphere and permanently stored, all of which can be obscured by overly narrow system boundary choices. Other issues of industrial negative emission technologies discussed in this thesis include asynchrony of carbon emissions and removals; the role of non-CO₂ greenhouse gases; the carbon and resource intensity of the technologies; and mismatches in the system boundaries used for life cycle assessment and cost assessment of industrial negative emission technologies.","","en","doctoral thesis","","","","","","","","","","","Energie and Industrie","","",""
"uuid:7596fcac-cb22-40b2-8351-ca1138272445","http://resolver.tudelft.nl/uuid:7596fcac-cb22-40b2-8351-ca1138272445","The Development of Technology Cluster Innovation Performance: Health and Sustainable Energy","Stek, P.E. (TU Delft Economics of Technology and Innovation)","van Geenhuizen, M.S. (promotor); van Beers, Cees (promotor); Delft University of Technology (degree granting institution)","2022","Motivation. The sustainability technology sectors, encompassing health and sustainable energy technology, play a critically important role in addressing global challenges such as climate change and ageing populations, which require a transition to a low or zero-carbon energy system, and sustainable and affordable healthcare. While these problems cannot be resolved by technological solutions alone, technology plays an important part in addressing them. Innovation, climate change, public health and the need for sustainable industrialization and economic growth are also part of the Sustainable Development Goals of the United Nations, further highlighting their global importance. Another motivation for the study, specifically from a European perspective, is a concern over the long-term economic competitiveness of Europe, which appears to be lagging behind the United States and certain Asian countries. This concern is a driver of the European Union’s current science and technology policy, including its multi-billion euro Horizon Europe initiative and Smart Specialization strategies for European regions. Research Question and Knowledge Gaps. The main research question addressed in this dissertation is: How are the dynamic spatial distribution and innovation performance patterns of sustainability technology clusters influenced by cluster characteristics, such as agglomeration and knowledge networks, and sectoral differences? 
Although there is an extensive literature on evolutionary economic geography, innovation systems, and global innovation diffusion, these theories often lack specificity with regard to particular technology sectors. Relatively little is known about the spatial distribution, cluster characteristics, and cluster innovation performance in the sustainability technology sectors. There are three main knowledge gaps: (i) the global spatial distribution and knowledge networks of technology clusters and their changes over time, (ii) the association between cluster innovation performance and various cluster characteristics, and (iii) the extent to which the aforementioned factors are influenced by socio-technological transitions and other sectoral differences. These knowledge gaps are addressed with a novel empirical approach to cluster identification, the measurement of cluster characteristics and the modeling of innovation performance…","innovation; Clusters; Health; Sustainable Energy; Patents; Bibliometric Analysis; Spatial Analysis","en","doctoral thesis","","978-94-6384-330-0","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:320d7140-4fad-4ffa-bc9a-50fcc344ed0f","http://resolver.tudelft.nl/uuid:320d7140-4fad-4ffa-bc9a-50fcc344ed0f","Investigation and engineering of respiratory energy coupling in yeasts","Jürgens, H. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Mans, R. (copromotor); Delft University of Technology (degree granting institution)","2022","Microorganisms, enhanced by genetic engineering, are important cell factories for conversion of renewable feedstocks into fuels, chemicals, biomaterials, nutraceuticals and drugs. Microbial synthesis of these products from chemically simple carbon substrates often requires a net input of free energy in the form of ATP, which is typically provided by respiration in aerobic bioprocesses. In such processes, oxidation of a fraction of the substrate with oxygen releases carbon dioxide and water as final products and provides the ATP that is needed for product formation as well as for maintenance requirements and cellular growth...","","en","doctoral thesis","","978-94-6421-746-9","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:94b0cdd5-280b-4afb-a210-f19ecf12cf66","http://resolver.tudelft.nl/uuid:94b0cdd5-280b-4afb-a210-f19ecf12cf66","Bayesian deep learning for system identification","Zhou, H. (TU Delft Robot Dynamics)","Wisse, M. (promotor); Pan, W. (copromotor); Delft University of Technology (degree granting institution)","2022","Applying deep neural networks (DNNs) for system identification (SYSID) has attracted more andmore attention in recent years. The DNNs, which have universal approximation capabilities for any measurable function, have been successfully implemented in SYSID tasks with typical network structures, e.g., feed-forward neural networks and recurrent neural networks (RNNs). However, DNNs also have limitations. First, DNNs can easily overfit the training data due to the model complexity. Second, DNNs are normally regarded as black-box models, which lack interpretability and cannot be used for white-box modelling. In this thesis, we develop sparse Bayesian deep learning (SBDL) algorithms that can address these limitations in an effectivemanner.","Deep learning; System identification; Hessian calculation; Sparse Bayesian learning; Symbolic regression; Neural architecture search; Network compression","en","doctoral thesis","","978-94-6384-329-4","","","","","","","","","Robot Dynamics","","",""
"uuid:26c9e59f-5d64-40a4-adb9-cabe2c272bcb","http://resolver.tudelft.nl/uuid:26c9e59f-5d64-40a4-adb9-cabe2c272bcb","Quantum Properties in Hybrid Nanowire Devices","Xu, D. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Wimmer, M.T. (promotor); Delft University of Technology (degree granting institution)","2022","Quantum computing is a flourishing field of scientific and technological research. The development of quantum computing in the past decades is the so-called second quantum revolution, where various aspects of quantum physics, such as entanglement and superposition, are used to form the main building block of computation — the quantum bit, or qubit. With quantum technologies rapidly growing, people have been paying attention to a few promising implementation schemes for quantum computing, including schemes based on topological protection.
Majorana bound states (MBSs) are predicted to be non-Abelian anyons that enable topological quantum computing, which uses the topological phase of matter to protect quantum information against noise from the environment. The search for MBSs has drawn enormous interest in the condensed matter physics community, where hybrid semiconductor-superconductor nanowire systems are currently the most promising candidates. Great advances have been achieved in this field over the last decade, due to efforts ranging from material growth to transport experiments and theoretical understanding.
When a hybrid nanowire undergoes a transition to a topologically nontrivial phase, two MBSs appear at the ends of the hybrid region. As a result, zero-bias peaks (ZBPs) should appear in tunneling spectroscopy performed on normal-conductor - semiconductor-nanowire - superconductor (N-nanowire-S) junctions. In this thesis, we first demonstrate large ZBPs, with heights on the order of 2e2/h, in vapor-liquid-solid (VLS) InSb nanowires with epitaxial Al. Besides the original Majorana interpretation of these ZBPs, we discuss alternative explanations such as quasi-Majoranas due to a smooth potential and random disorder. (Chapter 4)
In the same system, we then demonstrate that the induced superconducting gap, the effective Landé g-factor and the spin-orbit coupling strength can be tuned by the electrostatic environment, i.e. applied gate voltage. The change of these quantities is dominated by the coupling between the semiconductor and the superconductor. (Chapter 5)
The remaining part of the thesis focuses on selective area growth (SAG) InSb nanowires with epitaxial Al. Quantum transport results on these nanowires show high-quality phase coherence, a hard superconducting gap and 2e-periodic Coulomb-blockaded transport. We then study the properties of induced superconductivity as well as phase coherence in SAG nanowire networks. We establish a fitting model to extract the phase coherence length based on the temperature dependence of the Aharonov-Bohm (AB) effect.
The SAG platform will allow scalable experiments for more complicated quantum transport, paving the way towards Majorana braiding. (Chapters 6-8)","Majorana; hybrid device; semiconductor nanowire; InSb; superconductivity; Andreev bound state; selective area growth; Aharonov-Bohm effect","en","doctoral thesis","","978-90-8593-519-3","","","","","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:2af85b15-9208-47bc-851d-9313d65cefcb","http://resolver.tudelft.nl/uuid:2af85b15-9208-47bc-851d-9313d65cefcb","Reliability updating for slope stability: Improving dike safety assessments using performance information","van der Krogt, M.G. (TU Delft Hydraulic Structures and Flood Risk)","Kok, M. (promotor); Schweckendiek, T. (copromotor); Delft University of Technology (degree granting institution)","2022","Dikes are crucial for the protection against floods. One of the ways in which dikes can fail is by the instability of the inner slope. Credible probabilities of failure for slope stability are essential for the safety assessment of existing dikes and the design of dike reinforcements. This dissertation focuses on improving failure probability estimates for slope stability by using observed behaviour and performance of dikes. Examples of performance information are survived loads such as flood water levels or proof loads, and measurements during such loading conditions.
The research uses Bayesian analysis to account for one or more performance observations or measurements in estimating failure probabilities. Using multiple case studies, this research identifies the observations and success factors leading to significantly lower failure probabilities. Furthermore, Bayesian decision analysis was used to consider the cost-effectiveness (Value of Information) of performance information, to determine which strategy of dike reinforcement and/or uncertainty reduction leads to the lowest overall cost to comply with a given safety level.
It is found that incorporating multiple observations of the behaviour and performance of dikes improves estimates of the failure probability for slope stability, leading to better safety
assessments and more efficient design of dike reinforcements. The cases considered in this dissertation suggest that savings of several million euros per kilometre of dike reinforcement are possible (10-35% compared to current dike reinforcement costs) in the Dutch situation, where the costs of dike reinforcement are typically high relative to the costs and risks of obtaining performance information. The use of performance information therefore contributes to improving the efficiency of flood risk management in the Netherlands, and in particular the dike reinforcements in the Dutch Flood Protection Programme.","dikes; levees; slope stability; Bayesian reliability updating; performance information; site-specific transformation models; transformation uncertainty; proof loading; Bayesian decision analysis","en","doctoral thesis","","978-94-6366-531-5","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:5073b4ad-86ed-4cd9-b1dd-66f4bc5588e5","http://resolver.tudelft.nl/uuid:5073b4ad-86ed-4cd9-b1dd-66f4bc5588e5","Analysis of nitric oxide emissions from anode baking furnace through numerical modeling","Nakate, P.A. (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); Lahaye, D.J.P. (copromotor); Delft University of Technology (degree granting institution)","2022","Industrial emissions are discussed worldwide for their adverse effects on the global climate change. Nitric oxide (NOx) emissions are harmful among the various flue gases emitted in the atmosphere. There are multiple routes of formation of NOx in industry. The formation of NOx due to local overheating in fuel lean conditions is the most common way. The heavy industries such as aluminium consists of various intermediate processes. Anode baking is one such important process. The anode baking process comprises of a combustion chamber in which natural gas is fired for generating heat. The temperature in the furnace due to the combustion process is as high as 2000oC. This leads to formation of NOx in the anode baking furnace. The Aluminium and Chemie B.V. (Aluchemie) operates industrial anode baking furnaces situated in Rotterdam, The Netherlands. The problems of NOx formation in anode baking furnace of Aluchemie are discussed in this thesis.","Anode baking furnace; CFD; Combustion; Radiation; NOx analysis; COMSOL Multiphysics software","en","doctoral thesis","","978-94-6384-334-8","","","","","","","","","Numerical Analysis","","",""
"uuid:6c2cb521-1922-4772-9d95-c559cc59d8b6","http://resolver.tudelft.nl/uuid:6c2cb521-1922-4772-9d95-c559cc59d8b6","The Future of Ports in the Physical Internet","Fahim, P.B.M. (TU Delft Transport and Logistics)","Tavasszy, Lorant (promotor); Rezaei, J. (copromotor); Delft University of Technology (degree granting institution)","2022","Freight transport and logistics (FTL) produce around 15% of the world’s GDP and account for approximately 10% of finished product costs on average. However, through its contribution to the carbon footprint and traffic congestion, today’s FTL operations are often considered to be non-sustainable from an economic, environmental, and societal perspective. Transportation marks its presence with over 30% of the global carbon emissions. Additionally, as demonstrated by regular disruptions and the resulting shock-effects on international trade and manufacturing, the global FTL system suffers from vulnerabilities and lack of resilience.
In addition to being critical components in the FTL system, maritime ports function as facilitators of international trade, through which they contribute to the economic development of countries and regions. Over centuries, maritime ports have evolved from simple gateways between land and sea into highly complex systems with a large and diverse number of stakeholders involved and various types of services offered. This has caused maritime ports not only to function as (transshipment) hubs in FTL networks, but also as locations where industrial and value-added services take place. In this way, ports can be considered dynamic organic systems within both national socio-economic-political and globalized economic systems, which need to continuously adapt to an external environment of changing economic and trading patterns, new technologies, legislation, and port governance systems.
An innovation that is expected to impact the current economic and trading patterns, technologies, legislation, and governance systems, is the Physical Internet (PI). The PI is an all-encompassing vision for a future FTL system that transforms “the way physical objects are moved, stored, realized, supplied and used across the world”, aiming towards greater economic, environmental, and societal efficiency and sustainability. By analogy with the digital internet (DI), physical shipments are encapsulated into multi-level modular containers and sent through an open hyperconnected network of logistics networks to their final destinations. The PI is defined as “a hyperconnected global logistics system enabling seamless open asset sharing and flow consolidation through standardized encapsulation, modularization, protocols and interfaces to improve the efficiency and sustainability of serving humanity’s demand for physical objects”...","","en","doctoral thesis","","978-90-5584-310-7","","","","","","","","","Transport and Logistics","","",""
"uuid:c721fe29-c3f8-4933-b2e9-fcfbb8412f88","http://resolver.tudelft.nl/uuid:c721fe29-c3f8-4933-b2e9-fcfbb8412f88","Understanding the decision-making process in homeowner energy retrofits: From behavioural and transaction cost perspectives","Ebrahimigharehbaghi, S. (TU Delft Design & Construction Management)","Visscher, H.J. (promotor); Qian, QK (promotor); de Vries, G. (copromotor); Delft University of Technology (degree granting institution)","2022","In 2020, owner-occupied housing accounted for 57% of the housing stock in the Netherlands. Homeowners are fully responsible for the implementation of energy retrofits. Moreover, the processes of energy retrofitting are complex and homeowners face problems such as finding financial support, reliable information and contractors. The complexity of implementing energy retrofits may discourage homeowners from continuing the process and achieving the expected benefits. Behavioural aspects and transaction costs (TC) are among the most important factors influencing consumer decision-making processes. Behavioural factors primarily illustrate a range of personal, contextual, and external factors that influence the decision-making process of homeowners. These include cognitive awareness and biases, attitudes and beliefs, experience and skills, homeowner characteristics, sociodemographic characteristics, property characteristics, and the behaviour of others. TC are any hidden costs that influence decision making but are not included in the direct physical costs of renovation services and products. This dissertation developed an integrated framework of behavioural factors and TC that impede the decision-making
process for energy retrofits. Key findings include (1) the significant importance of
behavioural factors and TC barriers; (2) behavioural factors are particularly important in the early stages of energy retrofits, while TC barriers matter most after the final decision; (3) the importance of behavioural factors and TC barriers differs according to the type of energy retrofit and non-energy retrofit; and (4) accounting for cognitive biases significantly improves the prediction of households' actual decisions about energy retrofits, yielding a model that is more accurate than one assuming households make rational decisions.","","en","doctoral thesis","","978-94-6366-532-2","","","","","","","","","Design & Construction Management","","",""
"uuid:558ae171-e2f4-4a37-8993-2b4a6a9539dd","http://resolver.tudelft.nl/uuid:558ae171-e2f4-4a37-8993-2b4a6a9539dd","Urban Scenes of a Port City: Exploring Beautiful İzmir through Narratives of Cosmopolitan Practices","Tanis, F. (TU Delft Situated Architecture)","Havik, K.M. (promotor); van Bergeijk, H.D. (promotor); Delft University of Technology (degree granting institution)","2022","This dissertation is an invitation to the reader to explore Güzel İzmir / Beautiful İzmir in Turkey. Through three different semi-fictional narratives, it aims to draw attention to specific and singular spaces as they were recorded and remembered through old postcards, black and white photographs, stories, and written travelogues in the past centuries and decades. Thus, it wants to discuss the specificity of an eastern Mediterranean port city by addressing it on eye-level through the experiences of a wanderer. By acknowledging the important role of narratives in building an image of the city, this doctoral research proposes that developing a particular narrative writing method may help to re-establish emotional connections between present-day inhabitants of port cities and their environments. It offers an alternative way of writing and an unconventional reading of the urban and architectural history of İzmir to revive socio-spatial practices by writing narratives of Beautiful İzmir.","port city architectures; İzmir; Smyrna; Ottoman Empire; Eastern Mediterranean; Narrative approach; Spatial Imagination; port city culture; Port Heritage; Port-cities; Levantine; Levantine Heritage","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-542-1","","","","","","2022-05-09","","","Situated Architecture","","",""
"uuid:d6f35adf-486e-453a-9ae9-679a81105bed","http://resolver.tudelft.nl/uuid:d6f35adf-486e-453a-9ae9-679a81105bed","High-Mobility TCO-Based Contacting Schemes for c-Si Solar Cells","Han, C. (TU Delft Photovoltaic Materials and Devices)","Zeman, M. (promotor); Zhang, Xiaodan (promotor); Isabella, O. (promotor); Delft University of Technology (degree granting institution)","2022","In the efficiency-driven photovoltaic (PV) industry, the market dominating crystalline silicon (c-Si) technology has been developing towards PV devices with carrier-selective passivating contacts (CSPCs). Especially, the silicon heterojunction (SHJ) solar cell, based on hydrogenated amorphous silicon (a-Si:H) contact stacks, and the poly-Si solar cell, based on ultrathin SiOx/poly-Si passivating contacts, pave the way for power conversion efficiencies above 26%, approaching the theoretical limit of the c-Si solar cell. In case of front/back-contacted (FBC) architectures, to minimize the optical parasitic absorption at the emitter and/or surface field side(s), thin doped silicon layers are normally applied, which exhibit high sheet resistance. Accordingly, transparent conductive oxide (TCO) layers are required to ensure sufficient lateral carrier transport towards the metal electrodes. However, problems still exist in contacting schemes for high-efficiency solar cell design towards future multi-terawatt production of PV modules, regarding the development of TCO layer with high carrier mobility (μ), its integration into specific device structures, and more importantly, the material availability. In this work, we present three types of TCO materials. They are tin-, fluorine- and tungsten-doped indium oxide layers, namely, ITO, IFO, and IWO. RF magnetron sputtering approach has been utilized to deposit the films. The TCOs are integrated into both low thermal-budget SHJ and high thermal-budget poly-Si solar cells. 
Further, to address the sustainability implications related to indium consumption, we propose a strategy of bifacial SHJ solar cells with reduced TCO use. Meanwhile, to reduce silver (Ag) consumption, as well as to reach good solar cell performance in our laboratory, we have developed a platform for a bifacial copper (Cu)-plating metallization approach. Specific results are summarized as follows...","transparent conductive oxide (TCO); bifacial copper-plating; indium use reduction; c-Si solar cells","en","doctoral thesis","","978-94-6421-734-6","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:8021b680-5186-4beb-becc-f8799763ee21","http://resolver.tudelft.nl/uuid:8021b680-5186-4beb-becc-f8799763ee21","Towards Ubiquitous and Efficient LoRaWAN: MAC-Layer Protocols and APP-Layer Coding Mechanisms for Scalable and Energy-Efficient Long-Range Wide-Area Networks (LoRaWAN)","Kouvelas, N. (TU Delft Embedded Systems)","Langendoen, K.G. (promotor); Venkatesha Prasad, Ranga Rao (promotor); Delft University of Technology (degree granting institution)","2022","","LoRaWAN; Channel Sensing; Scalability; Energy Efficiency; Coding; MAC Layer; Application Layer","en","doctoral thesis","","978-94-6421-737-7","","","","","","2024-05-06","","","Embedded Systems","","",""
"uuid:0c6629a3-6bed-4469-8f35-ee0863ab37c3","http://resolver.tudelft.nl/uuid:0c6629a3-6bed-4469-8f35-ee0863ab37c3","Improving the performance of hospitals: An architectural analysis of patient journeys in China","Peng, D. (TU Delft History, Form & Aesthetics)","Wagenaar, C. (promotor); Luscuere, P (promotor); Zhang, C. (promotor); Delft University of Technology (degree granting institution)","2022","Nowadays, we are faced with serious challenges in public health worldwide. However, the challenges cannot be solved only in the domain of architecture, medical science or management. A successful hospital building is more than a nice building or an efficient healing machine. Patient journey is such a concept that tries to explore the possibility to solve the problems in hospitals in the domain of hospital architecture and hospital management. In this context, the research proposes a study of patient journey in hospitals from the perspective of architecture on basis of the outcomes achieved in management. Patient journeys are transferred from both clinical and administrative processes to special patterns. Moreover, in such a visual way, both the efficiency and effectiveness of hospitals and patients’ satisfaction during the journeys in hospitals are analyzed with case studies of China and the Netherlands. The system of spatialized patient journeys helps architects and hospital managers broaden their understanding of hospital. And the comparison results from case studies are useful for hospitals in China to improve performance and patients’ satisfaction.","","en","doctoral thesis","","978-94-6366-535-3","","","","A+BE | Architecture and the Built Environment No 7 (2022)","","","","","History, Form & Aesthetics","","",""
"uuid:3eab33fd-f9fc-420f-b574-749d22db5f1c","http://resolver.tudelft.nl/uuid:3eab33fd-f9fc-420f-b574-749d22db5f1c","ODS steels for nuclear applications: thermal stability of the microstructure and evolution of defects","Marques Pereira, V. (TU Delft Team Kevin Rossi)","Sietsma, J. (promotor); Schut, H. (copromotor); Delft University of Technology (degree granting institution)","2022","An approach to improve the performance of steels for fusion and fission reactors is to reinforce them with oxide nanoparticles. These can hinder dislocation and grain boundary movement and trap radiation-induced defects, thus increasing creep and radiation damage resistance. Steels containing these oxide particles are called ODS steels (Oxide Dispersion Strengthened). In the present thesis, two ODS steels containing 0.3 weight % of Y2O3 were studied: the 0.3% Y2O3 ODS Eurofer and the ODS 12 Cr steel. The main objectives of the work developed during these four years were: (i) evaluation of the thermal stability of the microstructure and of the oxide nanoparticles present in the steels; (ii) investigation of the effect of oxide nanoparticles on phase transformations and other microstructural processes, such as recovery and recrystallization; (iii) investigation of the interaction of oxide nanoparticles with defects intrinsic to the microstructure and (iv) development of the fundamental understanding of the behaviour of the steels prior to exposure to radiation.
A systematic characterization of the microstructure of the two ODS steels was performed, in their reference state and after 1 h annealing treatments at temperatures ranging from 573 K to 1600 K. The techniques used were Scanning Electron Microscopy (SEM), Electron Backscatter Diffraction (EBSD) and Vickers hardness testing. The oxide nanoparticles present in the 0.3% Y2O3 ODS Eurofer steel were observed using Transmission Electron Microscopy (TEM) and Atom Probe Tomography (APT); the oxide nanoparticles in the ODS 12 Cr steel were analysed with TEM. The 0.3% Y2O3 ODS Eurofer steel has, in its reference state, an isotropic microstructure, without significant texture, composed of tempered martensite, residual ferrite and M23C6 carbides. The ODS 12 Cr steel does not form austenite at high temperatures and, therefore, its matrix is always ferritic, with TiC carbides located along grain boundaries. Because of consolidation by hot extrusion, the ferritic grains in the ODS 12 Cr steel are elongated and present a <110> α-fibre texture. In the 0.3% Y2O3 ODS Eurofer steel the oxide nanoparticles are composed of Y, V and O; in the ODS 12 Cr steel, the nanoparticles are Y, Ti and O based. The addition of Ti is known to reduce the final oxide nanoparticle size and to confer higher thermal stability to the particles. When the oxide nanoparticles remain refined at high temperatures, the Zener pinning force exerted by them also remains strong and the overall microstructure does not coarsen during exposure to elevated temperatures. The Y-V-O based nanoparticles in the 0.3% Y2O3 ODS Eurofer steel coarsen during annealing at 1400 K, which leads to the formation of a coarser microstructure upon cooling to room temperature and a reduction in the Vickers hardness. In the ODS 12 Cr steel, a fraction of the Y-Ti-O nanoparticles becomes coarser only after 1 h annealing at 1573 K, which leads to a moderate degree of softening of the material.
Positron Annihilation Doppler Broadening (PADB) was used to investigate the thermal evolution of defects present in different ODS steels and their interaction with oxide nanoparticles. PADB results suggest that the oxide nanoparticles are able to trap thermal vacancies, formed in high concentrations during annealing at temperatures of 1400 K and above. The excess of thermal vacancies, trapped by the oxide nanoparticles, is retained in the microstructure upon cooling to room temperature. To further investigate this hypothesis, Thermal Desorption Spectroscopy (TDS) measurements were carried out on the ODS 12 Cr steel, in its as-received condition and after annealing at 1573 K for 1 h, following exposure to low-energy deuterium plasma. The deuterium uptake in the annealed condition was higher than that in the as-received state, which could be related to the prior trapping of thermal vacancies by oxide nanoparticles, which would then be able to accommodate more deuterium atoms. The ability to accommodate more deuterium atoms (or hydrogen, helium, or other radiation-induced interstitials) could have positive effects on the performance of the steel during service, but mechanical testing is necessary to verify this influence.
2 methanation for large scale energy storage: Catalyst and Process development","Wei, L. (TU Delft Large Scale Energy Storage)","de Jong, W. (promotor); Grénman, Henrik (copromotor); Delft University of Technology (degree granting institution)","2022","Chapter 1 is the introduction, which presents the state of the art in synthesis and application of these, in fact, bi-functional materials for sorption enhanced CO2 methanation.
In Chapter 2, zeolite 13X and 5A supported Ni catalysts were utilized, which were synthesized using the evaporation impregnation method. The influence of using different Ni precursors (nitrate, citrate, and acetate) as well as calcination temperatures on the catalyst properties and performance was investigated. Using nickel citrate and acetate resulted in smaller NiO particle sizes compared to nitrate. Methanation experiments revealed that the 13X catalysts synthesized using nickel citrate displayed clearly higher activity compared to the catalysts synthesized using nickel nitrate or nickel acetate.
Chapter 3 describes zeolites 13X and 5A that were modified with nickel and/or ruthenium for CO2 methanation. The results showed that Ni was able to enter the pores of 13X; in the other cases an egg-shell type structure was formed. Methanation experiments showed that the mono-metallic catalysts outperformed the bi-metallic ones, with Ni being the more active. One of the factors influencing the performance of the bi-metallic catalysts was that it was difficult to obtain good dispersion when both metals were present. The catalysts with lower weak acidity displayed higher activity. The catalysts 2.5%Ru13X and 5%Ni13X showed good catalytic stability with around 97% CH4 selectivity at 360 °C, with no catalyst deactivation during a 200 h stability test.
Chapter 4 deals with sub-nanometer zeolite 13X-supported Ni-ceria catalysts for CO2 methanation. Ce loading affected the catalysts’ metal dispersion, reducibility, basicity and acidity, and hence their activity and selectivity. STEM-EDX elemental mappings showed that Ce and Ni were predominantly highly dispersed. Ce had a positive effect on the reduction of NiO and led to a relatively high number of medium basic sites at low Ce loading. The highly stable 5%Ni2.5%Ce13X displayed high activity and nearly 100% CH4 selectivity in CO2 methanation at 360 °C, which was mainly attributed to the high dispersion of metals and the relatively high amount of medium basic sites.
In Chapter 5, a long-term experimental study employing 5%Ni5A, 5%Ni13X, 5%NiL and 5%Ni2.5%Ce13X bifunctional materials, with both catalytic and water adsorption properties, was performed in a fixed bed reactor. The overall performance of the bifunctional materials decreased going from 5%Ni2.5%Ce13X, 5%Ni13X, 5%Ni5A, to 5%NiL. With 5%Ni2.5%Ce13X, the highest obtained CO2 conversion and CH4 selectivity were close to 100% during prolonged stability testing over 100 reactive adsorption-desorption cycles amounting to 203 hours in total.
Chapter 6 focuses on determining the kinetics of a nickel on zeolite 13X catalyst in comparison with a nickel catalyst on a meso-porous γ-Al2O3 support. In this chapter, the validity of the obtained rate equation is discussed. The results showed that the 13X zeolite-supported nickel catalyst was more active than the one supported on γ-Al2O3, mainly due to a better dispersion of nickel on the 13X zeolite catalyst.
Finally, Chapter 7 provides the overall conclusions of the studies reported in this thesis. Recommendations for further research are also provided.","Large scale energy storage; Sorption enhanced; CO2 methanation; Zeolite","en","doctoral thesis","","978-94-6421-730-8","","","","","","","","","Large Scale Energy Storage","","",""
"uuid:e2fdb0c4-a80d-49c7-889c-7e492fb20ca3","http://resolver.tudelft.nl/uuid:e2fdb0c4-a80d-49c7-889c-7e492fb20ca3","Seismic behaviour of masonry buildings with timber diaphragms","Mirra, M. (TU Delft Bio-based Structures & Materials)","van de Kuilen, J.W.G. (promotor); Metrikine, A. (promotor); Delft University of Technology (degree granting institution)","2022","Existing masonry buildings are very frequently part of the architectural context for several countries all over the world. These constructions often feature masonry walls as vertical structural elements, and timber floors and roofs as horizontal components. The often poor characteristics of masonry, along with the in-plane flexibility of the floors and their frequently weak connections to the walls, make such buildings very vulnerable against seismic actions, as proved by the destructive consequences of several earthquakes in the last decades.
The improvement of the seismic capacity of existing masonry buildings is still an open research topic, with several retrofitting techniques for masonry walls and timber floors being proposed and tested. An acknowledged way of increasing the structural performance of a masonry building under an earthquake consists of the development of so-called box behaviour, enabling the construction to react as a whole to the ground motion. To pursue box behaviour, the main adopted retrofitting methods involve in-plane stiffening of the timber floors and seismic strengthening of the connections. In this context, several (nonlinear) analysis methods for masonry buildings
(e.g. the pushover analysis) assume that rigid floors are present and that out-of-plane failure mechanisms of masonry are prevented. Therefore, in numerical modelling as well, the in-plane response of the diaphragms is generally not taken into account in detail; they are considered only as linear elastic orthotropic membranes or stiff elements.
However, past seismic events demonstrated that excessive stiffening of the diaphragms can also be detrimental. This triggered the study of several lighter, moderately stiff strengthening methods for timber floors, applied in various architectural contexts. Since existing floors proved to be excessively flexible, yet overly stiff diaphragms are also not recommendable, the overarching research question of this dissertation arises:
“How is it possible to predict the global seismic behaviour of existing and retrofitted masonry buildings, and to optimize it by quantifying the influence of strengthening interventions on timber floors and timber-masonry connections?”
The research question is answered starting from the specific situation of the Groningen area, in the northern part of the Netherlands, where human-induced earthquakes caused by gas extraction take place. More than 50% of the local building stock consists of low-rise masonry constructions with timber floors and roofs, none of which were designed or realized with seismic events in mind, since earthquakes were absent until recently.
Up to now, these events have not caused extensive structural damage, because their intensity was low to moderate, but according to probabilistic studies, more intense earthquakes might also occur. For this reason, a seismic characterization of local timber and masonry structural components was first necessary: this dissertation focused in particular on the testing and modelling of as-built and retrofitted timber diaphragms and timber-masonry connections.
As a first step, because it was not possible to test a large number of whole structural components on site, a replication method based on material properties was defined. This ensured that the specimens constructed in the laboratory could be representative of the actual structural components in practice. With regard to timber diaphragms, in-plane quasi-static reversed-cyclic tests were performed on five as-built samples, which showed an approximately linear and very flexible response. For these diaphragms, a retrofitting technique was designed that enhances not only the strength and stiffness, but also the energy dissipation of the diaphragms. This dissipative contribution of the floors can be relevant, because it can potentially (greatly) dampen the seismic shear forces on masonry walls; this characteristic can be even more important for the Groningen area, where low-quality masonry and very slender piers are often present. The strengthening technique for the floors consisted of an overlay of plywood panels screwed along their perimeter to the existing sheathing: the retrofitted diaphragms exhibited a great enhancement in seismic properties, with relevant increases in strength, stiffness, and energy dissipation.
The great potential of the developed retrofitting technique could not be limited to an experimental characterization: an analytical model was also necessary to enable the design of the strengthening for other contexts or floor configurations. Hence, starting from the analytical formulation of the load-slip behaviour of the single screws connecting planks and plywood panels, the global in-plane response of the floors was derived, including their characteristic pinching behaviour. This analytical model also served another important function, as it was the basis for an advanced numerical implementation of the seismic response of timber diaphragms in finite element software. This made it possible to account in detail for the (dissipative) in-plane behaviour of the diaphragms, so that the seismic response of existing masonry buildings could be optimized with a well-designed retrofitting of the floors.
Yet, this energy dissipation can only be activated by means of the in-plane deflection of the diaphragms: to avoid out-of-plane collapses of masonry walls, this deflection must not be excessive, but at the same time effectively strengthened timber-masonry connections are also needed. Therefore, an experimental characterization of two as-built and five strengthened timber-masonry connections was conducted. The joints were tested under monotonic, cyclic, and also high-frequency dynamic loading, by subjecting them to an induced Groningen seismic signal. Seven replicates per joint type were built and tested, and analytical models for evaluating the strength and stiffness of the joints were derived, useful for design purposes or as input for numerical models.
Finally, in order to study the possible optimization of the seismic capacity of existing masonry buildings, it is also necessary to define proper criteria for an optimal retrofitting. The current seismic design framework is extensively based on peak ground acceleration, which cannot, however, take into account factors such as load duration and the quantification of structural damage. These parameters can play a crucial role in the Groningen region, because of the transient nature of the local earthquakes, featuring short, sudden, high-frequency signals compared to the longer and more damaging tectonic earthquakes.
Therefore, an energy-based approach was adopted, which allows the hysteretic energy provided by a building to be predicted and quantified as a function of its period and the load duration of the earthquake.
This approach opened up the opportunity to quantify structural damage in terms of the number of cycles imposed on the system: the role of timber diaphragms then becomes even more relevant, because with an optimized retrofitting the floors are only moderately stiff, so the period of a building would be longer than that of the same structure featuring stiff diaphragms. Furthermore, the possibility to include load duration enables a characterization of the seismic capacity independent of the context or the earthquake type.
Thus, to prove the beneficial, dissipative effect of the optimized retrofitting of floors, as well as the effectiveness of the adopted modelling strategies, numerical time-history analyses were performed on three case-study buildings. The first two were typical Dutch constructions, subjected to induced earthquakes, while the third was a country house from the Italian context, to which tectonic earthquakes were imparted. This additional building was included because it demonstrated that the developed retrofitting and modelling principles, along with the energy-based characterization of seismic capacity, can be generalized to other contexts besides the reference Dutch one.
The results from the analyses show that excessively flexible floors cause, as expected, out-of-plane collapses in masonry walls, while excessively stiff floors limit the energy dissipation to the masonry piers only, thus reducing the seismic capacity of the building. On the contrary, an optimized retrofitting is able to recover the global base shear of the building and, at the same time, keep its maximum displacement capacity within masonry drift limits. The optimal strengthening also corresponds to the maximum hysteretic energy that can be provided by the structure. Furthermore, the period of the building is also increased compared to stiff-floor configurations, meaning that the structure is subjected to a lower number of cycles, besides benefitting from the additional damping effect activated by dissipative diaphragms. This dissipative contribution can be brought into play provided that an effective strengthening of the timber-masonry joints is realized. The beneficial, dissipative effect of well-retrofitted, optimized timber floors was quantified in terms of an equivalent hysteretic damping ratio of 15% (additional to the dissipation already provided by masonry walls), and of an increased behaviour factor (q) range for masonry structures: from the usual range q = 1.5–2.5 to q = 2.5–3.5 in the presence of dissipative diaphragms.
This research study can contribute to a more efficient seismic retrofitting of existing buildings, enabling preservation of the architectural heritage and more dissipative, earthquake-safe masonry structures.","Timber diaphragms; Masonry buildings; Seismic retrofitting; Energy dissipation; Seismic optimization; Plywood panels; Time-history analyses","en","doctoral thesis","","978-94-6421-710-0","","","","","","","","","Bio-based Structures & Materials","","",""
"uuid:f3c4431d-368c-4a17-aacf-2d1283688a1a","http://resolver.tudelft.nl/uuid:f3c4431d-368c-4a17-aacf-2d1283688a1a","Image Reconstruction for Low-Field MRI","de Leeuw den Bouter, M.L. (TU Delft Numerical Analysis)","van Gijzen, M.B. (promotor); Remis, R.F. (promotor); Delft University of Technology (degree granting institution)","2022","Each year, hundreds of thousands of infants develop hydrocephalus (""water on the brain""). This is a disease that, if untreated, leads to brain damage and ultimately death. The prevalence of hydrocephalus is relatively high in children living in the Global South (in sub-Saharan countries, for example), but access to advanced imaging technology there is usually limited. This is especially problematic for hydrocephalus, since magnetic resonance imaging often is the diagnostic tool of choice for this disease, but MRI scanners are essentially out of reach due to their cost, size, and stringent infrastructure demands. Therefore, the introduction of an inexpensive, portable, low-field MRI scanner is clinically relevant. An interdisciplinary team of researchers at the Leiden University Medical Center, Pennsylvania State University, Mbarara University of Science and Technology and Delft University of Technology has been working on the development of such low-field MRI scanners, with the first goal being to aid in the diagnosis of hydrocephalus in infants in sub-Saharan Africa. Within this project, several prototypes and various dedicated image reconstruction techniques have been developed. This dissertation focuses on the latter. High-field MRI scanners have very strong and homogeneous static magnetic background fields, due to the superconducting magnets they are equipped with. To significantly reduce production costs, the low-field scanners considered in this work use permanent magnets to realize their static background fields.
Obviously, such background fields are much weaker than in a high-field MRI scanner, leading to measured signals with a significantly lower signal-to-noise ratio, since this ratio scales with the magnitude of the background field. For spatial encoding (i.e., to distinguish what part of the signal originates from what part of the body or object inside the scanner), high-field scanners depend on gradient coils which superimpose a linearly varying magnetic field on the background field. The first prototype we consider does not have any gradient coils. Instead, spatial encoding is carried out by making use of the inhomogeneities in the static magnetic background field. Due to the nonbijective nature of the field, a single measurement does not yield enough information for a reconstruction. However, by carrying out several measurements and rotating the field between subsequent measurements, image reconstruction should be possible. The second prototype follows the design of high-field scanners more closely: it was designed such that the static magnetic field is as homogeneous as possible and the scanner is equipped with three gradient coils to allow for spatial encoding in three directions. In this case, the relationship between signal and image can be described by a Fourier Transform...","","en","doctoral thesis","","","","","","","","","","","Numerical Analysis","","",""
"uuid:daa03c27-9c6d-40da-b737-02b8deaaa0ee","http://resolver.tudelft.nl/uuid:daa03c27-9c6d-40da-b737-02b8deaaa0ee","A Multiscale View on Bikeability of Urban Networks","Reggiani, G. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Daamen, W. (promotor); Delft University of Technology (degree granting institution)","2022","Although many agree that the use of bicycles improves mobility and quality of life in a city, it is much less clear how to assess the progress being made in this direction and how to plan bikeable cities. The bikeability of a city depends on many diverse and interrelated factors such as the land use and transport system, culture and social norms, as well as individuals’ perceptions. Among the many factors influencing bikeability, the infrastructure network, made of streets and intersections, is a fundamental component to allow safe and convenient cycling in a city. For this reason, this thesis focuses on infrastructure-related bikeability aspects and how to assess them. Planning for bicycle infrastructure has been piecemeal and location-specific, resulting in every city developing its own best practices without contributing to a more general theoretical guidance on how to assess and develop attractive and
convenient bicycle networks. Since a systematic approach to bicycle infrastructure evaluation and planning is lacking we formulate the following research goal:
To gain empirical knowledge on bicycle infrastructure networks and develop methodological tools to assess infrastructure-related bikeability.","","en","doctoral thesis","","978-90-5584-308-4","","","","","","","","Transport and Planning","","","",""
"uuid:c44f8490-da62-4f7c-9945-3cdb6fe0a7a4","http://resolver.tudelft.nl/uuid:c44f8490-da62-4f7c-9945-3cdb6fe0a7a4","A bird's-eye view on infrasound: High-resolution methods to unravel the ambient microbarom wavefield","den Ouden, O.F.C. (TU Delft Applied Geophysics and Petrophysics)","Evers, L.G. (promotor); Smets, P.S.M. (copromotor); Delft University of Technology (degree granting institution)","2022","","infrasound; microbaroms; sensor technology; soundscapes; array processing","en","doctoral thesis","","978-94-6366-487-5","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:44d0401b-9dfb-4510-aa0d-07cdf25b3e2f","http://resolver.tudelft.nl/uuid:44d0401b-9dfb-4510-aa0d-07cdf25b3e2f","Stabilised Material Point Method for Fluid-Saturated Geomaterials","Zheng, X. (TU Delft Geo-engineering)","Hicks, M.A. (promotor); Pisano, F. (copromotor); Delft University of Technology (degree granting institution)","2022","Large deformations in fluid-saturated geomaterials are central to numerous geotechnical applications, such as landslides and dam failures, pile installations, and underground excavations. An in-depth understanding of the soil's hydromechanical behaviour during large-deformation processes is essential for quantitative predictions about such geotechnical problems, which justifies the considerable importance that detailed numerical simulations have been acquiring in this context. However, such simulations are inevitably associated with significant conceptual and computational complexity, due to the simultaneous presence of possibly very large soil deformations along with dynamic effects. Under such conditions, the most common Lagrangian version of the Finite Element Method (FEM) is known to suffer from the mesh distortion that is induced by large deformations, which has a detrimental impact on the accuracy and stability of the corresponding numerical results. The recently developed Material Point Method (MPM) offers a viable solution to the problem by combining the advantages of both Lagrangian and Eulerian methods, and has therefore received increasing attention within the numerical modelling community.
In this thesis, the MPM has been adopted and further developed for the simulation of dynamic large-deformation problems in fluid-saturated porous materials, with emphasis on the stabilisation of the pore pressure field in the presence of low-order interpolation functions. Particular attention has been placed on developing and verifying the proposed stabilised MPM. As a starting point, an explicit version of the proposed coupled MPM, based on the Generalised Interpolation Material Point (GIMP) method, is implemented. Several numerical challenges, such as (i) the implementation of a single-point two-field dynamic formulation, and (ii) the mitigation of pore pressure oscillations, are tackled and discussed in detail. The resulting explicit GC-SRI-patch method includes the use of: (i) selective reduced integration (SRI) for pore pressure evaluation at the central Gauss points of individual background cells; (ii) patch recovery based on a Moving Least Squares Approximation (MLSA) for mapping pore pressure increments from central GPs to Material Points (MPs); (iii) the Composite Material Point Method (CMPM) for enhancing the recovery of effective stresses. The analysis of various poroelastic dynamic consolidation problems over a wide range of loading/drainage conditions demonstrates the effectiveness of the explicit GC-SRI-patch method.
Due to the adoption of explicit time integration, the abovementioned (explicit) GC-SRI-patch method, similar to most coupled MPM formulations from the literature, is only conditionally stable, which imposes extreme limitations on the selection of the time step size. As a consequence, the need for stable time integration restricts the applicability of explicit coupled MPM modelling to problems of considerable size and/or duration. A fully implicit stabilised GIMP using a single-point three-field (u-p-U form) formulation is thus proposed, with pore pressure instabilities being remedied through the same MLSA-based patch recovery. Relevant aspects regarding the numerical implementation of the implicit GIMP-patch method are discussed in detail. This novel method is shown to produce accurate, stable, and oscillation-free results for coupled problems associated with different inertial and deformation regimes, and is generally more efficient than the explicit GC-SRI-patch method owing to the use of larger time steps.
Following the development of the implicit GIMP-patch method in a poroelastic framework, its extension to elastoplastic large-deformation problems is introduced. In particular, in order to analyse coupled large-deformation problems in (nearly) incompressible elastoplastic geomaterials, an anti-locking B-bar algorithm is implemented. The effectiveness of the implicit B-bar GIMP-patch method in mitigating the detrimental effects of volumetric locking is highlighted through several practical examples, including (i) a strip footing undergoing both small and large settlements on an incompressible soil, (ii) the failure of an earthen slope, and (iii) the bearing capacity of a strip footing near the crest of a slope. The proposed method is proven to be a suitable tool for simulating the large-deformation failure mechanisms in realistic fluid-saturated geotechnical problems and the quantification of the unstable soil mass during the corresponding failure processes.
In summary, the work presented in this thesis is believed to make significant progress on the applicability of stabilised MPM for large-deformation problems in fluid-saturated geomaterials. The presented new developments will support more efficient and accurate assessment of geohazards and soil-structure interaction in geotechnical engineering practice.","Explicit time integration; Hydromechanical coupling; Implicit time integration; Large deformations; Material point method; Patch recovery; Pore pressure stabilisation","en","doctoral thesis","","978-94-6384-324-9","","","","","","","","","Geo-engineering","","",""
"uuid:171ba94a-e8f4-4969-b6ed-912d4f334968","http://resolver.tudelft.nl/uuid:171ba94a-e8f4-4969-b6ed-912d4f334968","Reliable numerical algorithms for the Non-linear Fourier Transform of the KdV equation","Prins, Peter J. (TU Delft Team Sander Wahls)","Wahls, S. (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2022","Research question
The topic of this dissertation is the numerical computation of the forward and inverse Non-linear Fourier Transform (NFT) for the Korteweg–de Vries equation (KdV), for sampled signals that decay sufficiently fast on both sides. With NFTs certain non-linear Partial Differential Equations (PDEs) can be solved in a way that is analogous to solving linear Ordinary Differential Equations (ODEs) and PDEs by means of the ordinary Fourier transform. Similarly to the linear Fourier transform, NFTs can be used to analyse, synthesise, filter and predict signals. Existing numerical NFT algorithms suffer from limited accuracy, long computation time, or both, which limits the usability of the KdV-NFT for engineering problems. In this dissertation we develop new algorithms that achieve a higher accuracy or require a shorter computation time.
Design methods
We implemented existing numerical algorithms in Mathworks Matlab in floating point arithmetic to analyse their behaviour. Thereafter we designed new algorithms that avoid the undesirable behaviour of the existing algorithms. We demonstrated the improvements by means of benchmark tests. Furthermore we implemented some of the new algorithms in the programming language C in the Fast Non-linear Fourier Transform (FNFT) software library.
Results
We have developed algorithms to compute the continuous KdV-NFT spectrum and the eigenvalues and norming constants of the discrete KdV-NFT spectrum. Furthermore we developed an algorithm to compute the contribution of the discrete spectrum to the inverse KdV-NFT. The continuous KdV-NFT spectrum can now be computed with a fast algorithm at a comparable error tolerance to the Non-linear Schrödinger Equation (NSE)-NFT. That means that the computational complexity has been reduced from O(D^2) to O(D(log(D))^2), where D is the number of samples, without a significant deterioration of the accuracy. The eigenvalues of the discrete KdV-NFT spectrum can now be computed reliably and more efficiently than before. The norming constants can now be computed in all known cases without the anomalous errors that were observed for older algorithms. That means an improvement of the accuracy by several orders of magnitude. The contribution of the inverse KdV-NFT can now be computed for discrete spectra with three to seven times as many eigenvalues in comparison to previously available algorithms.
Conclusions and applications
The KdV can be used as a model for nearly linear wave phenomena that propagate in one direction. These are found in a plethora of physical applications. The algorithms that we presented in this dissertation can be used for the analysis, synthesis, filtering and prediction of sampled data from such systems. Their higher accuracy and/or shorter computation time thus brings the KdV-NFT a step closer to the engineering practice.","signal processing algorithms; non-linear Fourier transform (NFT); Korteweg–de Vries (KdV) equation; Schrödinger equation; water wave; soliton; norming constant; exponential splittings; dressing method; Darboux transform; Crum transform","en","doctoral thesis","","978-94-6384-320-1","","","","","","","","","Team Sander Wahls","","",""
"uuid:bad12fb8-ef0a-41ba-b0ae-f5e5ce2fa5fe","http://resolver.tudelft.nl/uuid:bad12fb8-ef0a-41ba-b0ae-f5e5ce2fa5fe","Models and heuristics for hard routing and knapsack problems","Pierotti, J. (TU Delft Discrete Mathematics and Optimization)","Aardal, K.I. (promotor); van Essen, J.T. (copromotor); Delft University of Technology (degree granting institution)","2022","One of the world’s biggest challenges is that living beings have to share a limited amount of resources. As people of science, we strive to find innovative ways to better use these resources, to reach and positively affect more and more people. In the field of optimization, we aim at finding an optimal allocation of limited sets of resources to maximize a certain objective. Some of these problems can be solved in polynomial time; others are harder to solve. Current state-of-the-art methods can solve NP-hard problems (a class of optimization problems) in exponential time, in the worst case. To give an idea, for input size n = 100 and parameter k = 2: polynomial time n^k = 100^2 = 10,000; exponential time k^n = 2^100 = 1,267,650,600,228,229,401,496,703,205,376. Yet, many relevant and practical problems are NP-hard and have to be solved in a short amount of time. Our research focuses on formulating and solving four of these problems. Among those, three are vehicle routing problems (VRP, Chapters 2, 3 and 4). VRPs are problems where vehicles have to perform routes in order to minimize an objective function (for example, minimize routing costs) while being subjected to constraints (for example, each location has to be visited). Routing costs have a significant impact on society and on the cost of products (the transportation sector makes up 13.2% of the EU’s GDP (Joint Research Centre, 2021)). Although VRPs have been thoroughly studied for over half a century, new technologies (autonomous driving, real-time information, etc.)
and new customers’ demands (increase in online shopping, a more competitive delivery market, etc.) create variants of the standard VRP that are more and more complex to formulate and solve. VRPs are well-known for being NP-hard and difficult to approximate, and hence to solve. We formulated three novel VRPs and solved them both exactly via branch-and-bound (Chapters 3 and 4, the latter also using valid inequalities) and metaheuristically (Chapters 2 and 4). To increase generalizability, we introduced an almost non-parametric algorithm that encompasses all the most famous heuristic operators for the VRP (Chapter 4). To increase performance, we proposed an adaptive, i.e., self-tuning, algorithm (Chapter 2) that can detect problem features and steer its decisions to achieve better solutions. Lastly, Chapter 5 focuses on what we believe will be the most radical transformation in the metaheuristic field in coming years: machine learning for combinatorial optimization. Machine learning has established its fundamental importance in many fields and is currently paving its way into combinatorial optimization. We developed a self-attention-based deep reinforcement learning algorithm without any problem-specific knowledge to solve one of the most studied combinatorial optimization problems. Our results suggest that machine learning can (and we conjecture that it will) tackle combinatorial optimization on its own, without problem-specific knowledge, and will be a fundamental element in future state-of-the-art heuristics for combinatorial optimization.","Metaheuristics; Logistics; Routing; Integer Linear Programming; Balanced Traveling Salesman Problem; Special Education Needs School Bus Routing Problem; Reinforcement Learning; Knapsack Problem","en","doctoral thesis","","","","","","","","","","","Discrete Mathematics and Optimization","","",""
"uuid:dd92e02d-a268-4cfd-aa88-063200f66b5e","http://resolver.tudelft.nl/uuid:dd92e02d-a268-4cfd-aa88-063200f66b5e","Paradigm Lost: On the Value of Lost Causes in Transforming Cities and Water Systems’ Development Pathways","Godinez Madrigal, J. (TU Delft Water Resources)","van der Zaag, P. (promotor); Van Cauwenbergh, Nora (promotor); Delft University of Technology (degree granting institution)","2022","In the last decades, thousands of socio-environmental conflicts have emerged, especially at the sub-national scale. Among these, water conflicts are especially complex and multifaceted, since most are driven by a combination of socio-economic dynamics that increase pressure on natural resources, more extreme hydro-climatic trends, outdated or biased legal frameworks, large power asymmetries between actors, and the dominance of sociotechnical paradigms that reduce the decision space of water policies. Although water conflicts often receive a lot of attention, public scrutiny, and media exposure, this has not necessarily translated into improving our understanding of their relation to the coupled human-water systems in which they are embedded, and even less of their transformative potential to open the decision space on the development pathways of cities and water systems. Furthermore, if a conflict drags on, it creates the notion of a conflict impasse, static in nature and confined to a narrowed space. This can further obstruct our understanding of what the conflict is really about, what its root causes are, what the motivations of key actors are, how actors mobilize different capitals to achieve their goals and coalesce in networks, and what the best ways are to move forward and find transformative alternatives.
This PhD thesis aims to reveal that water conflicts are highly dynamic and the result of a complex web of events influenced by social and natural long-term dynamics, knowledge controversies, and actor and network dynamics that widen the perception of the boundaries of water conflicts. To map out and navigate these turbulent waters of water conflicts, new transdisciplinary methods and action research are necessary. The realization that conflicts are complex and dynamic, and that transdisciplinary and action methods are needed to transform them, has many implications. First, given the long-term dynamics that determine a conflict, it is necessary to analyze its history beyond the “official” start of the conflict, even before the involvement of the main actors in the conflict. Therefore, a water conflict involves much more than just a dispute between parties; it also involves wider and more transcendent discussions of the sustainability of cities and water systems and the fairness of socio-political systems. Second, these long-term dynamics are both social and natural; thus, water conflicts need to be analyzed in an interdisciplinary manner to better deal with controversies composed of different kinds of uncertainties and ambiguity in the coupled human-water systems. The development of new hybrid disciplines like socio-hydrology and hydrosocial studies is a step forward, but they remain dominated by either a natural sciences or a social sciences epistemology. Third, further analyzing the conflict in a transdisciplinary and longitudinal manner, by involving actors in knowledge co-production, can improve our understanding of knowledge controversies, which in turn increases reflectivity on the role of science and scientists in these conflicts.
"uuid:8f2edbca-3e32-4dee-b6a3-39d5e43b4473","http://resolver.tudelft.nl/uuid:8f2edbca-3e32-4dee-b6a3-39d5e43b4473","Novel catalysts and applications for polymer electrolyte membrane cells","Bunea, S. (TU Delft ChemE/Catalysis Engineering)","Urakawa, A. (promotor); Burdyny, T.E. (copromotor); Delft University of Technology (degree granting institution)","2022","Polymer electrolyte membrane (PEM) water electrolysis represents a promising technology for the sustainable, emission-free hydrogen production from renewable energy, as it is able to quickly respond to fluctuations in the renewable energy supply. Nevertheless, natural scarcity of iridium, which is used as catalyst for the anodic water oxidation reaction, hinders wide-scale implementation of these cells. In this thesis we investigated how the performance of iridium catalysts can be improved by the addition of Sn and SnO2. Furthermore, despite their high efficiency, PEM cells are currently only employed for hydrogen production. We investigated whether it is possible to run other electrochemical transformations in PEM cells. Particularly, we focused on nitrate and nitric oxide reduction. These are pollutant molecules in ground water and in air, respectively. We investigated potential catalysts for the efficient transformation of these species to ammonia, which is a very important molecule for the fertilizer industry. Our approach can open new pathways for ammonia synthesis, replacing the energy-intensive, state-of-the-art Haber-Bosch process.","Polymer electrolyte membrane electrolysis; Electrochemical nitrate reduction; Oxygen evolution reaction; Electrocatalysis","en","doctoral thesis","","","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:16e90401-62fc-4bc3-bf04-7a8c7bb0e2ee","http://resolver.tudelft.nl/uuid:16e90401-62fc-4bc3-bf04-7a8c7bb0e2ee","An integrated aero-structural model for ram-air kite simulations: with application to airborne wind energy","Thedens, P. (TU Delft Wind Energy)","van Bussel, G.J.W. (promotor); Schmehl, R. (copromotor); Delft University of Technology (degree granting institution)","2022","Airborne wind energy (AWE) technology aims to utilise tethered wings to harvest wind energy at altitudes conventional wind turbines cannot reach. There are two distinct methods to harvest airborne wind energy: onboard and ground-based generation. Onboard generation is achieved by flying fast manoeuvres that drive propellers attached to the tethered wing, with the generated electricity conducted through the tether. Ground-based generation, on the other hand, utilises the tether tension of the kite to unwind the tether from a drum, driving a generator. When the tether is fully extended, it is reeled in by the generator, which consumes energy. Since the traction phase is much longer and produces far more electricity than is consumed in the reel-in phase, the net energy of such a cycle is positive. SkySails Power is one of the leading companies developing a ground-based AWE generator driven by a large ram-air kite. This thesis describes the development of a methodology for simulating their wing.","FSI; AWE; Dynamic Relaxation; Panel Method; Ram-air Kite","en","doctoral thesis","","","","","","","","","","","Wind Energy","","",""
"uuid:d0f89fb8-bb07-499d-bff4-29b074ed17ef","http://resolver.tudelft.nl/uuid:d0f89fb8-bb07-499d-bff4-29b074ed17ef","Data-efficient learning of geometric structures from single-view images","Lin, Y. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); van Gemert, J.C. (copromotor); Pintea, S. (copromotor); Delft University of Technology (degree granting institution)","2022","The humanly constructed world is well-organized in space. A prominent feature of this artificial world is the presence of repetitive structures and coherent patterns, such as lines, junctions, wireframes of a building, and footprints of a city. These structures and patterns facilitate visual scene understanding by providing abundant geometric information. Humans can easily recognize diverse geometric structures. But how can we instruct an autonomous agent to interpret the visual world as we do? There has been great interest in the automatic understanding of the geometric world from images in the past few decades. Conventional approaches first detect hand-crafted edge features and then group them into parametric shapes such as lines. Recently, this strategy has been gradually replaced by deep neural networks, because learning features from large annotated datasets gives a richer representation compared to hand-designed features. Although the progress is inspiring, there are still several concerns about deploying neural networks in the real world. A primary concern is the availability of massive labeled data, as the performance of deep networks deteriorates substantially when training data is scarce. This thesis introduces novel strategies to enhance the performance of neural networks in a small-data regime, by adding geometric priors into learning. We start with the Hough Transform, a well-known prior for straight lines, and offer a principled way to add this prior into neural networks for data-efficient end-to-end learning. 
On the wireframe parsing task, our model advances the state-of-the-art substantially on various subsets with much less training data. Subsequently, we extend the Hough Transform line priors to semi-supervised lane detection, requiring only a small amount of labeled data, and show that this approach improves the overall performance by leveraging a massive amount of unlabeled data. We explore a second geometric prior, the Gaussian sphere mapping, for vanishing point detection. We present an end-to-end framework for detecting multiple non-orthogonal vanishing points without relying on large quantities of training samples. Moreover, the proposed model exhibits consistent performance across multiple datasets without fine-tuning, thus demonstrating the effectiveness of geometric priors in tackling data variation. Next, we study detecting 3D mirror symmetry from single-view images. We explicitly incorporate 3D mirror geometry into identifying symmetry planes. To reduce the computational footprint, we design multi-stage spherical convolutions to hierarchically pinpoint the optimal plane in the parameter space. Our model not only improves overall performance but also reduces the inference latency substantially. Finally, we explore the possibility of detecting polygonal shapes from images by using transformers. We provide a full picture of the strengths and weaknesses of auto-regressive and parallel transformers in detecting polygons viewed as collections of points. We demonstrate on a toy dataset that auto-regressive transformers can be a reasonable option for learning polygonal representations from real-world images. Taken together, with this thesis we show that incorporating geometric priors into modern deep learning reduces the need for expensive, manually annotated data.","Computer Vision; Deep Learning; Geometry","en","doctoral thesis","","978-94-6423-778-8","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:352be4dc-76ae-4fa2-945e-17fdae35e714","http://resolver.tudelft.nl/uuid:352be4dc-76ae-4fa2-945e-17fdae35e714","Particle Manipulation-on-chip: Using programmable hydrodynamic forcing in a closed loop","Kislaya, A. (TU Delft Fluid Mechanics)","Westerweel, J. (promotor); Tam, D.S.W. (promotor); Delft University of Technology (degree granting institution)","2022","The precise manipulation of particles and droplets is crucial to many microfluidic applications in engineering. The design of microfluidic devices is generally tailored to perform a specific task, with each specific application requiring a unique and fixed design. In this way, using a single device to perform multiple analyses of a wide range of specimens, from biological to chemical specimens, is unfeasible. Here, we address this issue and present a microfluidic approach that dynamically controls the hydrodynamic flow and the streamlines to realize complex multi-particle manipulations within a single device. Our approach combines the design of a flow-through microfluidic flow cell together with an optimization procedure to find a priori optimal particle path-lines, and a Proportional-Integral-Derivative-based (PID) feedback controller to provide real-time control over the particle manipulations. In our device, particles are manipulated with hydrodynamic forces, by using a uniform flow through the flow cell and three inlets perpendicular to the flow cell. The streamlines within the device are manipulated by injecting or extracting fluid through the three inlets. We demonstrate the robustness of our approach by performing multiple functions within the device, including particle trapping, particle sorting, particle separation and assembly. We show that the real-time control procedure affords accurate particle manipulation, with a maximum error on the order of the diameter of the particle. 
Our particle manipulation approach is particularly well suited to biological samples and living cells.","Microfluidic; hydrodynamic force; Particle manipulation; streamline; Potential flow; Hele-Shaw channels","en","doctoral thesis","","978-94-6419-492-0","","","","","","","","","Fluid Mechanics","","",""
"uuid:7b2ddf0f-3e0f-49d8-b425-b6df0b540e32","http://resolver.tudelft.nl/uuid:7b2ddf0f-3e0f-49d8-b425-b6df0b540e32","Toolpath Generation for Fused Filament Fabrication of Functionally Graded Materials","Kuipers, T. (TU Delft Materials and Manufacturing)","Wang, C.C. (promotor); Wu, J. (copromotor); Delft University of Technology (degree granting institution)","2022","The products in our day-to-day lives have different requirements in different regions of the product. One way of dealing with such spatially varying requirements is to divide the product into multiple parts and assign each part a different material. For example, the handle of a drill is often fitted with a rubbery material which gives it more grip when holding it. This gives the designer a choice of exactly which regions to fit with that material. Designers can consider the large variety of hands and ways of holding a drill. Equipped with an average distribution of gripping pressure throughout the handle, they could identify the regions with a pressure higher than some cutoff value and assign the gripping material to those regions. This means that the material properties vary abruptly over the surface; adjacent regions on one side of the cutoff value provide more grip than necessary, while regions on the other side provide too little. The final distribution of material properties is segmented and therefore does not follow the gradual distribution of requirements optimally. In this workflow the designer is forced to reduce the continuous gripping pressure information into a binary material choice. But what if we could manufacture products with a gradient in their material properties? To answer this question we consider the material and the manufacturing technique. A Functionally Graded Material (FGM) is any substrate with material properties made to vary from region to region. FGMs find application, for example, in personalized footwear, implants, tires and airplane wings. 
They can improve a product’s performance by optimizing the spatial gradation of material properties throughout the product. Rather than a homogeneous block of material, FGMs consist of a fine-scale geometry of one or more base materials. The material properties of an FGM can be governed by controlling the shape of that fine-scale structure. Fused Filament Fabrication (FFF) is an additive manufacturing technique which can produce complex geometry cheaply. Thermoplastic material is heated and extruded out of a nozzle to deposit extrusion lines. These extrusion lines accumulate to form layers, which are added on top of each other to form the final product. A 3D model is converted into toolpaths for the 3D printer, which describe the geometry of the extrusion lines the nozzle should traverse. Because of the physics involved in such machines, there are several manufacturing constraints to which the print job must adhere, such as (i) the maximum overhang angle to prevent printing in mid-air, (ii) (semi-)continuous extrusion to prevent print defects at the ends of extrusion lines, (iii) integer thickness geometry (N * linewidth) to prevent overlapping extrusion lines, and (iv) chemically compatible materials to prevent a multi-material print job from disassembling during the manufacturing process...","Functionally graded materials; Mechanical metamaterials; Cellular materials; Lattice structures; Toolpath generation; Additive manufacturing; Fused Deposition Modeling; Fused Filament Fabrication; Material Extrusion","en","doctoral thesis","","978-94-6458-132-4","","","","","","","","","Materials and Manufacturing","","",""
"uuid:b6ab630d-f054-42de-b5f9-113a08ef4362","http://resolver.tudelft.nl/uuid:b6ab630d-f054-42de-b5f9-113a08ef4362","Quantum Networks using Spins in Diamond","Hermans, S.L.N. (TU Delft QID/Hanson Lab)","Hanson, R. (promotor); Wehner, S.D.C. (promotor); Delft University of Technology (degree granting institution)","2022","A future quantum internet will bring revolutionary opportunities. In a quantum internet, information will be represented using qubits. These qubits obey the rules of quantum mechanics. The possibilities to create superposition and entangled states, and to perform projective measurements, give the quantum internet its unique strengths. A quantum internet will enable fundamentally secure communication, quantum computations in the cloud with complete privacy, and quantum-enhanced sensing. But it is likely that many of its applications are still unknown. A full-scale quantum internet puts demanding requirements on the individual components. In the last decades, single nodes and remote entanglement have been explored, but a small-scale prototype quantum network does not yet exist. In this thesis we go beyond single- or two-node experiments and realize the first multi-node quantum network using nitrogen-vacancy centers in diamond. The electron spin of this defect serves as the communication qubit and nearby 13C nuclear spins as memory qubits. We investigate the performance of the network and demonstrate multiple key network-primitive protocols. In addition, we further explore and improve the individual building blocks...","","en","doctoral thesis","","978-90-8593-520-9","","","","","","","","","QID/Hanson Lab","","",""
"uuid:02389b85-8dff-4939-a8d2-826196d3ef58","http://resolver.tudelft.nl/uuid:02389b85-8dff-4939-a8d2-826196d3ef58","Photochromism and photoconductivity in rare-earth oxyhydride thin films","Colombi, G. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Dam, B. (promotor); Savenije, T.J. (promotor); Delft University of Technology (degree granting institution)","2022","The uncommon photochromism, photoconductivity, and H-mobility of RE oxyhydride materials make them promising candidates for application in optics, opto-electronics and electrochemical devices alike. Additionally, their extreme compositional flexibility and the connected variety of possible (meta)stable phases make them an excellent case study to advance our understanding of the link between composition, structure, and properties in mixed-anion materials. Further, the possibility of producing RE oxyhydrides not only under thermodynamic control (e.g., high temperature/pressure solid state reaction) but also under kinetic control (e.g., topochemical anion-exchange, or post-oxidation of reactively sputtered polycrystalline/epitaxial REHx thin films) greatly expands the possibilities for tuning their properties, influencing other aspects such as defect concentration, material morphology, film texture, film stress, etc.","","en","doctoral thesis","","978-94-6384-321-8","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:6d18abba-a418-4870-ab19-c195364b654b","http://resolver.tudelft.nl/uuid:6d18abba-a418-4870-ab19-c195364b654b","Heat Exchange in a Conifer Canopy: A Deep Look using Fiber Optic Sensors","Schilperoort, B. (TU Delft Water Resources)","Savenije, Hubert (promotor); Coenders-Gerrits, Miriam (copromotor); Delft University of Technology (degree granting institution)","2022","Forests cover a large part of the globe, and are responsible for a large amount of evaporation and the fixation of carbon. To better understand the atmospheric exchange of forests, and how forests will behave under future climate change, both accurate measurements and models are required. However, due to their height and heterogeneity, forests are difficult to model and measure. Standard theories do not apply well to forests, and as such more effort is required to understand the exchange between forests and the atmosphere. Precise measurements are made difficult by a number of issues, the most prominent being non-closure of the energy balance and so-called ‘decoupling’ of the canopy. Non-closure of the energy balance occurs when all the measured inflows and outflows of energy do not add up to the measured change in energy storage in the forest system. The size and heterogeneity of forests makes this difficult to assess. Second is ‘decoupling’, where the vertical mixing of air within the canopy is hampered, and measurements performed above the canopy are not representative of what happens in the entire canopy down to the forest floor.","distributed temperature sensing; evaporation; forest; heat flux; boundary layer; temperature inversion; soil temperature","en","doctoral thesis","","","","","","","","","","","Water Resources","","",""
"uuid:4b8ed184-757b-48e6-bc0b-10d71428af81","http://resolver.tudelft.nl/uuid:4b8ed184-757b-48e6-bc0b-10d71428af81","Towards 3D Printed Osteoimmunomodulatory Surface Patterns","Nouri Goushki, M. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Fratila-Apachitei, E.L. (copromotor); Delft University of Technology (degree granting institution)","2022","Osteoimmunomodulation (OIM) is a mechanism through which orthopedic biomaterials may modulate the function of immune cells to promote osteogenesis. OIM is considered a potentially effective way for improving osseointegration. The surface characteristics of orthopedic implants (e.g., topography, wettability, surface chemistry, and charge) can significantly influence their potential OIM behavior. Modifying these properties can, therefore, be considered a powerful method for achieving the described OIM response.
Among the different possible length scales of topographies, the role of submicron topographies in OIM functions has been less frequently studied. That is partly because it is quite challenging to fabricate surface topographies with controlled shapes and dimensions. Moreover, the currently available technologies for the fabrication of submicron features usually involve multiple fabrication techniques and steps.
In this thesis, uniform submicron patterns are, for the first time, 3D printed with controlled dimensions using a single-step nanoprinting technique called two-photon polymerization (2PP). In addition, the effects of the dimensions of the 3D printed submicron pillars on the response of two types of cells involved in the OIM process (i.e., preosteoblasts and immune cells) are extensively studied, both separately using monocultures and in interaction with each other using a direct co-culture model performed in the presence of submicron pillars. Our findings reveal that 3D printed submicron-scale patterns are able to generate both osteogenic and immunomodulatory in vitro cellular responses. This novel concept of multifunctional topographies opens up a new approach for enhancing the OIM behavior of orthopedic biomaterials.","Osteoimmunomodulation; Bone regeneration; 3D Printing; Biomaterials; Cell-surface interaction; Osteogenic response; Submicron patterns; Direct laser writing","en","doctoral thesis","","978-94-6384-327-0","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:42eae6d4-ad6d-4b0a-aaf0-19757ecad2a3","http://resolver.tudelft.nl/uuid:42eae6d4-ad6d-4b0a-aaf0-19757ecad2a3","Improving Satellite-based precipitation estimates: A spatiotemporal object-oriented approach to error analysis and correction","Laverde Barajas, M.A. (TU Delft Water Resources)","Solomatine, D.P. (promotor); Corzo, Gerald A. (copromotor); Delft University of Technology (degree granting institution)","2022","Satellite Precipitation Products (SPP) have been revolutionary in water resources management and flood-related disaster response. However, estimating extreme rainfall is subject to multiple systematic and aleatory errors that need to be corrected. This dissertation addresses errors in satellite data to estimate extreme rainfall events in space and time beyond the pixel. The Spatiotemporal Contiguous Object-based Rainfall Analysis method (ST-CORA) is developed to analyse errors in SPP for rainstorm estimations based on their main physical features in space and time (volume, intensity, duration, extension, orientation, speed, among others). Using ST-CORA, systematic errors due to volume and displacement in space and time are corrected in a novel bias-correction method called ST-CORAbico. Case studies in two monsoonal areas in South America and Southeast Asia have been used to analyse the hydrological impact of systematic errors on flood predictions and to evaluate error reduction in non-operational and operational bias correction applications. Finally, the dissertation describes further implementations of ST-CORA in developing an operational system for rainstorm monitoring called Rainstorm tracker. This web-based platform is designed to monitor and alert decision-makers about the severity of rainstorm events over the Lower Mekong basin in near-real time and real time.","Spatiotemporal Analysis; satellite-based precipitation; extreme rainfall; error correction","en","doctoral thesis","","978-90-73445-40-6","","","","","","","","","Water Resources","","",""
"uuid:d2dbc4aa-02f5-43a1-a48c-b7ba1ad8ad67","http://resolver.tudelft.nl/uuid:d2dbc4aa-02f5-43a1-a48c-b7ba1ad8ad67","To bind or not to bind: DNA mediated multivalent interactions lead to superselectivity","Linne, C. (TU Delft BN/Liedewij Laan Lab)","Kraft, Daniela J. (promotor); Laan, L. (copromotor); Delft University of Technology (degree granting institution)","2022","To bind two entities together, an attractive interaction is needed. In biological systems, such interactions are often between ligands and receptors. But this interaction constantly breaks and forms because it is (too) weak. To ensure a lasting bond, the system can form multiple weak bonds that together form an overall strong bond, similar to Velcro. An interesting feature of a weak multivalent system is the sharp discrimination between surfaces based on receptor density. That means that when multivalent particles encounter surfaces with the specific receptor but different densities, they will most likely bind to the surface with the highest density, because it has the highest binding probability. This phenomenon is called superselectivity and emerges from the large entropic contribution in a multivalent system: the more ligands and receptors are involved in binding, the more possibilities the system has to form a bond, and hence the larger the entropy. In this thesis we investigate how the interaction strength and entropy influence superselective binding. In doing so, we study the superselective binding of microparticles with hundreds of interactions and, additionally, of nanometer-sized particles with only a few interactions.","colloids; DNA nanostar; membrane; diffusion","en","doctoral thesis","","978-90-8593-517-9","","","","","","","","","BN/Liedewij Laan Lab","","",""
"uuid:5435633c-5ec3-4ffa-a990-84e7768eb79c","http://resolver.tudelft.nl/uuid:5435633c-5ec3-4ffa-a990-84e7768eb79c","Evaluation of Green and Grey Infrastructures for Runoff and Pollutant reduction","Martinez Cano, C.A. (TU Delft BT/Environmental Biotechnology)","Brdjanovic, Damir (promotor); Vojinovic, Zoran (copromotor); Delft University of Technology (degree granting institution)","2022","Nowadays, urban areas face the combined pressures of economic development, urbanisation and heavy rainfall events. A major change in approaches to the management of flooding is also ongoing in many countries worldwide. Some decision support system tools are available to evaluate green and grey infrastructures across a wide range of conditions as well as to compare alternative options. However, the performance of urban drainage systems that combine different green-grey solutions is still unclear. The present book introduces a framework for evaluating the performance of green and grey infrastructures for runoff and pollutant reduction. To this end, it presents an evaluation of how different combinations of green infrastructure (GI) measures perform within a drainage system to reduce runoff and pollution, and how the interactions between different grey infrastructures can influence the drainage system capacity. The modelling approach introduced here also combines the infiltration process, overland flow and sewer system interactions to assess the optimal combination of green-grey infrastructures for urban flood reduction. The results of this research demonstrate that including rainfall-runoff and infiltration processes, along with the representation of GI within a 2D model domain, enhances the analysis of the optimal combination of infrastructures, which in turn allows the drainage system to be assessed holistically.","","en","doctoral thesis","","9789073445383","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:bc764755-5225-4119-ba66-7ad4f6d01662","http://resolver.tudelft.nl/uuid:bc764755-5225-4119-ba66-7ad4f6d01662","Fouling Control in Anaerobic Membrane Bioreactors by Flux Enhancer Dosing","Odriozola, Magela (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); Spanjers, H. (promotor); Delft University of Technology (degree granting institution)","2022","Anaerobic membrane bioreactor (AnMBR) technology is increasingly researched for wastewater treatment in a circular economy scenario to recover nutrients, water, and biogas. AnMBR couples the advantages of anaerobic digestion, such as low sludge production, no aeration requirement and biogas production, with the benefits of membrane technology, that is, complete solids removal and a high degree of removal of pathogenic organisms. Nevertheless, membrane fouling remains the major operational challenge, limiting the economic feasibility and applicability of AnMBRs. Membrane fouling is responsible for lower flux, higher transmembrane pressure, the need for intensive biogas sparging or increased crossflow velocities for membrane scouring, and increased frequency of membrane cleaning and membrane replacement; consequently, it increases energy and operational costs. Researchers have extensively studied the causes and mitigation of membrane fouling in both aerobic and anaerobic membrane bioreactors. Membrane fouling mitigation strategies have focused on optimisation of membrane operational variables, such as gas sparging, crossflow velocity, filtration-relaxation cycle, permeate flux, and the frequency and intensity of chemical cleaning. Although optimisation of operational variables might be suitable when the sludge has good or moderate filterability, it may not be adequate or sufficient when fouling is caused by a sludge with poor filterability. The application of flux enhancers for fouling control has been extensively investigated. 
Flux enhancers are adsorbents, coagulants and flocculants that decrease fouling by changing the sludge characteristics, thereby improving sludge filterability. In particular, cationic polymers have been successfully applied as flux enhancers in short-term tests on large-scale aerobic membrane bioreactors (MBRs), whereas research in AnMBRs is scarce and has so far only been done at lab scale. Results from MBRs cannot be directly translated to AnMBRs because the extent and nature of membrane fouling under anaerobic and aerobic conditions are different. This thesis studies the feasibility of dosing cationic polymers into large-scale AnMBRs for fouling mitigation, focusing on long-term effects, possible side effects, optimal dosing strategy and variation of the required dosage.","Anaerobic Delft filtration characterization method (AnDFCm); Anaerobic membrane bioreactor (AnMBR); Flux enhancer; Membrane fouling mitigation and control; Modelling; Sludge filterability","en","doctoral thesis","","978-94-93270-44-2","","","","","","","","","Sanitary Engineering","","",""
"uuid:9b3a8dff-40ca-4b95-ae73-9a568302d1e5","http://resolver.tudelft.nl/uuid:9b3a8dff-40ca-4b95-ae73-9a568302d1e5","A discontinuity-enriched finite element method for the computational design of phononic crystals","van den Boom, S.J. (TU Delft Computational Design and Mechanics)","van Keulen, A. (promotor); Aragon, A.M. (copromotor); Delft University of Technology (degree granting institution)","2022","Phononic crystals can be designed to have bandgaps: ranges of frequencies within which wave propagation through the material is prevented. They are therefore attractive for vibration isolation applications in different industries, where unwanted vibrations reduce performance. Yet, important steps are still to be made for the integration of phononic crystals into engineering practice. For instance, methods for large-scale production are still in development. Furthermore, it is essential that design methods are established to enable the design of phononic crystals that meet all of the often conflicting requirements for practical applications. This thesis focuses on the latter challenge by proposing a computational design method for phononic crystals based on the combination of an advanced finite element method and level set-based topology optimization.","Phononic crystals; Enriched finite element methods; Topology optimization","en","doctoral thesis","","978-94-6384-315-7","","","","","","2022-04-06","","","Computational Design and Mechanics","","",""
"uuid:c43c9b99-585a-4929-9bee-2c6d87a3b2c1","http://resolver.tudelft.nl/uuid:c43c9b99-585a-4929-9bee-2c6d87a3b2c1","Expression of a gene-encoded FtsZ-based minimal machinery to drive synthetic cell division","Godino, E. (TU Delft BN/Christophe Danelon Lab)","Danelon, C.J.A. (promotor); Aubin-Tam, M.E. (copromotor); Delft University of Technology (degree granting institution)","2022","The Christophe Danelon lab is involved in the long-term effort to construct an autonomous minimal cell using a bottom-up approach. Our goal is to achieve self-maintenance, self-reproduction, and evolution of a liposome compartment containing a minimal genome and a cell-free gene expression system. Self-reproduction requires splitting of the mother compartment into two daughter cells. The project described in this dissertation is part of the group’s attempts to create a minimal division unit for the synthetic cell. The development of a gene-driven, controllable, content-preserving liposome division strategy is an ongoing, challenging task. Here, we reconstituted some of the organizational mechanisms for division of Escherichia coli in a cell-free system. In E. coli, cytokinesis is mediated by a multiprotein complex that forms a contractile ring-like structure at the division site. The ring is composed of the cytosolic filament-forming protein FtsZ, as well as its membrane anchoring proteins FtsA and ZipA. The Min system assists in the ring localization at mid-cell by oscillating from pole to pole. Using liposomes as a synthetic compartment and the PURE system for cell-free gene expression, we reconstituted membrane-bound cytoskeletal structures and oscillating gradients of Min proteins for liposome constriction and dynamic organization of FtsZ filaments.","Synthetic biology; liposomes; synthetic cell; cell-free gene expression; cell division; FtsZ; Min system","en","doctoral thesis","","978-90-8593-518-6","","","","","","","","","BN/Christophe Danelon Lab","","",""
"uuid:943cabcf-697f-4e82-8d51-480d0f171496","http://resolver.tudelft.nl/uuid:943cabcf-697f-4e82-8d51-480d0f171496","Designing an active recommender framework to support the development of reasoning mechanisms for smart cyber-physical systems","Tepjit, S. (TU Delft Internet of Things)","Horvath, I. (promotor); Delft University of Technology (degree granting institution)","2022","Our research concentrated on a newly emerged problem related to smart cyber-physical systems (S-CPSs). The essence of the problem is that S-CPSs are based on reasoning mechanisms that enable them to generate context-dependent solutions for various application problems. Typically, development of application-specific reasoning mechanisms (ASRMs) is a challenge for all stakeholders, including software developers, knowledge engineers, and application developers. The research was conducted based on the global research hypothesis that an active recommendation framework (ARF), a computer-aided design support tool with new features including the coupling of process monitoring and decision support functionalities for the compositional design of ASRMs of S-CPSs, had not yet been proposed. The whole research project was methodologically framed by a logical flow of four research cycles. Each research cycle addressed different aspects of the ARF development. The selected application context was a specific part of the design process of an automated parking assist system (APAS), as the target ASRMs. The research activities included: (i) knowledge aggregation, demarcation of the domain of interest, and specification of requirements; (ii) functional and architectural conceptualization of the ARF; (iii) computational implementation and operationalization of the demonstrative modules; and (iv) validation of the usefulness of the recommendations generated by the implemented demonstrative modules of the ARF. 
Initially proposed by researchers of the hosting Section of Cyber-Physical Systems Design, the concept of an ARF was regarded as a significant novelty and is supposed to play an influential role in the future. The term “framework” was used to refer to a purposeful enabler that arranges and rationalizes design activities, information processing, and designer-system interaction. The term “recommender” expresses that, as a complex system, the ARF derives context-dependent advice for the designer based on a comprehensive system model of the concerned (specific) design process. The term “active” refers to the fact that the ARF continuously monitors the design process and spontaneously interacts with the designer wherever it is needed in the design process. Our conclusions have been that the ARF goes well beyond the concepts of traditional SEFs and static recommender systems.","smart cyber-physical systems; application specific reasoning mechanisms; active recommender frameworks; context-sensitive recommendation; automated parking assist system","en","doctoral thesis","","978-9-46-384299-0","","","","","","","","","Internet of Things","","",""
"uuid:206d8f37-4d4c-4af9-9da3-8d57fce769e8","http://resolver.tudelft.nl/uuid:206d8f37-4d4c-4af9-9da3-8d57fce769e8","Wave overtopping processes for very mild sloping and shallow foreshores","Nguyên, Hà (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Hofland, Bas (copromotor); Delft University of Technology (degree granting institution)","2022","A sea-dike system is of importance for the protection of the hinterland. However, the effect of very gentle and shallow sloping foreshores (in the order up to 1 in 1000) on wave overtopping processes has not yet been quantified enough and, thus, so far is not well understood. This dissertation seeks an answer to the question of whether the existing formulae are suitable for these slopes, and which approach is the most appropriate to quantify the wave overtopping discharge and volume for these typical (e.g. gentle and shallow) slopes. Along the way, comparative research between the results of numerical analysis and those of existing studies is conducted. Subsequently, new empirical equations are formulated, using a series of test cases in the numerical modeling for this type of slopes, using a Least Squares method.
First, recent state-of-the-art research for wave overtopping behavior and the relevant parameters is investigated.","SWAN; SWASH; wave overtopping discharge; LF waves/ infragravity waves; spectral wave period; (very) shallow foreshore; very gentle foreshore; Weibull parameters; overtopping volume","en","doctoral thesis","","9789463665247","","","","","","","","","Coastal Engineering","","",""
"uuid:2a66f780-cfc3-4749-a704-ffac3d267d02","http://resolver.tudelft.nl/uuid:2a66f780-cfc3-4749-a704-ffac3d267d02","Supply-Insensitive Frequency Synthesis for an LDO-Free Powering Scheme in SoCs","Chen, Y. (TU Delft Electronics)","Staszewski, R.B. (promotor); Babaie, M. (copromotor); Delft University of Technology (degree granting institution)","2022","The scaling of CMOS technology in deep submicron process nodes is accompanied by the integration of more and more functional blocks of a system, whether digital or analog/RF, onto the same chip (i.e., system-onchip, SoC). These blocks would also place different requirements on their power supplies. To provide various static or dynamically controlled supply voltages needed by the SoC, a dedicated power management unit (PMU) is typically deployed. Following the same trend of system integration, implementing the PMU on-die is also highly desired. The core of a PMU consists of several sets of voltage regulators that convert the output level of the energy source to the multiple supply voltages required by the integrated system. The DC-DC converters (switching regulators) and the low-dropout (LDO) linear regulators are normally employed in cascade for high power efficiency and for suppressing ripple amplitude demanded by the supply sensitive blocks, respectively. Although much effort has been devoted to the research on the design of fully integrated, or the so-called ‘capacitor-less’ LDOs, little has been done for the co-analysis and co-design of these LDOs with the load circuitry they power. Frequency synthesizers have found wide applications in various systems. The phase-locked loop, as one of the most commonly used frequency synthesis techniques, modulates the oscillator in a feedback manner to generate the desired loop output. The first part of this thesis (Chapters 2 and 3) focuses on the power efficiency of the capacitor-less LDO when powering a PLL. 
Since the PLL, especially its oscillator, is sensitive to supply perturbations, the LDO should provide a high power supply rejection (PSR) at the ripple frequency, which could be in the range of several to tens of megahertz for integrated DC-DC converters, with a low output noise. The dropout voltage of the LDO is then determined by the required PSR, consuming extra voltage headroom and degrading the efficiency of the system. The tolerable output noise generally limits the minimum quiescent current consumed by the error amplifier (EA) and the feedback resistors in the LDO. Owing to the stringent requirement on the supply noise imposed by the oscillator in order to preserve its inherent phase noise performance, the efficiency of the corresponding LDO would be further degraded by a large factor due to its quiescent current consumption. Based on the analysis above, it is deemed beneficial to power the PLL directly from the output of DC-DC converters. Taking this a step further, different scenarios of powering the SoC are also identified and briefly discussed at the end of the first part. To enable the proposed direct connection, the modules in the PLL should be able to tolerate the output ripples from such converters. In the second part of this thesis (Chapters 4 and 5), a fractional-N digitally intensive PLL (DPLL) architecture capable of maintaining its performance under a large (i.e., 50mVpp) supply ripple is developed. The digital implementation is selected due to its ability to incorporate various digital calibration techniques with relative ease. The supply pushing of the LC oscillator used in the DPLL is suppressed by the proposed feed-forward ripple replication and cancellation technique, which replicates the supply ripple to the gate of its tail current source with a proper gain, stabilizing the oscillator tail current, and correspondingly, the oscillation swing.
The optimal gain is calibrated on-chip through amplifying the oscillation amplitude variation and locating the control setting corresponding to the minimum value. The time error between the reference and the divided output of the oscillator is linearly converted into the voltage domain through the current-mode supply-insensitive slope generator, with its input range being halved by resampling the output of the multi-modulus divider (MMDIV), driven by second-order ΣΔ modulation, with both edges of the oscillation signal. The output of a current DAC operating in parallel is also cascaded with the slope generator during phase detection to limit the dynamic range of the SAR ADC used to digitize the phase error. A low-power ripple pattern estimation and cancellation algorithm is also inserted at the ADC output to remove the effect of the output delay perturbations of loop components under the supply ripple. Employing all these techniques, the proposed DPLL demonstrates, for the first time, acceptable performance while operating under a large 50mVpp supply ripple.","Frequency synthesizer; digital phase-locked loop (DPLL); LC oscillator; LDO; PMU; SoC","en","doctoral thesis","","","","","","","","","","","Electronics","","",""
"uuid:2dda1059-5567-40d4-b95d-94e66ed7a108","http://resolver.tudelft.nl/uuid:2dda1059-5567-40d4-b95d-94e66ed7a108","Microstructure of vanadium micro-alloyed steels for automotive applications: Interaction of precipitation with austenite-to-ferrite phase transformation studied by SANS and neutron diffraction","Ioannidou, C. (TU Delft Team Erik Offerman)","Offerman, S.E. (promotor); van Well, A.A. (copromotor); Delft University of Technology (degree granting institution)","2022","The focus of the present work is on the micro-alloying element vanadium, which is well known for providing precipitation strengthening to steels and which has, therefore, attracted a lot of interest the last decades. Vanadium carbide precipitation can take place in the migrating austenite/ferrite interface during the austenite-to-ferrite phase transformation, i.e. interphase precipitation, and in ferrite. Due to the beneficial contribution of the vanadium carbides to the mechanical properties of the steel and the necessity to make optimum use of vanadium, it is critical to understand and quantify the vanadium carbide precipitation and its interaction with the austenite-to-ferrite phase transformation.
We study the precipitation kinetics of vanadium carbides and its interaction with the phase transformation kinetics in vanadium micro-alloyed steels that differ in vanadium and carbon concentrations and that have undergone different isothermal annealing treatments. In Chapter 1, the introduction to the research topic and the scope of this thesis are described. The novelty of our research is the use of advanced neutron scattering techniques i.e. Neutron Diffraction and Small-angle Neutron Scattering (SANS), coupled to Atom Probe Tomography (APT) and Transmission Electron Microscopy (TEM), to study model vanadium micro-alloyed steels during heat-treatments. The combination of neutron diffraction and SANS to study, simultaneously and in-situ, the interaction between the phase-transformation and precipitation kinetics is unique, as is the furnace that is designed and developed for these in-situ measurements. The results provide fundamental insight into the role of vanadium on the phase-transformation and precipitation kinetics, which is deemed essential for the development of micro-alloyed steels with reduced amounts of alloying elements without compromising properties.
In Chapter 2, the vanadium carbide precipitation kinetics and its interaction with the phase transformation kinetics is investigated. Two micro-alloyed steels that differ in vanadium and carbon concentrations by a factor of two, but have the same vanadium-to-carbon atomic ratio of 1:1 are studied. Dilatometry is used for heat-treating the specimens and studying the phase-transformation kinetics during isothermal annealing at 900 °C, 750 °C and 650 °C for up to 10 h. Samples annealed for different holding times are used for ex-situ SANS, TEM and APT to study the precipitation kinetics. Vanadium carbide precipitation is only observed during or after the austenite-to-ferrite phase transformation at 650 °C and not during annealing at 900 °C and 750 °C. The precipitate volume fraction and mean radius continuously increase as the holding time increases, while the precipitate number density starts to decrease after 20 min, which corresponds to the time at which the phase transformation has finished. This indicates that nucleation and growth are dominant during the first 20 min, while later precipitate growth and coarsening take place. TEM indicates the presence of spherical/slightly ellipsoidal precipitates in all steels after annealing at 650 °C and APT shows gradual changes in the precipitate chemical composition during annealing at 650 °C, which finally reaches a 1:1 atomic ratio of vanadium-to-carbon in the core of the precipitates after 10 h.
Chapter 3 introduces a custom-made furnace designed and built by our group at TU Delft. It is able to facilitate in-situ and simultaneous neutron diffraction and SANS measurements during heat-treatments of metals. In-situ and simultaneous studies on phase-transformation and precipitation kinetics are necessary in order to gain an in-depth understanding of the nucleation and growth of precipitates in relation to the evolution of austenite decomposition at high temperatures. Precipitation, occurring during solid-state phase transformations in micro-alloyed steels, is generally studied through TEM, APT and ex-situ SANS measurements. The advantage of SANS over the other two characterization techniques is that it allows for the quantitative determination of size distribution, volume fraction, and number density of a statistically significant number of precipitates within the resulting iron matrix at room temperature. However, individual ex-situ SANS measurements do not provide information regarding the correlation between interphase precipitation and phase transformations. The presented furnace is, thus, developed for in-situ studies in which SANS measurements can be performed simultaneously with neutron diffraction measurements during typical high-temperature thermal treatments for steels. The furnace is capable of carrying out thermal treatments involving fast heating and cooling as well as high operation temperatures (up to 1200 °C) for a long period of time with accurate temperature control in a protective atmosphere and in a magnetic field of up to 1.5 T. The characteristics of this furnace give the possibility of developing new research studies for better insight into the relationship between phase-transformation and precipitation kinetics in steels and also in other types of materials containing nano-scale microstructural features.
In Chapter 4, in-situ SANS is used to determine the time evolution of the chemical composition of precipitates at 650 °C and 700 °C in three micro-alloyed steels with different vanadium and carbon concentrations. The evolution of the ratio of the nuclear to magnetic SANS component is used for this analysis. The samples are heat-treated in the furnace presented in Chapter 3. Precipitates with a distribution of sub-stoichiometric carbon-to-metal ratios in all steels are detected. The precipitates have a high iron content at the early stages of annealing, which is gradually being substituted by vanadium during isothermal holding. Eventually a plateau in the composition of the precipitate phase is reached. Faster changes in the precipitate chemical composition are observed at the higher temperature in all steels. We found that the addition of vanadium and carbon to the steel has an accelerating effect on the evolution of the precipitate composition. Addition of vanadium to the nominal composition of the steel increases the concentration of vanadium in the precipitates, reduces the iron concentration and leads to a smaller carbon-to-metal ratio. APT measurements prove the presence of precipitates with a distribution of carbon-to-metal ratios, ranging from 0.75 to 1, after 10 h of annealing at 650 °C or 700 °C in all studied steels.
In Chapter 5, in-situ neutron diffraction and SANS are employed for the first time simultaneously in order to reveal the interaction between the austenite-to-ferrite phase-transformation and the precipitation kinetics in-situ in vanadium micro-alloyed steels. The neutron scattering measurements are performed in three steels with different vanadium and carbon concentrations during isothermal annealing treatments at 650 °C and 700 °C for 10 h. The furnace introduced in Chapter 3 is used for the heat treatments. The austenite-to-ferrite phase-transformation and precipitation kinetics are quantified and the interaction between these two phenomena is explained. We show that the phase transformation is completed during the 10 h annealing treatment in all cases and that it is faster at 650 °C than at 700 °C for all alloys. Our analysis shows that additions of vanadium and carbon to the steel composition cause a retardation of the phase transformation and the effect of each element is explained through its contribution to the Gibbs free energy dissipation. The phase transformation is found to initiate the vanadium carbide precipitation. The presence of ellipsoidal precipitates is confirmed by TEM, contributing to the SANS data analysis. Larger and fewer precipitates are detected at the higher temperature in all three steels, and a larger number density of precipitates is detected in the steel with higher concentrations of vanadium and carbon. The effect of the precipitation kinetics on the phase-transformation kinetics is also discussed.
An important outcome is that the external magnetic field applied during the experiments, necessary for the SANS measurements, causes a delay in the onset and time evolution of the phase transformation and, consequently, in the precipitation kinetics.","vanadium micro-alloyed steel; in-situ measurements; Small-Angle Neutron Scattering; Neutron Diffraction; phase-transformation kinetics; Precipitation Kinetics","en","doctoral thesis","","","","","","","","","","","Team Erik Offerman","","",""
"uuid:06ff0b5e-d5da-4149-a90f-62064c29f238","http://resolver.tudelft.nl/uuid:06ff0b5e-d5da-4149-a90f-62064c29f238","Molten Metal Oscillatory Behaviour in Advanced Fusion-based Manufacturing Processes","Ebrahimi, Amin (TU Delft Team Marcel Hermans)","Richardson, I.M. (promotor); Kleijn, C.R. (promotor); Delft University of Technology (degree granting institution)","2022","The growing demand for manufactured products with complex geometries requiring advanced fusion-based manufacturing techniques emphasises the importance of process development and optimisation to reduce the risk of adverse outcomes, which is currently impeded with traditional approaches (trial and error experiments). Development, optimisation and qualification of such procedures are often expensive and time-consuming, particularly when new materials or new material combinations are involved. Process stability is intrinsically linked to the stability of the molten metal melt-pool, which ideally should solidify in a smooth and continuous manner to produce a consistent product, free of undesirable geometric and metallurgical defects. The influence of material properties and process conditions on melt-pool stability are generally difficult to derive from experimental observations; hence process optimisation is often reliant on a trial-and-error approach, mitigated to a large extent by a considerable body of industrial experience.
The challenge addressed in this research is to develop a simulation-based approach to assess the stability of oscillating melt-pools in fusion welding and additive manufacturing, to minimise the number of trial-and-error experiments required for process development and optimisation, which ultimately will lead to shortening the time between design and production. The computational model developed in the present work has a generic construction with specific process influences addressed through appropriate boundary conditions, avoiding the necessity to integrate melt pool and detailed process descriptions in a single simulation. The model is therefore capable of representing a wide range of welding and additive manufacturing technologies through selection of appropriate material properties and boundary conditions. The robustness of the present computational model in predicting the melt-pool behaviour is demonstrated by comparing the numerical predictions with experimental, analytical and numerical data.
Focusing on numerical simulations of solidification and melting using the enthalpy-porosity method, the influence of the permeability coefficient (also known as the mushy-zone constant) on the numerical predictions, which is employed to dampen fluid velocities in the mushy zone and suppress them in solid regions, is systematically analysed for both isothermal and non-isothermal phase-change problems. For isothermal phase-change problems, reducing the cell size diminishes the influence of the mushy-zone constant on the results and the solution becomes independent of the mushy-zone constant for fine enough meshes. Numerical predictions of non-isothermal phase-change problems are inherently dependent on the mushy-zone constant. A method is proposed, based on a Péclet number, to predict and evaluate the influence of the permeability coefficient on numerical predictions of solidification and melting problems.
In many numerical studies in the literature, the transport coefficients of the material, specifically thermal conductivity and viscosity, are artificially increased by a so-called `enhancement factor' to achieve agreement between experiments and numerically predicted melt-pool sizes and solidification rates. However, the use of an enhancement factor has little physical meaning, does not represent the physics of complex transport phenomena and can significantly affect the numerical predictions. The effects of using enhancement factors on the numerical predictions of melt-pool behaviour in fusion welding and additive manufacturing are studied in detail. Moreover, the effects of employing temperature-dependent material properties on the numerical predictions are discussed in the present thesis.
Melt pools in fusion welding and additive manufacturing exhibit highly non-linear responses to variations of process parameters and are very sensitive to imposed boundary conditions. Temporal and spatial variations in the energy-flux distribution, which are often neglected in numerical simulations, are taken into account in the present work. It is shown how deformations of the melt-pool surface, due to fluid motion as well as changes in the system orientation, affect the numerical predictions of thermal and fluid flow fields. The effects of joint shape design on melt-pool behaviour during fusion welding are also studied in the present work.
Changes in power-density and force distributions affect the thermal and fluid flow fields on the melt-pool surface, which in turn can influence the pool shape. Oscillations relate strongly to the shape and size of the melt-pool and the surface tension distribution on the molten material surface. Using the simulation-based approach developed in the present work, the frequency and amplitude of melt-pool oscillations and changes in the oscillation modes are predicted, which are not accessible using published analytical models and are generally difficult to measure experimentally. Additionally, using the proposed simulation-based approach, the need for triggering of the melt-pool oscillations is obviated, since even small surface displacements are detectable, including displacements that are not perceptible to the current measurement devices employed in experiments.
The dynamic features of the oscillation signals cannot easily be derived employing conventional Fourier transform (FT) analysis, since FT analysis assumes the oscillation signals to be stationary (i.e. the behaviour of the system is linear and time-invariant), which is often not the case in fusion welding and additive manufacturing. The continuous wavelet transform (CWT) has been employed in the present work to overcome the shortcomings of the conventional fast Fourier transform (FFT) analysis in characterising the non-stationary features of the surface oscillation signals received from the melt pool. Employing the continuous wavelet transform, the time-resolved melt-pool surface oscillation signals obtained from the numerical simulations can be decomposed into time and frequency spaces simultaneously.
The simulation-based approach developed in the present work addresses some of the significant challenges involved in assessing the melt-pool stability for process development and optimisation. The numerical predictions of the present computational model enhance the current understanding of the process behaviour, which is often very challenging to achieve from experiments alone. Moreover, the present simulation-based approach can be employed to explore the design space and reduce the costs associated with process development and optimisation.","Materials processing; Fusion welding and additive manufacturing; Process design and optimisation; Melt pool behaviour; Computational modelling","en","doctoral thesis","","9789464237412","","","","","","","","","Team Marcel Hermans","","",""
"uuid:a2f06688-c74a-4160-ac50-62fecc9a5f8b","http://resolver.tudelft.nl/uuid:a2f06688-c74a-4160-ac50-62fecc9a5f8b","Investigation of Temperature Tolerance in Saccharomyces cerevisiae under Anaerobic Conditions","Lip, K.Y.F. (TU Delft OLD BT/Cell Systems Engineering)","van Loosdrecht, Mark C.M. (promotor); van Gulik, W.M. (copromotor); Delft University of Technology (degree granting institution)","2022","Saccharomyces yeasts are common workhorses for the production of alcoholic beverage and bio-ethanol. In these production processes, temperature is one of the predominant factors determining the operational costs of the industrial fermentation processes and the quality of the products (alcoholic beverage) because it influences the functioning of cellular activities in yeast cells. Saccharomyces yeasts in natural habitats have a wide range of difference in temperature optimal due to the geographic difference of habitats. Saccharomyces yeasts in an artificial environment, such as industrial fermentation processes, adapt to the conditions and have a temperature optimal close to the condition of the production process. In chapter 2, various Saccharomyces yeasts were compared for their growth performance between 12°C and 40°C wherein we selected two industrial strains which grew the fastest at sub- (12-27°C) and supra-optimal (33-40°C) temperatures. A top-down holistic approach was used to evaluate the temperature impact on cell growth, meaning the substrate uptake rates, (by)products production rates, as well as cellular protein and storage carbohydrates accumulations were measured. To do so, the two selected industrial strains and a laboratory strain were grown in anaerobic chemostat cultures at 12, 30, and 39°C to separate the growth rate effect from temperature effect. 
Significant differences in biomass and ethanol yields on glucose, as well as in total biomass protein and storage carbohydrate contents, were observed between strains and cultivation temperatures.","Yeast physiology; biotechnology; multiomics; Bioreactor; Evolution strategy","en","doctoral thesis","","978-94-6384-310-2","","","","","","","","","OLD BT/Cell Systems Engineering","","",""
"uuid:877bed45-d775-40bb-bde2-d2322cb334f0","http://resolver.tudelft.nl/uuid:877bed45-d775-40bb-bde2-d2322cb334f0","Decisions on life-cycle reliability of flood defence systems","Klerk, W.J. (TU Delft Hydraulic Structures and Flood Risk)","Delft University of Technology (degree granting institution)","2022","Many countries rely on flood defence systems to prevent economic damage and loss-of-life due to catastrophic floods. Asset managers of flood defence systems need to cope with the consequences of structural degradation, and changing societal and environmental conditions, in order to satisfy performance requirements and optimize societal value of flood defence assets. This is a continuous effort of planning, executing and evaluating a variety of different system interventions. These can be aimed at both reducing the uncertainty on (e.g., inspection or monitoring), or improving the performance of a flood defence system (e.g., reinforcement). Performance is typically expressed as the reliability on a system level, which in this thesis is interpreted as the life-cycle reliability: the estimated reliability with all foreseen interventions in time. The key objective of this thesis is to improve decisions on life-cycle reliability of flood defence systems. This is elaborated for three key topics, with a focus on earthen flood defences (also known as levees or dikes)...","flood defences; levees; asset management; risk-based decision making; Bayesian decision theory; inspection; maintenance; uncertainty reduction; reinforcement; optimization; reliability","en","doctoral thesis","","978-94-6384-313-3","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:cfed8540-76cf-4d35-b528-b03230ef98e0","http://resolver.tudelft.nl/uuid:cfed8540-76cf-4d35-b528-b03230ef98e0","SAVing the Internet: Measuring the adoption of Source Address Validation (SAV) by network providers","Lone, Q.B. (TU Delft Organisation & Governance)","van Eeten, M.J.G. (promotor); Hernandez Ganan, C. (copromotor); Delft University of Technology (degree granting institution)","2022","IP spoofing is the act of forging source IP addresses assigned to a host machine. Spoofing provides users the ability to hide their identity and impersonate another machine. Malicious users use spoofing to invoke a variety of attacks. Examples are Distributed Denial of Service (DDoS) attacks, policy evasion and a range of application-level attacks. Despite source IP address spoofing being a known vulnerability for at least 25 years, and despite many efforts to shed light on the problem, spoofing remains a popular attack method for redirection, amplification and anonymity. Defeating these attacks requires operators to ensure that their networks filter packets with spoofed source IP addresses. This is a Best Current Practice (BCP), known as Source Address Validation (SAV). Yet, widespread SAV adoption is hindered by a misalignment of incentives: networks that adopt SAV incur the cost of deployment, while the security benefits diffuse to all other networks. The challenges posed by SAV adoption exemplify the failure of traditional governance models to provide solutions in the Internet ecosystem. Policy interventions usually require transparency in measurements to quantify and assess the vulnerability landscape. However, measuring SAV requires a vantage point inside the network or in the upstream provider of the network. 
Once a packet with a spoofed source address leaves the upstream network provider, it is almost impossible to ascertain its origin...","SAV; Source Address Validation; BCP38; BCP84; DDoS; cybersecurity; incentive; ISPs; notification","en","doctoral thesis","","978-94-6419-468-5","","","","","","","","","Organisation & Governance","","",""
"uuid:a491fb7e-119b-406d-8708-c0253c94b95a","http://resolver.tudelft.nl/uuid:a491fb7e-119b-406d-8708-c0253c94b95a","An agent-based exploration of complex heat transitions in the Netherlands","Luteijn-Nava Guerrero, G.D.C. (TU Delft Energie and Industrie)","Lukszo, Z. (promotor); Hansen, Helle H. (promotor); Korevaar, G. (copromotor); Delft University of Technology (degree granting institution)","2022","In the Netherlands, a complex heat transition is taking place. Currently, the country's built environment largely relies on natural gas for heating. By 2050, the housing sector should, in principle, be free of this fuel. Changes in laws, policies, regulations, and technical solutions to achieve this goal will be challenging. Their implementation will require coordination and even cooperation between building owners (i.e. group decisions), as well as taking into account households' bounded financial rationality, households' heterogeneous decision criteria and preferences, and uncertainties introduced by changes in formal institutions.
This dissertation explores these challenges from the perspectives of socio-technical systems, complex adaptive systems, and complex systems engineering. It addresses the question: How could the heat transition in the Netherlands be influenced by homeowners’ individual and group decisions regarding investment in heating systems and insulation measures? Agent-based modelling, informed by recent policy developments and scientific literature, is the main method used for answering this question. This dissertation takes the application of this method to explore the heat transition in the Netherlands one step further.
“An agent-based exploration of complex heat transitions in the Netherlands” is relevant for the following three audience groups. Firstly, researchers who develop computational models to study socio-technical transitions, and in particular, heat transitions in the Netherlands. Secondly, practitioners who develop or use those computational models to offer advice to different actors. Finally, this research is relevant for anyone interested in enabling heat transitions in the Netherlands, from households and neighbourhoods who are the end users of technologies, to public actors discussing and designing policy interventions.","Built environment; Residential; Households; Thermal; Insulation; Complex adaptive systems; Socio-technical systems; ABM; group decisions; Investment; Homeowner associations","en","doctoral thesis","","","","","","","","","","","Energie and Industrie","","",""
"uuid:f10dc34b-2d8f-4e77-a42a-2de17d2fa8da","http://resolver.tudelft.nl/uuid:f10dc34b-2d8f-4e77-a42a-2de17d2fa8da","The role of the native environment on membrane bioenergetics: A physiological perspective","Godoy Hernandez, A. (TU Delft BT/Biocatalysis)","Hanefeld, U. (promotor); McMillan, D.G.G. (copromotor); Delft University of Technology (degree granting institution)","2022","The rise of multi-drug resistant bacterial infections is a worldwide growing concern. Despite being one of the greatest medical advances of the 20th century, classical antibiotics are no longer effective against such organisms. Unfortunately, patients infected with drug-resistant bacteria are more likely to experience worse clinical outcomes, as well as have higher mortality rates. The bacterial respirasome is one of the most promising target spaces in the search for novel antibiotics, since many pathogenic organisms display unique respiratory adaptations: e.g., a change in the pathogens’ metabolism upon infection of the host. More importantly, these adaptations are often essential for the cell viability of the pathogen, what makes them excellent candidates for targeted inhibition.
These drug targets are frequently membrane-bound respiratory proteins. For this reason, their biochemical and biophysical behavior differs significantly from that of cytoplasmic or other water-soluble proteins, making them notoriously difficult to study. Most remarkably, membrane proteins are amphipathic: they interact with the aqueous environment while also being surrounded by a highly hydrophobic environment. The lipid composition is extremely complex, comprising lipids of different polarities, hydrophobic electron carriers, etc. Altogether, this constitutes a multicomponent system that must be considered if we aim to design efficient drugs that will interact with these enzymes (i.e., inhibit them) in their native environments.","","en","doctoral thesis","","","","","","","","","","","BT/Biocatalysis","","",""
"uuid:657a35f4-01ad-4d62-b498-33b05cc46452","http://resolver.tudelft.nl/uuid:657a35f4-01ad-4d62-b498-33b05cc46452","Architectural Robotics: Bridging the Divide between Academic Research and Industry","Feringa, J. (TU Delft Architectural Engineering)","Oosterhuis, K. (promotor); Bier, H.H. (copromotor); Delft University of Technology (degree granting institution)","2022","The presented research addresses the question of how to bridge the divide between academic research and industry in the domain of by investigating aspects of computational design , robotic fabrication and their impact on practice via a series of experiments and case studies. Research sub-questions related to the architectural model , geometry and implicit fabrication are investigated in relation to topology optimization, robotic hotwirecutting, and robotic hot-blade cutting...","abrasive wire-cutting; analemma; architectural robotics; BIM; computational architecture; computational geometry; daylight simulation; diamond wire cutting; elastica; evolutionary fabrication; evolutionary computation; formwork; hotwire; hotblade; levelsets; implicit fabrication; medial-axis; ruled surfaces; stereotomy; surface segmentation; toolpath; topology optimization; Turing completeness","en","doctoral thesis","","","","","","","","","","","Architectural Engineering","","",""
"uuid:3377329a-8c7f-4b67-84a0-4a64fe56c0a8","http://resolver.tudelft.nl/uuid:3377329a-8c7f-4b67-84a0-4a64fe56c0a8","Homogeneous reduction by bifunctional manganese catalysis: a quantum chemical approach","Krieger, A.M. (TU Delft ChemE/Inorganic Systems Engineering)","Pidko, E.A. (promotor); Li, G. (promotor); Delft University of Technology (degree granting institution)","2022","","","en","doctoral thesis","","978-94-6419-442-5","","","","","","","","","ChemE/Inorganic Systems Engineering","","",""
"uuid:34442378-e3a2-4c99-865f-57be3f13b96f","http://resolver.tudelft.nl/uuid:34442378-e3a2-4c99-865f-57be3f13b96f","Automated Defect Analysis using Optical Sensing and Explainable Artificial Intelligence for Fibre Layup Processes in Composite Manufacturing","Meister, S. (TU Delft Structural Integrity & Composites; Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR))","Groves, R.M. (promotor); Stueve, J. (copromotor); Delft University of Technology (degree granting institution)","2022","In modern aircraft, structural lightweight composite components are increasingly
used. To manufacture these components in a cost-effective way, robust production systems for the manufacturing of complex fibre composite components are necessary. A rather novel but already established process for fibre material deposition is the Automated Fibre Placement (AFP) technology, which automatically places several narrow, parallel fibre tows on a mould. Typically, a component consists of several, often hundreds of, stacked layers of these fibre material strips. However, when these narrow fibre tows are placed in position, layup defects can occur and reduce the mechanical properties of the component. Hence, in safety-critical applications, such as aircraft manufacturing, a visual inspection of every single ply is mandatory. This inspection step is currently carried out by an expert via a visual examination, which requires up to 50 % of the total production time. Automating this inspection process using suitable algorithms offers great potential for increasing process efficiency. However, with the growing complexity of the respective defect analysis algorithms, their performance potentially increases, but the comprehensibility of the machine decision and of the algorithm's behaviour becomes more challenging. This is especially problematic in safety-critical applications. In addition, the data quality of recorded images is influenced by the very matte, low-reflective Carbon Fibre Reinforced Plastic (CFRP) material, which raises the uncertainty of an inspection....
A main adversity in spectral geometry processing is the expensive computational cost attached to the eigendecomposition of the Laplacian operator, before we can use the spectra and the eigenfunctions for the applications. Since analytical solutions are not known, one needs to opt for a numerical method to solve the eigenvalue problem. It is a numerically expensive computation, especially for a complex mesh. Another challenge comes from the storage requirement. Considering that the Laplace--Beltrami operator has global support, it takes a dense matrix to represent the eigenvectors. Therefore, the memory requirement for saving the eigenbasis can be high, particularly when a large number of eigenfunctions need to be stored. These challenges hinder the use of spectral methods for geometry processing applications.
In this thesis, we introduce new methods addressing the aforementioned challenges. In Chapter 2, we propose a fast algorithm that allows for approximating the smallest eigenvalues and the corresponding eigenvectors of the Laplace--Beltrami operator in just a fraction of the time needed to solve the original eigenvalue problem. We construct subspaces of the space of all functions that include low frequency functions and restrict the solution of the eigenproblem to the subspace. It enables the fast approximation of the eigenproblem, independent of the size of the original problem. Our novel scheme also enables significantly more efficient storage of the approximated eigenfunctions. We show that the approximated spectra are close to the reference spectra and that the fast approximation method benefits geometry processing applications, such as shape classification, geodesic distance computation, shape projection (e.g. filtering), and vibration modes of deformable objects.
In Chapter 3, we consider localized eigenfields of the Hodge--Laplacian, which serve as a sparse basis for the efficient design and processing of tangential fields. The basis spans subspaces of the spaces of tangential vector, $n$-vector, and tensor fields on a surface mesh. Restricting the design and processing of tangential fields to the subspace allows us to decouple the degrees of freedom we use for design and processing tasks from the complexity of the mesh representation. The construction is scalable, so we can efficiently compute and store subspaces for large meshes. We evaluate the performance of the novel method on various modeling and processing tasks in vector fields (fur design), n-vector fields (n-field design and hatching/line-art design), and tensor fields (curvature field smoothing) and show that the computation time decreases by up to two orders of magnitude compared to that of the original problem.
Chapter 4 introduces a novel multigrid method for numerically solving the Laplace--Beltrami eigenproblems on a surface mesh. Our new technique, the Hierarchical Subspace Iteration Method (HSIM), works on a hierarchy of nested vector spaces, in which the solution of the coarser level is used as an initial solution on the finer level. We construct the coarsest level such that the eigenproblems can be solved efficiently using a dense eigensolver. On every level, the prolongation operator maps the solution from the coarser to the finer level. The result then can be used as an initialization for subspace iterations to approximate the eigenpairs. This approach significantly reduces the number of iterations at the finest level, compared to the non-hierarchical subspace iteration method. We show that HSIM outperforms the Locally Optimal Block Preconditioned Conjugate Gradient method and the state-of-the-art Lanczos-based eigensolvers, such as Matlab's eigs, Manifold Harmonics, and SpectrA.
In summary, each of the chapters in this thesis proposes efficient algorithms for computing the eigendecompositions of the Laplace--Beltrami and Hodge--Laplace operators, mainly using model order reduction and multigrid approaches. These methods reduce computational costs (Chapters 1-3) and storage requirements (Chapters 1-2) for the spectral processing of scalar functions and tangential fields on surface meshes.","geometry processing; spectral methods; model order reduction; Multigrid; Laplace--Beltrami operator; vector fields","en","doctoral thesis","","","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:f55b78e3-2a98-44d5-a290-ec2ce9800823","http://resolver.tudelft.nl/uuid:f55b78e3-2a98-44d5-a290-ec2ce9800823","Nonlinear Fluid-Structure Interaction in Large-Scale Offshore Floating Photovoltaics","Xu, P. (TU Delft Ship Hydromechanics and Structures)","Huijsmans, R.H.M. (promotor); Wellens, P.R. (copromotor); Delft University of Technology (degree granting institution)","2022","Efficient and affordable solar energy is essential for reducing greenhouse gas emissions and achieving climate neutrality. Large-scale offshore floating photovoltaics (LOFPV) provide creative opportunities for clean and cost-effective renewable energy production close to populated areas. They are composed of solar modules, a membrane platform, ring, moors, anchors, cables, inverters and transformers as a general configuration. The floater ring on the edge and the highly hydroelastic membrane provide a protective environment for solar modules. Besides, the cooling effect of the ocean greatly increases energy production. The result is a highly competitive levelized cost of energy. These advantages make LOFPV ideal for addressing the rapidly increasing demand for renewable energy worldwide and releasing the vast land requirement.
In industrial practice, LOFPV is currently in the prototyping phase. The size of installed prototypes is on the order of 100 meters. A further scale-up in size is necessary to bring LOFPV into large-scale application. The enlargement increases the nonlinearity in the fluid-structure interaction (FSI) of LOFPV in waves, which is critical but seldom studied. In addition, LOFPV-wave interaction is a typical FSI problem that can be summarized as a floating sheet in waves. There are two further floating sheet applications: ice floes and very large floating structures. Within the topic of floating sheets, fully nonlinear analysis is rare. Therefore, this dissertation has a twofold objective: to identify and analyze nonlinear properties of LOFPV in waves in particular, and to propose analytical approaches for nonlinear FSI problems of floating sheets in general. Analytic solutions are preferable because they offer physical insights qualitatively and fast estimation quantitatively.
As a first approximation without waves, the membrane platform of LOFPV is modeled as a clamped circular plate subjected to a non-zero mean oscillating load. Our interest is in its nonlinear vibration under this load. A set of coupled Helmholtz-Duffing equations is obtained by decomposing the static and dynamic deflections and employing the Galerkin procedure. The static deflection is parameterized in the linear and quadratic coefficients of the dynamic equations. The effects of the static load on the dynamics, i.e., stiffening, asymmetry, and softening, are identified by means of the numerical solution of the coupled multi-mode system. An analytical solution of the single-mode vibration near primary resonance is derived. The analytical solution provides a theoretical explanation and quick quantification of the influence of the static load on the dynamics. The numerical and analytical results compare well, especially for lower values of the static deflection, confirming the effectiveness of the analytical approach.
The semi-nonlinear FSI of LOFPV in free surface waves is presented as an extension of the existing FSI literature. The floating structure is modeled as a nonlinear Euler Bernoulli-von Karman (EBVK) beam coupled with the water beneath, which is represented by linear potential theory. By introducing the wave steepness squared as the perturbation, the multi-time-scale perturbation method leads to hierarchic partial differential equations. The analytical solution of the proposed nonlinear FSI model is obtained up to second order. Pontoon structures and LOFPV are studied and compared. The asymptotic solution to the semi-nonlinear FSI model demonstrates that progressive FSI waves remain linear at the primary order and are corrected by the nonlinearity at the second order. In contrast to previous literature, it is also theoretically proven that no resonance occurs in the considered wave propagation problem in such an infinite domain.
The fully nonlinear FSI wave on LOFPV is a further extension of our approach. In this fully nonlinear model, the Euler Bernoulli-von Karman beam models the structure while potential flow represents the fluid. A set of coupled dynamical equations is established. A fully analytic solution is sought with the unified Stokes perturbation method. The characteristic equation is derived up to third order, which introduces amplitude-dispersion in the coupled model. To our knowledge, it is the first time that a third-order nonlinear solution to the floating sheet problem is reported in the literature. The expressions obtained from the solution are applied to two typical cases of a pontoon LOFPV and a membrane LOFPV, with physical parameters from recently published articles. The comparison with the literature demonstrates our methodology for the membrane-type LOFPV, which is more flexible, in waters of arbitrary depth, and for the pontoon-type, which is relatively stiffer, in relatively deep waters.
This dissertation investigates several nonlinear properties of LOFPV-wave interaction theoretically. The major conclusion is that the nonlinear FSI can be identified and analyzed with our proposed analytical methods. Nevertheless, there remain open topics that deserve the attention of academia and industry. The most significant recommendation is to extend our analytical method from two dimensions to three. Industrially, we propose a study on the breaking wave impact onto a reinforced membrane and the structural hydroelastic response for the massive deployment of LOFPV. Methodologically, it is recommended to combine finite-length structure modes and free surface wave propagation, i.e., to couple a local problem with a global problem in the considered domain, for solving wave transmission and scattering problems.
This thesis focuses on a widespread collective action practice that occurs in urban space, known as the urban commons. It has been extensively described from the point of view of individual cases or from specific disciplines, yet there has been no significant attempt to generate a loosely constrained overview of this practice.
The closer proximity of the propeller and airframe requires a more dedicated integral design approach for both the airframe and the propulsion unit. The objective of this dissertation is to characterize the role of the aerodynamic interaction between the propeller and the airframe on the performance and static stability characteristics for selected aircraft configurations which aim for a beneficial propeller-airframe interaction. To this end, three different types of analyses are performed. First, fundamental phenomena are investigated which provide insight for related configurations and derivatives thereof. Second, a configuration study indicates the expected trends in various performance indicators. Finally, two detailed studies on the aircraft level demonstrate the relative importance of, and the coupling between, aerodynamic interactions.
The first configuration features propellers that are mounted on the horizontal tailplane. This is an example of a strong interaction between the propeller and airframe that affects performance, stability, and control, and it contains various interaction mechanisms that are of interest for other configurations as well. A second specific case is a distributed propulsion configuration with propellers mounted on the inboard part of the wing (in front of the high-lift devices), together with a propeller mounted on the tip of the wing.
One of the focal points of the current study is extending the understanding of nonuniform inflow effects on propeller performance and its role in aircraft stability and trim. Compared to the conventional configuration, for various unconventional propeller installations, the nonuniform inflow to the propeller differs both in type and magnitude, and varies with flight condition. The slipstream shape and consequently its interaction with lifting surfaces are affected as well. These factors directly affect the gradients and offsets of the propeller force curves and therefore the aircraft stability and trim, respectively.
By employing CFD results, a study has been performed on the sensitivity of the radial load distribution to a change in inflow condition that is expressed as a change in local advance ratio. The constructed distributions provide insight into which region of the disk is responsible for the largest changes in the propeller forces. This has been demonstrated to be the region of highest loading. It is also shown that, for a given propeller design, nonuniform inflow can be represented by an 'installation coefficient' kappa such that the efficiency curve of the uninstalled propeller is scaled along the advance ratio and efficiency directions by kappa to obtain the installed propeller efficiency. Using the data of the isolated propeller for an arbitrary blade angle, the advance ratio at which the installed case has the highest efficiency, as well as the value of the maximum efficiency, can be quantified immediately. The computationally intensive analyses to find the optimum blade angle for the installed cases are therefore redundant, and the formulation of the installation coefficient is highly valuable to the aircraft designer. The installation coefficient also gives insight into the regime of the efficiency-advance-ratio curve in which the largest changes due to nonuniform inflow occur...","propulsion integration; propeller aerodynamics; aircraft stability and control; aircraft performance; aft-mounted propellers; unconventional aircraft configurations; aircraft design; computational fluid dynamics; wind-tunnel testing","en","doctoral thesis","","978-94-6384-304-1","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:643a6936-ce92-4c3e-af0f-5c251a50abbe","http://resolver.tudelft.nl/uuid:643a6936-ce92-4c3e-af0f-5c251a50abbe","Development of robotic volumetric PIV: with applications in sports aerodynamics","Jux, C. (TU Delft Aerodynamics)","Scarano, F. (promotor); Sciacchitano, A. (copromotor); Delft University of Technology (degree granting institution)","2022","Particle image velocimetry (PIV) is the state of the art for quantitative, full-field, 3D flow diagnostics. Despite the maturity of the technique, two bottlenecks are identified which are addressed in this thesis: the achievable measurement volume size, and the optical access to geometrically complex objects. Both aspects are well illustrated when considering the human body in sports action. Characterising the aerodynamic flow topology around an athlete demands measurement volumes on the cubic-meter scale, whereas the simultaneous illumination and imaging of the flow near the athlete’s body is challenged by the geometric complexity of the human body and the sports equipment. Focusing on sport performance, especially in timed disciplines, it is recognized that due to the shape of the human body, the aerodynamic resistance is often dominated by pressure drag. Therefore, a third element addressed in this thesis is the PIV-based pressure evaluation in the flow and on an object surface.
To overcome the identified measurement constraints, a PIV system for the 3D diagnostics of large-scale and low-speed flows has been developed, synthesizing advancements in PIV imaging and illumination hardware, automation technology, tracer particle generation, and particle tracking algorithms. The so-called robotic volumetric PIV concept is proposed in Part I of this thesis, along with dedicated data analysis methods to retrieve the shape of the test object, the total pressure in the fluid flow, and the aerodynamic pressure on the object surface. Part II features applications of the proposed tools in the context of sports aerodynamics, with specific examples in cycling and swimming.
This thesis focuses on aerosol optical properties, particularly the light absorption of aerosol particles, which has significant effects on the Earth’s climate system.
This thesis starts with a general introduction to atmospheric aerosols, including their sources, categories, physical properties and measurement techniques (Chapter 1). Next, the Ultra-Violet Aerosol Index (UVAI) is introduced, which is calculated from satellite measurements of the radiance at two wavelengths in the UV. UVAI contains information on aerosol absorption, and it has a very long and almost continuous data record starting in 1978. Direct use of UVAI is challenging because it is not a geophysical quantity, but a numerical index. The objective of this thesis is to derive quantitative properties of aerosol absorption from the UVAI (e.g. single scattering albedo, absorption aerosol optical depth) that can be directly used in aerosol radiative transfer assessments. Two types of methods have been developed, i.e. physically-based methods and statistically-based methods. The first compares the observed UVAI to the one simulated by radiative transfer models. The second uses Machine Learning algorithms trained on existing data sets.
The physically-based methods have been applied to quantify the aerosol absorption of several large-scale wildfires (Chapters 2 and 3). An important challenge of these methods is that assumptions have to be made on the aerosol micro-physical properties, leading to significant uncertainties in the results, whereas the Machine Learning-based methods can avoid such assumptions. Chapter 3 investigates the feasibility of quantifying aerosol absorption from UVAI using a Machine Learning algorithm. Despite the higher computational efficiency and better results, the application of such data-driven methods is still restricted by the limited data on the aerosol vertical distribution. Therefore, in Chapter 4, a database of aerosol height is created from a chemistry transport model. This database is applied in Chapter 5, where a Deep Neural Network method is used to derive quantitative aerosol absorptive properties from the OMI/Aura UVAI for the period from 2006 to 2019. The results of the Deep Neural Network agree better with ground-based observations than both satellite retrievals and chemistry transport model simulations.
This thesis demonstrates the feasibility of deriving quantitative aerosol absorptive properties from the satellite-retrieved UVAI. We use traditional radiative transfer simulations while also investigating the new possibilities of data-driven methods in aerosol remote sensing. Although the retrieval results are encouraging, there remain limitations and challenges which need to be addressed. These are discussed in Chapter 6 with corresponding suggestions and prospects. Despite the challenges, it is expected that a synthetic database of global aerosol absorption can be derived from UVAI observations provided by multiple satellite products. Such a data set will make a great contribution to quantifying the effect of absorbing aerosols on the climate system.","Atmospheric Remote Sensing; aerosol climatology; Absorbing aerosol; Aerosol; Aerosol optical property","en","doctoral thesis","","9789463843065","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:d9491dbc-5649-4eb9-be1f-163ecf727d04","http://resolver.tudelft.nl/uuid:d9491dbc-5649-4eb9-be1f-163ecf727d04","New methods and applications of ptychography","Wei, X. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); El Gawhary, O. (copromotor); Delft University of Technology (degree granting institution)","2022","This thesis addresses new methods and applications of ptychography which is a scanning Coherent Diffraction Imaging(CDI) method for reconstructing a complex valued object function from intensity measurements recorded in the Fraunhofer or Fresnel diffr- action region. The technique provides a solution to the so-called 'phase problem' and is found to be very suitable for Extreme Ultraviolet (EUV) and X-ray imaging applications due to its high fidelity and its minimum requirement on optical imaging elements. Moreover, abundant studies show that ptychography is able to provide a wide field-of-view and retrieve the illumination probe also. During the last two decades, ptychography has been successfully demonstrated with X-ray radiation sources, electron beams and visible light sources.
Chapter 1 is an introductory chapter which gives an overview of CDI techniques. The goal is to provide the necessary knowledge so that readers with different backgrounds can easily understand the following chapters. This chapter contains three parts. In the first part we introduce the problem statement of CDI, the approximations that are commonly used in CDI, i.e. the projection approximation and the Fraunhofer approximation, and the required conditions of these approximations. This part also includes an introduction to the discrete Fourier transform, the chirp-Z transform, the issue of sampling and the coherence requirements. The second part of this chapter gives a brief introduction to iterative and non-iterative phase retrieval methods in CDI. In the final part of this chapter, we discuss the fundamentals of ptychography, which is the main topic of this thesis. We first derive an iterative ptychographic algorithm based on the steepest descent method, then explain the extended field-of-view and the ambiguities in ptychography. Some of the recent developments in ptychography are included in this part as well.
For performing phase retrieval in the EUV regime more efficiently, developing polychromatic ptychography is desirable. As an alternative to the existing ptychographic information multiplexing method, we present in Chapter 2 another scheme, in which all monochromatic exit waves are expressed in terms of the amplitude of the transmission function and the thickness function of the object. Our proposed algorithm is a gradient-based method and its validity is studied numerically. In addition, the sampling issue which appears in the polychromatic ptychography scheme and its influence on the reconstruction quality are discussed.
In Chapter 3 we investigate the performance of ptychography with noisy data by analyzing the Cram\'{e}r Rao Lower Bound (CRLB). The lower bound of ptychography is derived and numerically computed for both top-hat plane wave and structured illumination. The influence of Poisson noise on the ptychography reconstruction is discussed. The computation result shows that, if the estimator is unbiased, the minimum variance for Poisson noise is mostly determined by the illumination power and the transmission function of the object. Monte Carlo analysis is conducted to validate our calculation results for different photon flux numbers. Furthermore, the performance of the maximum likelihood method and the approach of amplitude-based cost function minimization is studied in the Monte Carlo analysis.
In Chapter 4 we present a parameter retrieval method which combines ptychography and additional prior knowledge about the object. The proposed method is applied to two applications: (1) parameter retrieval of small particles from Fourier ptychographic dark field measurements; (2) parameter retrieval of a rectangular structure with real-space ptychography. The influence of Poisson noise is discussed in the second part of the chapter. The CRLB in both applications is computed and Monte Carlo analysis is used to verify the calculated lower bound. With the computation results we report the lower bound for various noise levels and the correlation of particles in application 1. For application 2 the correlation of parameters of the rectangular structure is discussed.
The thesis is concluded with Chapter 5, where the main contributions of this thesis are listed. Furthermore, the unfinished work from my PhD and possible extensions of the topics discussed in this thesis are addressed in this last chapter.","computational imaging; ptychography; Cramér Rao Lower Bound; parameter retrieval","en","doctoral thesis","","978-94-6384-308-9","","","","","","","","","ImPhys/Optics","","",""
"uuid:69e5aa74-9dd9-4915-8882-03b70506030a","http://resolver.tudelft.nl/uuid:69e5aa74-9dd9-4915-8882-03b70506030a","Mud dynamics in the Belgian coastal zone and siltation in the harbor of Zeebrugge","Vanlede, J.D.S.M. (TU Delft Environmental Fluid Mechanics)","Winterwerp, J.C. (promotor); Delft University of Technology (degree granting institution)","2022","Ports are important drivers for economic activity. For the Port of Zeebrugge, important sectors include cars, containers and liquefied natural gas (LNG). Due to significant siltation, frequent maintenance dredging is necessary in order to ensure the nautical accessibility. For Zeebrugge, that responsibility falls on the Flemish department of mobility and public works, at a yearly cost of about 70 millions euro. This thesis aims to contribute to the body of knowledge on the mud dynamics in the Belgian Coastal Zone, on the mechanisms behind the siltation of the harbor, and on the effects of the disposal of dredged material at sea. The cohesive sediment dynamics in the Belgian Coastal Zone (BCZ) are characterized by residual transport directed towards the northeast, and by the presence of a coastal turbidity maximum (CTM) that extends between Ostend and Zeebrugge. The resulting mud deposits are a persistent feature in the BCZ, at least since the beginning of the 20th century. Baroclinic effects, tidal asymmetry and local gradients in the residual current all play a role in trapping sediment in the CTM. In this thesis, the sediment dynamics are studied using a combination of data analysis and numerical modeling. First, a dataset is analysed that consists of 51 tripod deployments over nine years (2005-2013) at locations MOW1 and Blankenberge, kindly provided by the Royal Belgian Institute of Natural Sciences (RBINS). Tidal ensembles are derived of velocity and near-bed suspended sediment concentration (SSC). 
These ensembles are used to study the vertical gradient of SSC, the influence of waves, and the seasonal variation. Subsequently, a 1DV model is set up that computes the transient vertical distribution of a single fraction of SSC, and the mud content in the bed. This model is used to study the intratidal variation of the near-bed SSC observed at Blankenberge. It is shown that a two-fraction (coarse and fine) sediment model is necessary to model both the ebb and the flood peaks of SSC. Subsequently, a 3D sediment transport model is set up. The settling velocities of the coarse and fine fractions are taken over from the 1DV model, as is the zero-order resuspension constant. The set of measurements that is available for model calibration and validation is maximized by using both the comparable tide method and tidal ensembles. The model confirms that local hydrodynamic conditions trap sediment in the CTM, and it is used to study the role of salinity-driven baroclinic currents. A sediment balance is derived to better understand the sediment dynamics in the BCZ as an open system with some closed characteristics: even though the residual sediment transport through the Dover Strait is an important sediment supply to the BCZ, the relative importance of local erosion and deposition gives it some characteristics of a closed system, such as a clay mineralogical composition that differs from that of English Channel mud.","Mud; Harbor siltation; North sea; Zeebrugge","en","doctoral thesis","","","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:647cf562-9e16-492f-aeb2-75e08fc33273","http://resolver.tudelft.nl/uuid:647cf562-9e16-492f-aeb2-75e08fc33273","The Integration of LADM and IndoorGML to Support the Indoor Navigation Based on the User Access Rights","Alattas, A.F.M. (TU Delft GIS Technologie)","van Oosterom, P.J.M. (promotor); Zlatanova, S. (copromotor); Delft University of Technology (degree granting institution)","2022","Indoor navigation applications are actively investigated and developed due to their capacity to provide users with essential information in modern, extensive building complexes. Therefore, many researchers have developed a range of indoor navigation applications, which have focused on aspects such as localization, indoor route computation, and human spatial cognition. Unfortunately, current indoor navigation systems do not consider the user's access rights when it comes to navigating safely and effectively. This thesis delivers several contributions, which are based on international standards, to provide indoor navigation that is aware of the user’s access rights.
The thesis proposes:
1) a combined model of ISO’s LADM and OGC’s IndoorGML;
2) an enhancement of the UML class diagram of the conceptual model of IndoorGML;
3) a 2D LADM country profile of Saudi Arabia;
4) a 3D LADM country profile of Saudi Arabia;
5) a conversion of the combined LADM and IndoorGML conceptual model to a technical model;
6) definitions of access rights for users of indoor environments during a crisis, based on the integrated model of LADM and IndoorGML;
7) a 3D web-based prototype application for indoor navigation making use of user access rights.
The developed concepts and implementation have been acknowledged by the standardization organizations ISO and OGC and considered for amending IndoorGML and LADM.","LADM; IndoorGML; Country Profile; Navigation; Access rights","en","doctoral thesis","","978-94-6366-520-9","","","","A+BE | Architecture and the Built Environment No. 5 (2022)","","","","","GIS Technologie","","",""
"uuid:5d6357fd-5ddf-4c48-819f-6b4b6cc7e2c9","http://resolver.tudelft.nl/uuid:5d6357fd-5ddf-4c48-819f-6b4b6cc7e2c9","Pyrimidine and hopanoid synthesis in anaerobic yeasts","Bouwknegt, J. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Daran, J.G. (promotor); Delft University of Technology (degree granting institution)","2022","Products resulting from alcoholic fermentation by microorganisms have been used by mankind for over 11,000 years. Alcoholic fermentation is involved in production of bread, alcoholic beverages such as wine and beer, distilled spirits such as vodka and rum, and bio-ethanol. The popularity of the most intensively used microorganism for these alcoholic-fermentation-related processes, baker’s yeast (Saccharomyces cerevisiae), is related to its ability to grow anaerobically. In large-scale industrial bioreactors, aeration is expensive and homogeneous oxygen concentrations are extremely difficult to obtain, especially during production of ethanol, which, during fermentation of hexose sugars, coincides with the production of equimolar amounts of CO2. Moreover, to maximize the yield of dissimilation products such as ethanol on the carbohydrate feedstock, aerobic respiration should be prevented.
A fermentative pathway is a prerequisite for anaerobic growth in yeasts, but although many yeasts can ferment sugars to ethanol, only a few are able to grow in the complete absence of oxygen. Yeasts that are able to grow anaerobically have specific nutritional requirements, which are needed to bypass biosynthetic reactions that either directly require molecular oxygen or are dependent on respiration. For example, synthesis of sterols and unsaturated fatty acids in S. cerevisiae requires oxygen. Sources of these compounds are therefore routinely included in defined media for anaerobic cultivation. Other yeast species may additionally require oxygen for other processes, including synthesis of pyrimidines, vitamins and/or redox-cofactor balancing. However, the exact oxygen requirements for industrially relevant non-Saccharomyces yeasts remain to be elucidated.
Oxygen requirements of yeasts are mostly investigated in laboratory-scale anaerobic cultivation systems. In contrast to the situation in industrial-scale bioreactors, it is extremely difficult to achieve fully anaerobic conditions in these small set-ups. Even the entry of minute amounts of oxygen may be enough to meet minimum requirements for biosynthetic processes and thereby obscure oxygen requirements that are highly relevant at industrial scale. Moreover, this challenge in experimental design is likely to have contributed to contradictory findings in the literature regarding the oxygen requirements of yeast strains and species. The research described in this PhD thesis was aimed at obtaining deeper insight into the adaptations of non-Saccharomyces yeasts that enable anaerobic pyrimidine synthesis and into the mechanisms by which such yeasts adapt to sterol depletion under anaerobic conditions.","","en","doctoral thesis","","978-94-6384-288-4","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:260cd874-c1ed-4155-bfdc-cf7fc3813ca6","http://resolver.tudelft.nl/uuid:260cd874-c1ed-4155-bfdc-cf7fc3813ca6","Aerodynamic Noise Reduction with Porous Materials: Aeroacoustics Investigations and Applications","Teruna, C. (TU Delft Wind Energy; TU Delft Flow Physics and Technology; TU Delft Aerospace Engineering)","Casalino, D. (promotor); Ragni, D. (copromotor); Avallone, F. (copromotor); Delft University of Technology (degree granting institution)","2022","Public annoyance due to noise emission in the aviation and wind energy sectors is expected to become more severe in the near future with the increasing number of civilian flights and wind turbine installations. This trend demands deeper investigation into the noise generation mechanisms and the means to mitigate them. This dissertation addresses two cases of flow-induced sound: 1) turbulent wake-body interaction, which leads to the combined tonal and broadband noise produced by the fan stage of an aircraft engine, and 2) turbulent boundary-layer trailing-edge noise that is responsible for the swishing noise from a wind turbine rotor. Although both mechanisms are inherently different, they both describe situations where noise is produced when turbulence encounters a discontinuity in the flow field, such as when turbulence in a freestream impinges on a sharp leading edge or when turbulence in a boundary layer is scattered as it flows past a sharp trailing edge. The usage of permeable/porous materials at the edge of an aerodynamic body has been proposed as a solution, since their flow permeability creates an intermediate region that alleviates the aforementioned discontinuities. However, there are questions that still need to be addressed. How exactly do these porous treatments mitigate the noise generation mechanisms? How do they affect the flow field in their vicinity? How should they be optimally designed and integrated?
To help answer these questions, a high-fidelity lattice-Boltzmann method has been employed in the present work, with which detailed flow and acoustic analyses have been performed.","trailing-edge noise; leading-edge noise; aeroacoustics; porous materials; lattice-Boltzmann method","en","doctoral thesis","","978-94-6384-309-6","","","","","","","Aerospace Engineering","Flow Physics and Technology","Wind Energy","","",""
"uuid:7c5d0326-c293-4f5a-a926-edf7358f6510","http://resolver.tudelft.nl/uuid:7c5d0326-c293-4f5a-a926-edf7358f6510","Inverse Electromagnetics for EUV mask metrology and inspection","Ansuinelli, P. (TU Delft ImPhys/Optics)","Coene, W.M.J.M. (promotor); Urbach, Paul (promotor); Delft University of Technology (degree granting institution)","2022","The importance of inverse problems is paramount in science and physics because their solution provides information about parameters that cannot be directly observed. This thesis discusses and details the application of a few inverse methods in optical imaging, metrology and inspection of lithographic targets, particularly patterned structures on top of an extreme ultraviolet (EUV) lithographic mask...","inverse problems; scatterometry; phase retrieval; EUV mask inspection and metrology","en","doctoral thesis","","9789464237009","","","","","","","","","ImPhys/Optics","","",""
"uuid:7a9909e7-472d-4da9-8637-852e9e873577","http://resolver.tudelft.nl/uuid:7a9909e7-472d-4da9-8637-852e9e873577","Resonant optical control of magnetism on ultrashort timescales","Hortensius, J.R. (TU Delft QN/Caviglia Lab)","Caviglia, A. (promotor); Kuipers, L. (promotor); Delft University of Technology (degree granting institution)","2022","Excitation of optical transitions in solids using ultrashort pulses of light allows one to selectively perturb microscopic degrees of freedom in order to change and control material properties on very short timescales. In this thesis we study how ultrafast resonant excitation of optical transitions can induce coherent structural dynamics in wide-bandgap insulators and control magnetic interactions, manipulate magnetic order and induce (propagating) spin dynamics in insulating antiferromagnets. In time-resolved all-optical pump-probe experiments, we use ultrashort pulses of light to target specific lattice vibrations, orbital resonances and electronic transitions in various insulating materials and optically probe the structural and magnetic dynamics on the picosecond timescale.
Chapter 1 provides an introduction to the field that studies ultrafast optical control of solids, with a focus on resonant optical control of magnetic properties and the generation of propagating excitations. Chapter 2 discusses the basic concepts of magnetic interactions, magnetic order and spin waves and chapter 3 contains the main experimental methods and experimental setups used in this work.
In chapter 4 we study the coherent structural dynamics initiated by ultrafast resonant excitation of an infrared-active lattice vibration in the wide-bandgap insulator LaAlO3. We observe the excitation of a coherent THz phonon mode, corresponding to rotations of the oxygen octahedra around a high-symmetry axis, and identify the underlying nonlinear phonon-phonon coupling through density functional theory calculations. The resonant lattice excitation is also shown to generate both longitudinal and transverse strain wavepackets, the result of optically induced anisotropic strain.
In chapter 5 we demonstrate that light-driven infrared-active phonons can be used to control fundamental magnetic interactions and coherently manipulate magnetic states on picosecond timescales. Resonant optical excitation of lattice vibrations in the antiferromagnet DyFeO3 results in nonthermal, ultrafast and long-lived changes in the exchange interaction between the Dy orbitals and the Fe spins. We identify phonon-induced coherent lattice distortions as the underlying mechanism and show that we can use this change in magnetic interaction to induce picosecond coherent switching from a collinear antiferromagnetic ground state to a weakly ferromagnetic phase.
Having explored the structural and magnetic dynamics following excitation of lattice vibrations, we explore the effect of optical excitation of orbital resonances in the van der Waals antiferromagnet NiPS3 in chapter 6. We demonstrate that ultrashort pulses of light, with the photon energy tuned in resonance with orbital transitions within the magnetic nickel d-orbital manifold, can excite a subterahertz magnon mode with two-dimensional behaviour. We show that this selective excitation results from a photoinduced transient magnetic anisotropy axis, which emerges in response to excitation of the ground-state electrons to orbital states with a lower orbital symmetry. Finally, we show in chapter 7 that ultrashort pulses of light can generate a wavepacket of coherent propagating spin waves in insulating antiferromagnets. The nanometer confinement of ultrafast optical excitation in resonance with electronic charge-transfer transitions in the antiferromagnet DyFeO3 creates a strongly non-uniform spatial spin excitation profile close to the material surface. This results in the emission of a broadband wavepacket of coherent subterahertz spin waves into the material. We optically probe individual spectral components of this spin wavepacket with wavelengths down to 125 nm in a time-resolved fashion using the magneto-optical Kerr effect.
Chapter 8 provides the main conclusions of the work presented in this thesis. We reflect on unanswered questions and give possible directions for future research.
The main goal of this project was to describe and quantify the pathways that sediment takes on an ebb-tidal delta. To reach this goal, we focused our analyses on Ameland ebb-tidal delta in the Netherlands. Before we could begin to tackle this challenge, we needed to develop new tools and techniques for analyzing a combination of field measurements and numerical models. These include a method for analyzing the stratigraphy and mapping the morphodynamic evolution of ebb-tidal deltas, a new metric for characterizing suspended sediment composition, and innovative use of sediment tracers. We also established a quantitative approach for looking at and thinking about sediment pathways via the sediment connectivity framework, and developed a Lagrangian model to visualize and predict these pathways efficiently.
The techniques developed here are useful in a wider range of coastal settings beyond Ameland, and are already being applied in practice. We foresee that the main impacts of this project will be to improve nourishment strategies, numerical modelling, and field data analysis. This dissertation also points forward to numerous opportunities for further investigation, including the continued development of the connectivity framework and SedTRAILS. By managing our coastal sediment more effectively, we will set the stage for a more sustainable future, in spite of the challenges that lie ahead.","sediment transport pathway; ebb-tidal delta; connectivity; Ameland; Wadden Sea; coastal dynamics; morphodynamics; hydrodynamics","en","doctoral thesis","","978-94-6384-300-3","","","","","","","","","Coastal Engineering","","",""
"uuid:59439cc6-8edb-450c-9d03-4f9bd4bbce48","http://resolver.tudelft.nl/uuid:59439cc6-8edb-450c-9d03-4f9bd4bbce48","Dynamic Response of a Submerged Floating Tunnel Subject to Hydraulic Loading: Numerical Modelling for Engineering Design","Zou, P. (TU Delft Hydraulic Structures and Flood Risk)","Uijttewaal, W.S.J. (promotor); Bricker, J.D. (copromotor); Delft University of Technology (degree granting institution)","2022","The submerged floating tunnel (SFT), also called an Archimedes Bridge, is a new type of infrastructure for wide and deep sea-crossings, regarded as one of the alternatives to bored and immersed tunnels and bridges. It floats in water, employing its buoyancy and a support system to balance its self-weight. However, no prototype SFT has yet been built anywhere due to the immaturity of scientific research and engineering technology. The dynamic response of the SFT subject to operating and extreme environmental conditions, which determines structural safety and reliability, is a crucial issue that needs to be better understood. In order to better comprehend the response of the SFT to hydrodynamic forces, key points including hydrodynamic loads acting on the SFT and the structural dynamic response for various structural configurations, as well as the relations and interactions among these factors, must be quantified.
In this study, hydrodynamic loads on various SFT cross-sectional geometries were computed. The parametric cross-section shape described by a Bezier-PARSEC curve was optimized using a hybrid Artificial Neural Network (ANN) and Genetic Algorithm (GA). The practical range of aspect ratios of the SFT cross-section was determined by conducting a sensitivity analysis under tidal current conditions. It was found that an SFT cross-section with an aspect ratio of 0.47 using a leading-edge BP curve under the given clearance is optimal for a balanced consideration of hydrodynamic performance and construction cost. Furthermore, the machine learning method used is shown to be a reliable and effective tool for SFT cross-section design optimization.
The hydrodynamic loads acting on the optimal cross-section shape were compared with simpler shapes under various environmental conditions of currents and waves, including the extreme environmental conditions of internal waves, tsunamis, and super typhoons. An internal solitary wave (ISW), described by the modified Korteweg de Vries (mKdV) theory, was adopted for the hydrodynamic loading analysis of the SFT based on field observations and high-resolution satellite images. It was found that the ISW can markedly alter the buoyancy-weight ratio (BWR) of the SFT and hence cause a large vertical hydrodynamic load on the SFT, threatening the safety and reliability of the SFT system. A worst-case tsunami and a hindcast typhoon in the Qiongzhou Strait were selected for extreme event hydrodynamic forcing analysis. It was found that extreme event hazards in the Qiongzhou Strait are rare due to the sheltering effect of Hainan Island. In terms of hydrodynamic forcing, the selected typhoon scenario is more devastating than the tsunami case for an SFT. The proposed parametric cross-sectional shape for the SFT shows better hydrodynamic performance than simpler shapes under all applied environmental conditions and is therefore recommended for the engineering design.
After investigating different types of hydrodynamic loads acting on the SFT, the global dynamic response (including vibration) of the SFT was assessed. A numerical model of a prototype super-long coupled tube-mooring-joint SFT system based on the Finite Element Method (FEM) was developed to better predict flow-induced vibration (FIV) and structural dynamic response. A pragmatic approach for structural dynamic response computation under realistic oceanic conditions was developed considering the spatial randomness of hydrodynamic loads. Multi-scale hydrodynamic models including a large-scale oceanographic model and a small-scale CFD model were developed for determination of hydrodynamic loads. It was found that the SFT tube is unlikely to experience severe resonance under steady current conditions, but the vibration of the SFT tube is dominated by wave conditions, where single dominant-mode excitation of the tube under a large wave height and period causes large-amplitude motion. In order to give insight into structural dynamic response under extreme environmental conditions, internal forcing on the SFT and structural response of the SFT were computed subject to the ISW and super typhoon loads. This showed that the displacement and acceleration of the SFT under the ISW are far smaller than the structural serviceability requirements, and resonance of the tunnel tube becomes unlikely under the ISW condition due to its rather low intrinsic frequency. The dynamic response of the SFT subject to the typhoon scenario is much more severe than that of the ISW case, and the horizontal stiffness of the moored tube greatly affects its dynamic response. The maximum bending moment and torque on the SFT occur at its shore connections, where the failure risk due to structural fatigue or buckling is substantial.
The final aspect of this thesis aims to optimize the SFT structural configuration for minimization of hydraulic resonant loading. The core concept is to investigate the sensitivity of the structural response to shifting the structural fundamental frequencies outside the hydrodynamic frequency range. It was found that natural frequencies of the SFT system are mainly affected by BWR, tunnel tube length, mooring configuration and stiffness, and joint and shore connection properties. A dynamic process for the SFT configuration optimization subject to different hydrodynamic loads can be established by smartly tuning the fundamental frequencies to mitigate structural dynamic response.
Redundant effectors can be linked together, and to the pilot input, in many ways according to different optimality criteria and/or performance objectives. In particular, the research presented in this dissertation focuses on the possibility of achieving Direct Lift Control (DLC). The latter is intended as the ability to use control effectors to alter the aircraft lift ""without, or largely without, significant change in the aircraft incidence, and ideally is meant not to generate pitching moment.""
The ability to do so is essentially dependent on the position of the Control Center of Pressure (CCoP), which is the center of pressure of the aerodynamic forces solely due to control surface deflections. In the case of a single control surface dedicated to DLC, the CCoP coincides with the control surface itself. In the case of redundant control surfaces, their deflections can be coordinated to shift the CCoP towards some preferred location, as allowed by the architecture of the aircraft and the available control effectiveness.
The first three chapters of the dissertation are dedicated to establishing the societal, scientific, and technical background underlying the subsequent research studies, including an overview of the CA problem for redundant control effectors. The following four chapters present, in this order: an evaluation of the mission performance of a staggered box-wing aircraft model designed for commercial transonic operations; a comparison of different CA methods on the design of an optimum control surface layout for a box-wing aircraft, with control surfaces both fore and aft of the aircraft center of gravity; a trim problem formulation which employs forces and moments due to the aircraft control surfaces as decision variables, to maximize control authority, minimize aerodynamic drag, or obtain a prescribed pitch angle; a CA-based formulation aimed at altering the characteristics of the transient response of an aircraft by exploiting the properties of the CCoP.
The conclusive chapter presents a comprehensive, top-level recap of the main aspects and topics covered within the dissertation. It reflects on the classic meaning of DLC, and what it means to achieve it with redundant control surfaces that are not expressly dedicated to it. With some considerations on the needs of the aviation market, it speculates on the practical role of unconventional aircraft configurations in the near future. Lastly, it provides suggestions for improvements and future research studies.
The continuous search for higher-performing and more sustainable designs has given ample opportunity to revisit these machine components, and to investigate their possible implementation in machines and fields that so far have not been able to make use of their superior performance characteristics. This lack of adoption is often due to fundamental limitations of these components. The performance of both hydrostatic and hydrodynamic bearings is directly coupled to their need for a thin lubricant film to provide stiffness and load capacity. For hydrostatic bearings in particular, this requirement can more accurately be defined as the need for the bearing surface to remain parallel to its counter-surface, separated by a thin film of lubricant on the order of 100 micrometers. Not only performance but also ecological constraints have become more dominant in recent years. The conventional use of oil-based lubricants has motivated researchers to look for cleaner and more sustainable lubricants. The engineering community keeps moving forward to a higher-performing and more sustainable future.
This thesis investigates one fundamental design direction to obtain higher performance and broaden the field of application of full film bearings. By implementing compliant design, i.e. the use of elastic elements, several facets of hydrostatic bearing limitations in particular are investigated. Two fundamental limitations are dominant throughout this dissertation: the design for changing counter surfaces and the design for multiple operating conditions. Several principles are introduced that improve the use of hydrostatic bearings for non-constant counter surfaces. These principles are the use of functionally graded materials to minimize the pre-loading effect the hydrostatic pressure has on the elastic bearing support, distributed whiffletree support systems to increase bearing deformability, and the introduction of a compliant water-filled universal joint with superior stiffness characteristics compared to the state of the art. The universal joint makes use of the principle of closed-form pressure balancing, which is the pressurization of an enclosed body of fluid to obtain low rotational stiffness while maintaining high support stiffness. All these principles are described through design models and are further investigated with the use of finite element models.
The second dominant subject that is investigated in this dissertation is developing hydrostatic bearings that function for multiple operating conditions with the use of compliant elements. This principle, defined in this work as passive shape shifting, gives hydrostatic bearings more flexibility when it comes to designing for discrete load cases, such as those found in hydraulic pumps. Besides describing this conceptual design principle with finite element models, a step towards real-life application is also made in a scaled case study. By combining the previously mentioned compliant universal joint and a hydrostatic bearing using the principle of passive shape shifting, a contact-mechanics-free alternative to axial piston pumps is proposed. This mechanism is designed for a scaled case study and validated through experimental work and finite element modelling.
This thesis, which is conceptual in nature, combines these different design directions and thereby shows how the use of elastic elements broadens the field of use for full film lubricated bearings.","Tribology; Hydrostatic bearing; Compliant Mechanism","en","doctoral thesis","","978-94-6384-301-0","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:9f8f9ffc-c3d9-4583-8903-25889233a95b","http://resolver.tudelft.nl/uuid:9f8f9ffc-c3d9-4583-8903-25889233a95b","Examining the Effectiveness of Collaborative Search Engines","Moraes Gomes, F. (TU Delft Web Information Systems)","Houben, G.J.P.M. (promotor); Hauff, C. (promotor); Delft University of Technology (degree granting institution)","2022","","Collaborative Search; Search Engines; Sensemaking","en","doctoral thesis","","978-94-6384-307-2","","","","","","","","","Web Information Systems","","",""
"uuid:8f0dc629-1b98-4384-b98c-5c8d6a3146a4","http://resolver.tudelft.nl/uuid:8f0dc629-1b98-4384-b98c-5c8d6a3146a4","Poisson meets Escher: Exploring the Poisson effect in bone implant design","Kolken, H.M.A. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Mirzaali, Mohammad J. (copromotor); Delft University of Technology (degree granting institution)","2022","Since the beginning of time, humans have been trying to replace the skeletal system in cases of trauma or disease. Medical devices were designed, manufactured, and implanted, to restore the function of the skeletal system. Fast forward to today and joint replacements are among the most common surgeries carried out in the world. Despite their success, a relatively large number of patients will eventually need a revision surgery. In most cases, the implant fails due to aseptic loosening. Aseptic loosening represents a range of implant loosening cases not associated with infection and is often linked to an inflammatory response. This kind of implant loosening is often associated with the mechanical failure of the load-bearing connection at the bone-implant interface, which could be caused by inadequate initial fixation, the loss of fixation over time, or bone tissue deterioration as a result of (wear) particles. The complexity of the bony tissue, with its hierarchical and anisotropic structure, complicates the development of replacements that last a lifetime.
Metallic biomaterials have been introduced as promising bone substitutes but their stiffness is usually vastly higher than that of the native bone. As a result, the patient’s bone becomes shielded from mechanical stimuli (stress shielding). Prolonged reduction in the mechanical stimuli results in bone resorption and may cause implant loosening. With the introduction of additively manufactured (AM) porous structures, the mechanical properties of metallic biomaterials could be reduced to the level of the bony tissue. Additionally, porous biomaterials allow for the diffusion of nutrients and oxygen, the ingrowth of de novo bone tissue, and the formation of capillaries. While this may sound like the golden combination, close bone-implant contact is of critical importance and can only be guaranteed if the implant matches the patient’s anatomy. Recent advances in AM have enabled the development of patient-specific implants, but this does not necessarily guarantee a lasting fixation. In the most ideal situation, the geometry of the implant should be tailored at both micro- and macroscale to optimize both shape-matching and material properties of the porous structures. This often calls for an unusual set of properties and functionalities that are not usually found in nature. Materials whose small-scale architecture can be designed to obtain certain mechanical, mass-transport, and biological properties are referred to as meta-biomaterials.
The deformation of a material in directions perpendicular to the direction of loading is described by the Poisson’s ratio. A negative value would indicate that the material exhibits lateral expansion in response to axial tension, which can be observed in auxetic materials. This Poisson effect is usually guided by the internal structure of the material, or the micro-architecture of meta-biomaterials. Changing the building block (i.e., the unit cell) will, therefore, change the micro-architecture and, thus, the deformation behavior of the material as a whole...
An adult organism can consist of trillions of cells of different types, organized to form various complex tissues and organs. To obtain such a system starting from a single cell requires an incredibly sophisticated combination of three key processes: cell division, differentiation and migration. Cell division turns a parent cell into two genetically identical copies, the daughter cells. Differentiation is the process by which a cell transforms into a particular cell type with a highly specialized function, such as a neuron, a bone cell or a muscle cell. While cells of different types have identical genomes, they express distinct sets of proteins that define the cells’ function and shape. Finally, directed cell migration is crucial to obtain proper three-dimensional organization and thereby correctly formed tissues and organs.
Over many decades, several model organisms have been used to study the development of insects (Drosophila), fish (Zebrafish), amphibians (Xenopus), avians (Chicken), and mammals (Mouse, Human). In this thesis, we present three studies of different stages of mammalian embryogenesis...","stem cells; assembloïds; embryonic development; mammalian","en","doctoral thesis","","978-90-8593-514-8","","","","","","","","","BN/Timon Idema Lab","","",""
"uuid:7fa4684e-87f9-45fa-b283-7b79e3679883","http://resolver.tudelft.nl/uuid:7fa4684e-87f9-45fa-b283-7b79e3679883","Space Charge Pulsed Electro Acoustic Method, Calibration for Flat Samples and Crosstalk Reduction for HVDC Cable Measurements","Mier Escurra, G.A. (TU Delft DC systems, Energy conversion & Storage)","Bauer, P. (promotor); Vaessen, P.T.M. (promotor); Mor, A. R. (copromotor); Delft University of Technology (degree granting institution)","2022","The Pulsed Electro Acoustic Method (PEA) is a widely used method for the measurement of space charges in High Voltage (HV) dielectrics. This thesis aims to contribute to the optimization of the PEA method by being able to make measurements from different test setups comparable and enhance the reliability of the results interpretation. This work is divided in two main parts with different scopes. In the first part, the work is focused utilizing flat samples, in which the effects of the different electrode materials at the dielectric interface and the use of reference samples for measurement characterization is analyzed. In the second part of this work, the focus is on measurements at HVDC cables for which the effects of different pulsed voltage injection configurations are tested, with a special focus in reduction of the crosstalk between the applied pulse and the acoustic sensor. The obtained results contribute to optimize the application of the PEA method for measurements of the space charge phenomena in HVDC components.","Space charges; pulse electro-acoustic method (PEA); Electromagnetic compatibility (EMC); high voltage cables; Dielectric; multilayer dielectrics; dielectric interfaces","en","doctoral thesis","","978-94-6421-645-5","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:9b35b35a-0789-4883-a471-f8df0d7939ad","http://resolver.tudelft.nl/uuid:9b35b35a-0789-4883-a471-f8df0d7939ad","Low-Mach Number Flow and the Discontinuous Galerkin Method","Hennink, A. (TU Delft RST/Reactor Physics and Nuclear Materials)","Kloosterman, J.L. (promotor); Lathouwers, D. (promotor); Delft University of Technology (degree granting institution)","2022","This thesis describes a numerical method for computational fluid dynamics. Special attention is paid to lowMach number flows.
The spatial discretization is a discontinuous Galerkin method, based on modal basis functions. The convection is discretized with the local Lax-Friedrichs flux. The diffusion in the enthalpy equation is discretized with the symmetric interior penalty method, which is generalized in a straightforward manner for the viscous stress in the momentum equation. The numerical method does not deviate fundamentally from previous literature.
The temporal derivatives in the enthalpy and momentum equations are discretized with a second-order backward finite difference method. An algorithmic pressure correction scheme is used to decouple the momentum and continuity equations, giving rise to explicit artificial boundary conditions. If the pressure and the momentum are discretized with an equal-order polynomial space, then the pressure equation is stabilized with an extra penalty term to suppress the discontinuities in the solution, as explained in chapter 2.
Using a time-splitting method is far more difficult when the flow is compressible, due to the variable density. Low-Mach number flows also do not lend themselves well to solving the coupled transport equations, because the density is a function of the enthalpy, not the pressure. This differs from high-Mach number flows, where one can solve a transport equation for the density. Chapter 4 describes in great detail how a non-constant density can be incorporated into a time-splitting scheme for low-Mach number flows.
Chapter 4 also discusses the best form of the enthalpy transport equation to solve (primitive or conservative), and for which variable (primitive or conserved). This question arises in low-Mach number flows, because the density is a function of the temperature. Here, the conservative transport equation is solved for the specific enthalpy.
The main difficulty with this approach is that the temporal enthalpy derivative is nonlinear due to the variable density. This can be addressed with an easily implemented adjustment of the finite difference scheme (‘method #2’ in sections 4.3–4.4). The resulting discretization displays second-order temporal accuracy (as measured in the spatial L² norm) for the enthalpy and the mass flux, without having to iterate within a time step.
Furthermore, the enthalpy transport equation needs to be stabilized with a simple change of variables, in which the specific enthalpy is ‘offset’ by a constant. Though it may be counterintuitive, the enthalpy offset is critical to the stability and the accuracy of the temporal discretization. This would also be true if one were to solve for the volumetric enthalpy, because the enthalpy offset determines whether there is a one-to-one mapping between the volumetric enthalpy and the density.
The spatial and temporal discretizations and their implementations are extensively verified and validated with the test cases at the end of the chapters. In particular, sections 3.3.1, 3.3.2, and 4.5.1 feature exhaustive tests with manufactured solutions with non-trivial fluid properties. Sections 2.7, 3.4, and 4.5.2 contain validations for laminar flows. Chapter 5 also shows simulations of turbulent flows.
Fine cohesive sediments are found in rivers, estuaries and their adjacent areas. With the rapid economic development in the Changjiang (Yangtze River) estuary, human activities have intensified, including the development of shipping, and the exploitation of water and soil resources. The erosion and deposition problems in waterways and tidal flats have become increasingly prominent, affecting the evolution of geomorphology and ecological environment....","Sediment; Flocculation; Algae; Biological effects; Hydrodynamics","en","doctoral thesis","","978-94-6423-679-8","","","","","","","","","Coastal Engineering","","",""
"uuid:2ed1ed62-ce08-4bde-bafb-7235fd1f2dc8","http://resolver.tudelft.nl/uuid:2ed1ed62-ce08-4bde-bafb-7235fd1f2dc8","Together we make places: Designing connections in urban space","Slingerland, G. (TU Delft System Engineering)","Brazier, F.M. (promotor); Lukosch, S.G. (promotor); Comes, M. (promotor); Delft University of Technology (degree granting institution)","2022","Place-making initiatives have gained momentum in recent years to establish stronger urban communities. Through place-making, people attach meaning to spaces and they become places. While place-making initiatives have traditionally been designed and implemented from top-down, more and more scholars call for a participatory and bottom-up approach, for place-making to realise its full potential in creating strong neighbourhood communities. In this context, the thesis explored how the knowledge from Participatory Design and place-making may confluence to move from spaces to places in a more inclusive and community-driven way. The main research question was: How can Participatory Design facilitate place-making in urban settings across physical space, social connections, and institutional support? This thesis presents a framework for participatory place-making, build from place-making and Participatory Design literature and evaluated using six participatory place-making interventions that were designed, implemented, and evaluated in neighbourhoods in The Hague, Rotterdam, and Cork (Ireland). The framework contains five principles (emergent, empowering, inclusive, playful, reflective) that should guide the design of participatory place-making interventions and is complemented with five guidelines to design for participatory place-making.","Place-making; Participatory Design; Communities; Design interventions","en","doctoral thesis","","978-94-6366-499-8","","","","","","","","","System Engineering","","",""
"uuid:5b51be97-aca7-4935-836a-be9b75f819df","http://resolver.tudelft.nl/uuid:5b51be97-aca7-4935-836a-be9b75f819df","Ring of fire as a novel approach to study cycling aerodynamics","Spoelstra, A.M.C.M.G. (TU Delft Aerodynamics)","Scarano, F. (promotor); Sciacchitano, A. (copromotor); Delft University of Technology (degree granting institution)","2022","The research presented in this thesis introduces a new measurement concept for on-site aerodynamic measurements based on large-scale stereoscopic particle image velocimetry (stereo-PIV) measurements past an athlete, a vehicle or an object travelling through a quiescent environment. The analysis of the momentumdeficit past the transit poses the basis to estimate the aerodynamic drag. For such an approach, where the object crosses the illuminated measurement plane, the experimental method is referred with the name
“Ring of Fire” (RoF).
The first part of this work presents the development and assessment of the Ring of Fire concept through the study of cycling aerodynamics. A feasibility study is performed in which two RoF experiments with a cyclist are conducted, indoor and outdoor, mimicking track and road cycling, respectively. During these experiments, attention is paid to the effects of the environmental conditions and the confinement of the measurement region. Furthermore, the experiments cover different postures of the cyclist (time trial and upright) with the aim to directly measure the effect of posture on aerodynamic drag and its detectability with the RoF. Despite differences between the two experiments in cyclist geometry, bike geometry, and cycling speed, the flow fields in the near wake of the riders compare well between the experiments and to the literature. In terms of drag estimation, a clear distinction between upright and time trial drag area is found in both experiments, with the upright posture yielding a drag area about 20-35% higher than the time trial posture. The comparison of these drag values with literature data, however, could not yield a conclusive assessment, given the large dispersion (approx. 50%) of the literature data due to many varying parameters, such as rider posture, bike geometry, and testing conditions. Furthermore, the uncertainty of the measured drag and its dependency upon experimental conditions and image processing parameters had not yet been addressed. Knowledge of the minimum detectable drag variation is relevant when measurements are intended for aerodynamic optimisation; therefore, a sensitivity analysis is conducted that assesses how the estimated drag is affected by the choice of PIV image processing parameters. The size of the cross-section considered in the control volume formulation is also investigated.
It is found that the accuracy of the estimated drag depends on the procedure used to detect the edge of the momentum-deficit region in the wake. Moreover, imposing mass conservation yields the most accurate drag measurements. The drag estimation has little dependence upon the spatial resolution of the measurement as long as the interrogation window size stays within 5% to 25% of the equivalent diameter of the object cross-section. In addition, the drag values obtained with the RoF are compared against drag estimates from simultaneously acquired power meter data. To assess the agreement between the two approaches in different regimes, drag variations are introduced by different cyclist postures as well as varying garments. Regardless of the underlying input parameters in the power meter model, both small- and large-scale drag differences are well captured by both the Ring of Fire technique and the power meter approach. The uncertainty of the average drag measurements from the RoF is within 5%.
The second part of this work implements the findings and conclusions from part 1 and presents two applications in speed sports studied with the Ring of Fire. Firstly, the effect of drafting in cycling is investigated. More precisely, the amount of drag reduction experienced by a trailing cyclist in a tandem formation is investigated at different lateral and longitudinal separations. The longitudinal displacement of the drafters varied between 0.32 m and 0.85 m, and the lateral displacement varied between +/- 0.20 m among different runs. The results show that the amount of drag reduction for the trailing rider is mainly caused by the change in inflow conditions and that the aerodynamic advantage decreases with increasing lateral and longitudinal separation between the riders, where the lateral distance is found to produce a more rapid effect. Based on these results, a model is introduced that predicts the aerodynamic gain of the trailing rider based on his or her position with respect to the leading rider. Validation of the model with data from the literature shows that in the near wake the model prediction is in line with the literature, with an overestimation of the drag reduction when the longitudinal distance is between 0.1 m and 0.3 m. Secondly, the applicability of the RoF to speed skating is demonstrated. An aerodynamic assessment is presented of two elite skaters, each in two different skating configurations, at the Thialf ice rink in Heerenveen, the Netherlands. Both skaters transit 20 times through the RoF, 10 in each skating configuration. Athlete A skates with two hands on the back and with one arm on the back and one loose. Athlete B skates with both arms loose for all runs, but varies his knee and trunk angles. All tests were performed at a nominal speed of 11 m/s. Firstly, the wake velocity fields of skater A, with two hands on the back, are presented throughout five different phases of the skating stroke.
Significant variations in the distribution of the velocity deficit downstream of the athlete are observed, which suggest corresponding variations in the skater’s aerodynamic drag. Secondly, the average streamwise velocity and vorticity fields for all four different postures are presented and compared. Finally, the results show that the difference in drag between two arms loose and one arm loose is not statistically significant. Conversely, the optimization of the trunk and knee angles results in a 7.5% reduction of the skater’s drag.","PIV; Helium filled soap bubbles; Sports aerodynamics; Cycling aerodynamics; On-site drag evaluation","en","doctoral thesis","","978-94-6366-467-7","","","","","","","","","Aerodynamics","","",""
"uuid:4533fe3c-f5ed-495a-ae15-67ce2cdc80df","http://resolver.tudelft.nl/uuid:4533fe3c-f5ed-495a-ae15-67ce2cdc80df","Thulium Doped Garnets for Quantum Repeaters and Optical Quantum Memory","Davidson, J.H. (TU Delft QID/Tittel Lab)","Tittel, W. (promotor); Hanson, R. (copromotor); Delft University of Technology (degree granting institution)","2022","Quantum memories will prove to be an invaluable tool for distributing entangled quantum states over distance. Many novel materials for this purpose are being investigated and key among them are rare earth ion doped crystals. These materials possess a great number of potential combinations of host crystal and ion species for further study, some of which will likely be used to create the first quantum repeaters. Choosing a specific combination, thulium doped garnets, and a unique goal of making memory devices which can function simultaneously across many spectral channels, I take a unique perspective through which to push the performance of quantummemories ...","","en","doctoral thesis","","","","","","","","","","","QID/Tittel Lab","","",""
"uuid:c98876c1-aa83-4c8d-9816-191daf5cc423","http://resolver.tudelft.nl/uuid:c98876c1-aa83-4c8d-9816-191daf5cc423","Harnessing the metabolic versatility of purple non-sulfur bacteria","Cerruti, M. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Weissbrodt, D.G. (copromotor); Delft University of Technology (degree granting institution)","2022","A transition to a circular economy is necessary to mitigate the negative effects on the environment of the exploitation and disposal of materials and to achieve society-wide benefits. The current produce-waste-dispose model is slowly changing toward a more sustainable produce-userecycle- upcycle model. In this context, bio-based processes using microbial mixed cultures are crucial to develop waste-to-resource valorization processes.
Purple phototrophic bacteria (PPB) form a guild of hyper-versatile organisms found in almost all aqueous environments, thriving on infrared light energy, capturing organics by photoorganoheterotrophy, and even recycling CO2 by photolithoautotrophy. Due to their outstanding metabolic versatility, their organic and nutrient capture ability, and their biomass yields over substrate approaching 1 g CODx g-1 CODs, PPB are well-suited organisms to study and use for the development of water resource recovery applications. Despite 80 years of research on PPB, their physiology still needs to be deciphered, and their environmental biotechnological exploitation is in its infancy.
The aim of this thesis was to study and harness the metabolic versatility of PPB at different levels, from the elucidation of light-driven physiologies in pure cultures to the management of selection phenomena, population dynamics, and distributed metabolic functionalities in mixed cultures. The findings were aggregated to derive mixed-culture bioprocess application perspectives for capturing organics and nutrients from municipal sewage and agri-food wastewater and for producing valuable products, such as bioplastics, biohydrogen, or photopigments. In this thesis, a comprehensive overview of the potential of PPB for water resource recovery is given. The molecular principles and ecological dynamics governing the PPB metabolism were elucidated with the goal of demonstrating the potential of PPB-based biotechnologies.","","en","doctoral thesis","","978-94-6384-303-4","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:be47e512-c19c-44c8-bba4-28f719e6fb8f","http://resolver.tudelft.nl/uuid:be47e512-c19c-44c8-bba4-28f719e6fb8f","Cavity electromechanics using flipped silicon nitride membranes","Peiter, S.R. (TU Delft QN/Steele Lab)","Steele, G.A. (promotor); Kuipers, L. (promotor); Delft University of Technology (degree granting institution)","2022","Silicon nitride (SiN) membrane electromechanics have shown to serve as excellent systems for applied research on sensing and transduction applications. Nevertheless, their relatively large mass in combination with high-Q also makes them suitable formore fundamental research, where gravitational effects can be tested on large mass quantum states, an experiment which has been elusive till so far. However, creating long-lived mechanical quantum states can be challenging for numerous reasons. One difficulty arises when integrating these membranes into a microwave circuit. In particular, the degradation of the mechanical resonator quality factor in an unpredictable manner. Another complication is that we often have to deal with a low coupling between the devices, which makes the control aspect of the mechanical resonator tougher.
In this thesis, we present a robust SiN-based electromechanical platform that uses a custom-built flip-chip tool. It consistently achieves single photon-phonon coupling on the order of Hz and high Q factors at cryogenic temperatures. In chapter 1, we introduce the field of optomechanics and the motivations for extending this field to microwave frequencies. In chapter 2, we provide a detailed derivation of the electromechanical Hamiltonian and use the Heisenberg-Langevin equation of motion to derive an analytical expression for the classical cavity field and mechanical amplitudes. After introducing fluctuation operators in the field amplitudes, we are able to obtain an expression for the noise power spectral density using the Wiener–Khinchin theorem. In chapter 3, we give an extensive overview of the design and fabrication methods that we followed to make the electromechanical devices used in our experiments. In chapter 4, we optimise the shape of a lumped-element resonator that is to be used in our electromechanical system. By simulating with the electromagnetic software Sonnet EM, we show that a large loop inductor can negatively impact the resonator quality factor when a copper platform is located at the bottom of the device. The losses improve tremendously when replacing the loop with a meandered design of the inductor. In chapter 5, we combine a square SiN membrane with the optimised lumped-element resonator, using the flip-chip tool. We show that the electromechanical system offers enough sensitivity to quantify the vibrations originating from the cryocooler at the mixing-chamber stage. This device shows promise as a broadband cryogenic accelerometer. In chapter 6, we demonstrate that placing the square SiN membrane within a silicon phononic shield significantly enhances the mechanical quality factor and therefore the cooperativity. We also discuss the implications of mechanically induced cavity noise on the measurements.
In chapter 7, we conclude the thesis and present the prospects of overcoming the mechanically induced cavity noise that afflicts our measurements using two different methods, i.e., a mechanical isolation system and a microwave noise-locking mechanism.","Electromechanics; microwave resonators; silicon nitride membranes; high-Q; cryogenic temperatures","en","doctoral thesis","","978-90-8593-513-1","","","","","","","","","QN/Steele Lab","","",""
"uuid:512da833-dc9d-4301-b566-35243d2f1f9b","http://resolver.tudelft.nl/uuid:512da833-dc9d-4301-b566-35243d2f1f9b","On-Chip Solutions for Future THz Imaging Spectrometers","Pascual Laguna, A. (TU Delft Tera-Hertz Sensing)","Baselmans, J.J.A. (promotor); Neto, A. (promotor); Delft University of Technology (degree granting institution)","2022","The mysteries of the early Universe are largely enshrouded in dust, product of the violent process of star formation. Due to the vast distances of our Universe, infrared light emitted by the heated dust back in those early stages can still be observed today, which has been observed to contribute to about half of the total cosmic background radiation. Gases fueling star-formation also radiate, but in the form of emission lines, which leave distinct spectral signatures that allow the study of the underlying physical processes. Given the expansion of the Universe, the evolutionary information is encoded in the cosmological redshift observed, making the far-infrared or terahertz (THz) regime specially suited for probing star-formation. Superconducting on-chip broadband THz imaging spectrometers with moderate spectral resolution coupled to large telescopes will allow the investigation the early Universe processes over large cosmological volumes. In this dissertation we propose two enabling technologies toward the advancement of this on-chip superconducting instruments: a broadband and moderate spectral resolution channelizing filter-bank, and a broadband phased array antenna as a reflector feed with beam-steering capabilities.
Octave-band THz channelizing filter-banks with a moderate spectral resolution of the order of R=500 are investigated in this work. These systems allow for a size reduction of several orders of magnitude compared to conventional spectrometers with similar spectral resolution. The proposed filters are half-wavelength resonators, which naturally provide a free spectral range of an octave. The performance of these filters, both in isolation and when embedded in a filter-bank, is analyzed using a newly developed circuit model. This tool also provides design insights, such as the filter ordering and separation required within the filter-bank to enable an efficient circuit. The actual implementation of the superconducting filter-bank on a chip is investigated for two of the main on-chip technologies: co-planar waveguide (CPW) and microstrip. Despite the easier manufacturing of co-planar circuitry, that technology is not suited for channelizing THz filter-banks, as it suffers from radiation issues. Microstrip technology, instead, is non-radiative and, although it suffers from moderate dissipation in deposited dielectrics such as a-Si, it provides a very reliable platform to build THz filter-banks. Half-wavelength I-shaped resonators are proposed as suitable filtering structures, with which frequency-sparse filter-banks have been built to test their performance in semi-isolation. The measurements were based on both a frequency response characterization of the filters and their optical efficiency, showing good agreement between the two. The measured performance of these filters showed pass-bands with an average peak coupling efficiency of 27% and a spectral resolution R≈940. This coupling is significantly better than earlier results based upon planar technology.
The coupling between the quasi-optical reflector system of a telescope and the on-chip filter-bank requires a broadband antenna. Currently, broadband integrated anti-reflection-coated lenses are being developed for this purpose, but their manufacturing is especially complicated for cryogenics, and they require mechanical actuators to perform beam scanning in the case of a multi-object spectrometer. In this dissertation, we propose a broadband phased-array antenna concept with electronic beam-steering that exploits two key properties of superconductors in its feeding network: the negligible conductor loss and the kinetic inductance tunable with a bias current. The proposed focused connected-array antenna concept is based on the broadband impedance matching enabled by connected arrays and the largely frequency-independent far fields of near-field focused apertures. To demonstrate this concept, we designed, fabricated, and tested two low-frequency (3-6 GHz) prototypes in PCB technology: one pointing broadside and another one scanning. The measured fields met the predictions to a large degree, providing a reflector aperture efficiency in excess of 60% over an octave of bandwidth and allowing a scan of one half-power beamwidth at the lowest frequency with a frequency-averaged scan loss of 0.2 dB. Both the directivity and the gain were measured, allowing the losses, which chiefly originated from the tin-finished copper lines in the PCB, to be reported. As a result, we can expect a highly efficient reflector feed at THz frequencies with beam-steering capabilities in the near future.
The beam-steering concept proposed for the phased-array antenna relies on the current-dependent kinetic inductance of superconducting lines. With this effect, the phase velocity of biased superconducting lines may be modified, thereby allowing an electronic tuning of the introduced phase-shift. Prior to the integration of such phase-shifters with the phased-array antenna, we devised an on-chip platform based on a tunable Fabry-Pérot resonator to quantify the phase-shifting capabilities at THz frequencies. In this concept, the dc bias currents are injected in the proximity of the edges of the resonator through 9th-order Chebyshev stepped-impedance low-pass filters, whose high rejection mitigates any possible disturbance to the THz resonances. Using a circuit model including the resonator and the low-pass filters, as well as the simulated properties of the superconducting buried microstrip lines used in the designs, we anticipate an expected maximum tuning of dφ/φ=-df/f≈2%. With such a tuning range, millimeter-long tunable delay lines will be required for THz superconducting phased-arrays.","antenna; array; astronomy; broadband; filter-bank; Fabry-Pérot; on-chip; phase-shifter; spectrometer; superconductor; THz","en","doctoral thesis","","978-94-6384-298-3","","","","","","","","","Tera-Hertz Sensing","","",""
"uuid:3dec3c8d-68e6-4a98-b273-b3db9ac23d7d","http://resolver.tudelft.nl/uuid:3dec3c8d-68e6-4a98-b273-b3db9ac23d7d","Enhancing Virus Removal in Low-Cost Ceramic Pot Filters for Drinking Water Treatment","Soliman, Mona Youssef Moawad (TU Delft Sanitary Engineering)","van Halem, D. (promotor); Medema, G.J. (promotor); Delft University of Technology (degree granting institution)","2022","Access to safe drinking water is an essential human right and a crucial element to human survival. The quality of drinking water, has strong and direct impact on human health. Unless free of fecal contamination, water is unsafe to drink. Yet, to date, 2 billion people remain without access to safe drinking water. Consequently, the burden of waterborne disease remains a global threat to public health especially in developing countries. Fortunately, many interventions in the past decades aimed to provide safe drinking water in developing countries. Household water treatment (HHWT), provided individuals with a cheap and effective solution to treat water. Since its introduction, HHWT has dramatically improved the microbial quality of water, reduced the burden of waterborne diseases and its associated mortality. In particular, ceramic pot filters (CPFs) were described as one of the most sustainable, popular and effective HHWT systems in reducing waterborne diseases. In 2014, it was estimated that 4 million users rely on CPFs for water treatment. CPFs provide consumers with an adequate protection against bacteria and protozoa which accounts for its reported protection against waterborne diseases. However, CPFs are not highly protective against all waterborne pathogens since they fail to remove viruses. The exceptionally small size of viruses enables them to pass through the filter pores. Therefore, the objective of the thesis was to enhance virus removal in ceramic pot filters (CPFs). 
It was hypothesized that continued filtration of water through CPFs would lead to biofilm growth, which might enhance virus removal. This hypothesis was examined using MS2 bacteriophage as an ssRNA model virus. It was found that the growth of biofilm depended on the level of nutrients in the raw water, as did the subsequently observed virus (MS2) removal. The trade-off was lower flow rates in high-nutrient biofilms. Although high-nutrient biofilms gave better virus removal (2.4 ± 0.5 logs), they reduced the flow rates in the filters, making them unusable. This limitation in virus removal and flow rate called for an alternative solution. Therefore, the use of metals, namely silver (Ag) and copper (Cu), was examined as potential additives to CPFs to enhance virus removal. Ag is already being applied to CPFs in many factories, but its contribution to virus removal has been controversial and has only been reported using a model RNA virus (MS2). Cu is cheaper than Ag, hence it provides a possible economical alternative or complementary addition. To that end, Cu and Ag were examined for their antiviral efficiency, separately and combined. MS2 (ssRNA) and PhiX 174 (ssDNA) bacteriophages were tested as conservative model viruses for RNA and DNA waterborne viruses. Ag (0.1 mg/L) exhibited antiviral efficiency against MS2 and PhiX 174 (≤ 2 log inactivation over 6 hours), which was reduced in the presence of 20 mg C/L of natural organic matter (NOM) in water. Overall, Cu (1 mg/L) was a more potent disinfectant than Ag (0.1 mg/L). For example, in water containing NOM (20 mg C/L), Cu inactivated ≥ 6 logs of MS2 over 3 hours, and to a lesser extent PhiX 174 (≥ 1 log in 3 hours). Moreover, significant synergy of Cu and Ag in combination was observed for MS2 in the absence of NOM, and to a lesser extent in the presence of low NOM at pH ≥ 7. A synergistic effect of Cu and Ag together in disinfecting PhiX 174 was observed, but only in the presence of NOM in water.
Overall, to achieve ≥ 3 logs of inactivation by Cu and/or Ag, hours of interaction between the metal(s) and the virus were needed. Because antiviral efficiency of Cu and Ag was observed, each was applied to ceramic filter discs (CFDs) according to the factory method (Filtron, Nicaragua) by painting a metal ion solution onto them with a hand brush. Virus removal by filtration through metal-painted CFDs was examined. In addition, virus inactivation in the receptacles containing filtrate (in which there was leached Cu or Ag) was examined over 5.5 hours of storage. The contribution of Cu or Ag to enhancing virus removal by filtration was minor compared to the observed inactivation following hours of filtrate storage. This observation highlighted the value of utilizing virus inactivation as a post-treatment/post-filtration option using Cu and/or Ag ions. Unfortunately, the rapid leaching of Cu from CFDs was an obstacle to testing the Cu and Ag combination. It is therefore recommended to investigate alternative methods of Cu dosing other than painting. This thesis quantified the contribution of biofilm growth to improving virus removal in CPFs, although the effect varied. Through an in-depth assessment of the antiviral efficiency of Cu and/or Ag, including the effect of water quality parameters on the achieved virus inactivation, the potential of Cu and Ag was established. Post-treatment or safe water storage relying on Cu and Ag ions can, in principle, be applied to provide safe drinking water in compliance with the WHO requirements.","","en","doctoral thesis","","","","","","","","","","Sanitary Engineering","","",""
"uuid:5d36b4de-8593-4f7e-bc92-a7ae175a0900","http://resolver.tudelft.nl/uuid:5d36b4de-8593-4f7e-bc92-a7ae175a0900","Computational aeroacoustics of rotor noise in novel aircraft configurations: A lattice-boltzmann method-based study","Romani, G. (TU Delft Wind Energy)","Casalino, D. (promotor); Delft University of Technology (degree granting institution)","2022","The accurate and reliable prediction of the aerodynamic noise sources of open rotors and ducted-fans in electric Vertical Take-Off and Landing (eVTOL) and non-conventional aircraft configurations is a challenging task from a computational perspective. Indeed, such propulsive systems can often operate in highly distorted and non-homogeneous flows, with the rotating blades interacting with strongly non-uniform and turbulent flows; and/or experience phenomena related to low Reynolds numbers and boundary-layer transition, due to the relatively small diameters and blade tip speeds. While analytical, semi-empirical and low-fidelity numerical models can provide quick and computationally inexpensive predictions, their results are often not fully reliable and their state-of-the-art requires a further development step to properly address the problem of rotor noise prediction in non-conventional aircraft and rotorcraft. On the other hand, Navier-Stokes based scale-resolving approaches such as Large Eddy Simulation (LES) have the capability to capture most of the aforementioned phenomena, but at a prohibitive computational cost for a routine employment in the design stages of such innovative vehicles. In view of this, high-fidelity scale-resolving lattice-Boltzmann (LB) numerical simulations, coupled with the Ffowcs Williams & Hawkings' (FW-H) acoustic analogy, are extensively performed and validated in this thesis. 
A wide range of aeroacoustic problems, spanning from airfoils and small-scale propellers in the transitional boundary-layer regime to open rotors in blade-vortex interaction conditions and ducted fans ingesting the airframe turbulent boundary layer, is addressed with the aim of predicting, identifying and characterizing the primary sources of aerodynamic noise associated with open rotors/propellers and ducted fans in eVTOL and novel aircraft configurations by means of the hybrid LB/FW-H approach.","Trailing-edge noise; propeller noise; blade-vortex interaction noise; fan boundary-layer ingestion noise; LBM; CFD; FW-H; CAA","en","doctoral thesis","","978-94-6366-498-1","","","","","","","","","Wind Energy","","",""
"uuid:1af34dce-0e3b-45d0-8851-b6254612185e","http://resolver.tudelft.nl/uuid:1af34dce-0e3b-45d0-8851-b6254612185e","Using Multibeam Echosounders for Multiscale and Interdisciplinary Habitat Mapping on the Dutch Continental Shelf","Koop, L. (TU Delft Aircraft Noise and Climate Effects)","Simons, D.G. (promotor); Snellen, M. (promotor); Delft University of Technology (degree granting institution)","2022","Because the seafloor is a complex ecosystem, a multidisciplinary approach must be adopted in order to produce comprehensive habitat maps. Such multidisciplinary projects have been lacking for the Dutch area of the North Sea. To address this lack, the Distribution, structure and functioning of low resilience seafloor communities and habitats of the Dutch North Sea (DISCLOSE) project, funded by the Gieskes- Strijbis Fonds, was initiated. The consortium for the project included three research institutes, as well as the North Sea Foundation. The first of the research institutes was the Delft University of Technology, tasked with the large-scale mapping of the seafloor, using acoustic systems such as the multibeam echosounder (MBES). The second research institute, the University of Groningen (UG), focused on the use of photography and videography to study the seafloor and the epifauna at a smaller, yet more detailed, spatial scale. Finally, the Royal Netherlands Institute for Sea Research (NIOZ), studied the seafloor from both the perspective of particle size and macrofauna using grab-sample data. All of these measurement methods were utilized for the same research areas, in order to maximize the possibility to established links between the sampling methods, and thereby create detailed habitat maps. The work in this thesis focuses specifically on the acoustic results generated within the DISCLOSE project. In recent years the MBES has become the standard tool for the large-scale mapping of the ocean floor. 
With the MBES, large swaths of the seafloor can be covered in short periods of time. The use of the two-way travel time to measure the bathymetry of the ocean has become very standardized. In addition to measuring the bathymetry, the MBES can also deliver the collocated backscatter product. The appropriate use of backscatter for the classification of seafloor properties and habitats is much less well understood than bathymetry. As such, this is an active field of research. Within Dutch waters, most research has taken place using datasets from the area of the Cleaverbank. Other areas have not been well studied, for example, the southern sandy area. Utilizing MBES backscatter-based seafloor classification in sandy areas is a major focus in this thesis. A dataset from the Brown Bank area of the North Sea was used in order to study seafloor classification over mega ripple structures. A large part of the Southern North Sea is covered in nested sand waves of different sizes. The largest of these is the tidal ridge, with some ten kilometers from crest to crest. The second largest is the sand wave, and the smallest is the mega ripple. Obviously, the main sediment type in this area is sand. Previous research suggests that a difference in grain size is to be expected between the crest of the tidal ridge and the trough. It was not known whether a difference in grain size from the crest to the trough of the sand wave or the mega ripple is present, or detectable using MBES backscatter. As such, a few things were very important for this research. Firstly, it was necessary to accurately correct the backscatter for the seafloor slopes in the research area. Next, it was important to have a high spatial resolution for the final classification results. Additionally, a high geo-acoustic resolution was also needed. 
This final resolution is needed because the difference in sediment properties from the trough to the crest of a mega ripple is expected to be only slightly coarser or finer sand. From our research, it was found that it is possible to use MBES backscatter to classify the sediment types at the scale of mega ripples. It was found that the coarsest sediments were in the troughs, finer sediments on the stoss-side slopes, and a mixture of sediments on the lee-side slopes of the mega ripples...","Multibeam Echosounder; Sediment Classification; Dutch North Sea; Object-based image analysis; Backscatter; Bathymetry; Bathymetric derivatives; Grab samples; Bayesian classification; Seafloor mapping; Benthic habitats; Marine geology; Sandbanks; Tidal ridges; Sand waves; Mega ripples; Sand ripples","en","doctoral thesis","","978-94-6384-297-6","","","","","","","","","Aircraft Noise and Climate Effects","","",""
"uuid:18fc05b9-fdf4-4655-ab27-d60ce92401e1","http://resolver.tudelft.nl/uuid:18fc05b9-fdf4-4655-ab27-d60ce92401e1","Cost Allocation in Integrated Community Energy Systems","Li, N.L. (TU Delft Energie and Industrie)","Lukszo, Z. (promotor); Hakvoort, R.A. (promotor); Delft University of Technology (degree granting institution)","2022","With the growing concerns over energy depletion and environmental protection all over the world, more and more attention is being paid to energy transition towards renewable energy sources (RESs), energy efficiency improvement, and CO$_2$ emission reduction. Integrated community energy systems (ICESs) emerge in the development of local energy systems by integrating local distributed energy resources (DERs) and local communities. Local community members are actively involved in the planning, development, and administration of the energy system as well as the allocation of its costs and benefits.
In principle, the costs should be paid by those who consume energy and use energy-related services in the system, and the benefits should be assigned to those who made the investments. A well-designed cost allocation contributes to the successful implementation of ICESs in the short term and their sustainable development in the long term. In large power systems, the regulator makes decisions on tariff design according to regulatory principles. However, no regulators are dealing with these issues in ICESs; the local community itself needs to agree on the cost allocation method. This, therefore, requires that the costs be allocated in a socially acceptable manner. To fill this research gap, the main research question addressed in this thesis was:
How to design cost allocation in an ICES in a socially acceptable manner?
This question was answered by first reviewing cost allocation in tariff design, including the objectives, regulatory procedures, tariff structure design, regulatory principles and the widely used cost allocation methods. After that, an extensive discussion of how these concepts and methods can be applied in the context of cost allocation in ICESs was conducted. Based on this, a systematic framework was proposed in order to ensure a successful implementation of cost allocation design in ICESs.
Cost allocation framework
Cost allocation in ICESs is a rather new topic; there is no guidance on how to do it in a systematic manner. This thesis presented a systematic framework for allocating costs in ICESs, learning from cost allocation in electricity tariff design. It clearly defines the objectives, procedures, and required components for allocating costs in a socially acceptable manner. Ten cost allocation methods that are applicable in the context of cost allocation in an ICES were derived and formulated mathematically to show their underlying principles.
Performance assessment of cost allocation methods
Each cost allocation method has its own characteristics and may perform differently. It is necessary to assess their performance in order to distinguish between them. In this thesis, two criteria, cost reflectiveness and predictability, are proposed to evaluate the performance of the ten cost allocation methods. Cost reflectiveness is used to gain insights into how well the costs are allocated, and cost predictability is used to show how the energy costs change in the long term. For this purpose, a case study with 100 households is used in the model to investigate the performance of the ten cost allocation methods. The results showed that energy-based methods perform better than the other methods. In order to further identify the effectiveness of the methods and how they perform in terms of the two criteria, a sensitivity and a robustness analysis were conducted. The sensitivity analysis was conducted by investigating changes in the number of consumers in the community.
The results showed that the number of consumers has little or no influence on the performance of the ten cost allocation methods in terms of cost reflectiveness and predictability. The robustness analysis was conducted by investigating the penetration of prosumers in the ICES. The findings showed that increasing prosumer penetration has a positive effect on the performance of the ten methods in terms of the two criteria in the event of changes in the number of local community members and prosumers. The two analyses also showed that the energy-based allocation methods retain their merits with respect to the two criteria. This comprehensive analysis provides a better understanding of the performance of the ten cost allocation methods considered in this thesis.
Social acceptance analysis
One of the novel aspects of ICESs lies in the integration of local community members. They play an important role in the energy system by being actively involved in the planning, development, and administration of the energy system, as well as the allocation of its costs and benefits. Local community members are encouraged to participate in the decision-making process. They may have various preferences regarding cost allocation. The selected cost allocation method should satisfy the requirements and preferences of local community members.
Furthermore, since no regulators are involved in ICESs, the community itself needs to agree on the cost allocation method. This, therefore, requires that the selected cost allocation method be socially acceptable to local stakeholders. In this thesis, social acceptance is conceptualized from the perspective of procedural and distributive justice, to make sure both the process and the results of cost allocation are fair and socially acceptable to local community members. Furthermore, local community members with similar backgrounds and interests may have similar or the same preferences over the criteria. Therefore, they can be classified into several groups according to their major preferences. This represents a multi-group, multi-criteria decision-making problem. Here we proposed a multi-group multi-criteria decision-making approach to support local community members in selecting a socially acceptable cost allocation method.
A simulation was also conducted in order to illustrate the decision-making tool developed in this thesis. The local community is categorized into different decision-making groups considering the differences in their major preferences. Seven decision-making groups are considered in this study, and their major preferences range from fairness and cost reflectiveness to stability, and any combination of these. The numerical results show that time-of-use pricing is the best solution for the seven decision-making groups considered in this research. In addition, an analysis of changes in the weights of the decision-making groups was conducted to see how this would influence the selection of the cost allocation method. The results indicated that changes in the number of local community members in the different decision-making groups influence the selection of the best solutions.
Conclusions
This thesis presents a practical solution for allocating costs in ICESs in a socially acceptable manner. A systematic framework was formulated, possible cost allocation methods were proposed, and a decision-making tool was developed in order to ensure a successful implementation of cost allocation design in ICESs. The methodology developed in this thesis can be applied to any local community energy system. The obtained results can be used by decision-makers to support them in the decision-making process. A successful cost allocation will contribute to the implementation of ICESs, and thus to the energy transition.","Integrated community energy systems; Cost allocation; Cost reflectiveness; Cost predictability; Social acceptance; Multi-group perspective; Multi-criteria decision-making","en","doctoral thesis","","978-94-6384-293-8","","","","","","","","","Energie and Industrie","","",""
"uuid:cdab8d12-b970-4c02-b972-cfb61dd00c98","http://resolver.tudelft.nl/uuid:cdab8d12-b970-4c02-b972-cfb61dd00c98","Distributing entanglement in quantum networks","Goodenough, K.D. (TU Delft QID/Wehner Group)","Wehner, S.D.C. (promotor); Elkouss Coronas, D. (copromotor); Delft University of Technology (degree granting institution)","2022","The research presented in this thesis focused on the problem of entanglement distribution. Simply put, the two main problems facing (practical) implementation of entanglement distribution over quantum networks are loss and noise. Quantum repeaters are meant to overcome the effects of loss, but in practice their implementation always comes at the cost of more incurred noise. This additional noise can be overcome by the use of entanglement distillation.
In the first two chapters, we focused on the assessment of a basic building block for quantum networks, a single quantum repeater. We then considered finding schemes for the concatenation of multiple such quantum repeaters, along with the inclusion of basic distillation protocols. Finally, we considered a systematic way of optimising over a relevant class of (more complex) distillation protocols.","quantum information; quantum information theory; quantum networks; quantum communication; quantum repeaters","en","doctoral thesis","","","","","","","","","","","QID/Wehner Group","","",""
"uuid:824acfcf-29b6-4c99-86b0-f6e3bd4e15c3","http://resolver.tudelft.nl/uuid:824acfcf-29b6-4c99-86b0-f6e3bd4e15c3","Design and Fabrication of Shell Structures: aided by radial basis functions and reconfigurable mechanisms","Chiang, Y.-C. (TU Delft Structural Design & Mechanics)","Overend, M. (promotor); Veer, F.A. (promotor); Delft University of Technology (degree granting institution)","2022","Shell structures carry loads with their thin yet curved shapes. Being thin means shells require little material, which is desirable for minimizing embodied carbon footprints. However, the feature of being curved implies shells require immense effort to design and fabricate. To address the challenges, this dissertation consists of three parts: developing a design algorithm based on radial basis functions (RBFs), inventing a fabrication technique based on reconfigurable mechanisms, and producing prototypes based on the new algorithm and mechanism. The first part of this dissertation introduces a new algorithm based on RBFs for designing smooth membrane shells, which is more versatile than existing methods. The algorithm can generate membranes with both tensile and compressive stresses. It can also tweak an initial shape to meet free-edge conditions. It can also incorporate horizontal loads in the form-finding process. The second part of the dissertation presents a new system of flat-to-curved mechanisms, which allows a shell to be fabricated in a flat configuration and deployed into a double-curved state. Such a mechanism consists of panels connected by tilted hinges. The mechanism can contract non-homogeneously and change its Gaussian curvature. The last part of this dissertation demonstrates the integral application of the RBFs form-finding algorithm and the flat-to-curved mechanisms. The prototypes designed and produced deliver form-found shapes that have spans ranging from 0.2 to 4 meters. 
This dissertation contributes to the development and distribution of shell structures by developing computer algorithms and digital fabrication techniques to minimize the hurdles of designing and fabricating shell structures.","Shell structure; Funicular form-finding; Radial basis functions; Fabrication-aware design; Architectural geometry; Reconfigurable mechanism","en","doctoral thesis","TU Delft OPEN Publishing","9789463665087","","","","A+BE | Architecture and the Built Environment No 3 (2022)","","","","","Structural Design & Mechanics","","",""
"uuid:8079a208-fda0-49ab-9048-df1f9a0158e0","http://resolver.tudelft.nl/uuid:8079a208-fda0-49ab-9048-df1f9a0158e0","Coupling lattice, charge and topological reconstructions at oxide interfaces","van Thiel, T.C. (TU Delft QN/Caviglia Lab)","Caviglia, A. (promotor); Akhmerov, A.R. (promotor); Delft University of Technology (degree granting institution)","2022","Modern materials synthesis techniques allowfor the layer-by-layer assimilation of structurally similar, yet compositionally different materials into artificial crystals, with atomic scale precision. At the resulting heterointerfaces, structural, electronic and magnetic reconstructions can lead to physical phenomena that are otherwise absent in the individual constituents. Composing so-called heterostructures is therefore one of the key approaches towards realizing the ultimate goal of designer materials with tailored properties. In this context, perovskite oxides represent a promising class of materials, owing to the combination of a delicate balance among competing electronic and magnetic interactions, as well as excellent structural compatibility among its members. This thesis describes a collection of investigations into interface-driven reconstructions in heterostructures composed of such perovskite oxides. Chapters 1 provides a brief introduction to the field of complex oxide interfaces, as well as the Berry curvature and its relationship to the so-called anomalous Hall effect. Chapter 2 provides an overview of the main experimental techniques used throughout this thesis; pulsed-laser deposition, X-ray diffraction, lithographic device fabrication and cryogenic magnetotransport characterization. Chapter 3 focuses on heterostructures composed of spin-orbit semimetal SrIrO3 and the bandgap insulator SrTiO3. Aided by transport measurements, synchrotron X-ray diffraction and DFT calculations, we demonstrate a coupling of orthorhombic structural domains in the film to tetragonal domains in the substrate. 
The results extend to a variety of orthorhombic materials, opening up possibilities to manipulate structural domain patterns in a wide range of materials through interaction with a tetragonal substrate. Chapters 4 and 5 focus on the itinerant ferromagnet SrRuO3 and its intriguing intrinsic anomalous Hall effect. We show that by interfacing SrRuO3 with SrTiO3, SrIrO3 and LaAlO3, the sign of the momentum-space Berry curvature can be controlled. We propose a simple two-channel model to account for the unusual field dependence of the anomalous Hall effect in asymmetric heterostructures. The findings in these chapters underline oxide interfaces as a versatile platform for manipulating the geometric properties of wavefunctions in solid-state systems, as well as the potential of ultrathin SrRuO3 for spintronic applications. In Chapter 6, we synthesize SrRuO3 thin films on SrTiO3 (111) substrates. Transport measurements indicate a distinct effect of electronic confinement on the electronic properties of (111)-oriented SrRuO3 thin films as compared to their (001) counterparts, producing bands with a hole-like character in the ultrathin limit. This highlights crystal orientation and heteroepitaxial growth as an effective tuning parameter for controlling the electronic properties of oxide heterostructures. The final chapter summarizes the findings of this thesis and provides a number of research directions to be further explored.","complex oxides; oxide interfaces; lattice instabilities; anomalous Hall effect; topological reconstructions; polar discontinuity","en","doctoral thesis","","978-90-8593-511-7","","","","","","2023-01-31","","","QN/Caviglia Lab","","",""
"uuid:9dd9701b-343e-4f25-8d49-02652e839e32","http://resolver.tudelft.nl/uuid:9dd9701b-343e-4f25-8d49-02652e839e32","Technology platform for advanced neurostimulation implants: The “chip-in-tip” DBS probe","Kluba, M.M. (TU Delft Electronic Components, Technology and Materials)","Dekker, R. (promotor); Delft University of Technology (degree granting institution)","2022","The progress in the field of neurostimulation is impressive, both from a technical as well as from a therapeutic point of view. Nowadays, the electrical stimulation of the nervous system can be used to induce or suppress muscle responses. Additionally, it can also influence hearing, vision, immune system response, pain perception, and even mental state. The number of medical conditions that can be treated using existing or completely new neurostimulation devices is continuously growing. Moreover, well-targeted electrical neuromodulation can help reduce the whole-body side effects, typical for traditional medication therapies. However, the potential of neurostimulation therapy is limited by the relatively slow development of the accompanying technologies. Most commercial neurostimulation implants still consist of a pulse generator encapsulated in a bulky titanium case and lengthy extension cords. Moreover, in some cases, such as deep brain stimulation (DBS), the resolution of the stimulation is also an issue that can cause severe side effects. In this thesis work, a technology platform for the manufacturing and packaging of advanced neurostimulation implants has been developed to enable further bioelectronics miniaturization and improve the stimulation resolution. 
These goals have been achieved in close collaboration with the InForMed project partners involved in finalizing the joint design, preparing the inter-facility fabrication process, and supplying off-the-shelf technology modules…","neurostimulation; directional deep brain stimulator; miniaturization; high-level integration; trench capacitor; high-definition flex-to-rigid; sealable trenches; cavity-BOX; biocompatible flip-chip; flexible interconnects; soft encapsulation; parylene; platinum; ceramics; parylene processing in cleanroom","en","doctoral thesis","","978-94-6384-296-9","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:d2ff078a-70f3-4d41-944d-44a81ed6fa07","http://resolver.tudelft.nl/uuid:d2ff078a-70f3-4d41-944d-44a81ed6fa07","Study of foam in model fractures: coarsening, gas trapping and gravity effects","Li, K. (TU Delft Applied Geophysics and Petrophysics)","Rossen, W.R. (promotor); Wolf, K.H.A.A. (promotor); Delft University of Technology (degree granting institution)","2022","Naturally fractured reservoirs (NFRs) gain much attention worldwide because they are often encountered in aquifer remediation, CO2 sequestration, and hydrocarbon extraction. In hydrocarbon extraction, however, oil recovery by gas injection in NFRs is usually low, because of poor sweep efficiency. During gas injection, the displacement front is unstable. Conformance problems, such as gravity override, viscous fingering, and channeling, take place because the gas has a lighter density and lower viscosity compared to reservoir fluids, and tends to flow preferably through high-permeability zones in heterogeneous reservoirs. In addition, open fractures can have much greater conductivity than the matrix. As a result, gas flows through fractures, leaving much of the matrix unswept. Foam, by adding surfactant solution to gas injection, can effectively mitigate conformance problems by greatly reducing the mobility of gas. During foam flooding in porous media, the displacement front is more stable, and more gas is diverted to unswept zones, hence improving the sweep and increasing oil recovery. Foam can also be created in fractures, where it builds up a viscous pressure gradient and thus diverts the flow of gas into the matrix. As a result, the sweep is improved. In the field, foam pilots have achieved an increase in oil production rate and a reduction in gas/oil ratio. Despite this success, foam application in NFRs is still much less understood than in unfractured porous media. In this dissertation, we aim to expand our understanding of foam in fractures through an experimental approach. 
To this end, we create four 1-m-long, 15-cm-wide glass model fractures (Models A, B, C and D) with different roughness and hydraulic apertures. Each model consists of two 2-cm-thick glass plates. The top plate is smooth and the bottom plate is roughened on the side facing the top plate. Between the two plates is a slit-like channel representing a single geological fracture. Model A has a roughened plate with a regular roughness. Models B, C and D, with increasing hydraulic apertures, use the same roughened plate with an irregular roughness. We profile the roughness of the roughened plates and study the aperture distribution of the model fractures to characterize the geometry of the model fractures. With local hills (maxima of height) and valleys (minima of height) on the roughened plates, the distribution of aperture of model fractures can be represented as a 2D network of pore bodies and pore throats. In the experiments, we inject pre-generated foam into the model fractures. We study foam behavior after foam flow reaches steady-state. As our models are transparent, we use a high-speed camera to directly visualize and record images of foam in the model fractures. Using ImageJ software, we analyze foam images to quantify the properties of the foam.","foam; naturally fractured reservoirs; fractures; image analysis; water saturation; capillary pressure; local equilibrium; gas trapping, capillary number; coarsening; gravity segregation","en","doctoral thesis","","978-94-6366-496-7","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:88aab5a4-0fca-4d07-8495-81fc75bdc5c2","http://resolver.tudelft.nl/uuid:88aab5a4-0fca-4d07-8495-81fc75bdc5c2","Troubled wastewaters: the politics of transitions to a circular economy","Ampe, K.V.J. (TU Delft BT/Biotechnology and Society)","Osseweijer, P. (promotor); Block, T. (promotor); Paredis, E. (promotor); Asveld, L. (copromotor); Delft University of Technology (degree granting institution)","2022","Humanity faces major challenges because the boundaries of an ecologically safe and socially just space are transgressed.
This thesis’ point of departure is that these societal challenges result from long-term, complex, unsustainable consumption and production patterns in socio-technical systems such as energy, mobility, agriculture and water. These challenges cannot sufficiently be addressed by incremental improvements and technological fixes along path-dependent trajectories, but also require path-breaking changes towards new socio-technical systems. So far, however, progress has been rather limited in achieving long-term sustainability objectives and fundamentally transforming these systems. Put differently, societal change does not happen, or it takes place at an agonisingly slow pace.
Against this backdrop of persistent environmental problems and rigid, unsustainable socio-technical systems, innovative activities are being developed to enable a paradigm shift towards a circular economy. However, as such shifts are highly political, these new activities typically result in inertia or, at most, incremental changes in established socio-technical systems. Therefore, the thesis investigates the political processes underlying inertia and incremental change in established socio-technical systems, directing attention to the power of deep-rooted ideas, entrenched networks, embedded rules and vast infrastructure that hinder fundamental change. To do so, it focusses on the wastewater systems of Belgium and the Netherlands. Here, novel activities are being developed that arise from the need for a rapid shift to a circular economy. Yet these wastewater systems are also characterised by large, stable infrastructures and robust institutional arrangements. As a result, the main topic of this thesis is the politics of sustainability transitions towards a circular economy in the wastewater systems of Belgium and the Netherlands.
In six chapters, the thesis delves into the politics of transitions towards a circular economy in the Dutch and Belgian wastewater systems.","","en","doctoral thesis","","978-94-6384-294-5","","","","","","","","","BT/Biotechnology and Society","","",""
"uuid:09108773-40cb-4db1-adf9-a6f586e03eca","http://resolver.tudelft.nl/uuid:09108773-40cb-4db1-adf9-a6f586e03eca","Metabolic engineering of Saccharomyces cerevisiae for the production of aromatic compounds","Else-Hassing, J. (TU Delft BT/Industriele Microbiologie)","Daran, J.G. (promotor); Pronk, J.T. (promotor); Delft University of Technology (degree granting institution)","2022","Over the past century, the world population has increased fourfold. This increase in human population is accompanied by an increased demand in food, water, energy and consumer goods. The resulting intensification of agriculture, deforestation, emission of green-house gasses and high utilization of natural compounds such as coal, minerals, metals and fossil-fuels have resulted in global warming and a depletion of Earth’s natural reserves. The application of biotechnology can aid in the transition to a more sustainable, circular and bio-based economy. For example, by offering novel production processes for a range of different compounds, such as therapeutics, beverages, food, chemicals and fuels from renewable sources. For instance, Baker’s yeast is known for its natural ability to produce CO2 and ethanol from sugars, characteristics that were historically exploited for the production of alcoholic beverages and bread. Today, bio-ethanol as transportation fuel made by yeasts also provides a more sustainable alternative to gasoline. Additionally, due to the enormous increase in knowledge and the establishment of genome editing tools and sequencing possibilities, biotechnology can now apply genetically engineered microbes to produce an ever-increasing range of products, both native or heterologous, to the microorganism. For example, micro-organisms have been engineered to produce complex molecules such as human insulin, the flavoring compound vanillin and the antimalarial drug artemisinin. 
Insulin, which is essential for treatment of diabetes, was conventionally produced by extraction from pig pancreas, while artemisinin and vanillin were extracted from plants, or in case of vanillin, also synthesized chemically. However, microbial production of such industrially valuable compounds, from simple substrates such as glucose or second-generation feedstocks, offers a more reliable and sustainable production method compared to these classical methods. In this thesis, special emphasis is given to the Baker’s yeast Saccharomyces cerevisiae and its application in the production of aromatic compounds. There is an increasing interest in the microbial production of aromatic molecules, particularly in the flavor and fragrance industry. The economic potential of this field is partly due to European legislation that allows the production and sale of microbially produced molecules, as long as the final product is devoid of genetically modified organisms (GMOs). S. cerevisiae is able to natively synthesize several aromatic compounds, although their production is limited by tight regulation of the involved pathways. Many other industrially attractive aromatic compounds find their origin in plants. In order to establish yeast-based production of these aromatic molecules, it is necessary both to introduce plant genes and to modify the metabolism of S. cerevisiae to obtain fast and efficient production...","","en","doctoral thesis","","978-94-6423-618-7","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:b9fef4c4-c52d-4155-875c-bda9bc985f0d","http://resolver.tudelft.nl/uuid:b9fef4c4-c52d-4155-875c-bda9bc985f0d","Fault Diagnosis in Household Appliances: A Design Perspective","Pozo Arcos, B. (TU Delft Circular Product Design)","Bakker, C.A. (promotor); Balkenende, A.R. (promotor); Delft University of Technology (degree granting institution)","2022","Today’s industrialized societies face the challenge of integrating economic activity with sustainable consumption. Prosperity has come hand in hand with environmental damage. Product lifetimes are decreasing and there is a rising demand for high-tech products for which no effective recycling is in place. Hence, the value from products is lost to waste. The current use and management of the Earth’s resources is unsustainable.
The circular economy (CE) aims at slowing, closing, and regenerating the flow of goods and materials that enter the economic system. It posits retaining the value from products and encourages a shift to renewable energy resources. In this way, the CE will help reduce our current accelerated resource depletion. In particular, product repairs can help slow down the flow of goods. Repairing products provides an alternative to premature product replacement, and contributes to a significant reduction of waste.
In this thesis, I look in detail at the process of fault diagnosis, one of the initial steps to be taken when repairing products. Fault diagnosis identifies the faulty component(s) or cause of failure in a malfunctioning appliance and is therefore essential for efficient repair. It enables the time, cost, and skills required for the component repair to be established.","product design; fault diagnosis; repair; circular economy","en","doctoral thesis","","978-94-6384-292-1","","","","","","","","","Circular Product Design","","",""
"uuid:29bd3863-7008-4825-9856-34ee7beafb56","http://resolver.tudelft.nl/uuid:29bd3863-7008-4825-9856-34ee7beafb56","Next-Generation Protein Identification: Advancing Single-Molecule Fluorescence Approaches","Filius, M. (TU Delft BN/Chirlmin Joo Lab)","Joo, C. (promotor); Dekker, C. (promotor); Delft University of Technology (degree granting institution)","2022","Proteins are the workhorses of the cell, as such, they form the basis of all living systems. In order to fully understand biological processes, the ability to identify and quantify the proteins in cells is crucial. Identification can be achieved by determining the amino acid sequence of proteins, since this sequence is unique for each protein. However, protein sequencing remains an enormous challenge. The dynamic range at which proteins can occur spans several orders of magnitude, and the need to identify all 20 different amino acids are only a few of the challenges that are currently preventing us from sequencing proteins. However, when realized, single-molecule protein sequencing will create the opportunity for single-cell proteomics and screening for on-site medical diagnostics. It will lead to a revolution in biophysics, biotechnology, and healthcare. Fluorescence techniques belong to one of the most commonly used techniques in biophysics and have brought about a deeper understanding of biological processes at the single-cell or single-molecule level. Furthermore, the field of DNA sequencing has demonstrated that the use of fluorescence approaches has enabled one of the main breakthroughs in DNA sequencing: the development of next-generation sequencers (NGS). These NGS greatly reduced the sequencing costs per human genome. It comes as no surprise that fluorescence approaches are being explored for protein sequencing as well. 
In this thesis, we pioneer single-molecule FRET (fluorescence resonance energy transfer) and DNA nanotechnology approaches for the development of a protein identification platform.","single-molecule biophysics; single-molecule FRET; Protein sequencing; protein fingerprinting; DNA nanotechnology; Fluorescence imaging","en","doctoral thesis","","978-90-8593-510-0","","","","","","","","","BN/Chirlmin Joo Lab","","",""
"uuid:52038194-6cdf-4098-8723-852eb68dfd00","http://resolver.tudelft.nl/uuid:52038194-6cdf-4098-8723-852eb68dfd00","Probabilistic Motion Planning for Multi-Robot Systems","Zhu, H. (TU Delft Learning & Autonomous Control)","Babuska, R. (promotor); Alonso Mora, J. (copromotor); Delft University of Technology (degree granting institution)","2022","Planning safe motions for multi-robot systems is crucial for deploying them in real-world applications such as target tracking, environmental monitoring, and multi-view cinematography. Traditional approaches mainly solve the multi-robot motion planning problem in a deterministic manner, where the robot states and system models are perfectly known. Practically, however, many sources of uncertainty exist in real-world environments, such as noisy sensor measurements, motion disturbances, and uncertain behaviors of other decision-making agents. Reasoning about these uncertainties is of utmost importance for robust and safe navigation of multi-robot systems. To this end, this thesis aims to develop probabilistic methods for multi-robot motion planning under uncertainty.
The first main contribution of this thesis is a Chance-Constrained Nonlinear Model Predictive Control (CCNMPC) method for probabilistic multi-robot motion planning. Taking into account uncertainties in robot localization, sensing, and motion disturbances, the method explicitly considers the collision probability between each robot and obstacle and formulates a model predictive control problem with chance constraints. A tight upper bound of the collision probability is developed which makes the CCNMPC formulation tractable and solvable in real time. In addition, the CCNMPC is incorporated into multi-robot motion planning using three coordination strategies: a) centralized sequential planning, b) distributed planning in which robots communicate their future planned trajectories, and c) decentralized planning in which robots predict other robots' trajectories using the constant velocity model (CVM). Performances of the three strategies are analyzed and compared.
The CCNMPC method requires robots to know the future trajectories of other robots, either via communication or motion prediction using CVM. However, communication is not always available, and the CVM based motion prediction can lead to collisions among robots, especially in crowded environments. To achieve decentralized and communication-free multi-robot collision avoidance under uncertainty, this thesis then presents a method that relies on the introduced Buffered Uncertainty-Aware Voronoi Cell (B-UAVC). The B-UAVC defines a local safe region for each robot among other robots and obstacles, such that the collision probability between robots and obstacles is below a specified threshold if each robot's motion is constrained to be within its corresponding B-UAVC. An approach to constructing the B-UAVC is proposed, which leverages the techniques of computing a separating hyperplane between two Gaussian distributions and adding buffers for probabilistic collision avoidance. Based on B-UAVC, a set of reactive controllers are designed for single-integrator, double-integrator, and differential-drive robots, respectively; and a receding horizon planner is proposed for general nonlinear dynamical systems.
Instead of directly generating a control action for each robot to move towards its waypoint goal as in the CCNMPC and B-UAVC methods, this thesis further presents a method that can compute a safe control input by minimally modifying a given nominal controller, which may come from a high-level task-oriented planner. The method is decentralized and relies on the Chance-Constrained Safety Barrier Certificates (CC-SBC), which define a probabilistic safe control space for each robot in a multi-robot system considering robot localization and sensing uncertainties. The CC-SBC chance constraints are reformulated into a set of deterministic quadratic constraints, based on which a quadratically constrained quadratic program (QCQP) can be formulated. By solving the QCQP, the robot can obtain a safe control action because the CC-SBC guarantees forward invariance of the robot's safety set in a probabilistic manner. Hence, the CC-SBC method can be used as a probabilistic safety filter for multi-robot systems.
While both the B-UAVC and CC-SBC methods are decentralized and communication-free, they typically lead to more conservative robot motions than the CCNMPC method with robots communicating their planned trajectories. The CCNMPC method can also be communication-free by letting each robot predict the other robots' trajectories using the constant velocity model, but it is unsafe in crowded environments. To address this issue, this thesis finally presents a novel trajectory prediction model based on Recurrent Neural Networks (RNN) that can learn multi-robot motion behaviors from demonstrated trajectories generated using a centralized motion planner. By incorporating the learned RNN-based trajectory prediction model within the MPC framework, efficient and communication-free multi-robot motion planning is achieved.
The motion planning methods developed in the thesis have been extensively evaluated and validated in simulations and real-world experiments with a team of quadrotors, showing safe navigation of robots under uncertainty.","Motion planning; collision avoidance; multi-robot systems; micro aerial vehicles; planning under uncertainty; dynamic environments","en","doctoral thesis","","978-94-6423-651-4","","","","","","","","","Learning & Autonomous Control","","",""
"uuid:07d93422-d07c-492e-8d65-592344e01936","http://resolver.tudelft.nl/uuid:07d93422-d07c-492e-8d65-592344e01936","Mitigating Leakage and Noise in Superconducting Quantum Computing","Battistel, F. (TU Delft QCD/Terhal Group)","Terhal, B.M. (promotor); DiCarlo, L. (promotor); Delft University of Technology (degree granting institution)","2022","Computers are used all over the place to perform tasks ranging from sending an email to running some complicated numerical simulation. That is brilliant of course, because computers enable us to solve a lot of problems in the world in this way. At the same time, for some of those problems, not even powerful supercomputers are enough to get the result of the computation in any reasonable amount of time. An alternative that might be able to solve some of these problems very quickly are quantum computers. The operations performed by a quantum computer need to be faithful in order to get the right result of the quantum computation. However, nowadays quantum computers are fairly noisy, severely limiting their range of applicability in the near future. Various methods for quantum error correction have been developed, showing that, if error rates are below a certain threshold, one can make the computation as error-free as desired. However, while quantum error correction is starting to be tested in experiments, its performance has been mostly studied with respect to idealized error models. Furthermore, quantum error correction comes at the price of a substantial overhead in number of qubits and number of operations, especially if error rates are just barely below threshold. From a different perspective, error-mitigation techniques that do not need the full machinery of quantum correction have been put forward, fostering hope that noisy near-term devices might run useful applications even without quantum error correction. However, in either case the physical error rates of the fundamental operations are still high. 
In this thesis we focus on achieving lower error rates for some of the fundamental operations in a quantum computer, specifically for superconducting qubits, and we demonstrate the beneficial impact of these results on quantum error correction in a realistic setting. We develop error models that are physically motivated for superconducting qubits (reviewed in Chapter 2), based on the noise sources to which they are sensitive (reviewed in Chapter 3). The major novel elements of our models are the inclusion of leakage, quasi-static flux noise, and distortions of electronic signals. In Chapter 6 we discuss a flux-pulsing technique for controlled-phase gates, named Net Zero. In the first part, we show that the characteristic zero-integral feature protects against long-timescale distortions, echoes out flux noise and uses leakage interference to mitigate leakage, leading to a fast, high-fidelity gate. In the second part, we introduce an updated version of Net Zero, called Sudden Net Zero, that maintains the same advantages and adds ease of tune-up and straightforward conditional-phase tunability. Diagnosing errors is crucial for correcting them and tuning up gates. In Chapter 7 we introduce Spectral Quantum Tomography, a tomographic method that can provide detailed information about errors in single- and two-qubit gates, in a way that is independent of state-preparation and measurement errors. In particular, we investigate the footprint of relaxation and dephasing, as well as leakage and non-Markovian noise. Leakage outside of the qubit computational subspace is particularly damaging for quantum error correcting codes, especially stabilizer codes (reviewed in Chapter 4). Leakage-reduction units (reviewed in Chapter 5) can bring a leaked qubit back to the computational subspace, thus restoring part of the loss in performance. 
Based on the error model developed for two-qubit gates, we study the effect of leakage in quantum error correction using realistic density-matrix simulations. In Chapter 8 we use hidden Markov models to detect leakage in a transmon-qubit-based surface code and improve the logical fidelity by post-selection. The detection is based on recognizing patterns in the stabilizer measurements that can likely be attributed to leakage. In Chapter 9 we introduce a hardware-efficient leakage-reduction scheme to directly remove leakage in a scalable way that does not require extra qubits or time, leading to a reduction of the logical error rate. In particular, we propose two separate leakage-reduction units tailored for data and ancilla qubits, respectively. For data qubits, we apply a microwave pulse that transfers leakage to its dedicated readout resonator, where it quickly decays into the environment. For ancilla qubits, we use a microwave pulse that maps the leaked state to a computational state. These techniques for two-qubit gates, tomography and leakage mitigation contribute to reducing the error rates, benefiting quantum error correction as well as near-term devices. In the Conclusion we give an outlook on the potential challenges in superconducting quantum computing, including tunable couplers, real-time decoding and physical error rates in large devices.","superconducting qubits; leakage; quantum error correction; quantum gates","en","doctoral thesis","","978-94-6384-285-3","","","","","","","","","QCD/Terhal Group","","",""
"uuid:c2d123b9-7cf9-4ffd-b495-06a53fe727c2","http://resolver.tudelft.nl/uuid:c2d123b9-7cf9-4ffd-b495-06a53fe727c2","Cooperative Urban Driving Strategies at Signalized Intersections","Liu, M. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Wang, M. (copromotor); Delft University of Technology (degree granting institution)","2022","Growth in the number of vehicles causes excessive traffic congestion and travel delay on urban roads, especially at signalized intersections. The recent advances in connected and automated vehicle (CAV) technology and the upgrade of Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), and Infrastructure-to-Vehicle (I2V) communications have been proposed as potential solutions to efficient and effective urban transportation. CAVs enable the capability to share data, communicate with neighboring vehicles and roadside infrastructures, and connect to traffic control systems, and therefore offer the benefits to reduce congestion and pollution levels and improve comfort and road safety. CAV platoons can coordinate member vehicles for a common goal in platooning. In this way, vehicles can be cooperative to accelerate/decelerate facing the traffic signal controllers on urban roads. The challenge is posed by the diversity of signal control approaches, such as fixed-timing, actuated, and adaptive signals. However, the benefits and effectiveness of CAV platoon trajectory optimization for all those various systems in the vicinity of signalized intersections remain unclear in research and also practice...","","en","doctoral thesis","","978-90-5584-307-7","","","","","","","","","Transport and Planning","","",""
"uuid:3f978e0b-cb95-4827-bb2b-254de42d4b3b","http://resolver.tudelft.nl/uuid:3f978e0b-cb95-4827-bb2b-254de42d4b3b","Combinisme: Integraal Architectonisch Denken Aanzet tot een Ruimtewetenschap(actie)filosofie","Visser, M.J. (TU Delft Design Informatics)","Sariyildiz, I.S. (promotor); Delft University of Technology (degree granting institution)","2022","Combinism is not the same as Communism. Although the Combinist is interested in collective systems, theories and dogmas, he never loses sight for the individual perceptions and minds of people. The only slogan of the Combinist is ‘Form Follows Dialogue’. The Combinist is both a traditionalist and a modernist; sometimes even a dreamer, a futurist and if it cannot be otherwise an utopist. The Combinist perceives Combinism as a combination of arts & humanities, natural- and social sciences, but he will never consider this mentality and approach as a limit. New challenges, ideas, collaborations etc. will constantly lead the Combinist to new combinations of existing and new ideas, technics, models, forms, materials, etc.. Combinism is a dynamic and creative way of thinking and acting aimed at quality action and quality changes. Combined interests, qualities, values and ideals should go hand in hand with the interest of individuals, institutions and specialists. The Combinist in architecture is looking for a creative, complex, future proof, aimed at wellbeing-win–win, worldwide oriented (action) philosophy of architecture and urbanism (‘Space Science’). Knowledge of history, styles, building techniques, design theories, aesthetics, sizes, colors, materials and forms will be linked to knowledge of human needs and creative processes. Because the world is becoming more and more complex, teamwork of like-minded people, Allianceand Action-teams (‘A-Teams’) will become more important than ever. A Combinist is thinking out of the box, as well as creatively and imaginatively, but also to the point, businesswise and functional. 
He has a focus on both the outside world (economics, culture, society, nature, sciences) and the inner world (the world of the mind and the senses). Using combinations he tries to bridge classic opposites such as ratio-emotion, female-male, western-eastern, beautiful-ugly, order-disorder, simple-complex, nature-culture, individual-collective, inside-outside, stupid-intelligent, urban-rural, etc. He tries to establish a synthesis between people and their environment on the levels of chair, house, city, region, country and continent. The Combinist is not only interested in classical themes such as order, harmony, composition and building technique, but will also engage with more modern themes such as context, irony, humor, entertainment, citizen participation, DIY, ICT, future-proofing, affordability, maintenance and management, flexibility, and (why not occasionally?) disharmony, formlessness and deconstruction if it is necessary to wake people and encourage them to new experiences of each other and the surrounding space. During planning processes the Combinist tries to utilize the creativity, knowledge and experience of others and will look for the “best” team and project solution. “Best” is defined collectively in a complex, integrated, multilayered, all-inclusive way. When making new acquaintances, he prefers showing the joint business card of his planning team over the business card of his own company. The Combinist combines content with process, theory with practice, soft with hard, and he will try to accomplish something wherever possible and to achieve (fast) results. He considers nature and history as infinite sources of inspiration. The Combinist prefers a (creative) action culture over a (bureaucratic) meeting culture; he will not avoid conflicts and does not hesitate to oppose those who possess power if necessary. 
He will try to get the hang of projects quickly and combines the interest of the project with his personal interest without lapsing into egoism and self-enrichment at the expense of others. He tries to enhance the happiness of people with their life and existence. Creating quality, beauty, sustainability and comfort is more important than just making money. Eventually the Combinist, or rather the Team of Combinists, strives for the realization of a ‘Combi-environment’ that has been defined earlier as: a future-proof environment that provides a synthesis between people and their environment, that enables interactions between individuals and the collective, indoor and outdoor, that offers comfort, that is safe, clean, future-proof and beautiful, that stimulates the mind and senses, touches the heart and evokes feelings of general well-being. Use values, aesthetic values, ethical values, sustainability values, values of culture and nature and perception enter into a mutual connection (combination). (CONTENT). By leaving things as they are, but also by creating or transforming things with an eye for the human dimension, in our time we can take up the challenge of creating “wellbeing-win-win-worldwide” situations that highlight the quality of people and their environments, not motivated by greed and self-interest, but by a (natural) need for beauty, sustainability, comfort, safety, happiness and creativity. This result can be achieved through teamwork (Combi-work) of like-minded people. (PROCESS). Making good combinations and creating cohesion between old and new is an inevitable and perpetual dynamic process. Therefore it is strange when people stick to well-defined styles, theories and beliefs. Combinists look to the past and to the future. They try to combine tradition and modernity. They develop new concepts of housing and working in order to build a bridge between country life and city life. 
A Combinist likes to meet people in order to share knowledge and experiences, and to make new plans for the future by means of multidisciplinary cooperation. Practical theory and theoretical practice are combined. The preservation and transformation of cultural heritage are very important. New concepts of energy saving and energy production will be used in transformation projects. The dynamics and change of life itself are embedded in the formation of combinations. A style / theory / idea low on combinations is doomed. A Combinist is eager to develop new concepts, questionnaires, models, action lists and checklists. Other people can use this information in new projects. A Combinist favors more debate, more reflection and grand perspectives that transcend individual interests. Eventually a Combinist wants to integrate the alpha, beta and gamma sciences: the whole is more than the sum of its parts: 1 + 1 = 3. (CONTENT + PROCESS).","","nl","doctoral thesis","","978-90-9035643-3","","","","","","","","","Design Informatics","","",""
"uuid:b8e0648c-d38e-4f95-bcd7-b99a943cb2d1","http://resolver.tudelft.nl/uuid:b8e0648c-d38e-4f95-bcd7-b99a943cb2d1","Theory and Applications of Differential Equation Methods for Graph-based Learning","Budd, J.M. (TU Delft Mathematical Physics)","Dubbeldam, J.L.A. (promotor); van Gennip, Y. (copromotor); Delft University of Technology (degree granting institution)","2022","A large number of modern learning problems involve working with highly interrelated and interconnected data. Graph-based learning is an emerging technique for approaching such problems, by representing this data as a graph (a.k.a. a network). That is, the points of data are represented by the vertices of the graph, and then the edges linking these vertices represent the relationships between the points of data. This provides a unified perspective for thinking about all sorts of interrelated data: the vertices could represent pixels in an image or people in a social network, and the underlying framework would be the same...
In this dissertation, we first give a comprehensive introduction to the mimetic spectral element method with applications to the Poisson problem, followed by a new development of the mimetic spectral element method for the Navier-Stokes equations. This new development is a conservative dual-field discretization that conserves mass, kinetic energy and helicity for the 3D incompressible Navier-Stokes equations in the absence of dissipative terms. When dissipative terms are present, the method correctly predicts the decay rates of the kinetic energy and helicity. It is a dual-field method in the sense that two evolution equations are employed and weak solutions are sought for each physical variable in two different finite-dimensional function spaces. This novel method and the promising results reveal its potential in research fields such as turbulence modeling, sub-grid methods and large eddy simulation.
Although the mimetic spectral element method possesses desirable properties owing to its structure-preserving nature, its demand for high computational power is a major limitation. To address this drawback, two techniques, hybridization and dual basis functions, are employed for the mimetic spectral element method, leading to an extension that decreases the computational cost not only by reducing the size and lowering the condition number of the global linear system, but also by improving the feasibility of parallel computing.
A special component, the Complement, is embedded in this thesis. It aims to provide a friendlier introduction for readers, especially those who are new to this specific area of numerical methods. In these web-based additions, there are instructions and well-documented scripts that allow readers to learn in an interactive way, gain hands-on experience and eventually obtain a deeper understanding of the method. This component can help readers implement their own new ideas more quickly and efficiently, which will in turn contribute to the development of this method.
Overall, we conclude that this dissertation has fulfilled its goal of promoting the application and development of the mimetic spectral element method.","structure-preserving discretization; mimetic spectral element method; de Rham complex; hybridization; dual basis functions","en","doctoral thesis","","978-94-6419-420-3","","","","","","2022-01-20","","","Aerodynamics","","",""
"uuid:a8c16a82-e5aa-492a-88d1-72276b8328f1","http://resolver.tudelft.nl/uuid:a8c16a82-e5aa-492a-88d1-72276b8328f1","Physics and applications of electron-matter interaction simulations","van Kessel, L.C.P.M. (TU Delft ImPhys/Microscopy Instrumentation & Techniques)","Kruit, P. (promotor); Hagen, C.W. (promotor); Delft University of Technology (degree granting institution)","2022","Electrons with an energy ranging from 0 to 50 keV are among the most versatile tools in nanotechnology. A common example is the scanning electron microscope (SEM), which focuses an electron beam with an energy ranging from several hundred eV to tens of keV on a sample. When landing on the sample, the electrons in the beam penetrate the material. They can excite secondary electrons in the material, for example by ionization. Some of the electrons escape the sample again and reach a detector, where a high-resolution image of the sample is formed. Thanks to the small wavelength of electrons, a SEM is able to achieve single nanometre resolution while conventional optical microscopes are limited to hundreds of nanometres....","","en","doctoral thesis","","978-94-6366-491-2","","","","","","","","","ImPhys/Microscopy Instrumentation & Techniques","","",""
"uuid:f0268633-db06-4b40-b298-3b08db93f195","http://resolver.tudelft.nl/uuid:f0268633-db06-4b40-b298-3b08db93f195","Modeling Seafarers’ Navigational Decision-Making for Autonomous Ships’ Safety","Xue, J. (TU Delft Safety and Security Science)","van Gelder, P.H.A.J.M. (promotor); Papadimitriou, E. (copromotor); Delft University of Technology (degree granting institution)","2022","Maritime shipping is essential to the global economy, while waterway transportation is recognized as a high-risk industry. Additionally, maritime accidents are frequently caused by human errors, and with the rapid improvement of science and technology, the improvement of autonomous ships has been technically feasible, which attracts the wide attention of researchers in academia and industry. However, the knowledge acquisition and representation methods are mainly based on knowledge-based research methods, while the existing research for automatically achieving the autonomous ships’ maneuvering decision-making by acquiring the seafarers’ operation characteristics is still scanty. In addition, it also lacks the appropriate theoretical methods to explore the problem of autonomous ship human-like maneuvering decision-making modeling. Therefore, the research on ship maneuvering decision-making methods still needs to be improved and further developed. This thesis focuses on the problem of modeling seafarers’ navigational decision-making in a typical scenario for autonomous ships’ safety. We propose the method to prioritize safety influencing factors of autonomous ships’ maneuvering decisions and a series of ship maneuvering knowledge learning models to give the autonomous ship the ability to make decisions like a human. The autonomous ship human-like maneuvering decision-making problem has been considered as a machine learning problem, and we translate the problem into learning the maneuvering decision characteristics of the officer on watch (OOW) using various decision tree algorithms. 
By constructing human-like maneuvering decision recognition models for autonomous ships under multiple constraints in specific scenarios, the decision-making mechanism of the OOW’s maneuvering behavior under specific water-traffic safety-influencing factors in the inbound scenario is analyzed, and the OOW’s decision-making knowledge is automatically acquired and represented...","","en","doctoral thesis","","978-94-6384-291-4","","","","","","","","","Safety and Security Science","","",""
"uuid:26400589-ae0f-46bc-aaea-70004d5fbbf2","http://resolver.tudelft.nl/uuid:26400589-ae0f-46bc-aaea-70004d5fbbf2","Power Combiner and Antenna Array Concepts for Millimeter Wave Applications","van Schelven, R.M. (TU Delft Tera-Hertz Sensing)","Neto, A. (promotor); Cavallo, D. (copromotor); Delft University of Technology (degree granting institution)","2022","The fifth generation of mobile communication poses challenges in the form of increased data volume, network scalability and efficient network operation. An important part in meeting these requirements is the use of the mm-wave frequency range, enabling the use of a wide bandwidth and therefore the desired high-data rates. However, a challenging aspect of mm-wave communication lies in the efficient generation of RF power. In fact, currently reported power amplifiers are unable to reach the required output power for commercial applications. A typical way to increase the available power is to combine the signal from multiple sources using a power combiner. Power combiners in the mm-wave frequency range have been investigated for many years, but typical problems that occur are area occupation and insertion losses that grow directly with the number of inputs....","Artificial dielectric layers; equivalent circuit; leaky wave; pattern shaping; phased array; power combiner; slot antenna","en","doctoral thesis","","978-94-6421-612-7","","","","","","2023-01-19","","","Tera-Hertz Sensing","","",""
"uuid:d2ed28b3-b003-4ac7-b9d4-dfd3ac273732","http://resolver.tudelft.nl/uuid:d2ed28b3-b003-4ac7-b9d4-dfd3ac273732","Generating Secure and Gentle Grip on Soft Substrates","van Assenbergh, S.P. (TU Delft Medical Instruments & Bio-Inspired Technology)","Dodou, D. (promotor); Breedveld, P. (promotor); van Esch, J.H. (promotor); Delft University of Technology (degree granting institution)","2022","Generation of grip on soft tissue in the surgical field is most commonly done with forceps that generate friction grip, that is, the translation of normal (pinch) forces into shear forces. Errors made with these surgical grippers are often force-related: pinch forces that are too low result in the tissue slipping out of the gripper, while pinch forces that are too high may lead to tissue damage. One possible solution for generating tissue grip that is secure yet gentle is adhesive grip. In this case, contact between tissue and gripper is maintained by attractive gripper-tissue interactions, and gripping strength does not depend on the applied pinch forces. Inspiration for the design of such a gripper can be derived from the tree frog, an animal that uses adhesive grip to grip on a range of substrates in its habitat. The main aim of this thesis is to translate grip-generating principles used by tree frogs into designs of artificial adhesives that can generate firm yet gentle grip on soft substrates. The designs of the artificial adhesives in this thesis are inspired by two important characteristics of the tree frog’s attachment apparatus: the hierarchical surface pattern on the tree-frog toe-pad and reinforcing fibrillar structures located inside the pad. 
Specifically, this thesis aims to mimic function rather than form, focusing on the mechanisms underlying the tree-frog attachment apparatus to satisfy two main requirements for strong grip: (1) contact formation and (2) preservation of the formed contact.","","en","doctoral thesis","","978-94-6366-495-0","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:6f942371-855f-4fba-bcd6-a264edbeb5dd","http://resolver.tudelft.nl/uuid:6f942371-855f-4fba-bcd6-a264edbeb5dd","Design, modeling and characterization of multi-stable metastructures for shape reconfiguration and energy absorption","Zhang, Y. (TU Delft Computational Design and Mechanics)","van Keulen, A. (promotor); Tichem, M. (promotor); Delft University of Technology (degree granting institution)","2022","Multi-stable beam-type metastructures exhibiting snap-through behavior have been extensively studied in recent years, as their stable states can be maintained without the need for an external power supply. By arranging a series of beams exhibiting bi-stability, multi-stable metastructures can be constructed. However, current designs of multi-stable metastructures are limited in terms of structural kinematics, and the associated functionalities have not been fully explored. This thesis presents design strategies that facilitate new kinematic behavior and functionalities for multi-stable metastructures (i.e., energy absorption and shape reconfiguration). Specifically, we investigate additional rotational degrees of freedom by incorporating rotational compliance in both 2D and 3D designs. In doing so, multi-stable metastructures are capable of realizing both translational and rotational motion, facilitating their applicability in soft robotics and deployable structures. Moreover, the energy dissipation of multi-stable metastructures is studied, and we propose design strategies that enhance energy absorption without using more material. In addition, it is demonstrated that multi-stable metastructures can also be designed to realize shape reconfiguration of a morphing surface, for which the stability requirements and accessible configurations are presented. 
Such multi-stable metastructures exhibiting translational and rotational degrees of freedom hold great potential for developing reconfigurable structures and energy absorbers.","multi-stable metastructures; snap-through; translational and rotational states; energy absorption; shape reconfiguration","en","doctoral thesis","","978-94-6384-286-0","","","","","","2022-01-17","","","Computational Design and Mechanics","","",""
"uuid:9e59cd4f-06da-4adf-b3a1-0067bda2d34d","http://resolver.tudelft.nl/uuid:9e59cd4f-06da-4adf-b3a1-0067bda2d34d","Energy-efficient Train Timetabling","Scheepmaker, G.M. (TU Delft Transport and Planning)","Goverde, R.M.P. (promotor); Delft University of Technology (degree granting institution)","2022","Railways in Europe need to reduce CO2 emissions and energy usage to contribute to sustainability. One of the measures that railway undertakings can apply at low investment cost, with high reductions in both CO2 emissions and energy consumption, is energy-efficient train trajectory optimization. Another effective measure is incorporating energy efficiency in the timetable design. The aim of this thesis is to incorporate energy-efficient train trajectory optimization in the timetable design in order to improve the potential for energy efficiency of railways. The thesis develops and applies models for the energy-efficient train control problem for a single train over (multiple) stops with both mechanical and regenerative braking behavior. In addition, energy-efficient train trajectory optimization is incorporated in timetabling by formulating and developing algorithms for a multiple-objective optimization problem applied to a corridor with multiple interacting trains.","Train trajectory optimization; Energy-efficiency; Railway timetabling; Multiple-objective optimization","en","doctoral thesis","Trail / TU Delft","978-90-5584-305-3","","","","TRAIL Thesis Series no. T2022/1, the Netherlands TRAIL Research School","","","","","Transport and Planning","","",""
"uuid:ef87dc11-e7b2-4726-a41f-28588d64c58d","http://resolver.tudelft.nl/uuid:ef87dc11-e7b2-4726-a41f-28588d64c58d","Hybrid-Electric Aircraft with Over-the-Wing Distributed Propulsion: Aerodynamic Performance and Conceptual Design","de Vries, R. (TU Delft Flight Performance and Propulsion)","Veldhuis, L.L.M. (promotor); Vos, Roelof (copromotor); Delft University of Technology (degree granting institution)","2022","Recent developments in the field of hybrid-electric propulsion (HEP) have opened the door to a wide range of novel aircraft configurations with improved energy efficiency. These electrically-driven powertrains enable “distributed propulsion” configurations, in which the aerodynamic interaction between the propulsive devices and the airframe is exploited to enhance the aero-propulsive efficiency of the aircraft. In this context, the present research focuses specifically on over-the-wing distributed propulsion (OTWDP) for regional propeller aircraft. Over-the-wing (OTW) propellers are particularly promising because they can significantly enhance the lift-to-drag ratio of the wing, as well as reduce flyover noise due to shielding by the wing.
The objective of this research is therefore to quantify the impact of OTWDP on the energy efficiency of hybrid-electric aircraft. For this, the research is divided into three main parts. First, a sizing method for hybrid-electric distributed-propulsion (HEDP) aircraft is developed, independently of where the propellers are positioned with respect to the airframe. Second, the aerodynamic interaction effects and performance characteristics of OTWDP systems are investigated, independently of the type of powertrain used to drive the propellers. And third, the sizing method and aerodynamic performance estimates of the previous two points are combined to assess the effect of hybrid-electric OTWDP on aircraft-level performance metrics. [...]","Hybrid electric propulsion; Distributed propulsion; Propulsion Integration; Propeller aerodynamics; Over-the-wing propellers; Conceptual aircraft design; Aircraft performance; wind-tunnel testing","en","doctoral thesis","","978-94-6384-287-7","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:8ebb7d9d-f090-49f7-8c6f-88e0dd31c372","http://resolver.tudelft.nl/uuid:8ebb7d9d-f090-49f7-8c6f-88e0dd31c372","Machine Learning and Image Processing Methods for the Segmentation and Quantification of the Corneal Endothelium","Vigueras Guillén, J.P. (TU Delft ImPhys/Computational Imaging)","van Vliet, L.J. (promotor); Vermeer, K.A. (copromotor); Delft University of Technology (degree granting institution)","2022","The corneal endothelium, a non-regenerative layer of cells controlling the state of corneal hydration, is the most critical tissue of the cornea. Quantifying its health status is not only important to diagnose and treat certain corneal diseases but also relevant to the execution and evaluation of many eye surgeries. By means of specular microscopy, the endothelium can be visualized and evaluated in vivo, in a non-invasive manner. The estimation of the corneal endothelium parameters, particularly cell density, provides a valuable input for disease diagnosis, prognosis, and monitoring. Manual estimation, which requires cell segmentation, is time consuming and tedious. In this thesis, we have presented and evaluated several automatic techniques for the segmentation and quantification of the corneal endothelium. The final method was evaluated in two clinical studies: one regarding the transplantation of the cornea (41 patients, 1 year follow-up, 383 images), and another regarding the implantation of the glaucoma drainage device Baerveldt (204 patients, 2 year follow-up, 7975 images). Our method detected more than twice the number of cells than the microscope built-in software (Topcon SP-1P), and our estimation error of the corneal parameters was less than one third of Topcon’s error. Overall, our proposed fully-automatic method provided such a high accuracy that the distinctive patterns in the corneal parameters throughout the months were clearly observable and equal to the manual annotations.
The third study sampled three cases of strategic partnerships, which are characterized as long-term, highly integrated and collaborative relationships. To gain theoretical sensitivity in this third study, a conceptual framework was developed using the concepts from the first two studies. The major finding across the three studies is that the way integration in the supply chain develops is highly dependent on the interaction between project actors. The way actors use the inter-organizational rules of a project organization influences the level of trust and the no-blame culture that emerge through interaction. In turn, the level of trust can influence the rules of actors. More specifically, dominant actors seem to be able to change the rules of the system. When a dominant actor uses its power position to change the rules of the social system, it can make other actors lose their commitment to the partnership.
This research shows that successful long-term and close collaboration between firms requires continuous, careful consideration of how organizational structures are designed and used, and of their effect on the relationships between actors. One should not assume that integrated contracts and integrative practices that have been shown to work in one project will automatically lead to close and long-lasting relationships between actors in another project.
The first part of the thesis focuses mainly on the robustness of network controllability in the face of topological perturbations. In Chapter 2, we propose closed-form analytic approximations for the minimum number of driver nodes, which denotes the controllability of the network. Inspired by the concept of critical links, we deduce and validate our approximations on both real-world and synthetic networks. We show that when the fraction of removed links is small, our approximations perform well. We also find that the critical-link attack is the most effective of the four attacks considered, as long as the fraction of removed links is smaller than the fraction of critical links. In Chapter 3, we focus on the controllability of swarm signalling networks with regular out-degree and with bi-modal out-degree distributions. We deduce the generating functions for the random failure process and then estimate the fraction of driver nodes with simulations. Results show that our estimations are highly accurate in predicting the fraction of driver nodes in case of random link failures. In Chapter 4, in order to further improve the accuracy of our proposed approximations, we use a machine learning method to decrease the gap between our analytical approximations and the simulation results. We compare our approximations obtained by machine learning with existing analytical approximations and show that our approximations significantly outperform the existing closed-form analytical approximations on both synthetic and real-world networks. Apart from targeted attacks based upon the removal of critical links, we also propose analytical approximations for out-in degree-based attacks. In Chapter 5, we investigate the reachability-based robustness of controllability considering link-based random attacks, targeted attacks, and random attacks under the protection of critical links. 
We validate our approximations using 200 real-world communication networks and some synthetic networks and find that our approximations perform well in most cases.
In the second part of the thesis, we work on the recoverability of networks, that is, the ability of a network to return to a desired performance level after suffering topological perturbations such as link failures. In Chapter 6, we propose a general topological approach and two recoverability indicators to measure network recoverability for optical networks under two recovery scenarios. Furthermore, we employ the proposed approach to assess 20 real-world optical networks. Numerical results show that network recoverability is coupled to the network topology, the robustness metric and the recovery strategy. We also find that assortativity, which denotes the tendency of network nodes to connect preferentially to other nodes with similar degree, has the strongest correlation with both recoverability indicators. In Chapter 7, we adopt the framework of network recoverability and investigate the recoverability of network controllability for two recovery scenarios. We employ the proposed approach to assess swarm signalling networks with regular out-degree and networks with bi-modal out-degree distributions. We also deduce analytical results for the recoverability indicators using generating functions, which are close to the simulation-based results. In Chapter 8, we conclude the thesis and outline future work.","Controllability; Recoverability; Robustness; Complex Networks","en","doctoral thesis","","978-94-6423-609-5","","","","","","","","","Network Architectures and Services","","",""
"uuid:ee0e14ec-c4b2-4bcf-81cf-d09c3bb8b8c1","http://resolver.tudelft.nl/uuid:ee0e14ec-c4b2-4bcf-81cf-d09c3bb8b8c1","Mechanisms of short pitch rail corrugation","Li, S. (TU Delft Railway Engineering)","Li, Z. (promotor); Dollevoet, R.P.B.J. (promotor); Delft University of Technology (degree granting institution)","2022","Short pitch corrugation is a (quasi-)sinusoidal vertical defect on the rail surface, first observed more than a century ago. Its wavelength is 20-80 mm, and its amplitude can be up to 100 µm. It mainly develops on straight tracks or in gentle curves with comparatively light axle loads. Due to short pitch corrugation, dynamic wheel-rail contact forces increase considerably, and hence the degradation of vehicle-track components is accelerated. In addition, the vibration excited by corrugation is a source of “roaring” noise. Because of these negative effects, researchers have made many efforts to understand and theoretically explain the problem. At present, the corrugation phenomenon is usually understood through a damage mechanism and a wavelength-fixing mechanism. With this explanation, almost all types of corrugation can be accounted for by their corresponding mechanisms, and countermeasures have been confirmed to be capable of effectively mitigating them. Nevertheless, there is still no consensus on the mechanisms of short pitch corrugation, because: 1) it appears only on some tracks and at some locations, and 2) unlike other types of corrugation, short pitch corrugation (hereafter shortened to “corrugation”) changes only slightly with train speed. In this dissertation, a three-dimensional (3D) dynamic finite element (FE) vehicle-track frictional rolling contact model, which was initially used to research rail squats, is extended to address the corrugation enigma. The goal is to investigate whether the model can explain the root causes of corrugation. 
A second goal is to characterize metallurgically the rail material damage caused by rail corrugation. After an introduction, the 3D dynamic FE vehicle-track frictional rolling contact model is applied to rail corrugation research. The damage mechanism evaluated is differential wear, which is considered proportional to the frictional work. Nominal parameters and boundary conditions are used in the model. Corrugations with different phase angles are added to the rail model to investigate whether they can consistently grow. Similar to conclusions from previous research, the obtained differential wear is in phase with the corrugation, which means the corrugation will be worn off and will not grow. Nevertheless, it is found that the longitudinal track vibration modes may be dominant for short pitch corrugation initiation, and the vertical modes become dominant at certain stages. The consistency of the longitudinal and vertical contact forces, the differential wear, and the corrugation should determine the development of short pitch corrugation. Then, in the second part of this thesis, through variation of the fastening modeling, an initial differential wear with large amplitudes is identified to form on the smooth rail. This differential wear is found to be correlated with the rail longitudinal dynamics. The corrugation explained by this differential wear can consistently initiate and grow up to 80 µm. Additionally, the corrugation from the numerical analysis agrees well with a rail corrugation recorded in the field. Consistency is shown during corrugation growth between the vertical and longitudinal contact forces, the differential wear, and the corrugation. Moreover, a corrugation wavelength selection phenomenon can also be explained by this consistency. 
These results confirm the insights from the first part of the thesis, reveal the whole development process of corrugation, and explain its root cause. The third part of this thesis is a study of the rail material structural damage caused by corrugation. A metallurgical study was performed to analyze the rolling contact fatigue damage of a rail sample with corrugation. Besides the well-known white etching layer (WEL), an extra layer, called the brown etching layer (BEL), was identified, with distinctly lower hardness and a brown colour contrast. It shares some properties with the WEL, such as brittleness, though it is much softer. Compared with those in the WEL, the cracks formed in the BEL were found to propagate downwards without branching and can ultimately lead to rail fracture. It is unknown whether the BEL is a transitional state from the pearlite structure to the WEL, whether it forms after the WEL, or whether it is a different layer formed under certain thermomechanical conditions. In conclusion, this thesis extends a 3D dynamic FE vehicle-track rolling contact model for the study of the corrugation mechanism. Based on the research results, the root cause of the corrugation found on the Dutch railway network is identified. This finding opens the possibility of designing methods to avoid or mitigate corrugation by optimising track structure parameters. Finally, the finding of the BEL introduces a new concept that will help in understanding the rail material damage mechanisms associated with corrugation. Understanding the BEL will provide insight into crack development mechanisms, as the BEL can lead to rail fracture. A complete understanding of the rail material is crucial for the development of new rail technologies.","","en","doctoral thesis","","978-94-6384-283-9","","","","","","2022-12-19","","","Railway Engineering","","",""
"uuid:638f19f5-6282-41fc-ba7e-2b4598f7b56d","http://resolver.tudelft.nl/uuid:638f19f5-6282-41fc-ba7e-2b4598f7b56d","River health monitoring in the Ayeyarwady river basin in Myanmar","Ko, N.T. (TU Delft Water Resources)","Bogaard, T.A. (promotor); Rutten, M.M. (copromotor); Delft University of Technology (degree granting institution)","2022","Freshwater is a finite resource. It offers goods and services of fundamental importance for the development of human societies. In developing countries, water-related infrastructure is developing rapidly, which can bring prosperity to an impoverished country but also risks compromising the sustainability of the environment. In Myanmar, one of these developing countries, the scarcity of useful monitoring tools, owing to limited knowledge and capacity, hinders informed management of waterway health. Therefore, we aimed to develop an approach to monitor the health of river ecosystems using remote-sensing satellite images and freshwater macroinvertebrate communities. This doctoral thesis looks into the potential of remote sensing for monitoring the quality of surface water in Myanmar’s rivers and demonstrates how space datasets can be used to assess the water quality of these rivers. We show that a remote sensing analysis using a space dataset can be successfully applied to detect the Suspended Sediment Concentration (SSC) and the water temperature of the Ayeyarwady River basin, in the central dry zone of Myanmar. Remote sensing information can offer a spatial and temporal view of surface water quality with regard to suspended solids and water temperature for larger bodies of water such as the Ayeyarwady and Chindwin Rivers...","Macroinvertebrates; Preliminary Myanmar Aquatic Biomonitoring Assessment Index; Myanmar; tropical rivers; irrigation channels; Hydropower generation dams; Water Management; Flow regime","en","doctoral thesis","","","","","","","","","","","Water Resources","","",""
"uuid:a8207380-70a7-4814-829e-97c043022955","http://resolver.tudelft.nl/uuid:a8207380-70a7-4814-829e-97c043022955","Miniaturized CMOS Circuits and Measurement Techniques for Broadband Dielectric Spectroscopy","Vlachogiannakis, G. (TU Delft Electronics)","Spirito, M. (promotor); de Vreede, L.C.N. (promotor); Delft University of Technology (degree granting institution)","2022","Abundant research reported in the literature has indicated that broadband dielectric spectroscopy (BDS), i.e., the measurement of material permittivity versus frequency, can serve a broad range of applications, including, but not limited to, biomedical, food, automotive, and agricultural industries. Adopting this technique in real-life application scenarios is directly dependent on the miniaturization of bulky measurement setups, currently in use for these (prototype) sensing systems. At the same time, a highly sensitive and precise permittivity readout is essential to distinguish between different materials or track variations in the material state composition. This work focuses on developing ultra-compact sensing elements, readout electronics, and measurement techniques to determine the localized complex permittivity with high accuracy, sensitivity, and spatial resolution at microwave operation frequencies.
Firstly, various sensing elements and high-resolution measurement setups are discussed for their compatibility with CMOS integration. Application scenarios are directed towards the characterization of low-loss materials, which often present a much higher impedance than the 50-Ω reference of current measurement setups. An I/Q-mixer-based interferometric technique is introduced to re-normalize the readout system reference impedance and improve the measurement sensitivity at high-impedance loads. Experimental results underline the potential of this technique. However, its compatibility with CMOS technology to enable small-form-factor systems is challenging at the intended frequencies of operation. Therefore, a double-balanced, RF-driven Wheatstone bridge with programmable branch impedance implemented in CMOS technology is proposed and analyzed for the high-resolution measurement of high-impedance loads (chapter 2).
Next, a high-sensitivity, ultra-compact BDS sensor system is introduced for localized permittivity sensing. As a sensing element, it utilizes a metal patch that performs the actual sensing by presenting permittivity-dependent admittance. This patch is best implemented on the top metallization layer of a CMOS technology such that it can directly interface with the material-under-test (MUT). High measurement sensitivity is achieved by embedding the patch in a double-balanced, RF-driven Wheatstone bridge followed by a frequency down-converting mixer. By driving the bridge with a square wave, permittivity information can be acquired at the fundamental and subsequent harmonics. This concept allows increasing the measurement speed and, at the same time, provides an extended measurement frequency range (chapter 3).
The measurement of the complex permittivity of materials is enabled by developing a dedicated calibration procedure for the patch-based BDS sensor. Measurement results of known liquids show good agreement with theoretical values in the literature, and the relative permittivity resolution in these measurements is better than 0.3 over a 0.1–10 GHz range. The proposed sensor implementation features a measurement speed of 1 ms and occupies an active area of only 0.15×0.3 mm^2, enabling the realization of very compact sensor arrays that can facilitate (real-time) 2-D dielectric imaging of permittivity contrast (chapter 4).
Such a real-time BDS sensor array has been implemented as a 5x5 array, illustrating the scalability of the proposed patch-based BDS concept. This matrix has been demonstrated for its functionality by resolving spatial permittivity variations in the sub-mm range (chapter 5).
Last, the findings and conclusions of this dissertation, and recommendations for future work, are discussed (chapter 6).","Biomedical sensors; CMOS sensors; microwave sensors; bridge circuits; permittivity measurement; medical diagnostic imaging","en","doctoral thesis","","","","","","","","","","","Electronics","","",""
"uuid:9e47e2cc-97f2-4482-8d46-96371c2c3d06","http://resolver.tudelft.nl/uuid:9e47e2cc-97f2-4482-8d46-96371c2c3d06","Continuous Ultrasonic Welding of Thermoplastic Composites: An experimental study towards understanding factors influencing weld quality","Jongbloed, B.C.P. (TU Delft Aerospace Structures & Computational Mechanics)","Villegas, I.F. (promotor); Benedictus, R. (promotor); Teuwen, Julie J.E. (copromotor); Delft University of Technology (degree granting institution)","2022","One of the most promising welding techniques for thermoplastic composites is ultrasonic welding, mainly known as a spot welding technique. The relatively new technology of continuous ultrasonic welding of thermoplastic composites makes it possible to obtain large, fully welded seams with higher load-carrying capabilities. However, at the start of the research conducted for this dissertation, very little knowledge of the process was available. The weld quality in the state-of-the-art research was significantly lower than that of the statically welded counterpart in terms of weld uniformity (unwelded and overwelded areas were present simultaneously), single-lap strength, and the presence of voids. Hence, before continuous ultrasonic welding can be applied industrially, the weld quality needs to be improved. Consequently, the main objective of this dissertation was to acquire a deeper understanding of the continuous ultrasonic welding process for thermoplastic composites, to promote its development for future industrial applications.","ultrasonic welding; fusion bonding; thermoplastic composites; energy director; CF/PPS","en","doctoral thesis","","978-94-6421-596-0","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:8938bc7b-27e7-4b72-b744-1d8a1b0928a5","http://resolver.tudelft.nl/uuid:8938bc7b-27e7-4b72-b744-1d8a1b0928a5","Remote sensing-based prediction of forest fire characteristics","Maffei, C. (TU Delft Optical and Laser Remote Sensing)","Lindenbergh, R.C. (promotor); Menenti, M. (promotor); Delft University of Technology (degree granting institution)","2022","Forest fires are a major ecosystem disturbance at the global scale; they put pressure on agencies in charge of citizen and infrastructure security and cause incalculable human losses. Fires are controlled by multiple static and dynamic drivers related to topography, land cover, climate, weather, and anthropogenic activity. Among these, weather is an active driver of live and dead fuel moisture, which has a direct effect on fire occurrence and behaviour. As a result, in areas experiencing prolonged droughts and heat waves, altered meteorological patterns lead to increased frequency and intensity of forest fires. The operational response of governments, local authorities, forest managers and civil protection agencies in charge of managing forest fires is informed by the assessment of the factors controlling fire occurrence and behaviour, often synthesised in maps of fire danger. Danger is defined as the resultant of all factors affecting the inception, spread, and difficulty of control of fires, and it is typically expressed in the form of an index. Key contributors to fire danger are fuel type, amount, and condition, notably with respect to moisture content. Remote sensing measurements in the shortwave infrared are sensitive to the water content of live fuels, while measurements in the thermal infrared allow the detection of vegetation stress due to vapour pressure deficit. Indeed, several scholars have shown that satellite estimates of vegetation water content and of land surface temperature can be used effectively to predict fire occurrence. 
Nevertheless, to the best of this author’s knowledge, no research was previously published connecting pre-fire remote sensing measurements to fire behaviour characteristics. This clearly identifies a knowledge gap that needs further investigation and can be translated into the following research question: to what extent can remote sensing of forest condition be used to predict fire behaviour characteristics and assess the probability of extreme events? The research described in this dissertation aimed at developing methods based on pre-fire optical and thermal remote sensing observations of forests for the prediction of fire behaviour characteristics. The study was carried out in Campania, Italy (13595 km2), one of the most densely populated and fire-affected regions in the Mediterranean. Data on all fire events recorded between 2002 and 2011 was provided by the Carabinieri (Italian national gendarmerie) forest fire preparedness unit (Nucleo Informativo Antincendio Boschivo, NIAB). The study made use of MODIS land surface temperature (LST) and surface reflectance collection 6 products, which are publicly available on the USGS Land Processes Distributed Active Archive Center (LP DAAC). The approach was probabilistic in nature, trying to relate pre-fire satellite observations of vegetation conditions to the probability distributions of burned area, fire duration and rate of spread. Efforts initially focussed on assessing LST anomaly and its effect on fire behaviour characteristics. LST anomaly is a measure of excess enthalpy stored in fuels. It controls the probability of flame extinction and thus fire duration. First, a climatology of LST was constructed from the longest available time series of daily MODIS LST by means of the Harmonic Analysis of Time Series (HANTS) algorithm. HANTS was then used to construct annual models of daily LST. Finally, the daily LST anomaly was evaluated as the difference between the annual model and the climatology. 
Fires in the database were then associated with LST anomaly values recorded at their corresponding location on the day prior to the event. Probability distribution functions of log-transformed burned area (normal), log-transformed fire duration (generalised extreme value, GEV) and log-transformed rate of spread (Weibull) were then determined in ten decile bins of LST anomaly. The mean and the standard deviation of the normal distribution of log-transformed burned area showed a clear linear dependence on LST anomaly (r2=0.81, p<0.001 and r2=0.52, p<0.05 respectively), indicating an increase in the probability of large fires with increasing LST anomaly. Similarly, a marked linear dependence on LST anomaly was found for the location (r2=0.78, p<0.001), scale (r2=0.79, p<0.001) and shape (r2=0.87, p<0.001) of the GEV distribution of log-transformed fire duration, favouring longer fire duration with increasing LST anomaly. Conversely, the LST anomaly had a limited effect on the Weibull distribution of log-transformed rate of spread, with scale and shape showing slightly decreasing trends (r2=0.50, p<0.05 and r2=0.54, p<0.05 respectively). A likelihood ratio test showed that the probability models of log-transformed burned area, fire duration and rate of spread conditional to LST anomaly (alternative models) allowed the rejection of the corresponding unconditional models fitting all data (null models), confirming that LST anomaly is a covariate of burned area, fire duration and, to a lesser extent, rate of spread. These results are in line with expectations from models of the combustion process. Following a similar line of reasoning, this study further focussed on remote sensing of live fuel moisture content (LFMC). This vegetation property controls ignition delay, and thus affects flame propagation. The first step was the construction of a novel spectral index, the perpendicular moisture index (PMI), specifically designed to be sensitive to LFMC. 
The PMI was developed from simulated vegetation spectral data convolved to MODIS bands by noting that in the spectral reflectance subspace of MODIS bands 2 (0.86 µm) and 5 (1.24 µm) isolines of LFMC can be identified, and that these isolines are straight and parallel. By taking as a reference the line corresponding to LFMC=0 (completely dry vegetation), the PMI was calculated as the distance of measured reflectance from the reference line. The PMI is thus a measure of LFMC, and higher values of PMI correspond to higher moisture content. The index was found to be linearly related to LFMC, especially for dense vegetation cover (r2=0.70 when leaf area index is larger than 2, r2=0.87 when larger than 4). When vegetation cover is less dense, the contribution of soil background to the measured reflectance increases, and the PMI underestimates LFMC. PMI maps were produced from the MODIS 8-day composited reflectance product, and fires in the database were associated with the corresponding PMI value at the fire location in the pre-fire compositing period. Using the same approach adopted for LST anomaly, the probability distribution functions of log-transformed burned area, fire duration and rate of spread were determined in ten decile bins of PMI. The mean of the normal distribution of log-transformed burned area showed a clear linear dependence on PMI (r2=0.80, p<0.001), while no trend could be observed for standard deviation. A clear linear dependence on PMI was also found for scale and shape of the Weibull distribution of log-transformed rate of spread (r2=0.97, p<0.001 and r2=0.82, p<0.001 respectively). These results were further confirmed by a likelihood ratio test where the probability models of log-transformed burned area and rate of spread conditional to PMI allowed the rejection of the corresponding unconditional models fitting all data. 
Location and shape of the GEV distribution of log-transformed fire duration showed no significant linear trend with PMI, whereas scale showed a weak trend (r2=0.55, p<0.05). However, in the likelihood ratio test the probability model of log-transformed fire duration conditional to PMI failed to reject the corresponding unconditional model. These results showed that PMI is a covariate of burned area and rate of spread, as expected from flame propagation models, but not of fire duration. Predictions of fire characteristics based on concurrent observations of LST anomaly and PMI were compared with predictions based on the Fire Weather Index (FWI) System. This fire danger rating tool proved to be effective in several areas worldwide, including Europe. FWI values from weather reanalysis data were associated with fires in the database and were analysed with the same approach adopted for LST anomaly and PMI. It was found that parameters of the probability distribution function of log-transformed burned area and fire duration conditional to FWI System components followed clear linear trends, with increasing danger values leading to higher probabilities of large burned areas and long fire durations. Conversely, FWI System components were unrelated to the rate of spread. Trend analysis (coefficient of determination and p-value of the linear fit, Sen’s slope and Mann-Kendall test) and likelihood ratio tests were used to compare the trends in the parameters of the probability distributions of fire characteristics. It was shown that remote sensing predictions of burned area and fire duration were comparable to or better than those from FWI, and that PMI is a good predictor of the rate of spread whereas FWI System components are not. 
The identified linear trends in the dependence of the parameters of the probability distribution of log-transformed burned area, fire duration and rate of spread on LST anomaly and on PMI allow the prediction of the probability of extreme events, conditional to ignition, as a function of pre-fire remote sensing observations. As both LST anomaly and PMI are good covariates of burned area, these two remote sensing observations of vegetation conditions can be used jointly to improve the prediction of the probability of fires larger than, say, the 95th percentile of all events recorded in the study area (30 ha). It was found that the probability of a fire resulting in a burned area larger than 30 ha increases from 0.9% to 9.2% with pre-fire LST anomaly increasing from -2.1 to 4.3 K and increases from 1.8% to 7.4% with pre-fire PMI decreasing from 0.052 to -0.032. When the probability of fires exceeding 30.0 ha is modelled as a function of both LST anomaly and PMI, the probability increases from 0.5% to 12.7%. This confirms that the joint use of LST anomaly and PMI leads to improved predictions. The scientific community has shown a consensus on the need to improve fire danger prediction through a more accurate assessment of live fuel condition. Existing fire danger rating systems estimate fuel moisture content from meteorological variables, which results in an undesired approximate solution due to underlying assumptions. Consequently, any direct observation of fuel moisture content has the potential to enable a better evaluation of fire occurrence and fire danger indices. From a remote sensing perspective, these considerations translate into the research question of to what extent satellite measurements can be used to predict forest fire behaviour characteristics. 
This research showed that remote sensing of vegetation in the optical and thermal domains allows the prediction of the probability distributions of fire behaviour characteristics such as burned area, duration, and rate of spread. These can be further used to evaluate the probability of extreme events, conditional to ignition, as a function of pre-fire remote sensing measurements, contributing to the prediction of fire danger. It should be noted once more that this result was achieved by using pre-fire remote sensing observations, allowing the prediction of fire characteristics. In perspective, the results presented in this dissertation can support the development of operational tools for forest managers and civil protection agencies in their fire preparedness activities.","Remote sensing; Earth observation; Forest fires; Fire danger; Fire burned area; Fire duration; Fire rate of spread; MODIS; Land surface temperature (LST); LST anomaly; Perpendicular Moisture Index (PMI); Live fuel moisture content (LFMC); Fire Weather Index (FWI); Probability of extreme events; Conditional probability distribution; Anderson-Darling goodness-of-fit; Generalized extreme value (GEV) distribution; Normal distribution; Weibull distribution; Time series; Harmonic Analysis of Time Series (HANTS)","en","doctoral thesis","","978-94-6384-275-4","","","","","","","","","Optical and Laser Remote Sensing","","",""
"uuid:21861780-4726-4fbf-976b-a0010de0d843","http://resolver.tudelft.nl/uuid:21861780-4726-4fbf-976b-a0010de0d843","Nanofabrication of cell-instructive and bactericidal surfaces for bone implants","Ganjian, M. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Fratila-Apachitei, E.L. (copromotor); Delft University of Technology (degree granting institution)","2022","Recurrent bacterial infection is one of the main reasons of implant failure, hugely impacting the patients’ quality of life, and ultimately resulting in morbidity and even mortality. This type of infection starts with the attachment of the bacteria to the implant surface, leading to biofilm formation and, thus, high resistance against antibacterial agents. To date, numerous strategies have been proposed to prevent biofilm formation and implant-associated infections. It has been revealed that physical surface patterns with specific dimensions and mechanical properties may have the potential to kill implant associated bacteria through a mechanical mechanism, while regulating stem cell differentiation. Therefore, the aim of this research was to advance the development of the nanofabrication methods for generation of patterns with controlled geometrical and mechanical characteristics, and assess their potential for achieving a dual surface biofunctionality for bone implants, namely osteogenic and bactericidal effects. The focus was on submicron to nanoscale patterns generated on 2D and 3D surfaces...","nanopatterns; cell-nanopatterns interactions; bone tissue engineering","en","doctoral thesis","","978-94-6419-423-4","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:40ed5b16-bf5a-409e-a28e-74f115f984cf","http://resolver.tudelft.nl/uuid:40ed5b16-bf5a-409e-a28e-74f115f984cf","Boundary-layer instability on rotating cones: An experiment-based exploration","Tambe, S.S. (TU Delft Flight Performance and Propulsion)","Veldhuis, L.L.M. (promotor); Gangoli Rao, A. (promotor); Delft University of Technology (degree granting institution)","2022","Boundary-layer instability induces spiral vortices on rotating cones. As they grow along the cone, the vortices enhance mixing of high- and low-momentum fluid, and subsequently, cause the boundary-layer to transition into a turbulent state. This transition process is scientifically enticing as it is one of the classical problems in fluid mechanics. In practice, the transitions of rotating cone boundary-layer are relevant in several engineering applications, including rotating nose-cones of aero-engines.
The instability-induced spiral vortices on rotating aero-engine-nose-cones are expected to influence the aerodynamics at the fan root. This will potentially affect the loss mechanism at junctions between fan blades and the hub (the central rotating body of the engine, including the nose-cone). Accurate assessment of these losses requires knowing the boundary-layer instability behaviour on rotating cones in aero-engine-like flow conditions.
The past literature on this classical instability problem has only focused on low-speed (low-Reynolds number incompressible) axisymmetric inflow conditions. In reality, aero-engine-nose-cones often experience high-speed (high-Reynolds number compressible) inflow during cruise. Moreover, several future-aircraft concepts feature engines embedded in the airframe, or ultra-high-bypass-ratio engines with short nacelles. Owing to the associated inflow distortions, the nose-cones of these engines will experience non-axisymmetric inflow. However, limitations of the past experimental techniques pose hurdles in investigating the boundary-layer instability on rotating cones in non-axisymmetric as well as high-speed inflows.
This dissertation explores the boundary-layer instability on rotating cones with the inflow conditions pertaining to a typical aero-engine, i.e. non-axisymmetric as well as high-speed inflow. First, an experimental method is developed to measure the coherent flow structures on rotating cones. This method uses infrared thermography (IRT) with proper orthogonal decomposition (POD) to detect the thermal footprints of the spiral vortices on rotating cones. The POD modes are selectively used to reconstruct different instability-induced flow features. For this selection, a new criterion is formulated to determine the physical admissibility of the POD modes for reconstructing the flow-feature of interest. This method overcomes the limitations of the past experimental methods and has allowed quantitative measurements of spiral vortex growth, angle and azimuthal number, for the first time in complex flow environments, i.e. axial as well as non-axial inflow and high-speed inflow.
The asymmetry of the non-axial inflow has been found to delay the spiral vortex growth in the investigated case of a rotating slender cone (half-cone angle ψ=15º). Here, the spiral vortex growth appears at higher local Reynolds number Rel and local rotational speed ratio S compared to the axial inflow case at the same operating conditions. It is postulated that the azimuthal asymmetry of the flow conditions (local Rel and S) disturbs the azimuthal coherence of the instability characteristics, i.e. angle and wavelength of the dominant mode. This inhibits the spiral vortex growth. However, at high rotational speed ratio S, when the instability characteristics approach azimuthal coherence, the spiral vortices are found to be growing in the asymmetric flow field.
Furthermore, the dissertation extends the axial flow investigations from the most addressed case of a rotating slender cone of ψ=15º to the broader cones of ψ=22.5º, 30º, 45º, and 50º. Here, the boundary-layer instability mechanism changes from the centrifugal instability for slender cones ψ ≤ 30º to the cross-flow instability for the broad cones ψ ≥ 30º. The exact half-cone angle where this change occurs still remains unclear. While the past literature mainly focused on rotating slender cones in axial inflow, theoretical studies expressed the lack of experimental data for the rotating broad cones in axial inflow. This dissertation has provided this experimental data on the instability-induced spiral vortices for the rotating broad cones of ψ=45º and 50º in axial inflow.
The experimental method developed in this work has enabled studying the boundary-layer instability behaviour on rotating cones, for the first time in high-speed conditions, i.e. local Reynolds number Rel =0—3 × 106, rotational speed ratio S<1—1.5, and inflow Mach number M=0.5. These conditions are typically expected on the aero-engine-nose-cones during the transonic cruise of a large passenger aircraft (like A320, A350, etc.). These high-speed measurements revealed that the spiral vortices grow on the investigated rotating cones (ψ=15º, 30º and 40º) as expected from the low-speed studies. This confirms that the right circular type nose-cones of the transonic cruise aircraft will experience the spiral vortex growth in transitional boundary-layer.
The dissertation also conceptually discusses the potential effects of the spiral vortices on the fan aerodynamics. The spiral vortices are expected to influence the aerodynamics within the blade passage, especially near the hub. Flow at the hub and fan-blade junction corner often separates on the suction side of the blade. This reduces the total pressure rise and efficiency of the engine. The presence of the spiral vortices is expected to affect the local aerodynamics at the hub, including the hub-corner separation; however, quantifying this effect needs further investigation. Furthermore, the dissertation has shown a typical asymmetric flow field around the nose-cones when the fan is subjected to an inflow distortion. The fan-driven redistribution of the distorted inflow reduces the flow-field asymmetry near the nose-cone wall in the symmetry plane. This is a favourable condition for the spiral vortex growth.
Overall, this doctoral research has presented a new experimental approach to the classical problem of the boundary-layer instability on rotating cones. This has allowed furthering the fundamental knowledge about the instability-induced spiral vortex growth on rotating cones over the following parameter ranges: local Reynolds number Rel =0—3 × 106, rotational speed ratio S=0—250, inflow Mach number M=0—0.5, inflow incidence angle α=0º—10º and half-cone angle ψ=15º—50º.","Boundary layer instability; Boundary layer transition; Rotating boundary layers; Rotating cones; Experimental methods; Aero-engine nose-cones; Fan under distorted inflow","en","doctoral thesis","","978-94-6384-289-1","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:bb197cfd-b5a4-4b6c-933c-52f7d1d2732f","http://resolver.tudelft.nl/uuid:bb197cfd-b5a4-4b6c-933c-52f7d1d2732f","Localization microscopy of constrained fluorescent molecules: Pushing towards Ångström-scale resolution through cryogenics","Hulleman, C.N. (TU Delft ImPhys/Computational Imaging)","Rieger, B. (promotor); Stallinga, S. (promotor); Delft University of Technology (degree granting institution)","2021","i>Localization microscopy has circumvented the diffraction limit by sequentially imaging individual light emitting molecules at a time. The position of these individual molecules can be determined and a super-resolution reconstruction is made with improved resolution. Normally freely rotating emitters are used such that the point spread function (PSF) is rotationally symmetric and only minor errors in the localization process are made by approximating the PSF with a Gaussian. The precision with which the individual emitters can be localized scales with the 1/√N, N the number of detected photons so that more detected photons leads to a better localization precision. However, the emission of fluorescent molecules is limited by photobleaching, a light induced chemical reaction to a permanent non-fluorescent state. In this thesis we investigate the effect of cooling the sample to cryogenic temperatures with liquid nitrogen. This reduces the chemical reaction rates and improves photostability more than 100 fold. To use localization microscopy it is necessary to switch the fluorescent molecules between an on-state and off-state, this turns out to be difficult at cryogenic temperatures. Standard methods used at room temperature in aqueous media do not work. As the molecules are frozen in place at cryogenic temperatures we use polarized light to selectively image molecules with certain orientations at a time. To realize this it is necessary to generate pure linear polarization with an arbitrary orientation in the sample plane. 
By calibrating the phase difference induced by the dichroic mirrors this can be achieved, effectively modulating the fluorescence of fixed dipole emitters at cryogenic temperatures. The addition of an orthogonal linearly polarized stimulated emission depletion (STED) beam narrows the orientational distribution of fluorescing molecules. This method does induce some degree of sparsity; however, it is not enough for localization microscopy of dense biological samples. Furthermore, the STED process reduces the photon yield of single molecules. This is presumably caused by the long dark-state recovery measured on fluorescent molecules in vacuum and at cryogenic temperatures. Localization microscopy of fixed or orientationally constrained emitters has long been avoided as the orientation of individual molecules leads to bias in the localizations. There are various ways to eliminate this bias but they reduce the amount of information that can be extracted from the sample. By fixing the orientation of fluorescent emitters to biomolecules of interest they become reporters for the orientation of the biomolecules. We have devised the so-called Vortex PSF with which the orientation, 3D position and degree of rotational constraint can be extracted from a single image. Alternatively, the orientation of single molecules can be probed with varying polarization states over multiple frames, achieving a better precision with fewer photons.","super-resolution; single-molecule localization microscopy (SMLM); cryogenic temperatures; stimulated emission depletion; fluorescence; optical aberrations; polarization","en","doctoral thesis","","9789463842785","","","","","","2021-12-23","","","ImPhys/Computational Imaging","","",""
"uuid:7f98a4b1-cc9d-4988-ad27-609677ce0796","http://resolver.tudelft.nl/uuid:7f98a4b1-cc9d-4988-ad27-609677ce0796","Superconducting Funnelled Through-Silicon Vias for Quantum Applications","Alfaro Barrantes, J.A. (TU Delft EKL Processing)","Sarro, Pasqualina M (promotor); Mastrangeli, Massimo (copromotor); Ishihara, R. (copromotor); Delft University of Technology (degree granting institution)","2021","System downscaling, 3D integration, and increasing functionalities are the main challenges that integrated circuits and MEMS technology have dealt with in the past decade. Advanced packaging schemes and interconnect technologies are some of the successful approaches to tackle the challenges. These issues also extend to modern designs such as terahertz applications and quantum technologies, particularly the solid-state quantum computer.
In-demand instances of the latter are e.g. high-density quantum computing systems, where the layer implementing quantum bits (qubits) needs to be bridged to the microelectronic control layer. The latter typically requires CMOS-based circuitry compatible with cryogenic temperatures (i.e., cryo-CMOS) for the control and readout of the many physical qubits needed to implement error-tolerant logical qubits. In order to scale the number of qubits, a more efficient way for the interconnection of the qubits is necessary. In line with such a three-dimensional (3D) integration approach, an interposed layer featuring superconducting vertical interconnections such as through-silicon vias (TSVs) represents a crucial element in the fabrication and assembly of large, scalable, and densely integrated superconducting systems....","","en","doctoral thesis","","","","","","","","","","","EKL Processing","","",""
"uuid:9367ebbb-6a82-4d68-9c30-6785566cbe54","http://resolver.tudelft.nl/uuid:9367ebbb-6a82-4d68-9c30-6785566cbe54","Operator-Based Modeling and Inversion: An Operator Approach to the Forward and Inverse Scattering Problems","Hammad, H.I.A. (TU Delft ImPhys/Medical Imaging)","Verschuur, D.J. (promotor); Stallinga, S. (promotor); Delft University of Technology (degree granting institution)","2021","The seismic method has many applications. It is important in the critical sector of energy. Besides being used in imaging oil and gas reservoirs, it is also utilized in other sectors of energy such as geothermal energy exploration and development. It also plays a role in extracting other resources such as minerals or in the process of monitoring CO2 sequestration to reduce the carbon footprint of humankind. While seismic waves can occur naturally, their study gives insight in analysing the occurrence of and mitigating risks related to earthquakes. As far as active-source seismic is concerned: seismic images make it possible to see what is in the subsurface with minimal expensive and invasive operations such as drilling unnecessary holes in the subsurface — similar to what medical professionals use ultrasound or X-ray images for. Several methods have been proposed to analyze seismic data. A popular method nowadays is full waveform inversion (FWI), for instance, which attempts to fit all the recorded waveformwith amodel. This process solves, in fact, a very complicated highly non-linear inverse problem. Another process that uses such inversion process, but which tries to separate classes of parameters to reduce non-linearity, is joint migration inversion (JMI), in which scattering properties of the subsurface are separated from the propagation properties of seismicwaves. 
Currently those two methods, FWI and JMI, are generally model-dependent — that is, they have been formulated to fit a specific physics model such as isotropic acoustic media, or transversally isotropic media with or without absorption. Hence, they would tend to have biases towards those particular models. Another paradigm is the so-called data-driven paradigm, or data-adaptive paradigm, and since it is formulated in terms of operators, one could also refer to it as operator-based. Since it contains fewer biases towards a particular physics model and requires no detailed knowledge of model parameters beforehand, some also refer to it as model-independent, as it does not need to force the data to fit a specific model; rather, the process adapts to the model contained within the data. A process such as surface-related multiple elimination follows this paradigm. Another process, which is also shown in this dissertation, separates the surface multiples scattering-order-by-scattering-order without the need to assume a specific physics model. The process is referred to as scattering order decomposition. So, this dissertation looks into the problem of extending the inversion process to the model-independent or the operator-based paradigm. This dissertation looks first into the theoretical underpinning of this problem, where integral representations are used to study it. These representations are divided into four categories: first, model-based representations are derived and presented as directional and non-directional. So, it places those theories in context. Next, the operator-based representations are also divided into directional and non-directional. Finally, four representations are derived, in this dissertation, which have the potential for applications in modeling, inversion and various seismic data analysis processes. 
Modeling is needed before any inversion since the inverse problem is ill-posed or ill-conditioned, and hence no unique solutions exist; rather, preconditioned or regularized solutions to these problems are normally used. Moreover, the inverse problem uses modeling iteratively and also back-projects the data residuals with the forward modeling mechanism. Therefore, the next chapters study operator generation and the subsequent modeling of wavefields with these derived operators...","Seismic modeling; Operator-based modeling; inverse scattering","en","doctoral thesis","","978-94-6458-002-0","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:e3f9f224-b5fc-4283-85b3-fc8b7d64b5bf","http://resolver.tudelft.nl/uuid:e3f9f224-b5fc-4283-85b3-fc8b7d64b5bf","From detectors towards systems: enabling clinical TOF-PET with monolithic scintillators","Borghi, G. (TU Delft RST/Medical Physics & Technology)","Schaart, D.R. (promotor); Delft University of Technology (degree granting institution)","2021","Nuclear medical imaging (NMI) is the branch of nuclear medicine aimed at imaging the in-vivo distribution of specific compounds labeled with radioactive elements (radiotracers) inside animals (preclinical applications) or patients (clinical applications). These compounds are developed to follow metabolic pathways or for binding to receptor systems of interest and are administered to the imaged subject to obtain diagnostic information, such as the functionality of certain organs or the presence of tissues with altered metabolism, e.g. tumors or inflamed tissues. The estimation of the radiotracer distribution is obtained by externally detecting the radiations emitted by the radioactive element attached to the tracer...","PET; monolithic scintillator detector","en","doctoral thesis","","978-94-6423-587-6","","","","","","","","","RST/Medical Physics & Technology","","",""
"uuid:97ce2efa-828c-41df-ac00-a3362be12a01","http://resolver.tudelft.nl/uuid:97ce2efa-828c-41df-ac00-a3362be12a01","Coordination Strategies of Connected and Automated Vehicles near On-ramp Bottlenecks on Motorways","Chen, N. (TU Delft Transport and Planning)","van Arem, B. (promotor); Wang, M. (copromotor); Delft University of Technology (degree granting institution)","2021","The aim of the thesis is to design coordination strategies for connected automated vehicles near on-ramps considering controller performance, safe lane changing conditions, maneuver planning, and trajectory control. CAVs have enhanced situation awareness with their onboard detection units and vehicle-to-everything communications. They have the potential to improve traffic operations by manoeuvring together under a common goal and by accepting a small time gap. Existing model predictive control controllers rarely check their controllers’ robustness considering the mismatch between vehicle dynamics and prediction models. The existing cooperative merging strategies constrain that on-ramp CAVs merge into mainline traffic after reaching the final desired inter-vehicle distance and/or (merging) speed. That constraint may make them not be applied to scenarios where the length of the on-ramp lane is short and on-ramp CAVs cannot reach desired states before merging. Few methods investigate optimal merging sequences for two conflicting streams of traffic. Besides, mainline CAVs are rarely allowed to change lane during cooperation. This thesis consecutively tackles the aforementioned four points by presenting four coordination strategies that address the mentioned limitations...","Connected automated vehicles (CAVs); Trajectory planning; Maneuvering control; on-ramp merging; Hierarchical control","en","doctoral thesis","","978-90-5584-304-6","","","","TRAIL Thesis Series no. T2021/29, the Netherlands TRAIL Research School","","2021-12-22","","","Transport and Planning","","",""
"uuid:0a27d29e-604e-4122-b646-0d79b7e9fbc7","http://resolver.tudelft.nl/uuid:0a27d29e-604e-4122-b646-0d79b7e9fbc7","Spreading processes in complex networks and systems","Ma, L. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Kitsak, M.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Complex systems are made up of many interconnected components. The interactions between components further produce emerging complex collective behaviors. In most cases, a complex system can be represented as a network in which the nodes represent the elements and the connected links illustrate the interactions. The spreading process is one of the most important dynamics in complex networks and systems. Since 2000, scientific findings related to spreading models and complex networks are experiencing a boom. Quantum computers are also complex systems. The development of quantum computing is to bring qualitative change to many fields by solving some problems faster than classical computers. However, during quantum computing, errors are spreading in the quantum circuits and accumulated with time...","Complex Networks; Epidemics; COVID-19 Pandemic; Inferring Network Properties; Reporting Delays; Forecast; Spectral Clustering; Error Accumulation; Quantum Circuits; Markov Chains","en","doctoral thesis","","978-94-6421-621-9","","","","","","","","","Network Architectures and Services","","",""
"uuid:63030917-8bbb-45d6-83f2-83fc1cb2bc47","http://resolver.tudelft.nl/uuid:63030917-8bbb-45d6-83f2-83fc1cb2bc47","The interaction of light with WS2 nanostructures","Komen, I. (TU Delft QN/Kuipers Lab)","Kuipers, L. (promotor); Conesa Boj, S. (promotor); Delft University of Technology (degree granting institution)","2021","In this thesis we describe three results of the interaction between WS2 and light: photoluminescence, the formation of exciton-polaritons and Raman scattering. The chirality of the photoluminescence interaction between WS2 and light opens the way for applictions in nanophotonics and specifically valleytronics, the field of interacting with and manipulating the valley pseudospin. In this work we propose a way to optically address and read out the valley pseudospin using silver and ZnO nanowires. Subsequently we confirm the existence of coherence between the WS2 valleys. Furthermore, exciton-polaritons in WS2 hold the promise of applications in nanophotonics that make use of the enormous light-matter interaction. Raman spectroscopy is commonly used as a characterization tool to confirm the nature of a material and its properties. In this work we go one step further, determining how structural and morphological variations in WS2 pyramids manifest themselves in Raman spectra. In addition we describe how Raman spectroscopy can be used to probe the orientation of WS2 nanoflowers.","Nanophotonics; 2D materials; Photoluminescence; Nanostructures; light-matter interactions; valleytronics; Raman spectroscopy; Polarization","en","doctoral thesis","","978-94-6384-271-6","","","","","","","","","QN/Kuipers Lab","","",""
"uuid:730e80b5-e567-494d-8bb2-df8c71e6de69","http://resolver.tudelft.nl/uuid:730e80b5-e567-494d-8bb2-df8c71e6de69","Measurement and Practice of Transversal Competencies in Engineering Education: Evaluation of Perceptions and Stimulation of Reflections of industry, lecturers and students","Leandro Cruz, M. (TU Delft Education AE)","de Vries, M.J. (promotor); Saunders-Smits, Gillian (copromotor); Delft University of Technology (degree granting institution)","2021","The engineering industry has changed in the last decades with the increasing complexity of technology, the global mobility of the engineering profession, the concern with sustainability and social responsibility, and the need for innovation and creativity. This shift has caused employability issues that include both the lack of engineering graduates available for recruitment and graduates equipped with the necessary set of transversal competencies. One of the efforts to produce engineering graduates ready for the labour market was the emphasis on transversal competencies. They have been highlighted in the Boeing list of “Desired Attributes of Engineer” and by the accreditation bodies in the United States of America and Europe. The focus shifted from only technical competencies to including also the transversal competencies in the field of engineering education around the world. Although engineering curricula have expanded curricular and pedagogical arrangements to include transversal competencies to prepare graduates for employment, there is still a gap between what engineering education provides to students and what employers desire from engineering graduates. Employer’s feel students lack transversal competencies such as communication, interpersonal, management and team working skills. The emphasis on the inclusion of transversal competencies has triggered the need for instruments that could measure and assess these competencies or their perceptions, or even to trigger reflection on these competencies. 
However, measuring transversal competencies or their perceptions is considered difficult for several reasons: the lack of consensus on transversal competency definitions among engineering educators, government bodies and employers; the overwhelming lists of transversal competencies created by universities and non-academic establishments with differing terminologies and without collaboration between these parties; and finally the nature of transversal competencies, which are often intertwined with technical competencies and can also be acquired outside the curriculum. The research presented in this thesis contributes to the measurement of perceptions of transversal competencies, and to practice and reflection on transversal competencies, in the field of engineering education. This work is part of the PREFER (Professional Roles and Employability of Future EngineeRs) project, a European project that started in 2017 to reduce the transversal competency gap in the field of engineering and to increase the employability of future engineers…","Transversal competencies; Measurement; Practice; Reflection; Teaching intervention","en","doctoral thesis","","978-94-6421-576-2","","","","","","","","","Education AE","","",""
"uuid:9e875752-05bc-4dd8-9bdd-77e18cf3c43f","http://resolver.tudelft.nl/uuid:9e875752-05bc-4dd8-9bdd-77e18cf3c43f","The Curious Case of Io - Connections Between Interior Structure, Tidal Heating and Volcanism","Steinke, T. (TU Delft Astrodynamics & Space Missions)","Vermeersen, L.L.A. (promotor); de Pater, I. (promotor); van der Wal, W. (copromotor); Delft University of Technology (degree granting institution)","2021","Io's spectacular and unique appearance is characterised by its yellowish surface, colourful lava deposits, and black calderas. The reason for this appearance is extensive tidal heating in the moon's interior. Caught in the Laplace resonance with the Galilean moons Ganymede and Europa, Io is the most tidally heated and volcanically active world in the Solar System. It is therefore the best place to study fundamental processes important for the early evolution of terrestrial planets, and the habitability of icy satellites and terrestrial exoplanets subject to tidal heating. The physical state of Io’s interior, the driving tidal dissipation and heat transport mechanisms are unknown, however, form a strongly interconnected system: 1) Io’s internal temperature and melt distribution are controlled by tidal dissipation and heat loss processes; 2) The total amount and pattern of tidal dissipation depend on the rheological properties of Io's interior; 3) These rheological properties, in turn, depend on the internal temperature and melt distribution. Due to the strong dependence on melt, Io's volcanic activity hints at the dynamics beneath the surface and can therefore be used to improve our understanding of the underlying mechanisms. 
The aim of this thesis is to improve our understanding of these interconnections (1-3) and to constrain Io's current interior dynamics based on the moon's volcanic activity derived from satellite and Earth-based observations over the last 20 years.","Io; interior; tidal dissipation; volcanism","en","doctoral thesis","","978-94-6421-601-1","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:839e79d1-ea4b-4440-9077-d6fb2df4c086","http://resolver.tudelft.nl/uuid:839e79d1-ea4b-4440-9077-d6fb2df4c086","Multi-Physics Driven Electromigration Study: Multi-Scale Modeling and Experiment","Cui, Z. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Fan, x.j. (promotor); Vollebregt, S. (copromotor); Delft University of Technology (degree granting institution)","2021","This dissertation presents a comprehensive and integrated study, including theory development, numerical simulation and experiment, for multi-physics driven electromigration in microelectronics. Multi-scale methodologies from atomistic modeling to continuum theory-based simulation have been developed. Moreover, extensive experimental testing, from testing wafer/die design and fabrication, sample preparation and process, to the measurement setup and characterization, has been conducted. The dissertation also provides synergetic and cohesive analysis between simulation and experiment. The simulation predictions and results have been well validated by experimental data.","Electromigration Simulation; Multiphysics; Multiscale, Accelerated Measurement; Stress-Migration; Self-Diffusion; Thermomigration; Molecular Dynamic Simulation; Finite Element Simulation","en","doctoral thesis","","","","","","","","2022-04-30","","","Electronic Components, Technology and Materials","","",""
"uuid:c07eb8de-8446-455d-ae2d-bd720e378e1d","http://resolver.tudelft.nl/uuid:c07eb8de-8446-455d-ae2d-bd720e378e1d","Matters of Attention: Gaining insight in student learning in the complexity of design-based chemistry education","Stammes, J.K. (TU Delft Science Education and Communication)","de Vries, M.J. (promotor); Barendsen, Erik (promotor); Henze, Ineke (copromotor); Delft University of Technology (degree granting institution)","2021","Engaging students in design has great potential for promoting learning in science education, and design practices have been gaining emphasis in national science curricula in recent years. In the actual success of this type of instruction, teachers play a key role. Many recommended teaching practices for design-based science education hinge on teachers’ attention to what and how students are learning as they are engaged in design. Gaining insight in student learning in the course of instruction means that teachers have the opportunity to tailor their actions to students’ learning needs, and enhance student learning during a learning process. While previous research showed that teachers’ attention to student learning differs between types of instruction, characterisations of teacher attention in secondary school, design-based science settings remained scarce. Attending to student learning has, nevertheless, been posited as particularly important yet complex in design-based classrooms due to design’s multifaceted and open-ended nature. The reform-based character of design-based science education further contributes to this complexity, which also pertains to design-based chemistry education. Chemistry has, however, seldom been featured in design-based education research, despite design’s central role in the chemistry discipline, and in chemistry curricula...","","en","doctoral thesis","","978-94-6384-274-7","","","","","","","","","Science Education and Communication","","",""
"uuid:b9fbf8e5-dc31-4da0-b560-79c05ebc00a2","http://resolver.tudelft.nl/uuid:b9fbf8e5-dc31-4da0-b560-79c05ebc00a2","On the powder metallurgy, additive manufacturing and welding of oxide dispersion strengthened Eurofer steel","Fu, J. (TU Delft Team Marcel Hermans)","Richardson, I.M. (promotor); Hermans, M.J.M. (promotor); Delft University of Technology (degree granting institution)","2021","Oxide dispersion strengthened (ODS) steels are promising candidates for use as structural materials in the next generation fission and fusion reactors. Compared to conventional ferritic or martensitic steels, ODS steels exhibit improved high-temperature creep properties and irradiation resistance. Favourable properties are mainly attributed to the fine grain features and the high number density of nanosized oxide particles in the steel matrix. These nanoparticles can act as pinning sites for dislocations and stable sinks for irradiation introduced defects, leading to significantly enhanced mechanical properties. In order to be employed in nuclear systems with large, complex structures, the fabrication and welding of ODS steels with reproducible and superior properties are inevitable and essential. However, after 10–20 years of studying since the emergence of ODS steels, these issues remain the major bottlenecks limiting further development. This thesis is concerned with ODS Eurofer steel, which is one of the representatives of ODS steels and has been the research focus in terms of promising nuclear materials within the European Union. With the aim to develop suitable and effective methods for the fabrication and welding of ODS Eurofer, the result of this study should help to extend the use of ODS steels in future nuclear applications.","Oxide dispersion strengthened steel; Microstructural analysis; Mechanical properties; Powder metallurgy; Additive manufacturing; Welding","en","doctoral thesis","","978-94-6423-565-4","","","","","","","","","Team Marcel Hermans","","",""
"uuid:7f26d675-4e27-4ddb-bc84-1daa0f3d2eb2","http://resolver.tudelft.nl/uuid:7f26d675-4e27-4ddb-bc84-1daa0f3d2eb2","Towards unmanned cargo ships: A task based design process to identify economically viable low and unmanned ship concepts","Kooij, C. (TU Delft Ship Design, Production and Operations)","Hekkenberg, R.G. (promotor); Kana, A.A. (promotor); Delft University of Technology (degree granting institution)","2021","Unmanned and low-manned transport has increasingly been studied this past decade. While there have been successful trials for autonomous navigation, unmanned cargo ships are not commercially available yet. First, this dissertation investigates how changes to a ship’s systems and organizational structure can affect the crew’s size and composition. Then, a cost benefit analysis determines the economic viability of these concepts. This research concludes with feasible intermediate steps between a conventional ship and a fully unmanned ship.","Unmanned ships; low-manned ships; design process; autonomous ships; greedy algorithm","en","doctoral thesis","","","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:7e9678fd-98c1-4087-829b-2fa4e96386af","http://resolver.tudelft.nl/uuid:7e9678fd-98c1-4087-829b-2fa4e96386af","Downstream consequences of Ribb River damming, Lake Tana Basin, Ethiopia","Mulatu, C.A. (TU Delft Water Resources)","McClain, M.E. (promotor); Crosato, A. (copromotor); Delft University of Technology (degree granting institution); IHE Delft Institute for Water Education (degree granting institution)","2021","This study assessed the downstream river system adaptation in response to upstream damming on the Ribb River, Ethiopia, to irrigate 15,000 ha. It combined primary and secondary data, and the application of remote
sensing and mathematical modeling. The pre-dam morphodynamic trends of the Ribb River were analyzed over 59 years based on aerial photographs, satellite images, and newly collected field data. Three dam operation scenarios were developed to analyze the long-term hydro-morphological effects of the
dam on the downstream river reaches. The study also assessed the applicability of physics-based analytical equations (Equilibrium Theory), compared to a 1D numerical model (SOBEK-RE), for determining the least morphologically impactful dam operation scenario for the river reaches downstream of the dam. Moreover,
a HEC-RAS 2D hydrodynamic model was developed to assess the effect of the dam on the flooding extent of the Fogera Plain. This was used to study the potential implications of hydrological alteration on the ecology of the floodplain wetlands, as they are the habitats of important fish and bird species. The results contribute to knowledge on the hydro-morphological and environmental impacts of dams on downstream river systems. The developed methodologies and findings may be used to study future hydro-morphological and ecological changes that may arise due to other dam operations or climate change.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-032-25031-1","","","","","","","","","Water Resources","","",""
"uuid:b125ec2d-e2af-4708-bccc-0a2357a533b1","http://resolver.tudelft.nl/uuid:b125ec2d-e2af-4708-bccc-0a2357a533b1","Multi-Node Quantum Networks with Diamond Qubits","Pompili, M. (TU Delft QID/Hanson Lab)","Hanson, R. (promotor); Wehner, S.D.C. (promotor); Delft University of Technology (degree granting institution)","2021","The Internet has revolutionized the way we live. It has enabled applications far beyond what it was originally built for, and it will continue to exceed our expectations for the future. Quantum computers, and the network that will connect them—the Quantum Internet— are likely going to follow the same path. Differently from normal computers, quantum computers can share a property called entanglement, which allows the qubits (quantum bits) to be connected at a much more fundamental level. This property enables a range of new applications that span secure communication to enhanced metrology. Over the past fifteen years, significant progress has been made in connecting rudimentary quantum network nodes via long-distance entanglement. Several quantum platforms have demonstrated entanglement generation between two physically separated qubits. In this thesis we take a significant step forward, both in terms of experimental complexity achieved, and in the abstraction of said complexity for future developments. Moving past two-node experiments required a fundamental redesign of our experimental apparatus, as well as developing the capabilities to control simultaneously the additional node. The first result of this thesis is building a three-node entanglement-based quantum network. We demonstrated distribution of Greenberger-Horne-Zeilinger states over the network, as well as a building block for larger networks: entanglement swapping. 
Unlike previous multi-node demonstrations, which relied on post-selection, our network is able to perform the entanglement distribution in a heralded fashion: a signal notifies the users that the protocol was successful and that the state is ready to be used. The second result builds on the first, by adding control over a fifth qubit, improving the quality of the entanglement, and introducing a novel repetitive readout technique, to achieve quantum teleportation of a qubit from the third node to the first—nodes that do not share a direct entanglement channel. The third and final result is the demonstration of entanglement delivery using a quantum network stack. The Internet is built using a plethora of physical platforms: optical fibers, Ethernet cables, Wi-Fi, satellite signals, etc. To abstract their functionality, and make applications work regardless of the underlying platform, a layered approach was developed in the 1970s (the Internet protocol). Taking inspiration from classical network stacks, we demonstrate the first two layers of a quantum network stack: the physical layer (where the qubits, lasers and signal generators live), and the link layer, which abstracts the concepts of qubit and entanglement generation such that they can be used by applications at the higher layers, hiding the complexity of the quantum platform being used.","","en","doctoral thesis","","978-90-8593-497-4","","","","Casimir PhD Series, Delft-Leiden 2021-31","","","","","QID/Hanson Lab","","",""
"uuid:e3f9d78b-8ff0-49a7-8018-142b351ea4de","http://resolver.tudelft.nl/uuid:e3f9d78b-8ff0-49a7-8018-142b351ea4de","Suspension dynamics in transitional pipe flow","Hogendoorn, W.J. (TU Delft Multi Phase Systems)","Poelma, C. (promotor); Breugem, W.P. (promotor); Delft University of Technology (degree granting institution)","2021","Suspension flows are abundantly present in nature and industry. Typical examples include volcanic ash clouds, sediment transport in rivers, blood flow through human capillaries and the dredging industries. Accurate models of suspension flows are of key importance for prediction, optimization and control of particle-laden flows, especially in industrial applications. However, accurate experimental reference data is hardly available for the development and validation of these models. The opaque nature of suspension flows precludes the acquisition of quantitative flow information by means of established optical measurement techniques. Therefore, in this dissertation measurements are performed using state-of-the-art measurement techniques, which provide insight in particle-laden flows. These measurement techniques include ultrasound, magnetic resonance and optical imaging. The high-quality data, obtained using these measurement modalities, will subsequently be used for the modeling of suspension flows. The aim of this dissertation is to study the effect of the particle size and concentration on the behavior of pipe flow, in particular in the laminar-turbulent transition region.","","en","doctoral thesis","","978-94-6384-280-8","","","","","","","","","Multi Phase Systems","","",""
"uuid:1a9e29a6-4868-4096-bc88-a1095cf568d3","http://resolver.tudelft.nl/uuid:1a9e29a6-4868-4096-bc88-a1095cf568d3","Architected Cementitious Cellular Materials Towards Auxetic Behavior","Xu, Y. (TU Delft Materials and Environment)","Schlangen, E. (promotor); Šavija, B. (copromotor); Delft University of Technology (degree granting institution)","2021","In recent years, the rapid development of digital fabrication technology has substantially boosted the research of architected cellular materials. Among them, the auxetic materials are a distinct type for their unusual deformation behavior: negative Poisson’s ratio. When loaded longitudinally in compression, in contrast to conventional materials, the auxetic materials shrinks transversely, and vice versa. This featured deformation behavior gives the auxetic materials enhanced mechanical properties, especially high deformability and energy absorption ability. In many other fields, for instance metals and polymers, auxetic materials have already become a hot research topic. Numerous studies focusing on design, preparation, optimization, and application of auxetic materials have emerged. However, for cementitious materials, the auxetic behavior has never attracted attention and, to the author’s knowledge, auxetic cementitious cellular materials have never reported in literature. Therefore, this study mainly focuses on the development of architected cementitious cellular materials with auxetic behavior...","","en","doctoral thesis","","978-94-6421-599-1","","","","","","","","","Materials and Environment","","",""
"uuid:09ea6341-6dc2-4b01-ad33-d5d38e0bc282","http://resolver.tudelft.nl/uuid:09ea6341-6dc2-4b01-ad33-d5d38e0bc282","Creation of electron pulses with a laser-triggered micro fabricated electron beam deflector","Weppelman, I.G.C. (TU Delft ImPhys/Microscopy Instrumentation & Techniques)","Kruit, P. (promotor); Hoogenboom, J.P. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis is dedicated to the development of a beam blanker for Ultrafast Electron Microscopy. Ultrafast electron microscopy aims to resolve structural dynamics at the nanometer and (sub) picosecond time scale. In these temporal and spatial scales many important processes in physics, chemistry and biology do occur. Examples of these are the interaction of light with small nano-patterned devices, the propagation is acoustic waves and phonons, the dynamics of melting and crystallization of materials. An example in biology is in photosynthesis, i.e. the dynamics of light harvesting complexes.","","en","doctoral thesis","","978-94-6416-960-7","","","","","","","","","ImPhys/Microscopy Instrumentation & Techniques","","",""
"uuid:a118e35c-dec9-4c1f-9ed7-8b65a5ca77a3","http://resolver.tudelft.nl/uuid:a118e35c-dec9-4c1f-9ed7-8b65a5ca77a3","Driver's risk field: A step towards a unified driver model","Kolekar, S.B. (TU Delft Human-Robot Interaction)","Abbink, D.A. (promotor); de Winter, J.C.F. (promotor); Boer, E.R. (copromotor); Delft University of Technology (degree granting institution)","2021","The National Highway Transportation Safety Administration (NHTSA) reports that 94-96% of the road accidents involve human error. These statistics make it seem as if humans are terrible drivers. However, a different set of numbers paint a completely different picture. According to the United States Bureau of Transportation Statistics, the failure rate of human drivers is 0.68 fatalities per 100 million kilometres. This number is so low that autonomous vehicle manufacturers are having a hard time proving that their vehicles are safer than human drivers. The safety benefits of Driver Assistance Systems, autonomous vehicles, and other safety systems are not being challenged here. However, it needs to be realised that a non-fatigued, attentive human is one of the safest drivers we can have....","Driver modelling; Risk field; Sensorimotor control","en","doctoral thesis","","","","","","","","","","","Human-Robot Interaction","","",""
"uuid:bf5eed9f-a9d3-4c5c-932e-e0574986da4e","http://resolver.tudelft.nl/uuid:bf5eed9f-a9d3-4c5c-932e-e0574986da4e","Mitigating the Risks in Energy Retrofits of Residential Buildings in China","Jia, L. (TU Delft Housing Quality and Process Innovation)","Visscher, H.J. (promotor); Qian, QK (copromotor); Meijer, F.M. (copromotor); Delft University of Technology (degree granting institution)","2021","To speed up residential energy retrofitting in the Hot Summer and Cold Winter(HSCW) zone, the barriers to retrofitting projects need elimination. Energy retrofitting contributes to improving building quality and living comfort, but has not been accepted by the public. It stems from poor project performance in quality, time, costs, etc. The risk is an essential factor hindering such project objectives and project success. Residential energy retrofitting in China is exposed to various risks due to uncertainties regarding finance, organization, coordination, technology, etc. This thesis thus aims to deepen the understanding of risks in the whole process of residential energy retrofitting to smoothen its implementation and develop risk mitigation strategies for the HSCW climate zone of China. The thesis adopts Transaction Costs Theory (TCT) to identify the risks in the whole process of project implementation and assesses the importance of these risks in both objective and subjective aspects. Given the importance of homeowners-related risks and the key role of the government in retrofitting projects, this research develops s series of develop strategies for risk mitigation from the viewpoints of both homeowners and the government. The thesis contributes to the body of knowledge by conducting a systematic exploration of risks in retrofitting projects. 
In terms of practical contributions, it not only enables project managers to recognize the priority of project risks, but also helps the government tackle these issues at the source to promote residential energy retrofitting.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-478-3","","","","A+BE | Architecture and the Built Environment No. 23 (2021)","","","","","Housing Quality and Process Innovation","","",""
"uuid:209dce6d-bc95-43fd-9203-9204dd561684","http://resolver.tudelft.nl/uuid:209dce6d-bc95-43fd-9203-9204dd561684","Reconstituting microtubule-actin coordination by cytolinkers","Alkemade, C. (TU Delft BN/Marileen Dogterom Lab; TU Delft BN/Gijsje Koenderink Lab; AMOLF)","Koenderink, G.H. (promotor); Dogterom, A.M. (promotor); Delft University of Technology (degree granting institution)","2021","The human body is composed of about 3−4×1013 (30-40 trillion) cells [1]. These cells are all functioning consistently, and working elegantly together, to sustain the organism. Not only humans, but all other living things on earth (from plants to parrots) are composed of cells. Cells are the smallest living building blocks of plants and animals, and in fact some organisms are built from a single cell, such as bacteria. To be able to sustain life, cells are dynamic entities that need to grow, divide, and interact with their environment. They accomplish this by a number of complex and dynamic processes. For instance, to perform vital functions such as cell division, a key factor of life, cells need to dramatically change their shape. In addition to shape changes, the internal cellular organization needs to be tightly controlled to properly function. For example, cells need to establish a front-back polarity to drive directional migration, like immune cells that hunt for intruders. Furthermore, the proper functioning of brain cells (also named neurons), which depends on finding and connecting to other neuronal cells, is closely related to their internal organization and cellular shape. For cells, essentially small bags filled with proteins, to change their shape and internal organization, they depend on an internal filamentous scaffold named the cytoskeleton. Unlike the name might suggest, this ’cellular skeleton’ is actually very dynamic, with constant assembly and disassembly of the constituent filaments and changes in filament organization. 
In addition to organizing the cellular interior, this cytoskeleton provides mechanical support for cells and allows them to generate forces. Two main cytoskeletal components are microtubules and actin filaments. They are usually studied as separate systems, despite a growing body of work indicating their functions are closely intertwined and interdependent. This thesis studies how these two cytoskeletal components influence each other. More specifically, we focus on the question of how actin and microtubules co-organize and affect each other via proteins that physically link them to each other, named cytolinkers. To study how cytolinkers impact cytoskeletal crosstalk, we move away from the complex environment of the cell, where many other proteins are present and different processes take place. We took the cytoskeletal building blocks and cytolinking proteins out of the cell, rebuilt a cytoskeleton from these building blocks, and characterized the effects of the cytolinkers on cytoskeletal co-organization by fluorescence microscopy. In addition to natural cytolinkers, we engineered our own cytolinkers to better understand how these proteins influence microtubule/actin coordination in the absence of the ill-defined regulatory processes in the cell. This isolated context is a powerful tool to study cellular functions, as the simplification allows us to tightly control all variables and identify the underlying mechanisms.","Microtubules; actin filaments; cytoskeletal crosstalk; in vitro reconstitution; cytolinkers; Gas2L1; Shot","en","doctoral thesis","","978-94-6384-272-3","","","","","","","","","BN/Marileen Dogterom Lab","","",""
"uuid:24139d54-fd58-4349-8d2d-69045fa2da6f","http://resolver.tudelft.nl/uuid:24139d54-fd58-4349-8d2d-69045fa2da6f","Green Climate Control: Analysing the impact of (active) Plant-based Systems on Indoor Air Quality","Armijos Moya, T.E. (TU Delft Design of Constrution)","Bluyssen, P.M. (promotor); van den Dobbelsteen, A.A.J.F. (promotor); Ottele, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Several studies have demonstrated the potential of botanical biofiltration and phytoremediation to remove indoor pollutants and improve overall comfort. However, there is a lack of evidence on how indoor greenery affects the Indoor Environmental Quality (IEQ), particularly on Indoor Air Quality (IAQ). The main goal of this research project was to explore and evaluate the efficacy of an active plant-based system in terms of IAQ and being able to answer the main research question: “Can an active plant-based system improve the Indoor Air Quality (IAQ)?” This was achieved through laboratory studies of several plant-based systems, including chemical, physical and sensorial evaluations as well as qualitative and quantitative assessments. Some of the outcomes of this research are described below:
– To develop an effective plant-based system, the proper selection of its components is essential.
– In real settings, gaseous pollutants are present at lower concentrations, and current equipment is not able to detect them. Therefore, physical, chemical and sensory assessments are crucial to evaluate the real impact of plant-based systems on IAQ.
– In this project, different substrates and plants were tested, and it became clear that the substrate is an important ally in reducing gaseous pollutants, such as formaldehyde.
– The polluted air needs to be transported to the vicinity of the plant-based system for it to be able to take up the gaseous pollutants. Therefore, an active plant-based system is needed to maximize the impact of such systems on IAQ, since the air has to be forced through the system to achieve the biofiltration process.
– An indoor forest would be required to meet the minimum standards for ventilation rates in breathing zones with plants alone, without any extra mechanical ventilation.","Indoor Air Quality; Biofiltration; phytoremediation; Living Wall System; Indoor Environmental Quality; sensory assessment; plant monitoring; pollution sources; Clean air delivery rate; Formaldehyde; Plant-based system","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-477-6","","","","","","","","","Design of Construction","","",""
"uuid:60bf122a-6ce7-48b4-b094-986d05c48428","http://resolver.tudelft.nl/uuid:60bf122a-6ce7-48b4-b094-986d05c48428","Citizen Engagement with Open Government Data: A Model for Analyzing Factors Influencing Citizen Engagement","Purwanto, A. (TU Delft Information and Communication Technology)","Janssen, M.F.W.H.A. (promotor); Zuiderwijk-van Eijk, A.M.G. (copromotor); Delft University of Technology (degree granting institution)","2021","Governmental organizations are increasingly making non-privacy-restricted and non-confidential data publicly available on the internet, where anyone can freely use, reuse, and distribute it without any restrictions. Opening up government data enables citizens to engage with Open Government Data (OGD), i.e., to collaboratively convert OGD into valuable artifacts that benefit society. However, citizen engagement with OGD is contingent upon many factors; researchers often overlook these factors, and insight into them is needed to stimulate engagement. This study develops the OGD Citizen Engagement Model (OGD-CEM) that can be used to understand factors contributing to citizen engagement with OGD. The OGD-CEM model hypothesizes that (both extrinsic and intrinsic) motivations toward the engagement and perceived political factors toward OGD and government determine citizens’ intention to engage with OGD. Notably, the more citizens perceive that engaging with OGD will give them an advantage and provide the opportunity to broaden their social networks, the more inclined they will be to engage with OGD. Furthermore, the more citizens perceive that they can engage with OGD easily, that engaging with OGD is enjoyable, and that OGD engagement challenges them intellectually, the more likely they will engage with it. Finally, the more citizens perceive that their engagement with OGD will influence public policy, and the higher citizens’ trust in OGD and the governmental organizations that provide it, the more they will be inclined to engage with OGD.
This study is among the first to provide an integrated overview of the profiles of citizens who engage with OGD and the drivers and inhibitors of citizen engagement with OGD. This dissertation also contributes to science by integrating political participation and trust theories into the model, because engaging with OGD can yield outcomes with high political value. Moreover, this study is among the first to provide empirical evidence about citizen-led OGD engagement. Finally, this study is also among the first to empirically test assumptions held in open data research regarding the effects of OGD quality, political participation, and trust on the intention to engage with OGD.","open data, open government data, citizen engagement, citizen-led initiative, hackathon, factors, drivers, inhibitors, motivation, intention","en","doctoral thesis","","978-94-6366-471-4","","","","","","","","","Information and Communication Technology","","",""
"uuid:ae431c9d-cbb3-4b5f-88c9-86d4ee81cabe","http://resolver.tudelft.nl/uuid:ae431c9d-cbb3-4b5f-88c9-86d4ee81cabe","Development of a Closed-loop degaussing system: Towards magnetic unobservable vessels","Vijn, A.R.P.J. (TU Delft Mathematical Physics)","Heemink, A.W. (promotor); van Gijzen, M.B. (promotor); Dubbeldam, J.L.A. (promotor); Lepelaars, E.S.A.M. (copromotor); Delft University of Technology (degree granting institution)","2021","The focus of this thesis is to study the magnetic behavior of ferromagnetic materials, parameter estimates of the material properties of ferromagnetic materials, and to introduce a mathematical physics model that can accurately describe the temporal changes of the magnetic state of a naval vessel. The aspects mentioned are important steps in the development of a closed-loop degaussing system.","Magnetostatics; Hysteresis modeling; Magnetic signature; Mathematical modeling","en","doctoral thesis","","978-94-6366-483-7","","","","","","","","","Mathematical Physics","","",""
"uuid:2276c5ba-1ec3-4ff7-b0fb-141175f4c76f","http://resolver.tudelft.nl/uuid:2276c5ba-1ec3-4ff7-b0fb-141175f4c76f","Mycobacterium tuberculosis genomics: The Next Generation","Anyansi, C.A. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Abeel, T.E.P.M.F. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis represents one small step in the battle with TB, where we apply WGS approaches to explore different topics of TB research. Here we offer the global community a method to diagnose complex TB infections consisting of multiple distinct strains. We show that this functionality is necessary and has been overlooked by the TB community in research studies, which might have contributed to poor treatment outcomes. As the increase in samples resistant to multiple antibiotics is a pressing challenge, we explore global trends in antibiotic resistance evolution and have identified a particular order of resistance acquisition for 6 anti-tubercular drugs. This thesis also provides complete assemblies of 18 MTB genomes spanning 7 lineages that were used to analyze MTB’s largest and least studied gene family.","tuberculosis; genomics; bioinformatics; global health; sequencing","en","doctoral thesis","","978-94-6423-586-9","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:7641219e-259f-48a7-8b91-f6691a5e937d","http://resolver.tudelft.nl/uuid:7641219e-259f-48a7-8b91-f6691a5e937d","Scalable Simulation Models for Fractured Porous Media with Complex Geometries","Hosseinimehr, S.M. (TU Delft Numerical Analysis)","Vuik, Cornelis (copromotor); Hajibeygi, H. (copromotor); Delft University of Technology (degree granting institution)","2021","In various geo-engineering fields, accurate and scalable modeling of fluid and heat transport in subsurface fractured porous media is important in order to fulfill scientific, economic and societal expectations on successful field development plans. Such models, and the predictions they provide, contribute to efficient and safe operations on production or storage facilities. However, while attempting to provide accurate results, a number of key challenges exist. Over the past decades, the scientific community has been developing various advanced numerical techniques to address these challenges. In this work, a number of scientific contributions have been made to help address specific challenges, by developing scalable numerical methods for fractured porous media, some with complex geometries. The primary aim of these methods is to provide computational efficiency while delivering accurate results at a desired level. Chapter 1 starts with background information on why these computer models are needed and the key challenges that exist along the way. Moreover, the contributions of the scientific community in various aspects are highlighted. In addition, the numerical methods developed in this work are briefly pointed out in this chapter. Chapter 2 covers the governing equations as well as the mathematical and physical relations for various flow models in great detail. These equations also capture the effect of fractures and faults on the subsurface flow. Chapter 3 provides a detailed explanation of the discretized equations.
The fine-scale simulation approaches as well as the coupling strategies for the governing equations are described. Moreover, the linearization of the non-linear equations is covered as well. Afterwards, the embedded discrete fracture models are thoroughly explained, where the effect of fractures on the patterns of flow is explicitly captured. In chapter 4, the mentioned fracture models are extended and applied to geologically relevant field-scale models. This is an important part of this work, as real field-scale geological formations cannot be represented by Cartesian grid geometry (orthogonal box-shaped grids), but are better represented by unstructured grids (such as corner-point grids). Using a number of numerical results, the capabilities of the developed model are showcased. It is also discussed how this model can offer great flexibility in the gridding strategies for field-scale models. In the above-mentioned chapters, the focus is on the fine-scale approaches in the numerical simulations. However, despite the technological advancements in computer hardware and high-performance computing, the large size of real field-scale domains makes it impractical for current computers to provide simulation results using fine-scale numerical methods. From this point onward, the focus shifts towards the multilevel multiscale methods. Chapter 5 covers the static multilevel multiscale methods for simulation of fluid flow in fractured domains, where the domain is subdivided into coarser grids across multiple levels of coarsening. With the help of locally computed functions (also known as basis functions), an approximated solution is obtained for the entire domain, reducing the size of the linear system of equations and providing computational efficiency. In chapters 6 and 7, the dynamic multilevel method is described, in which different parts of the domain are treated and processed at different resolutions and coarsening levels.
Due to the different physical processes at various scales in the domain, while some parts of the domain can be treated at a lower resolution, certain regions need a higher resolution to capture the physics accurately, and this can dynamically change over simulation time. The dynamic multilevel method uses fine-scale high-resolution grids only when and where needed, providing robust and efficient performance while keeping the accuracy at a desired level. Various numerical tests compare the results of the dynamic multilevel method against those of the fine-scale approach. It is shown that accurate results can be obtained while using only a fraction of the high-resolution grids. For large-scale domains, such a model can offer a significant reduction in the size of the linear systems, providing optimal scalability. This dissertation is concluded in chapter 8, followed by the references used in this work.","Fractured porous media; Adaptive mesh refinement; Multiscale simulation; Dynamic multilevel methods; Scalable physics-based nonlinear simulation; Corner-point grid geometry; Geologically relevant models; Geothermal reservoirs","en","doctoral thesis","","978-94-6366-481-3","","","","","","","","","Numerical Analysis","","",""
"uuid:499dabd7-14be-4815-9597-5b9415db8f04","http://resolver.tudelft.nl/uuid:499dabd7-14be-4815-9597-5b9415db8f04","New measurement methods and physico-chemical properties of the MSFR salt","Mastromarino, S. (TU Delft RST/Reactor Physics and Nuclear Materials)","Kloosterman, J.L. (promotor); Rohde, M. (promotor); Delft University of Technology (degree granting institution)","2021","The Molten Salt Reactor (MSR) is one of the six types of Generation IV nuclear energy systems, with the main goals of sustainability, safety, reliability, economic competitiveness and proliferation resistance. This technology has spawned interest worldwide. Numerous universities, institutes and companies are carrying out research projects related to molten salt reactors. In Europe, the research is focused on the development of a fast-spectrum design, the Molten Salt Fast Reactor (MSFR). The peculiarity and innovation of the MSR technology is the use of liquid fuel: a molten salt mixture in which both fissile and fertile isotopes are dissolved. A number of technological challenges must be addressed for the design of the reactor, and the safety approach must be established. The highest priority issues are in the areas of fuel salt development, structural materials, on-site fuel processing and the licensing procedure. Fundamental research needs to be conducted to determine thermodynamic and kinetic data of fuel salts. One of the aims of the project that led to this thesis is the characterization of the fuel salt under normal and accidental conditions, providing the basis for the safety evaluation of the reactor. Reliable data on the thermal properties of molten salt mixtures are scarce. This research focuses on the experimental evaluation of some thermodynamic properties of molten salts.
The thesis presents new methods for measuring thermal diffusivity and viscosity, and established methods for measuring melting point and dissolution of the salt in water...","Molten salt reactor; Nuclear reactor; Chemico-physical properties; Viscosity; Melting point; Dissolution; Thermal diffusivity","en","doctoral thesis","","","","","","","","","","","RST/Reactor Physics and Nuclear Materials","","",""
"uuid:24872ead-0ee2-4854-9728-fe04f5957ca2","http://resolver.tudelft.nl/uuid:24872ead-0ee2-4854-9728-fe04f5957ca2","Quantum effects of superconducting phase","Erdmanis, J. (TU Delft QN/Nazarov Group)","Nazarov, Y.V. (promotor); Blanter, Y.M. (promotor); Delft University of Technology (degree granting institution)","2021","The microscopic theory of superconductivity proposed by John Bardeen, Leon Cooper, and John Robert Schrieffer has been a vital milestone of condensed matter physics and the basis of the development of new quantum technologies. It explained superconductivity as an emergent phenomenon arising from weak phonon-mediated attraction of individual electrons, giving rise to what we know as the superconducting condensate, characterized by a complex order parameter that has a modulus and a phase. In many superconductors, the modulus defines a spectral gap. Because of the gap, the superconductor cannot host low-energy single-electron excitations. When an electron excitation comes from a normal metal to a superconductor, it is reflected as a hole, and an incoming hole is reflected as an electron. This process is known as Andreev reflection. In the last 60 years, superconducting heterostructures have been extensively studied, the Josephson junction being the most prominent example. The electrons and holes in such structures perform never-ending roundtrips between the superconductor interfaces. This gives rise to an Andreev bound state spectrum, which determines the supercurrent-phase relation of the nanostructure. There is a recent upheaval of interest in nanostructures with multiple superconducting terminals. One of the subjects of interest is Weyl points: for four or more terminals, the Andreev bands can cross zero energy at a point in the space of superconducting phases. This is a direct analogy to band crossings in the Brillouin zone of topological materials.
Another subject is the peculiarities of the spectrum under conditions of the superconducting proximity effect, where a gapped-gapless transition in the space of superconducting phases has been observed. It has long been known that the superconducting phase is, in fact, a quantum variable that is canonically conjugated to charge. This is the basis of the development of superconducting quantum computing. The 2π periodicity of the superconducting phase also manifests itself in a non-trivial way. It enables events known as quantum phase slips, which promise metrological applications, for instance a metrological standard of current. Thus it is of great interest to understand the quantum effects of the superconducting phase in novel and more complex setups. This is the main theme of this thesis.","superconductivity; superconducting circuits; phase slips; topology","en","doctoral thesis","","978-90-8593-507-0","","","","","","","","","QN/Nazarov Group","","",""
"uuid:884cbc96-ebcd-488a-a820-0daae3962bb1","http://resolver.tudelft.nl/uuid:884cbc96-ebcd-488a-a820-0daae3962bb1","Nearshore waves and related wave overtopping in complex estuaries","Oosterlo, P. (TU Delft Hydraulic Structures and Flood Risk)","van der Meer, J.W. (promotor); Hofland, Bas (copromotor); Zijlema, Marcel (copromotor); Delft University of Technology (degree granting institution)","2021","This dissertation focuses on the Eems-Dollard estuary in the north of the Netherlands and contributes to the MVED (’Meerjarige Veldmetingen Eems-Dollard’) field measurement project in the area. The Eems-Dollard estuary is part of the Wadden Sea, a shallow shelf sea with barrier islands, deep tidal channels, shallow tidal flats and wetlands. The Eems-Dollard estuary is even more complex than the Wadden Sea, because of the deep channels, which run close to the dikes, the very shallow flats, and the funnel shape, which can lead to very high water levels during storms. A particular aspect of this area is that the dike design conditions consist of an offshore-directed wind and very obliquely incident waves, up to 80° relative to the dike normal. Almost no studies have been performed on the estuary and almost no measurements were available inside it.
This dissertation considers two main knowledge gaps, related to the modelling of wave propagation effects and measuring of (very) oblique wave run-up and overtopping, in a complex estuary. First, the performance of the SWAN wave model in predicting the wave conditions in a highly complex area, such as the Eems-Dollard estuary, has not been assessed before. Second, knowledge on and (field) measurements of the extra parameters (such as front velocities) necessary for the cumulative overload method are still scarce. This method considers the overtopping and erosion of the dike cover explicitly. Added to this, the few available (lab) investigations on wave run-up and overtopping during (very) oblique wave attack have not yet led to clear conclusions or guidelines. Therefore, the aim of this dissertation is to gain more insight into the uncertainties related to wave propagation processes and (very) oblique wave run-up and overtopping, which are important for the extreme wave loads on the dikes around the Eems-Dollard estuary.","Eems-Dollard estuary; field measurements; numerical wave modelling; refraction; diffraction; non-linear interactions; laser scanner; LIDAR; wave run-up; wave overtopping","en","doctoral thesis","","978-94-6416-915-7","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:0fd5201c-09cd-46e1-899c-47c0d9ad1f6b","http://resolver.tudelft.nl/uuid:0fd5201c-09cd-46e1-899c-47c0d9ad1f6b","Best-Worst Method: Inconsistency, Uncertainty, Consensus, and Range Sensitivity","Liang, F. (TU Delft Transport and Logistics)","Rezaei, J. (promotor); Brunelli, Matteo (promotor); Delft University of Technology (degree granting institution)","2021","It is our choices that make us who we are. To lead a better life, we have to make better decisions. Nowadays, decisions are increasingly made in complex contexts, in a host of different application domains. Because of that, we need more reliable decision analysis methodologies to improve our decisions. The ability to deal with multi-dimensionality is one of the critical requirements of the decision analysis methods that help us make better decisions. Multi-Criteria Decision-Making (MCDM) is one of the most popular approaches when it comes to formulating and solving decision-making problems, best-known for its ability to handle problems where a multitude of, often conflicting, criteria arise. As one of the latest MCDM methods, the Best-Worst Method (BWM) has been studied substantially and applied increasingly to various fields since its introduction, thanks to its simplicity, flexibility and general applicability. Despite its popularity, some significant issues of BWM have not yet been systematically investigated in existing literature, including: (i) the inconsistency in the preferences provided by Decision-Makers (DMs), (ii) the uncertain information embedded in the DMs’ judgements, (iii) problems in reaching a consensus in group decision-making, and (iv) the range sensitivity in an MCDM problem that is not taken into account in BWM. 
The main objective of this thesis is to develop an approach to measure, check and reduce inconsistency, to develop an approach to incorporate uncertainty in judgments, to develop a method to reach consensus, and to incorporate range sensitivity in BWM.","Best-Worst Method; inconsistency; uncertainty; consensus; range sensitivity","en","doctoral thesis","","978-94-6366-484-4","","","","","","","","","Transport and Logistics","","",""
"uuid:18ab40eb-5336-4498-a20b-b5d9a1663003","http://resolver.tudelft.nl/uuid:18ab40eb-5336-4498-a20b-b5d9a1663003","Coding Techniques for Noisy Channels with Gain and/or Offset Mismatch","Bu, R. (TU Delft Discrete Mathematics and Optimization)","Weber, J.H. (promotor); Aardal, K.I. (promotor); Delft University of Technology (degree granting institution)","2021","Data transmission is ubiquitous in all walks of life, ranging from basic home and office appliances like compact disc players and hard disk drives to deep space communication. More often than not, the communication and storage channels are noisy, and data might be distorted during transmission. However, noise is not the only disturbance during data transmission; information can sometimes be seriously distorted by the phenomena of unknown channel gain or offset (drift) mismatch. The conventional minimum Euclidean distance based detection, where the receiver picks a codeword from the codebook to minimize the Euclidean distance with the received word, performs poorly under gain and/or offset mismatch. Recently, a Pearson distance based detection was introduced, which is immune to unknown offset and/or gain mismatch, but the drawback is that it is rather sensitive to errors caused by the noise. This thesis investigates possible coding techniques to improve decoders’ performance in noisy channel conditions while maintaining the resistance against gain and/or offset mismatch. The results discussed in the thesis are divided into four parts, based on different assumptions on the gain and/or offset mismatch.","Coding techniques; Pearson distance; Euclidean distance; maximum likelihood decoding; offset; gain; fading; mismatch","en","doctoral thesis","","978-94-6332-812-8","","","","","","","","","Discrete Mathematics and Optimization","","",""
"uuid:375598ed-f138-471c-9a1e-e28ee3820855","http://resolver.tudelft.nl/uuid:375598ed-f138-471c-9a1e-e28ee3820855","Consolidation during laser assisted fiber placement: Heating, compaction and cooling phases","Çelik, O. (TU Delft Aerospace Manufacturing Technologies)","Dransfeld, C.A. (promotor); Teuwen, Julie J.E. (copromotor); Delft University of Technology (degree granting institution)","2021","Thermoplastic composites (TPCs) are emerging in the aerospace industry owing to their advantages over thermoset counterparts, such as infinite shelf life, high fracture toughness, chemical and solvent resistance, and recyclability. They are also suitable for fast, automated manufacturing, since chemical reactions are not required to obtain the final mechanical properties and shape of TPC structures. Laser-assisted fiber placement (LAFP) has become a promising manufacturing solution with the potential to reduce material scrap rates and labor time per component and to increase repeatability. Moreover, thanks to the re-melting capability of TPCs, in-situ consolidated structures (i.e., without a post-consolidation step in an oven, press or autoclave) can be produced using the LAFP process, which can reduce the capital and energy costs associated with post-consolidation. For the aerospace industry to widely accept and use LAFP with in-situ consolidation, sufficient part quality at a feasible processing speed must be achieved. One of the primary quality indicators is the consolidation quality, which can be quantified by the voids remaining within the composite laminate after manufacturing. Proper consolidation quality can only be obtained once the underlying mechanisms and their relation to the process parameters are understood.
The main goal of this thesis is therefore to improve the understanding of the consolidation process during LAFP with in-situ consolidation...","","en","doctoral thesis","","978-94-6421-578-6","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:3d240b0f-cf5b-4a63-86a1-c67bc24fe9b5","http://resolver.tudelft.nl/uuid:3d240b0f-cf5b-4a63-86a1-c67bc24fe9b5","Bringing classical mechanical resonators towards the quantum regime: A journey on developing integrated optomechanical devices and exploring experiments at high temperature","Guo, J. (TU Delft QN/Groeblacher Lab)","Groeblacher, S. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2021","The work in this thesis focuses on potential ways to bring a macroscopic mechanical resonator in a classical environment into the quantum regime with integrated cavity optomechanics...","Cavity optomechanics; mechanical resonators; quality factor; coherent feedback; integrated device; feedback cooling","en","doctoral thesis","","978-90-8593-500-1","","","","","","2023-01-01","","","QN/Groeblacher Lab","","",""
"uuid:b5134895-8d70-4b2f-a416-662891b2aa0d","http://resolver.tudelft.nl/uuid:b5134895-8d70-4b2f-a416-662891b2aa0d","Ethics of Trust-Inviting Digital Systems: Blockchain, Reputation-Based Platforms, and COVID19 Tracing Technologies","Teng, Y. (TU Delft Ethics & Philosophy of Technology)","van den Hoven, M.J. (promotor); Santoni De Sio, F. (copromotor); Delft University of Technology (degree granting institution)","2021","In the past decade, rapidly shifting and evolving technological systems that take trust as part of the design objective (we call such systems “trust-inviting systems”) have profoundly transformed the way we interact with others. Think of reputation-based platforms mediating interactions between strangers and other digital services provided by institutions, as well as blockchain systems, which work to shape, respectively, our trust relations with individuals, institutions, and technologies. However, the ways these systems facilitate trust are not always justifiable and can lead to negative moral and social consequences if certain parts of the systems are shown to be flawed. By reflecting on several present cases of trust-inviting systems that are experiencing significant tensions of trust, this thesis argues that trust-inviting systems essentially attempt to interpret, translate, and ultimately institutionalize the idea of trustworthiness in given contexts. However, the ways that trust-inviting systems institutionalize the characteristics of trustworthy persons, institutions, and technologies should not be accepted without scrutiny. For each case analysed here, a discrepancy is shown between the intention to improve trust and trustworthiness and the means adopted to facilitate them. This cleavage is argued to be primarily caused by a flawed understanding of the trust concepts and the resulting ill-suited design choices, as well as by problems emerging from the implementation process.
These issues are proposed to be ameliorated by a recalibration of the understanding of the trust concepts, which has the potential to remedy shortcomings of the current design and development of the systems with forward-looking strategies that take into account a wide range of societal needs, values, and technical properties. In short, it is argued that trust-inviting digital systems should be designed, developed, and deployed in ways that are aligned with the essence of the trust relation in context, in order to achieve proper trust and trustworthy systems. As such, the pitfalls identified in each case can be used as perspectives contributing to building affordances that foster warranted trust and foreclosing affordances that would undermine warranted trust.
Tm2+-doped Halides: In Prospect of Novel Luminescence Solar Concentrators","Plokker, M.P. (TU Delft RST/Luminescence Materials)","Dorenbos, P. (promotor); van der Kolk, E. (promotor); Delft University of Technology (degree granting institution)","2021","Tm2+-doped halides exhibit excellent properties for use in Luminescence Solar Concentrator (LSC) applications. Such LSCs consist of a glass waveguide with small Copper-Indium-Selenide (CIS) solar cells attached to its edges. The waveguide contains a luminescent coating based on a Tm2+-doped halide, whose 4f125d1 absorption bands are able to absorb a large fraction of the AM 1.5 solar spectrum. As the absorption occurs over the entire visible light range (380-750 nm) and with a largely uniform absorption strength, the coating can appear colourless and semi-transparent. Via the Tm2+ excited states dynamics, the absorbed sunlight is converted into the 2F5/2→2F7/2 emission, which has a wavelength of 1140 nm. Since this emission falls outside the range of the 4f125d1 absorption bands, no self-absorption losses can occur. These generally pose a significant limitation to the overall LSC efficiency. Subsequently, the converted light is re-emitted by the coating and propagates via total internal reflection through the waveguide, which directs it towards the CIS solar cells. These solar cells then photovoltaically convert it into electricity. LSC coatings based on Tm2+-doped halides can be applied as a sustainable window technology and, as part of Building-Integrated Photovoltaics (BIPV), can reduce the energy consumption of buildings, making them self-sustaining and less dependent on fossil fuels. [1] Although the optical and luminescence properties of primarily CaF2:Tm2+, CsCaX3:Tm2+ (X = Cl, Br, I) and MCl2:Tm2+ (M = Ba, Ca, Sr) have been investigated, a substantial number of other Tm2+-doped halides remain unexplored.
Above all, key topics such as the internal Quantum Efficiency (QE) and Tm2+ concentration quenching in such materials remain completely unaddressed. The former property has a direct influence on the overall LSC efficiency and is governed by the Tm2+ excited states dynamics of the material. In the past, the Tm2+ excited states dynamics have been studied in depth for Tm2+-doped CsCaX3 (X = Cl, Br, I) trihalide perovskites. [2-5] However, in these works no options other than quenching via multi-phonon relaxation have been considered, and no correlation was made to QEs, nor were such values ever reported. It is therefore the goal of this dissertation to investigate the Tm2+ excited states dynamics in various halides as a function of composition, temperature and time, and in connection to the 2F5/2→2F7/2 QE. Both a qualitative and a quantitative analysis are provided of the different 4f125d1→4f125d1 and 4f125d1→4f13 nonradiative quenching processes.","Tm2+-doped halides; luminescence; quenching; excited states dynamics; internal quantum efficiency; luminescence solar concentrator materials","en","doctoral thesis","","978-94-6423-417-6","","","","","","","","","RST/Luminescence Materials","","",""
"uuid:049932ab-4124-4639-a7e3-146ac4fd805d","http://resolver.tudelft.nl/uuid:049932ab-4124-4639-a7e3-146ac4fd805d","On Phylogenetic Encodings and Orchard Networks","Murakami, Yukihiro (TU Delft Discrete Mathematics and Optimization)","van Iersel, L.J.J. (promotor); Aardal, K.I. (promotor); Delft University of Technology (degree granting institution)","2021","Phylogenetic networks are a type of graph with vertices and edges, used to elucidate the evolutionary history of species. The fundamental goal of phylogenetic research is to infer the true phylogeny of species from raw data such as DNA sequences and morphological data. Most network inference methods require one to solve an NP-hard problem; furthermore, there is generally no guarantee of a unique network. One way of resolving this is to restrict our scope to networks within a certain class and to ask the following question.
What input data guarantees a unique network within this class?
Such a question brings us to the idea of encodings. A network class is encoded by a certain building block, such as displayed trees, splits, or induced inter-taxa distance matrices, if the building block distinguishes one network in the class from another. More precisely, no two networks in the same class may have the same set of building blocks. Often, encoding results give inspiration for polynomial-time algorithms for inferring networks within certain classes. Assuming one has data that corresponds to a network in that class, one may plausibly construct it as the unique network that is consistent with such information.
The following objectives were proposed. 1) Improve the methodology for characterising drought based on the phenomenon’s spatial features. 2) Develop a visual approach to analysing drought variations. 3) Develop a methodology for spatial drought tracking. 4) Explore machine learning (ML) techniques to predict crop-yield responses to drought. The four objectives were addressed and results are presented.
Finally, a scope was formulated for integrating ML and the spatio-temporal analysis of drought. The proposed scope opens a new area of potential for drought prediction (i.e. predicting spatial drought tracks and areas). It is expected that the drought tracking and prediction method will help populations cope with drought and its severe impacts.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-032-24650-5","","","","","","","","","GIS Technologie","","",""
"uuid:7505f2c0-5a20-4365-bc5a-1bcee82be6fd","http://resolver.tudelft.nl/uuid:7505f2c0-5a20-4365-bc5a-1bcee82be6fd","Model-based approaches for supporting equitable delta adaptation planning","Jafino, B.A. (TU Delft Policy Analysis)","Kwakkel, J.H. (promotor); Klijn, F. (promotor); Haasnoot, Marjolijn (copromotor); Delft University of Technology (degree granting institution)","2021","Climate change and anthropogenic pressures are threatening the livelihood of people in the world’s deltas. To secure a sustainable future, alternative courses of action should be well prepared in advance, and their efficacy should be evaluated against future uncertainties. The assessment of alternative adaptation plans often takes an aggregate perspective, where the projected outcomes are aggregated across all people and over the entire planning horizon. In reality, plans that are optimal at the aggregate level may benefit some people while harming others. This is because climate change impacts and socio-economic pressures vary in space and time, and they affect people differently depending on one’s adaptive capacity. Therefore, adaptation can exacerbate inequality.","","en","doctoral thesis","","978-94-6421-552-6","","","","","","","","","Policy Analysis","","",""
"uuid:db49c9b4-4654-4cfc-b4b7-9c6ae4882d39","http://resolver.tudelft.nl/uuid:db49c9b4-4654-4cfc-b4b7-9c6ae4882d39","On-chip integration of Si/SiGe-based quantum dots and electronic circuits for scaling","XU, Y. (TU Delft QCD/Vandersypen Lab)","Vandersypen, L.M.K. (promotor); Ishihara, R. (copromotor); Delft University of Technology (degree granting institution)","2021","With continuous breakthroughs in quantum science and technology in recent years, the development of quantum computers is moving from pure scientific research to engineering realization. Meanwhile, the underlying physical structures have also developed from the initial single qubit to multiple qubits or medium-scale qubit registers. Since qubits are operated by many sophisticated instruments under strict environmental conditions, a scalable solution is needed to support many qubits working simultaneously, so as to achieve the high computing speed required for a practical quantum computer.","floating gate; quantum dots; integration; scalability","en","doctoral thesis","","978-94-6419-370-1","","","","","","","","","QCD/Vandersypen Lab","","",""
"uuid:e42721d6-6bf6-47b3-a683-3497f3d917ae","http://resolver.tudelft.nl/uuid:e42721d6-6bf6-47b3-a683-3497f3d917ae","Digital-Intensive Up-Converters for Wireless Communication","Shen, Y. (TU Delft Electronics)","de Vreede, L.C.N. (promotor); Alavi, S.M. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis focuses on digital-intensive up-converters for sub-6GHz wireless communication. Nowadays, wireless cellular communication is entering its 5th generation (5G), driven by the demand for faster mobile access and higher data throughput. 5G utilizes larger modulation bandwidths, higher-order modulations, and (many) more transmitters and receivers than its predecessors, requiring higher system efficiency, flexibility, and integration of the transmitter (TX). An essential building block in the TX system is the RF modulator that converts the baseband data to an RF signal. New modulator architectures and circuits are required to handle the increased 5G modulation bandwidths linearly and energy-efficiently. Along with the progress in wireless communication, nano-scale CMOS technologies are advancing toward their physical limitations. Transistors have become smaller and more suited towards digital signal processing (DSP). Moreover, their high-frequency performance has improved, enabling RF analog/mixed-signal circuits. These improvements offer digital-intensive transmitters (DTXs) the opportunity to enter a territory that has been the traditional stronghold of analog-intensive TXs. 
Consequently, the research question of this dissertation is “What if we change the nature of the RF front-end, such that we can start truly benefiting from the power of CMOS in “digital” (switching) operations?” This thesis proposes new digital-intensive TX line-ups and up-converter architectures with enhanced linearity, bandwidth, and power efficiency to answer this question...","up-converters; digital-intensive transmitters (DTXs); digital power amplifiers (DPAs); polar transmitters; efficiency enhancement; phase modulators; direct-digital RF modulators (DDRMs); IQ-image; transmitter line-ups","en","doctoral thesis","","978-94-6419-378-7","","","","","","","","","Electronics","","",""
"uuid:cea59727-fda2-41e1-ba87-9404ef22202d","http://resolver.tudelft.nl/uuid:cea59727-fda2-41e1-ba87-9404ef22202d","CMOS circuits and systems for cryogenic control of silicon quantum processors","Patra, B (TU Delft QCD/Sebastiano Lab)","Charbon-Iwasaki-Charbon, E. (promotor); Babaie, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Quantum computers can provide exponential speedup in solving certain computational problems pertaining to drug discovery, cybersecurity, weather forecasting, etc. Although a quantum computer with just 50-qubits has been shown to surpass the computing power of the best supercomputers in specific applications, millions of qubits would be required for useful practical applications. One of the biggest obstacles in scaling from 50 qubits to millions of qubits is the interconnect bottleneck between solid-state qubits operating at 20 milliKelvin inside a dilution refrigerator and control electronics outside the dilution refrigerator connected via long coaxial cables. A 50-qubit processor requires hundreds of cables, digital to analog converters, mixers and amplifiers, clearly challenging the scalability of the system. The goal of this research is to build scalable quantum computers by operating CMOS control electronics inside the dilution refrigerator in proximity to quantum processors. The dissertation covers a broad aspect of cryogenic integrated circuit design spanning across devices, circuits and systems.","Cryogenic electronics; Cryo-CMOS integrated circuits; quantum computing; microwave passive components; cryogenic device modeling; oscillators; controller specifications; wideband RF front-end; scalable qubit controller; spin qubits; fidelity; quantum algorithm","en","doctoral thesis","","978-90-8593-501-8","","","","","","","","","QCD/Sebastiano Lab","","",""
"uuid:50dc25bc-1ca3-4d2a-ad18-8360812d017c","http://resolver.tudelft.nl/uuid:50dc25bc-1ca3-4d2a-ad18-8360812d017c","Development of an Efficient Modelling Approach to Support Economically and Socially Acceptable Flood Risk Reduction in Coastal Cities: Can Tho City, Mekong Delta, Vietnam","Ngo, Q.H. (TU Delft Hydraulic Structures and Flood Risk; IHE Delft Institute for Water Education)","Zevenbergen, C. (promotor); Ranasinghe, Roshanka (promotor); Pathirana, Assela (copromotor); Delft University of Technology (degree granting institution)","2021","Flooding is one of the most frequently occurring and damaging natural disasters worldwide. Quantitative flood risk management (FRM) in the modern context demands statistically robust approaches (e.g. probabilistic) due to the need to deal with complex uncertainties. However, probabilistic estimates often involve ensemble 2D hydraulic model runs resulting in large computational costs.
Additionally, modern FRM necessitates the involvement of a broad range of stakeholders via co-design sessions. This makes it necessary for the flood models, at least at a simplified level, to be understood by and accessible to non-specialists.
This study was undertaken to develop a flood modelling system that can provide rapid and sufficiently accurate estimates of flood risk within a methodology that is accessible to a wider range of stakeholders for a coastal city – Can Tho city, Mekong Delta, Vietnam.
A web-based hydraulic tool, Inform, was developed based on a simplified 1D model for the entire Mekong Delta, flood hazard and damage maps, and estimated flood damages for the urban centre of Can Tho city (Ninh Kieu district), containing the must-have features of a co-design tool (e.g. inbuilt input library, flexible options, easy to use, quick results, user-friendly interface). Inform provides rapid flood risk assessments with quantitative information (e.g. flood levels, flood hazard and damage maps, estimated damages) required for co-designing efforts aimed at flood risk reduction for Ninh Kieu district in the future.","Coastal cities; Quantitative flood risk assessment; Flood risk management; Climate change; Land subsidence; Can Tho city; Mekong Delta","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-0322-2914-0","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:90d06f1d-4f23-48cc-8f96-51500258020f","http://resolver.tudelft.nl/uuid:90d06f1d-4f23-48cc-8f96-51500258020f","Tools for the design of quantum repeater networks","Coopmans, T.J. (TU Delft QID/Elkouss Group)","Wehner, S.D.C. (promotor); Elkouss Coronas, D. (copromotor); Delft University of Technology (degree granting institution)","2021","Communication between remote quantum computers enables tasks that are unachievable with their conventional counterparts, such as unconditionally-secure communication or quantum computing in the cloud. Bridging long distances, where the communication fundamentally suffers from loss, can be achieved by splitting up the distance into segments and positioning so-called quantum repeaters in between. In this thesis, we develop tools to analyse how a large class of quantum repeater schemes will perform when implemented on real hardware suffering from time-dependent noise, in particular imperfect quantum memories for storing quantum information.","","en","doctoral thesis","","978-94-6384-269-3","","","","","","","","","QID/Elkouss Group","","",""
"uuid:abb9830d-4598-465f-802d-5e9bbe7b6ce0","http://resolver.tudelft.nl/uuid:abb9830d-4598-465f-802d-5e9bbe7b6ce0","Application of poly(ε-caprolactone-b-ethylene oxide) micelles combined with ionizing radiation in cancer treatment","Liu, H. (TU Delft RST/Applied Radiation & Isotopes)","Denkova, A.G. (promotor); Eelkema, R. (promotor); Delft University of Technology (degree granting institution)","2021","The focus of this thesis is on the utilization of biodegradable drug nanocarriers combined with ionizing radiation in cancer treatment. Polymeric micelles based on PCL-PEO block copolymers were selected as the main platform for delivering various therapeutic substances due to their size, degradability and easy preparation. This thesis can roughly be divided into two main parts. In the first part, we combined external radiation beams with PCL-PEO micelles applied as a nanoplatform for chemotherapeutic drugs and photosensitizers, and investigated the possibility of using radiation as a trigger for drug release from the micelles. In the second part, we focus on the cooperation of radionuclides and PCL-PEO micelles. We developed a chelator-free method to radiolabel micelles for determining their in vivo behavior, and evaluated the possibility of combining chemotherapy with radionuclide therapy using the micelles as a nanoplatform. In both parts we attempted to unravel the mechanisms behind the observed phenomena, so as to be able to adjust the nanocarriers accordingly.
In this thesis, new models and methods have been developed to support enhanced decision making, in particular for the construction management of offshore wind assets. These decisions are subject to various types of risks and uncertainties, ranging from environmental uncertainties and supply chain disruptions to the stochastic duration of construction activities. Therefore, these should be properly taken into account in construction management models, using performance and/or expert data from past construction projects.
In this thesis, two types of data availability have been distinguished: (i) where sufficient relevant performance data is available and (ii) where relevant past performance data is rather limited. In the first case, statistical methods are used, such as Copula functions to model the dependence between metocean variables and Bayesian Networks to model the dependence between subsequent construction activities. In the second case, expert knowledge and data are used to quantify the uncertainty using a mathematical aggregation method for expert judgments (i.e. Cooke’s classical model). The different methods have been applied to several test cases to investigate the associated cost and time impact. As a result of this research, different tools and an open-source software package were developed. These can also be used in other fields of application that require mathematical aggregation of expert judgments.
Finally, it can be concluded that the state-of-the-art developments within this thesis substantially contribute to decision making under uncertainty, so that construction management strategies are optimized and the life-cycle value of offshore wind energy assets is thereby maximized.","Probabilistic simulation; Uncertainty quantification; Offshore wind assets; Decision support; Construction management; Asset management","en","doctoral thesis","","978-94-6384-268-6","","","","","","","","","Integral Design & Management","","",""
"uuid:91047108-e77e-4d3b-956e-1e97ee8f8c47","http://resolver.tudelft.nl/uuid:91047108-e77e-4d3b-956e-1e97ee8f8c47","Superconducting Integrated Circuits at Sub-millimeter Wavelengths","Hähnle, S.A. (TU Delft Tera-Hertz Sensing)","Baselmans, J.J.A. (promotor); Endo, A. (copromotor); Delft University of Technology (degree granting institution)","2021","Superconducting integrated circuits (SICs) represent a natural step forward for devices operating at frequencies from microwave up to sub-millimeter wavelengths. They offer massive miniaturization via compact design based on low-loss superconducting transmission lines. At sub-millimeter wavelengths, the development of SICs is driven by astronomical instruments, where it could allow the realization of an imaging spectrometer, combining simultaneous imaging and spectroscopy capabilities into a single instrument analogous to integral field units in the infrared and optical regimes. Such an imaging spectrometer can be achieved with SICs by integrating the required elements, such as spectral filters and polarizers, with the detectors onto a single chip. Without this integration, the dispersive system for even a single spatial pixel at these wavelengths would be prohibitively large and could not be realistically scaled up to allow imaging. Astronomical signals are exceedingly weak, typically requiring many nights of exposure to get a good signal-to-noise ratio. It is therefore imperative that the instrument has minimal losses before its detectors. As a consequence, the losses of each element in the SIC need to be minimized, which requires careful characterization of the individual elements, including the antenna, filters, detectors and connecting transmission lines. 
The primary focus of this thesis lies on the experimental characterization of the wideband antenna and the low-loss superconducting transmission lines.","Sub-millimeter systems; Transmission Lines; Dielectric Loss; Radiation Loss; Sub-millimeter Loss; Fabry–Pérot; Low-temperature Detectors","en","doctoral thesis","","978-94-6421-528-1","","","","","","2021-11-15","","","Tera-Hertz Sensing","","",""
"uuid:782fcd3e-0db4-42ee-965a-c180586759f4","http://resolver.tudelft.nl/uuid:782fcd3e-0db4-42ee-965a-c180586759f4","Preventing major hazard accidents through barrier performance monitoring","Schmitz, P.J.H. (TU Delft Safety and Security Science)","Reniers, G.L.L.M.E. (promotor); Swuste, P.H.J.J. (copromotor); Delft University of Technology (degree granting institution)","2021","Foreseeing or even predicting major accidents is understandably challenging, both for practitioners involved and for safety scientists and other academics. Understanding these events and trying to prevent them is a primary goal of a safety theory. Major hazard-related accidents rarely occur, but when they do, they can cause many casualties and injuries, and have major financial consequences due to production loss, material damage to the installation and/or environmental damage. Ultimately, major hazard-related accidents may ruin the company involved. Process safety is becoming more and more important in the process industry and is strongly linked to reliability, quality, productivity, security of supply, and good business....","process safety; indicator; ammonia; barrier; bowtie","en","doctoral thesis","","978-94-6419-348-0","","","","","","","","","Safety and Security Science","","",""
"uuid:34d6b261-5a55-45f6-8103-1d900cc98dc9","http://resolver.tudelft.nl/uuid:34d6b261-5a55-45f6-8103-1d900cc98dc9","Evolution of the Greenland Ice Sheet with the Global Climate as modelled with CESM2-CISM2","Muntjewerf, L. (TU Delft Physical and Space Geodesy)","Klees, R. (promotor); Vizcaino, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Human-induced climate change is one of the challenges of our time. The increasing global mean temperature, shifts in precipitation patterns, and the rising sea level threaten ecosystems and natural resources, and pose a great risk to society at large. Policymakers need information about the expected impacts that is as accurate as possible, in order to make adequate climate change mitigation and adaptation policies.
The Greenland Ice Sheet (GrIS) appears to be sensitive to the changing climate. At present, the GrIS is losing mass at an accelerated pace. This is the focus of this thesis. The key terms of the GrIS mass balance are (1) the Surface Mass Balance (SMB) and (2) the ice discharge at glacier fronts. At the surface, the ice sheet gains mass through precipitation and loses mass through meltwater runoff and through sublimation. Ice discharge is a loss term regulated by ice flow. When the mass balance is negative, the ice sheet loses mass contributing to sea level rise.
The GrIS, however, is not an isolated environment. It is an integral part of the Earth system. Interactions and feedback mechanisms between the ice sheet and various parts of the Earth system affect the ice sheet’s mass loss. The future behavior of the GrIS is a major source of uncertainty in the projections of 21st century sea level rise. The basis for this lies, among other things, in an incomplete understanding of the interactions between the ice sheets and other components of the Earth system.","Greenland Ice Sheet; Sea Level Rise; Anthropogenic Climate Change; Coupled Ice-Sheet/Earth System Modelling","en","doctoral thesis","","978-94-6416-917-1","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:a2658b9e-c380-4a36-8049-a0af26fe3e54","http://resolver.tudelft.nl/uuid:a2658b9e-c380-4a36-8049-a0af26fe3e54","Molten Salt Reactor Chemistry: Structure and Equilibria","Ocadiz flores, J.A. (TU Delft RST/Reactor Physics and Nuclear Materials)","Konings, R. (promotor); Smith, A.L. (copromotor); Delft University of Technology (degree granting institution)","2021","Molten salts are a class of ionic liquids which have in recent years been the focus of extensive fundamental research, given that they are a versatile class of reaction media with a variety of appealing thermophysical and thermochemical properties (e.g. melting points, heat capacities, vapor pressures, densities, thermal conductivities, etc.) suited for a variety of industrial applications, in particular at high temperature. The most well-known is perhaps the production of materials as important as aluminum and sulfuric acid, yet thermal energy storage is also a notable application. One of the most noteworthy applications of molten salts is as fuel and coolant for a type of nuclear fission reactor known as the Molten Salt Reactor (MSR). In its most general sense, an MSR is a class of nuclear reactor in which fissile (235U, 233U, 239Pu) and/or fertile isotopes (e.g. 232Th, 238U) are dissolved in a carrier salt. The resulting mixture acts both as fuel and coolant. The two prototypes which have been built in the past used a fluoride fuel, so historically most work has concentrated on fluoride salt mixtures. However, modern-day reactor developers are also interested in chloride fuels, so both molten salt fuel families are relevant at present...","Molten Salt Reactor; molten salts; actinide fluorides; actinide chlorides; CALPHAD; Differential Scanning Calorimetry (DSC); X-ray diffraction (XRD); Extended X-Ray Absorption Fine Structure (EXAFS); Polarizable Ion Model (PIM); Molecular Dynamics (MD)","en","doctoral thesis","","978-94-6384-270-9","","","","","","","","","RST/Reactor Physics and Nuclear Materials","","",""
"uuid:2ba49af1-fa17-476c-88f4-97783ca4e39a","http://resolver.tudelft.nl/uuid:2ba49af1-fa17-476c-88f4-97783ca4e39a","Approximately Optimal Resource Management for Multi-Function Radar: Algorithmic Solutions Using a Generic Framework","Schöpe, M.I. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Driessen, J.N. (copromotor); Delft University of Technology (degree granting institution)","2021","Recent advances in Multi-Function Radar (MFR) systems led to an increase in their degrees of freedom. As a result, modern MFR systems are capable of adjusting many parameters during runtime. An automatic adaptation of the radar system to changing situations, like weather conditions, interference, or target maneuvers, is often mentioned in the context of MFR and is usually called Radar Resource Management (RRM). This thesis aims at developing a generic framework and approximately optimal algorithmic solutions for solving RRM problems. This is achieved by formulating the sensor tasks as Partially Observable Markov Decision Processes (POMDPs). Although the focus is on MFR, the approach is not limited to such sensor systems and has broader applicability.
In Chapter 2, a first step is taken by investigating Lagrangian Relaxation (LR) and the subgradient method for optimally distributing the sensor resources to the different tasks in a multi-target tracking scenario. A constrained optimization problem is formulated. Using LR, the constraints can be included in the cost function. In a time-invariant scenario, it is shown that the proposed Optimal Steady-State Budget Balancing (OSB) algorithm will lead to balanced budgets based on track parameters like maneuverability and measurement uncertainty. The time-invariant scenario is a special case of general tracking scenarios, and the presented solution can be seen as the optimal POMDP solution in that case. Since real-world applications quickly lead to time-varying scenarios, it is demonstrated how the approach can be extended to such cases. Finally, the proposed method is compared with other budget assignment strategies.
Subsequently, the tracking tasks are explicitly formulated as POMDPs, and the novel Approximately Optimal Dynamic Budget Balancing (AODB) algorithm is proposed in Chapter 3. The algorithm applies a combination of LR and Policy Rollout (PR). PR is a Monte Carlo sampling method for POMDPs to find the expected future cost. Due to its generic architecture, the framework can be applied to different radar or sensor systems and cost functions. In a time-invariant scenario, the algorithm calculates a solution close to the optimal steady-state solution, as presented in Chapter 2. This is shown through simulations of a two-dimensional tracking scenario. Moreover, it is demonstrated how the algorithm dynamically allocates the sensor time budgets to the tasks in a changing environment in a non-myopic fashion. Finally, the algorithm's performance is compared with different resource allocation techniques.
Based on the previous results, Chapter 4 conducts a detailed investigation of the computational load of the AODB algorithm. It is shown how the choice of several input parameters influences computational performance. Additionally, Model Predictive Control (MPC) is applied in the same framework as an alternative POMDP solution method. Compared to stochastic optimization methods such as PR, the computational load is dramatically reduced while the resource allocation results are similar. This is shown through simulations of dynamic multi-target tracking scenarios in which the cost and computational load of different approaches are compared.
So far, this thesis has used tracking scenarios to demonstrate the validity of the proposed algorithms. Chapter 5 shows how to apply the proposed framework and algorithmic solution to a multi-target joint tracking and classification scenario. It is shown that tracking and classification can be considered in a single task type. Furthermore, it is shown how the task resource allocations can be jointly optimized using a single carefully formulated cost function based on the task threat variance. Multiple two-dimensional radar scenarios demonstrate how sensor resources are allocated depending on the current knowledge of the target position and class.
Chapter 6 extends the single-sensor approach shown in the previous chapters to multiple sensors and demonstrates the usefulness of the proposed algorithm in two different multi-sensor multi-target tracking scenarios. The first scenario considers a generic surveillance situation. An approximately optimal approach based on the previously proposed algorithm is formulated assuming a central processor. Subsequently, a distributed implementation is introduced that converges to the same results as the centralized implementation and requires less computational resources. The performance of the proposed approach for both centralized and distributed implementation is demonstrated through dynamic tracking scenarios. The second scenario focuses explicitly on an automotive application. The proposed generic framework and algorithmic solution are used to allocate scarce resources across multiple mobile sensor nodes. A central system manages the nodes' transmission and shares sensing data with other sensor nodes if this improves the overall track accuracy. The proposed method allocates time and frequency resources. Through simulation of a typical traffic situation, the validity of the approach is demonstrated.
This thesis shows that the application of the proposed novel generic framework and algorithmic solution increases the performance w.r.t. heuristic solutions. Furthermore, it is demonstrated that the proposed framework allows the user to exchange elements such as cost function or POMDP solution method to adjust it to specific needs. The proposed method can be applied in many different areas involving different types of sensors. Possible applications include automotive scenarios, such as autonomous driving or traffic monitoring, (maritime) surveillance, and air traffic control.","Radar Resource Management; Lagrangian Relaxation; Partially Observable Markov Decision Process; Policy Rollout","en","doctoral thesis","","978-94-6384-263-1","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:c717460d-eb79-495a-b239-4d030d0412c6","http://resolver.tudelft.nl/uuid:c717460d-eb79-495a-b239-4d030d0412c6","Towards a bottom-up reconstitution of the nuclear pore complex","Fragasso, A. (TU Delft BN/Cees Dekker Lab)","Dekker, C. (promotor); Delft University of Technology (degree granting institution)","2021","At first glance nanopores may appear simple, almost intuitive, to understand, given that they are, quite literally, ‘just’ very small pores in a membrane. In fact, one may even wonder why we need trained scientists at all to study such seemingly simple entities. The short answer is that nanopores, as the word suggests, are nanoscale entities and, as such, one cannot directly see or experience any of the events that occur down there. The long answer can be found in this thesis. Here, I present and discuss a wide array of nanopores, from biological nanopores like the nuclear pore complex (NPC), to solid-state nanopores, and DNA-origami nanopores. While the central focus of my research is to understand the inner workings of the NPC, a short journey into the world of ion transport in solid-state nanopores is first undertaken, with special emphasis on the random fluctuations of the ion flow within the nanopore, referred to as current noise. Next, I introduce the concept of biomimetic nanopores, where a solid-state nanopore is ‘camouflaged’ by coating its inner surface with purified proteins, resulting in an entity that behaves somewhat like a real NPC. Biomimetic nanopores have enabled us to mimic, study, and gain new insights into how the real NPC works, and bear great potential for further developments and discoveries.","nuclear pore complex; nanopores; FG nups; intrinsically disordered proteins; biomimetics; DNA origami; 1/f noise","en","doctoral thesis","","978-90-8593-494-3","","","","","","","","","BN/Cees Dekker Lab","","",""
"uuid:a698459b-c38b-4c0f-9f6b-9b186de6c946","http://resolver.tudelft.nl/uuid:a698459b-c38b-4c0f-9f6b-9b186de6c946","From the Village to the Neighbourhood: The transformation of open spaces through public housing","Garcia Fernandez, A. (TU Delft Space & Type)","van Gameren, D.E. (promotor); Meyer, Han (promotor); Sepulveda Carmona, D.A. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis examines urban transformation and opportunities for urban upgrading through the rehabilitation and recycling of public housing neighbourhoods built in intermediate cities with a rural base and slow growth.
The research explores the past and present of the residential estates of the main Galician industrial cities in order to discover, on different scales in four chapters, how the public housing projects built in the second half of the twentieth century were formed, how their urban integration process has taken shape, what the open spaces associated with public housing are like, and whether they have served as a bridge between the public, the community and the private. It ends with recommendations that can help in participative processes of integral urban regeneration, for better articulation, integration and urban cohesion of the open spaces included in the public project.
After a first chapter introducing the problem, the main questions, the structure of the research, the methodology used, and the theoretical and analytical framework, the second chapter studies the context within which the public housing appeared, and how it was integrated into the consolidated city in Europe, Spain and Galicia, in order to explain the location where the public housing was developed, as a basis for analysing the case studies.
Chapter three evaluates the formative potential of the residential estates in the urban fabric at the scale of the neighbourhood, studying the initial formation of the estates, considered as peripheral fragments, and answering the question of how they affect the inherited territorial structure in the urban setting of the estate. It also shares with chapter four the study of the creation of relationship spaces from the construction of the estates, which allows us to observe their urban arrangement, responding to how the distribution of the built elements and open spaces of the neighbourhood affects the urban cohesion of the public housing project.
Chapter four studies current open spaces on two scales, within the estate and in its surroundings. On the first scale, we study the creation of spaces for social interaction in the housing estates, which allows us to observe their urban fit, responding to how the distribution of the built elements and open spaces of the neighbourhood affects the spatial cohesion of the estate. The second scale studies the current configuration of the open spaces inside the estate, responding to how the configuration of the space between buildings influences the quality of the spaces for social interaction.
Chapter five shares the parameters used in the analysis of the case studies, answering the question of what conclusions can be drawn from the comparison of the case studies, which spaces of opportunity are found in the case studies, and what framework for discussion offers a starting point for establishing intervention proposals for the physical regeneration of the estate based on improving the spaces of opportunity.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-479-0","","","","","","","","","Space & Type","","",""
"uuid:dfd526c6-c606-4562-b1e5-ccb984ea1170","http://resolver.tudelft.nl/uuid:dfd526c6-c606-4562-b1e5-ccb984ea1170","Modelling collision consequences of unmanned aircraft systems on humans","Rattanagraikanakorn, B. (TU Delft Air Transport & Operations)","Blom, H.A.P. (promotor); Sharpanskykh, Alexei (copromotor); Schuurman, M.J. (copromotor); Delft University of Technology (degree granting institution)","2021","The unmanned aircraft system (UAS) is an emerging technology that is now gaining traction around the world. UAS operations are expected to be integrated into very-low-level rural and urban airspace through the novel concept of unmanned traffic management (UTM). For such operations to become a reality, one of the major challenges that needs to be overcome is the assessment and, subsequently, the mitigation of the safety risk posed to third parties on the ground.
Third parties on the ground refer to people or pedestrians who reside within the area of operation but are not involved in it. To assess this risk, an approach called third-party risk (TPR) assessment has been developed in many studies. Predicting the TPR of UAS operations will allow operators, authorities and stakeholders to make well-informed decisions on the deployment of UAS operations. If the TPR level of the designed operational concept exceeds the acceptable risk level, risk mitigation can be applied.
In a typical TPR model, one of the important sub-models is the collision consequence model, used to predict the probability of fatality (PoF) of a human subjected to a UAS collision. This sub-component requires a good understanding of human fatality due to injury inflicted by a UAS collision, which is, at the time of writing, still under-studied.
This thesis addresses a key component of the TPR framework: the quantification of the consequences of UAS collisions on humans on the ground. The central aim of this thesis is to develop a quantitative, model-based consequence model of UAS collisions on humans. To achieve this main aim, a series of interrelated research studies was performed in a systematic way.","unmanned aircraft system; impact modelling; multibody system; human; injury","en","doctoral thesis","","978-94-6366-463-9","","","","","","","","","Air Transport & Operations","","",""
"uuid:3d31068b-259a-4798-8723-be755cc15d23","http://resolver.tudelft.nl/uuid:3d31068b-259a-4798-8723-be755cc15d23","Phosphorus recovery from iron-coagulated sewage sludge","Prot, T.J.F. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Wilfert, P.K. (copromotor); Delft University of Technology (degree granting institution)","2021","Fertilizers are vital for our society since we use them to grow plants. These plants produce fruits and vegetables that we can consume or use as feedstock for animals, ending up on our plates. In short, we need fertilizers to make our food, and we are using increasing quantities of them as the population grows. Phosphorus is an essential constituent of fertilizers and a critical element for every living organism since it is present in DNA and bones. The current approach is to mine phosphate rock to make fertilizer. This strategy is so far the only option we have to produce phosphorus in large quantities, but it is polluting, and the resources are not endless. In our society, we are trying to replace fossil fuels with renewable energy. However, this cannot be done for phosphorus; nothing can substitute for it. Therefore, we need to find alternatives to obtain phosphorus without further damaging the planet...","","en","doctoral thesis","","","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:6a8cabb6-ce9b-42bf-a356-4be99e868cfe","http://resolver.tudelft.nl/uuid:6a8cabb6-ce9b-42bf-a356-4be99e868cfe","Occurrence and ecology of antibiotic Resistance determinants in wastewater Treatment systems","Pallares Vega, R. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Weissbrodt, D.G. (copromotor); Schmitt, H. (promotor); Delft University of Technology (degree granting institution)","2021","The rise of antibiotic resistant bacteria threatens the existing status quo of successful treatment of infectious diseases, leading to substantial personal and economic losses. Wastewater, carrying antibiotic resistant microorganisms of fecal origin, is an important route for disseminating anthropogenic-related resistant bacteria to natural ecosystems. Wastewater treatment plants (WWTPs), collecting and treating sewage, offer an opportunity to mitigate such dissemination. However, because of their intrinsic characteristics, namely constant nutrient inputs, presence of selectors in sewage (i.e., antibiotics), and high bacterial densities within the biological treatment, these facilities have been postulated as environments selecting for antibiotic resistant bacteria and fostering horizontal exchange of antibiotic resistance genes (ARGs). Unravelling the ecology of antibiotic resistant determinants in WWTPs is essential to identify which stages or technologies are critical for their proliferation or removal and pinpoint possible additional or alternative intervention strategies. This thesis aims to contribute to such a quest with a multidimensional approach. The work presented here involves extensive field studies combined with qPCR measurements and statistical analysis to assess how WWTPs affect antibiotic resistant determinants.
In addition, culture and molecular assays are used to investigate the conjugal exchange of plasmid-borne antibiotic resistance in wastewater environments.","","en","doctoral thesis","","978-94-6423-473-2","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:8e7b13cb-6c1c-43e8-bcfc-22e588c862b6","http://resolver.tudelft.nl/uuid:8e7b13cb-6c1c-43e8-bcfc-22e588c862b6","Waterborne platooning: A viability study of the vessel train concept","Colling, A.P. (TU Delft Ship Design, Production and Operations)","Hekkenberg, R.G. (promotor); van Hassel, Edwin (copromotor); Delft University of Technology (degree granting institution)","2021","The research in this thesis is a viability study of a waterborne platooning concept called the Vessel Train (VT). This manuscript describes the VT characteristics. It explains how the VT’s advantages are reaped, as well as the VT’s challenges that need to be considered and dealt with. To assess the potential of the platooning concept, an inland navigation case and a short sea shipping case are studied. These cases provide detailed information on viable operating requirements for different vessel types and operating conditions. A general outlook on geographical differences in application is given by analysing the differences between European inland corridors, bridge interaction in urban areas and the global application potential.","Waterborne platooning; Vessel Train; Short sea shipping; Inland waterway transport; Viability study; Semi-autonomous navigation","en","doctoral thesis","","978-94-6384-237-2","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:1c2e90fd-0273-4f1a-a9ed-56b3ca69b99e","http://resolver.tudelft.nl/uuid:1c2e90fd-0273-4f1a-a9ed-56b3ca69b99e","Extreme convective precipitation events in a changing climate","Lochbihler, K.U. (TU Delft Atmospheric Remote Sensing)","Siebesma, A.P. (promotor); Lenderink, G. (copromotor); Delft University of Technology (degree granting institution)","2021","As I am writing this, parts of Central Europe are plagued by a series of intense rainfall events that, in less than two days, turn rivers into powerful streams, cause flooding, damage infrastructure and property, and harm people. The number of such extreme events, which are associated with high economic losses and casualties, has been increasing for decades. How is this happening? And what is the relation between extreme convective precipitation events and increasing temperatures, such as those we are currently experiencing due to climate change?
To tackle these questions, consider the following simplified version of a convective rain event. We imagine a column of air, a part of the atmosphere with a cloud inside it. Near the surface, air streams into the column, where it starts rising vertically. While gaining height, the air cools, until at a certain level the contained water vapor will condense in the form of small cloud droplets. From this level, the cloud base, the air mass continues ascending while the amount of condensed water keeps increasing, so that the cloud droplets grow in size. Finally, when they grow sufficiently large, precipitation will set in, and, in the most extreme case, all the cloud water will reach the ground as rain. Following this conceptual model, one way to increase the amount of precipitation is to increase the moisture content of the air that enters the cloud.","","en","doctoral thesis","","978-94-6416-879-2","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:ab648631-4b9a-4c4b-add6-5696b3cf0165","http://resolver.tudelft.nl/uuid:ab648631-4b9a-4c4b-add6-5696b3cf0165","The effect of microstructure on micro- and macro-scale corrosion and passivation behaviour of low-alloyed ferrous materials","Yilmaz, A. (TU Delft Team Yaiza Gonzalez Garcia)","Sietsma, J. (promotor); Gonzalez Garcia, Y. (copromotor); Delft University of Technology (degree granting institution)","2021","","Microstructure; Corrosion; Passivation; Local electrochemistry; Surface analysis","en","doctoral thesis","","","","","","","","","","","Team Yaiza Gonzalez Garcia","","",""
"uuid:32e06afa-1aac-4bb7-a3a5-86ae107ac4d1","http://resolver.tudelft.nl/uuid:32e06afa-1aac-4bb7-a3a5-86ae107ac4d1","Het gemeentelijk Investeringsraadsel: Het verband tussen gemeentelijke investeringen en lokale welvaartsontwikkeling","Verburg, P.J. (TU Delft Housing Systems)","Boelhouwer, P.J. (promotor); Delft University of Technology (degree granting institution)","2021","This dissertation addresses the question of the extent to which local prosperity development in the Netherlands is determined by the investment efforts of municipalities. At the project level, something is usually known about the societal effects, but regarding the impact of the total investment effort, both fundamental and applicable knowledge is lacking. It is a riddle that remains to be solved. The aim of this dissertation is to contribute to solving it.","Dutch municipalities; municipal investments; local prosperity; investment effects; house prices; integral policy","nl","doctoral thesis","","978 90 9035255 8","","","","","","","","","Housing Systems","","",""
"uuid:5e00069e-2ab3-481d-b533-cf4d31f3f1b8","http://resolver.tudelft.nl/uuid:5e00069e-2ab3-481d-b533-cf4d31f3f1b8","A novel approach to sludge treatment using microwave technology","Kocbek, E. (TU Delft BT/Environmental Biotechnology)","Brdjanovic, Damir (promotor); Garcia, H. (copromotor); Delft University of Technology (degree granting institution)","2021","Sludge transportation costs can represent a large fraction of the expenses associated with municipal and faecal sludge management. These costs can be mitigated through the use of thermal drying approaches to reduce the sludge volume. This thesis describes the application of a novel microwave-based pilot-scale unit as an alternative technology for the sanitisation and drying of sludge from municipal wastewater treatment plants and on-site sanitation facilities. The potential economic benefits of volumetric heating, moisture levelling, and increased liquid and vapour migration from the interior to the surface of the product underpin the increasing interest in the use of microwave technology during sludge treatment processes. According to the findings of this study, these factors lead to faster processing times, improved drying rates, and a reduced physical footprint. Furthermore, microwave technology operates as a standalone treatment unit. When coupled with mechanical dewatering techniques and membrane separation technology, it can increase the reliability of the technology employed in the treatment of sludge while recovering valuable resources through an agricultural or thermochemical application such as (co-)combustion. The results of this work demonstrate the strong feasibility of applying microwave-based technology within initiatives designed to protect the environment and safeguard public health.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-032-21799-4","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:2d821f6a-67f2-46ed-8518-6e6fa07580d7","http://resolver.tudelft.nl/uuid:2d821f6a-67f2-46ed-8518-6e6fa07580d7","Towards Artificial Empathic Memory: Accounting for the Influence of Personal Memories in Automatic Predictions of Affect","Dudzik, B.J.W. (TU Delft Pattern Recognition and Bioinformatics)","Neerincx, M.A. (promotor); Hung, H.S. (promotor); Broekens, D.J. (copromotor); Delft University of Technology (degree granting institution)","2021","","Affective Computing; User Modeling; Context-awareness; Emotion recognition; Cognitive models","en","doctoral thesis","","","","","","","","2021-11-04","","","Pattern Recognition and Bioinformatics","","",""
"uuid:e164a3b3-75fb-4879-ba5e-27f7f50b3e6b","http://resolver.tudelft.nl/uuid:e164a3b3-75fb-4879-ba5e-27f7f50b3e6b","Towards low-cost PEM fuel cells: Interfacial effects and material dynamics of a non-PGM electrocatalyst","Rangel Cardenas, A.L. (TU Delft RST/Storage of Electrochemical Energy; TU Delft ChemE/Advanced Soft Matter)","Koper, G.J.M. (promotor); Kelder, E.M. (promotor); Picken, S.J. (promotor); Delft University of Technology (degree granting institution)","2021","In this thesis, several aspects of PEM fuel cells are discussed, ranging from the atomic scale, e.g. the crystal and electronic structure and dynamics of a non-noble metal electrocatalyst material such as (H)zLiMn2O4 (protonated spinel lithium manganese oxide), to meso- and macro-scale effects such as transport resistances at the membrane/electrode interface.","PEM fuel cell; oxygen reduction (ORR); hydrogen oxidation reaction (HOR); solid-state NMR; Interfacial resistances; Water transport; spinel LMO; protonated spinel LMO (HLMO); Electrocatalysis; X-ray absorption spectroscopy; Neutron diffraction","en","doctoral thesis","","978-94-6384-264-8","","","","","","","","","RST/Storage of Electrochemical Energy","","",""
"uuid:eeebed12-090f-4a4b-8450-d05a763fc9da","http://resolver.tudelft.nl/uuid:eeebed12-090f-4a4b-8450-d05a763fc9da","Micro Energy Harvesting from Low-Frequency Vibrations: Towards Powering Pacemakers with Heartbeats","Blad, Thijs","Herder, J.L. (promotor); van Ostayen, R.A.J. (copromotor); Delft University of Technology (degree granting institution)","2021","By extracting power from ambient motion, miniaturized vibration energy harvesters can provide an alternative to batteries in powering the billions of low-power devices we use today. Especially for implanted medical devices, these alternative power sources may be an attractive option to overcome the limitations on longevity and the replacement costs and inconvenience that result from the use of batteries....","Vibration energy harvesting; Compliant mechanisms; Buckling","en","doctoral thesis","","978-94-6366-468-4","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:e9c2ef42-c17b-4578-ab7c-9ac422a54c1a","http://resolver.tudelft.nl/uuid:e9c2ef42-c17b-4578-ab7c-9ac422a54c1a","Power-Efficient Amplifiers for Data Converters","Akter, S. (TU Delft Electronic Instrumentation)","Makinwa, K.A.A. (promotor); Bult, K. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis describes the design and implementation of power-efficient discrete-time amplifiers for data converter systems.","","en","doctoral thesis","","978-94-6423-469-5","","","","","","","","","Electronic Instrumentation","","",""
"uuid:db85b819-cf0b-4909-8e02-9d75ca52b130","http://resolver.tudelft.nl/uuid:db85b819-cf0b-4909-8e02-9d75ca52b130","Towards predicting cavitation noise using scale-resolving simulations: The importance of inflow turbulence","Klapwijk, M.D. (TU Delft Ship Hydromechanics and Structures)","van Terwisga, T.J.C. (promotor); Westerweel, J. (promotor); Delft University of Technology (degree granting institution)","2021","There is increasing attention to the effects of anthropogenic underwater radiated noise (URN) on marine fauna. This is expected to lead to regulations with respect to the maximum permitted sound emissions of ships. It is known that cavitating tip vortices, generated by ship propellers, are among the key contributors to URN. Consequently, there is a need to evaluate propeller designs with respect to noise generation in a design stage. Computational fluid dynamics (CFD) has the potential to offer detailed insights into cavitating vortex dynamics and noise sources, at a reasonable cost. URN can be efficiently estimated using CFD in combination with an acoustic analogy. In order to use such predictions in a design process, it is essential to understand and quantify the errors associated with the numerical predictions of noise sources. This thesis investigates the reliability of such evaluations and aims to reduce the modelling errors that occur.
To compute noise sources, it is necessary to simulate cavitation dynamics using scale-resolving simulations (SRS). Here, part of the turbulence kinetic energy spectrum is resolved in space and time, as opposed to being modelled using Reynolds averaged Navier-Stokes (RANS). The SRS method of choice in this work is the partially averaged Navier-Stokes (PANS) method. Bridging models, such as PANS, exhibit a smooth transition and absence of commutation errors between RANS and large eddy simulation (LES) zones, in contrast to hybrid models such as detached eddy simulation (DES). The formulation allows for a theoretical decoupling of the discretisation and modelling errors, thereby enabling verification and validation processes.
PANS allows the user to select the ratios of resolved-to-total turbulence kinetic energy and dissipation (rate). Appropriate settings and methods to estimate these settings a priori are investigated. Furthermore, a new PANS closure is developed, which offers improved convergence behaviour compared to more commonly used models, and is better suited to multiphase flows. It has been shown repeatedly in the literature that SRS should be accompanied by physical inflow boundary conditions, where time-varying fluctuations, resembling turbulence, should be inserted upstream of the object of interest, to prevent laminar solutions. However, from the literature it is clear that for maritime applications this is often neglected. To the knowledge of the author, there is no previous application of such an inflow in combination with cavitation. In this PhD thesis, a synthetic inflow turbulence generator (ITG) is implemented and tested for several test cases in wetted and cavitating conditions. For these cases, the numerical errors, consisting of discretisation, iterative and statistical errors, are evaluated.
Firstly, the results when using the ITG are compared against recycling flow results for a turbulent channel flow, using different SRS methods. It was shown that the ITG can deliver a resolved turbulent inflow at lower computational cost. Secondly, the effect of neglecting such an inflow was tested for the Delft Twist 11 hydrofoil, where it was shown that simulating such a flow with a low ratio of resolved-to-total turbulence kinetic energy can lead to flow separation at the wing leading edge. This is in contrast to experimentally observed behaviour. The inclusion of the ITG can reduce this modelling error, although the sheet cavity dynamics remain largely unaffected. Finally, an elliptical wing with a cavitating tip vortex is simulated. The observed vortex dynamics are compared against a semi-analytical model from literature. To obtain vortex dynamics, the ITG was shown to be necessary. The far-field noise generated by the vortex is quantified and related to the cavity dynamics.
Some of the main contributions of this research are improved insight into the use of SRS in cavitating conditions, into simulating cavity dynamics and into using an ITG to obtain flow fields representative of experimental conditions. In this way it has enhanced our understanding of the abilities and limitations of predicting acoustic sources due to cavitation. To improve predictions of cavitation dynamics it is recommended to address the cavitation model and the method which describes the cavity interface, to reduce the discrepancy in average cavity size between simulations and experimental observations.","turbulence; scale-resolving simulations; cavitation; acoustics","en","doctoral thesis","","978-94-6384-259-4","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:f93143bc-0214-41ad-af4a-4ad2168a42a5","http://resolver.tudelft.nl/uuid:f93143bc-0214-41ad-af4a-4ad2168a42a5","The Influence of Infragravity Waves on Overtopping at Coastal Structures with Shallow Foreshores","Lashley, Christopher H. (TU Delft Hydraulic Structures and Flood Risk)","Jonkman, Sebastiaan N. (promotor); van der Meer, J.W. (promotor); Bricker, J.D. (copromotor); Delft University of Technology (degree granting institution)","2021","Coastal communities across the globe are often protected by structures, such as seawalls, levees or dikes, which allow only a safe volume of water to pass over or “overtop” them due to wave action during storms. The area seaward of these structures is often characterised by shallow, gently sloping beds referred to as foreshores.
As storm waves propagate over the shallow foreshores, two notable processes occur. The first is the attenuation of high-frequency waves that are collectively referred to as wind-sea and swell (SS), with periods of less than 20 seconds. The limited water depth over the foreshore forces the SS waves to shoal and ultimately break. This shoaling and breaking, in turn, results in the second important process: the growth of infragravity (IG) waves, with periods in the order of minutes.
The methods used in current practice to estimate wave overtopping are able to accurately quantify the impact of SS waves. However, they tend to neglect the influence of IG waves, which are known to play a critical role in erosion and flooding along shallow coastlines. In light of this, this dissertation aimed to develop new methods to estimate the influence of IG waves on the safety of coastal defences with shallow foreshores against wave overtopping. This aim was ultimately achieved by using state-of-the-art numerical models, empirical methods and field measurements to develop a suite of tools that, together, provide a framework to accurately quantify the influence of IG waves on wave overtopping.
As data on shallow foreshores was limited, a numerical model (XBeach Non-hydrostatic) was first used to generate a large dataset of wave measurements at the toe of the structure for varying offshore, foreshore and structure slope conditions. The analysis, detailed in Chapter 2, revealed that the influence of IG waves increased for higher, directionally narrow-banded (long-crested) offshore waves; shallower foreshore water depths; milder foreshore slopes; and reduced vegetated cover. The combined effect of the different environmental parameters on the IG waves was then captured in an empirical model, which formed the base of the framework to follow.
For determining wave overtopping, the standard approach requires the use of a wave model (often a phase-averaged model like SWAN) to estimate wave parameters at the toe, which are then used as input to the well-known formulae of the EurOtop design manual. However, this approach largely neglects the impact of IG waves. In Chapter 3, this is rectified by augmenting the traditional approach with the empirical model developed in Chapter 2 to include the effects of the IG waves on the design parameters. Considering accuracy and computational demand, the modified approach proved superior when assessing wave overtopping at dikes with shallow foreshores. This approach formed the first sub-method for estimating wave overtopping in the overall framework.
Nevertheless, it is often difficult to obtain accurate estimates of wave parameters at the toe of structures with shallow foreshores. Chapter 4 offers a solution to this problem by proposing a new set of overtopping formulae that instead rely on deep-water wave parameters as input. This is done by revisiting the old but proven approach of Yoshimi Goda, now with additional data and new trend analysis techniques. The newly derived formulae proved accurate and can be considered an alternative to the current standard (Chapter 3), particularly for dikes and seawalls with very and extremely shallow foreshores, where IG waves tend to dominate. This approach formed the second sub-method for estimating wave overtopping in the overall framework.
Finally, in order to estimate the impact of IG waves on safety, a probabilistic method (FORM) was introduced to the framework in Chapter 5. Using the first sub-method (Chapter 3), the probability of dike failure by wave overtopping with and without IG waves was determined for dikes along the shallow Dutch Wadden Sea coast. Including the IG waves resulted in 1.1 to 1.6 times higher failure probabilities for the Dutch Wadden Sea coast, suggesting that coastal safety may be overestimated when they are neglected. This was attributed to the influence of the IG waves on the wave period and, to a lesser extent, the wave height at the structure toe. Furthermore, the spatial variation in this effect observed for the Dutch Wadden Sea highlighted its dependence on local bathymetric and offshore forcing conditions—with IG waves having greater influence on the failure probability for cases with larger offshore waves and shallower water depths.
The general conclusion of the dissertation is that IG waves can have an important impact on safety. Moreover, findings indicate that the safety of existing coastal defences with shallow foreshores may be overestimated, since IG waves are largely neglected in the current practice for their design and assessment. For the case considered here (the Dutch Wadden Sea), the increase in required crest level due to the IG waves was around 2 dm, with a cost in the order of M€1 per km. For shallower coastlines exposed to more energetic wave conditions, the influence of the IG waves and the corresponding safety costs are likely to be greater. This dissertation provides practitioners with a suite of tools to quantify the influence of IG waves on the safety of coastal defences with shallow foreshores against wave overtopping, thereby reducing the uncertainty in the overall impact of shallow foreshores and allowing dike managers to make more informed decisions when considering hazard mitigation strategies.
- from groundwater prior to drinking is important in terms of protection of public health. Current defluoridation techniques can generally be grouped into precipitation, coagulation, membrane processes, electrochemical processes, and adsorption/ion exchange. Although considerable advancement has been made in defluoridation research, a universal and sustainable solution to this ongoing crisis still appears elusive. From a comparison of over 100 different materials, it can be concluded that mineral-based materials are among the most promising for F- removal for drinking water production. Therefore, this thesis focused on investigating F- removal from groundwater by layered double hydroxides (LDHs), geopolymers, softening pellets and struvite. Like various clays and rocks, these materials are composed primarily of minerals, are (naturally) crystallized and have a periodic structure. These materials were selected, apart from their affinity for F- removal, because of their low cost and local availability, for example as waste or by-products of industrial operations.","","en","doctoral thesis","","978-94-93270-13-8","","","","","","","","","Sanitary Engineering","","",""
"uuid:20dd1357-2c56-446d-9635-d60edf2c0bd1","http://resolver.tudelft.nl/uuid:20dd1357-2c56-446d-9635-d60edf2c0bd1","Quantification of flyby effects in the three-body problem using the Gaussian process method","Liu, Y. (TU Delft Astrodynamics & Space Missions)","Visser, P.N.A.M. (promotor); Noomen, R. (copromotor); Delft University of Technology (degree granting institution)","2021","The gravity assist (GA) plays an important role in space missions since it was first applied by the Luna 3 vehicle in 1959. For preliminary trajectory design, the so-called patched conics model provides a simple model for a gravity assist. This approach, based on two-body formulations, splits a multi-body problem into a succession of two-body problems. This model has a fundamental assumption: the trajectory of the spacecraft is driven by one celestial body only. A boundary for switching the driving bodies is defined by the Sphere of Influence (SoI) of the GA body. The patched conics model cannot be used to study low-energy trajectories. Moreover, it fails to describe special dynamics existing in the multi-body regime, such as the invariant manifolds. The three-body formulation is a logical choice to study the dynamics in the multi-body problem. In order to reduce its inherent difficulty, the circular restricted three-body problem (CR3BP) formulation is developed to study the behavior of the motion of a particle influenced by two massive bodies simultaneously. Flybys in the CR3BP have been studied by many researchers, using a numerical or semi-analytical approach, e.g. the Flyby map (FM) and Keplerian map (KM), respectively.
Inspired by these approaches and the idea of artificial intelligence, this thesis focuses on the investigation of flybys from a machine-learning perspective.","Gravity Assists; Circular Restricted Three-Body Problem; Gaussian Process Method; Gravity Assist Mapping; Jacobi Constant","en","doctoral thesis","","","","","","","2022-08-31","","","Astrodynamics & Space Missions","","",""
"uuid:95293616-4c1d-4e75-ad45-f502a5b5f22a","http://resolver.tudelft.nl/uuid:95293616-4c1d-4e75-ad45-f502a5b5f22a","Higher-order phenomena in nanomechanics of two-dimensional material membranes","Siskins, M. (TU Delft QN/Steeneken Lab)","Steeneken, P.G. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis studies higher-order material properties and effects in van der Waals crystals, such as anisotropic Young’s modulus, magnetostriction, and non-trivial thermal expansion effects near magnetic and electronic phase transitions, that can affect the nanomechanical motion of multilayer two-dimensional (2D) material membranes. These couplings make the motion of nanomechanical resonators a useful and universal tool to probe 2D material properties that are often hard to access otherwise. The thesis consists of four parts...","two-dimensional materials; nanomechanics; resonance; NEMS; graphene; pressure sensor; magnetic materials; phase transitions; anisotropy; laser interferometry","en","doctoral thesis","","978-90-8593-492-9","","","","","","2022-11-03","","","QN/Steeneken Lab","","",""
"uuid:db6851f3-0fce-4874-949b-e4fe0ec1cadd","http://resolver.tudelft.nl/uuid:db6851f3-0fce-4874-949b-e4fe0ec1cadd","Solving Large-Scale Dynamic Collaborative Vehicle Routing Problems: An Auction-Based Multi-Agent Approach","Los, J. (TU Delft Transport Engineering and Logistics)","Negenborn, R.R. (promotor); Spaan, M.T.J. (promotor); Schulte, F. (copromotor); Delft University of Technology (degree granting institution)","2021","The freight transportation sector is one of the major contributors to air pollution. An important way to reduce emissions consists of collective route planning. Although unloaded trips and inefficient routes could not always be prevented by individual carriers, more efficient operations could often be obtained if multiple carriers collaborate by exchanging part of their shipments. The resulting vehicle mileage reductions not only lower the costs for the cooperating carriers, but also reduce emissions and decrease the level of congestion.
Achieving a successful collaboration between carriers, however, is a difficult problem. On top of the NP-hardness of the vehicle routing problem, the collaborative variants suffer from different carriers each having their individual policies, objectives, and preferences. Whereas information is generally assumed to be available in fleet management problems for individual carriers, this is problematic in collaborative cases: carriers might be hesitant to share confidential information with each other or with a platform that coordinates the cooperation. Furthermore, carriers might be more interested in increasing their own profits than in reducing the overall costs. Hence, they might try to exploit a cooperative approach.
This thesis explores how the above problems can be approached in the context of dynamic large-scale collaborative pickup and delivery problems. Earlier, centralized collaboration approaches have been proposed, but these are only applicable to problems of limited size: computation times increase with the number of orders, and hence, quick adaptations in a dynamic world will be hindered. Furthermore, information is assumed to be always available in centralized approaches, and carriers need to give up their autonomy. To avoid the last two problems, decentralized approaches with central auctions have been used, but these still suffer from scalability issues due to the role of a central auctioneer. This thesis therefore proposes a decentralized approach with local auctions: carriers can bid on transportation orders offered by individual shippers or associate carriers. Thus, no central authority is involved. The main aim of this thesis is to investigate to what extent such an auction-based multi-agent system can be applied to dynamic large-scale collaborative vehicle routing problems.
First, we investigate the value of information sharing, that is, the quality of solutions that can be obtained when different types and amounts of carrier information are known. In a computational study, we vary whether carriers' routing plans or the positions of their vehicles are made available and also whether carriers share or hide information about their marginal costs for orders within each auction. The solutions generally improve in terms of service level, travel costs, and individual profits if more carrier information is available. Cost information is important to obtain high service levels, whereas position information is most useful if only a limited number of carriers is consulted for an order. In scenarios with a small fleet or urgent orders, limited information often suffices.
Next, we analyze the potential results of large-scale carrier cooperation. In a computational study based on a real-world data set consisting of over 12000 orders, we vary the number of carriers that collaborate. Reductions in travel costs of up to 77% can be obtained with 1000 cooperating carriers. Thus, whereas previous studies only report improvements of 20-30% for small collaborations, our local auction approach makes it possible to solve large-scale problems and exceeds the reported cost reductions by a factor of three. Furthermore, small bundles of orders can be offered within our approach to benefit from interaction effects. Although the extra computational effort is limited, bundling can improve the results by up to 13% for 1000 cooperating carriers.
A third major contribution of this thesis is the investigation of the possible advantages of strategic behaviour. Instead of reporting (estimates of) their marginal costs, carriers might bid strategically and try to increase their individual profits at the cost of the others. We argue that incurring small losses in an auction might be acceptable for carriers since these losses can be compensated either by a share of the cooperation gains or by future events. A computational study shows that whether strategic bidding pays off depends strongly on the distribution of the cooperation gains. Hence, cheating is possible but not straightforward. Strikingly, a second-price auction system does not help in preventing strategic behaviour: the possible benefits of cheating even increase.
Finally, we extend the developed auction-based multi-agent system such that it can be applied to problem variants where multiple pickup and delivery alternatives can be specified. By this, carriers have more flexibility in choosing the most efficient options. Furthermore, users may specify their preferences for the different options. The auction approach then assists in finding a balance between constructing efficient routes and meeting the user preferences as much as possible. A computational study shows that the approach outperforms centralized heuristics on large-scale instances of 2000 orders.
In short, the proposed multi-agent approach with local auctions can contribute to enabling and stimulating collaboration between many carriers in a dynamic world and thereby drastically reduce the overall number of driven kilometers -- implying lower costs, fewer emissions, and less congestion. The approach is rather flexible in its assumptions on information availability, can withstand strategic behaviour under some conditions, and can successfully be applied to practically relevant problems with specific user preferences. To fully exploit the benefits of cooperation in practice, some open challenges still must be addressed: incentives for carriers to participate must be carefully designed, among other things through a fair distribution of the obtained collaboration profits, stronger guarantees on the truthful behaviour of collaborators, and high levels of autonomy for individual carriers.","Collaborative Vehicle Routing; Collaborative Transportation; Platform-Based Transportation; Multi-Agent System; Logistics; Dynamic Pickup and Delivery Problem; Dynamic Fleet Management","en","doctoral thesis","","978-90-5584-301-5","","","","","","2021-10-18","","","Transport Engineering and Logistics","","",""
"uuid:e5a05014-d3b7-4566-8d1b-9faf67fa0260","http://resolver.tudelft.nl/uuid:e5a05014-d3b7-4566-8d1b-9faf67fa0260","On cooperativity in cellular habitats, with quantitative experiments and modelling","Daneshpour Aryadi, H. (TU Delft BN/Greg Bokinsky Lab)","Youk, H.O. (promotor); Blanter, Y.M. (promotor); Bokinsky, G.E. (copromotor); Delft University of Technology (degree granting institution)","2021","Tales of a Fountain of Youth and the invention of medicine illustrate our age-long obsession with two themes: life and death. What it takes to stay alive, and not to be dead, is a basic question in science that is easy to state, and yet difficult to address at a profound level. One striking feature of many living organisms is the ability of individuals to behave in unison by communicating with each other. At life’s microscopic level, living cells can also send and receive chemical signals to communicate with each other in their habitat but for a population of many thousands of cells it remains enigmatic who is communicating with whom, what are the signals, and how the signals work over space and time. We used quantitative experiments and mathematical modelling to systematically explore how mouse Embryonic Stem (ES) cells might cooperate by communicating when differentiating into the first two lineages. We discovered that differentiating mouse ES cells scattered across many centimeters on a dish form one macroscopic entity that either survives or dies in unison if and only if its population-density is above a threshold value. This switch-like behavior is determined by cells that secrete and sense FGF4 that diffuses over many millimeters to activate YAP1-induced survival mechanisms. 
Our work shows that living cells (in vitro) can rely on macroscopic cooperation to stay alive.","stem cells; cell-cell communication; systems biology; mathematical modelling; phase diagrams; Signaling pathways; quorum sensing","en","doctoral thesis","","978-90-8593-488-2","","","","Casimir PhD Series, Delft-Leiden 2021-22","","","","","BN/Greg Bokinsky Lab","","",""
"uuid:88181e83-edb5-4ed1-ae02-de119786ffbb","http://resolver.tudelft.nl/uuid:88181e83-edb5-4ed1-ae02-de119786ffbb","Piezo-sensors for in-situ boundary layer monitoring on morphing wings: Development, validation and implementation","Stuber, V.L. (TU Delft Novel Aerospace Materials)","van der Zwaag, S. (promotor); De Breuker, R. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis describes the development, validation and implementation of piezoelectric flow sensors. These sensors are meant to measure flow phenomena such as transition and separation in the boundary layer of an aircraft wing. Such information can be used
as input variables to a control loop to push laminar-to-turbulent transition towards the trailing edge of a wing, thereby reducing the overall skin friction drag. The developed sensors are installed in the SmartX wing, which is a morphing wing developed at the
Delft University of Technology (TUD).","Piezoelectric materials; Laminar-to-turbulent transition; Sensors","en","doctoral thesis","","978-94-6421-490-1","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:9d0f7b28-dc4e-43b8-b838-84dc15b19b04","http://resolver.tudelft.nl/uuid:9d0f7b28-dc4e-43b8-b838-84dc15b19b04","Reaction Spheres for Attitude Control of Microsatellites","Zhu, L. (TU Delft Space Systems Engineering)","Guo, J. (promotor); Gill, E.K.A. (promotor); Delft University of Technology (degree granting institution)","2021","In past decades, small spacecraft have raised worldwide interest for their low cost and short development time. In general, three reaction wheels are needed for three-axis attitude control of general satellites. However, for small spacecraft where both volume and power budgets are limited, employing three wheels is a challenge. Therefore, reaction spheres are proposed as a replacement for reaction wheels. In a reaction sphere, the spherical rotor is driven by forces generated between the stator and the rotor. Since the rotor’s spin axis and the output torque could be about any desired axis in the 4π space, a single reaction sphere could be sufficient to implement three-axis control. To date, various reaction spheres have been developed, but their performance is far from satisfactory. A better understanding of reaction spheres is needed and great improvements of these actuators are expected, especially for small satellites. This dissertation aims at performance modeling of reaction spheres. Through the modeling process, restricting factors of performance and possible improvements are investigated. This work is focused on induction-based reaction spheres (IBRSs), which are selected as the most promising type of reaction sphere for applications to small spacecraft.","reaction sphere; electromagnetic induction; field modeling; performance analysis; motion coupling","en","doctoral thesis","","978-94-6366-460-8","","","","","","","","","Space Systems Engineering","","",""
"uuid:abedaa79-505a-4d94-bbf3-633a2e9f6599","http://resolver.tudelft.nl/uuid:abedaa79-505a-4d94-bbf3-633a2e9f6599","Alluvial Stratigraphic Response to Astronomical Climate Change: Numerical modelling and outcrop study in the Bighorn Basin, Wyoming, USA","Wang, Y. (TU Delft Applied Geology)","Martinius, A.W. (promotor); Abels, H.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Alluvial stratigraphy is influenced by both allogenic and autogenic factors, which are difficult to distinguish from each other because they operate at overlapping spatial and temporal scales. Moreover, it remains uncertain whether autogenic dynamics can result in sedimentary cyclicity that resembles allogenically-driven stratigraphic products. To address this challenge and uncertainty, we first test which sedimentary processes can produce the alluvial cyclicity observed in outcrops by designing comparable scenarios in process-based numerical modelling. In the meantime, we systematically characterize floodplain aggradation cycles by tracing them in a UAV-based photogrammetric model. Moreover, we comprehensively describe channelized sandstone bodies in the field and the model to reconstruct the paleogeography. Lastly, we establish the relationships between floodplain aggradation cycles and sandstone bodies of different river styles, based on which we identify the link between orbital forcing and alluvial stratigraphic response.","Alluvial; Orbital forcing; Bighorn Basin; Numerical modelling","en","doctoral thesis","","978-94-6384-262-4","","","","","","","","","Applied Geology","","",""
"uuid:8f7d924d-ad3d-4e90-a05f-1bbdae23146c","http://resolver.tudelft.nl/uuid:8f7d924d-ad3d-4e90-a05f-1bbdae23146c","A Flat Theory: Toward a Genealogy of Apartments, 1540–1752","Gorny, R.A. (TU Delft Situated Architecture)","Avermaete, T.L.P. (promotor); Radman, A. (copromotor); Delft University of Technology (degree granting institution)","2021","A Flat Theory presents a first step toward a yet-to-be-completed, larger project: a genealogy of apartments. While centering on the historical formation of apartments, it does not offer a straightforward history of apartments or flats. Rather, as a contribution to a wider history of the present, it draws together the first synthetic study of the complex processes through which apartments have initially taken form. To do so, it proposes an eco-systemic and assemblage-theoretic extension of genealogical modes of inquiry so as to draw together an epiphylogenetic mapping of this complex process. After situating and specifying this approach, A Flat Theory charts three converging lineages that mark the ‘material-discursive’ formation of appartamenti and appartements as an (I) architectural concept, (II) spatial phenomenon, and (III) residential system during the 1540–1780s in western Europe.","#architecture; architecture history; architecture theory; apartments; genealogy; assemblage theory; mapping; diagram; topology; flat ontology","en","doctoral thesis","","978-94-6366-461-5","","","","","","","","","Situated Architecture","","",""
"uuid:5e1093cb-5f7f-485d-9bb5-cd505a772820","http://resolver.tudelft.nl/uuid:5e1093cb-5f7f-485d-9bb5-cd505a772820","Liquid Crystalline Polymers as Tunable Composite Resins for Composites Applications","Marchetti, M. (TU Delft Novel Aerospace Materials)","Dingemans, T.J. (promotor); van der Zwaag, S. (promotor); Delft University of Technology (degree granting institution)","2021","The work presented in this Thesis describes the use of reactive all-aromatic liquid crystalline thermosetting resins for fiber-reinforced composite applications. One of the challenges associated with reactive LC thermoplastic resins is that the melting temperature (TK-LC) is typically close to or above the decomposition temperature. In order to improve the melt processing characteristics of reactive liquid crystal resins, but without compromising the after-cure (thermo)mechanical properties, two synthetic approaches have been explored: (1) controlling the polymer molecular weight by using phenylethynyl reactive end groups, and (2) introducing non-linear co-monomers.","LCP; LCT; Liquid crystal polymer; Composite laminate; fiber; Polymer; Polymer composite","en","doctoral thesis","","978-94-6421-511-3","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:436de838-0962-4b7f-898b-fe409f565689","http://resolver.tudelft.nl/uuid:436de838-0962-4b7f-898b-fe409f565689","Similarity and Versatility of Lanthanides: Mixed and Matched for Medical Applications","Mayer, F. (TU Delft BT/Biocatalysis)","Hanefeld, U. (promotor); Djanashvili, K. (promotor); Delft University of Technology (degree granting institution)","2021","Since their discovery in the 18th century, lanthanides have fascinated scientists due to their similarity in chemical behavior and versatility in physical properties. It is thus not surprising that lanthanides are part of countless applications in modern industry, electronics, magnets, light sources and medicine. Especially because the lanthanide elements do not occur naturally in living organisms, it is essential that their molecular dynamics and chemical as well as physical properties are understood in detail for in vivo applications. In this thesis, different lanthanide-containing compounds and nanoparticles were investigated to gain deeper knowledge of fundamental processes governing physical properties and synthetic procedures.","","en","doctoral thesis","","978-94-6366-466-0","","","","","","","","","BT/Biocatalysis","","",""
"uuid:af53215c-69ea-4c18-8761-d5cfd2c6e186","http://resolver.tudelft.nl/uuid:af53215c-69ea-4c18-8761-d5cfd2c6e186","Atrial fibrillation fingerprinting","Abdi, Bahareh (TU Delft Signal Processing Systems)","Hendriks, R.C. (promotor); van der Veen, A.J. (promotor); de Groot, N.M.S. (promotor); Delft University of Technology (degree granting institution)","2021","Atrial fibrillation (AF) is a common age-related cardiac arrhythmia. AF is characterized by rapid and irregular electrical activity of the heart leading to a higher risk of stroke and heart failure. During AF, the upper chambers of the heart, called atria, experience chaotic electrical wave propagation. However, despite the various mechanisms introduced in the literature, there is still an ongoing debate on a precise and consistent mechanism underlying the initiation and perpetuation of AF. Some studies show that AF is rooted in impaired electrical conduction and structural damage of atrial tissue, known as electropathology. Atrial electrograms (EGMs), recorded directly from the heart’s surface, provide an important diagnostic tool to localize and quantify the degree of electropathology in the tissue. However, the analysis of the electrograms is currently constrained by the lack of suitable methods that can reveal the hidden electrophysiological parameters of the tissue. These parameters can be used as a local indication of electropathology in the tissue. We believe that understanding AF and improving AF therapy starts with developing a proper forward model that is accurate enough (from a physiological point of view) and simultaneously simple enough to allow for subsequent parameter estimation. Therefore, the main focus of this thesis is on developing a simplified forward model that can efficiently explain the observed EGM based on AF relevant tissue parameters. An initial step before performing any analysis on the data is to remove noise and artefacts.
All atrial electrogram recordings suffer from strong far-field ventricular activities (VA). Therefore, as the first step, we propose a new framework for removal of VA from atrial electrograms, which is based on interpolation and subtraction followed by low-rank and sparse matrix decomposition. The proposed framework is of low complexity and does not require high-resolution multi-channel recordings or a calibration step for each individual patient. In the next step, we develop a simplified electrogram model. We represent the model in a compact matrix form and show its linear dependence on the conductivity vector, enabling the estimation of this parameter from the recorded electrograms. The results show that despite the low resolution and all simplifying assumptions, the model can efficiently estimate the conductivity map and regenerate realistic electrograms, especially during sinus rhythm. In the next contribution of this dissertation, we propose a new approach for a better estimation of local activation times for atrial mapping by reducing, through deconvolution, the spatial blurring effect that is inherent to electrogram recordings. By employing sparsity-based regularization and first-order time derivatives in formulating the deconvolution problem, we obtain improved performance of transmembrane current estimation. In the final part, we focus on translating our findings from research to clinical application.
Therefore, we studied the effect of electrode size on electrogram properties including the length of the block line observed on the resulting activation map, percentage of observed low voltage areas, percentage of electrograms with low maximum steepness, and the number of deflections in the recorded electrograms.","atrial fibrillation; atrial electrograms; atrial mapping; fractionation; local activation time estimation; electrogram model; transmembrane current; conductivity estimation; electrophysiological model; inverse problem; reaction-diffusion equation; deconvolution; electrode size; electrogram morphology; activation map; electrogram interpolation","en","doctoral thesis","","978-94-6384-260-0","","","","","","","","","Signal Processing Systems","","",""
"uuid:002fd175-87bf-4f59-9428-96c0a3d3f6f8","http://resolver.tudelft.nl/uuid:002fd175-87bf-4f59-9428-96c0a3d3f6f8","Droplet microfluidics for bioprocess engineering","Totlani, K. (TU Delft ChemE/Product and Process Engineering)","van Steijn, V. (promotor); van Gulik, W.M. (promotor); Kreutzer, M.T. (promotor); Delft University of Technology (degree granting institution)","2021","A crucial challenge during the initial stages of bioprocess development is that tools used to screen microorganisms and optimize cultivation conditions do not represent the environment imposed at industrial scale. Inside an industrial-scale bioreactor, microorganisms are often cultivated under fed-batch conditions, where nutrients are supplied during the culture. Additionally, microorganisms continuously cross zones with low and high concentrations of substrate and dissolved oxygen. However, during initial bioprocess development, growth and productivity of microorganisms are evaluated under batch conditions due to the difficulty of dynamically controlling nutrient and dissolved oxygen concentrations in screening equipment such as microtiter plates. This inconsistency in cultivation conditions often leads to selection of strains that fail to perform at industrial scale. The difficulty in continuously supplying minute amounts of nutrients to microorganisms in microtiter plates and imposing dynamic dissolved oxygen levels throughout the cultivation experiment necessitates an alternative approach. Microfluidic technology holds the potential to address this inconsistency with fidelity by offering high-throughput experimentation and excellent control over the culture microenvironment. The central theme of this Ph.D. project is the design and development of droplet-based microfluidic technology that enables studying microorganisms under such dynamically controlled cultivation conditions. As such, the outcomes from this Ph.D.
project form a foundation step towards narrowing the gap between screening and industrial-scale use, with an eye to keeping the technology sufficiently simple to be adopted by the biotechnology and bioengineering community.","Bioprocess engineering; droplet microfluidics; droplet on-demand; fed-batch; nutrient-controlled growth; yeast; dissolved oxygen; droplet-based assays; lab-on-a-chip","en","doctoral thesis","","978-94-6421-522-9","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:8445e901-e31e-4602-8630-5b39a3de7ff6","http://resolver.tudelft.nl/uuid:8445e901-e31e-4602-8630-5b39a3de7ff6","P-multigrid methods for Isogeometric analysis","Tielen, R.P.W.M. (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); Möller, M. (promotor); Delft University of Technology (degree granting institution)","2021","Isogeometric Analysis is a methodology that bridges the gap between Computer Aided Design (CAD) and the Finite Element Method (FEM) by adopting the building blocks used in CAD, namely Non-Uniform Rational B-Splines and B-splines, as a basis for FEM. The use of these high-order spline functions not only leads to an accurate representation of the geometry, but has also been shown to be advantageous in many different fields of research. In order to obtain accurate numerical solutions, sufficiently fine meshes have to be considered, which results in very large linear systems of equations. Furthermore, the condition numbers of the system matrices grow exponentially in the spline degree p, making the use of standard iterative solvers less efficient. Direct methods, on the other hand, might not be a viable alternative for large problem sizes, due to memory constraints and difficulties in parallelization. In recent years, the development of efficient iterative solvers for Isogeometric Analysis has therefore become an active field of research. For standard FEM, multigrid methods are known to be among the most efficient solvers for elliptic partial differential equations. The direct application of these methods to linear systems arising in Isogeometric Analysis results, however, in multigrid methods with deteriorating performance for higher values of the spline degree p, since the multigrid smoother becomes less and less effective in damping the error. This has led to the development of multigrid methods with non-standard smoothers. In this dissertation, we propose the use of p-multigrid methods as an alternative solution strategy.
Within our p-multigrid method, the coarse grid correction is obtained at level p = 1, enabling the use of well-known solution methods for standard Lagrangian FEM (in particular h-multigrid methods). Furthermore, the support of the basis functions significantly reduces at level p = 1, thereby reducing the number of non-zero entries in the coarse grid operators. We analyze the performance of our p-multigrid method, adopting different smoothers, for single patch and multipatch geometries. In particular, we perform a spectral analysis to investigate the interplay between the coarse grid correction and smoothing procedure and obtain the asymptotic convergence rate of the p-multigrid method for a representative scenario. Numerical results (i.e., iteration numbers and CPU timings) are obtained for a variety of two- and three-dimensional benchmarks and compared to (state-of-the-art) h-multigrid methods to show the potential of p-multigrid methods in the context of Isogeometric Analysis. For time-dependent partial differential equations, we apply Multigrid Reduced in Time (MGRIT), which is a parallel-in-time method, on discretizations arising in Isogeometric Analysis. Here, MGRIT is successfully combined with a p-multigrid method to obtain an overall efficient method.","Isogeometric Analysis; p-multigrid; Multigrid Reduced in Time","en","doctoral thesis","","978-94-6366-453-0","","","","","","","","","Numerical Analysis","","",""
"uuid:61ab1f7f-fccc-480b-b20f-9010c19d990c","http://resolver.tudelft.nl/uuid:61ab1f7f-fccc-480b-b20f-9010c19d990c","Single Cells in the Spotlight: Probing the kinetics of CRISPR Adaptation and Interference","McKenzie, R. (TU Delft BN/Stan Brouns Lab)","Brouns, S.J.J. (promotor); Tans, S.J. (copromotor); Delft University of Technology (degree granting institution)","2021","","","en","doctoral thesis","","978-90-8593-493-6","","","","","","","","","BN/Stan Brouns Lab","","",""
"uuid:e0a82278-daa1-4ea3-ad33-dc18b825be90","http://resolver.tudelft.nl/uuid:e0a82278-daa1-4ea3-ad33-dc18b825be90","Oxygen requirements for lipid biosynthesis in yeast","Wiersma, S.J. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Daran, J.G. (promotor); Delft University of Technology (degree granting institution)","2021","The yeast Saccharomyces cerevisiae has been used by humans for many centuries in microbial fermentation processes for the production of, for example, bread and alcoholic beverages. This long history of use and, in addition, its fast growth, its ability to rapidly convert sugars into ethanol and the ease with which it can be genetically modified, have contributed to this yeast becoming a very popular model organism. S. cerevisiae is currently used in large-scale industrial processes for the production of biofuels and a broad range of other chemicals. The ability of S. cerevisiae to grow in the absence of oxygen is quite unique among yeasts, and not only important for the production of beer and wine, but also for industrial production of bulk chemicals. The high product yields that are required in these types of processes can in theory only be achieved when sugars are completely converted into product, instead of being partially or completely oxidized to CO2 via aerobic respiration. Fast anaerobic growth of S. cerevisiae does require that standard synthetic media, which are used to grow this yeast in the laboratory, are supplemented with a number of additional components. These additional nutritional requirements originate from the fact that oxygen is required for biosynthesis of some important components of the yeast cell. While most yeast species are actually able to ferment, they usually cannot grow in the complete absence of oxygen, not even when such cell components or their precursors are added to anaerobic growth media.
As a consequence, some yeast species that have industrially relevant traits that are absent or less pronounced in S. cerevisiae, such as resistance to higher temperatures, cannot at the moment be used in anaerobic industrial processes. This PhD thesis describes research on oxygen requirements related to membrane synthesis in yeast, using S. cerevisiae as the main model organism, with the goal to understand these requirements and eliminate them by genetic modification. Inspiration is obtained from evolutionary adaptations of eukaryotic microorganisms that naturally occur in anaerobic or oxygen-poor environments. Metabolic engineering strategies developed in this way may then possibly be applied to other industrially relevant yeast species and thus aid the elucidation of additional, as yet unknown oxygen requirements, or even enable anaerobic growth of those yeasts as well.","","en","doctoral thesis","","978-94-6384-239-6","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:0a0344dc-b98b-4539-8456-2c6de4843315","http://resolver.tudelft.nl/uuid:0a0344dc-b98b-4539-8456-2c6de4843315","The Symmetric Exclusion Process and the Gaussian Free Field on compact Riemannian manifolds","van Ginkel, G.J. (TU Delft Applied Probability)","Redig, F.H.J. (promotor); van Neerven, J.M.A.M. (promotor); Cipriani, A. (copromotor); Delft University of Technology (degree granting institution)","2021","In this thesis we study the Symmetric Exclusion Process (SEP) and the Discrete Gaussian Free Field (DGFF) on compact Riemannian manifolds. In particular, we obtain the hydrodynamic limit and the equilibrium fluctuations of SEP and we show that the DGFF converges to its continuous counterpart. To define these discrete models, we construct grids with edge weights that approximate the underlying manifold in a suitable way. Additionally, we study a model of an active particle and the role of reversibility for its limiting diffusion coefficient and large deviations rate function.","Interacting particle systems; Hydrodynamic limit; Equilibrium fluctuations; (Discrete) Gaussian Free Field; Scaling limit; Active particle; Riemannian manifold; Stochastic processes","en","doctoral thesis","","","","","","","","","","","Applied Probability","","",""
"uuid:f0312839-3444-41ee-9313-b07b21b59c11","http://resolver.tudelft.nl/uuid:f0312839-3444-41ee-9313-b07b21b59c11","Correct by Construction Language Implementations","Rouvoet, A.J. (TU Delft Programming Languages)","Visser, Eelco (promotor); Krebbers, R.J. (promotor); Delft University of Technology (degree granting institution)","2021","Programming language implementations bridge the gap between what the program developer sees and understands, and what the computer executes. Hence, it is crucial for the reliability of software that language implementations are correct. Correctness of an implementation is judged with respect to a criterion. In this thesis, we focus on the criterion type correctness, striking a balance between the difficulty of the assessment of the criterion and its usefulness to rule out errors throughout a programming language implementation. If both the front- and the back-end fulfill their role in maintaining the type contract between the programmer and the language implementation, then unexpected type errors will not occur when the program is executed. To verify type correctness throughout a language implementation, we want to establish it formally. That is, we aim to give a specification of program typing in a formal language, and to give a mathematical proof that every part of the language implementation satisfies the necessary property to make the whole implementation type-correct. Type checkers ought to be sound and only accept programs that are indeed typeable according to the specification of the language. Interpreters should be type safe, and reduce expressions to values of the same type. Program compilers should preserve well-typing when they transform programs. These properties are essential for implementations of typed programming languages, ensuring that the typing of the source program is a meaningful notion that can be trusted by the programmer to prevent certain errors from occurring during program execution. 
A conventional formal type-","","en","doctoral thesis","","978-94-6384-256-3","","","","","","","","","Programming Languages","","",""
"uuid:20d9fea6-8b48-43a8-af60-9c4e1f7b94a8","http://resolver.tudelft.nl/uuid:20d9fea6-8b48-43a8-af60-9c4e1f7b94a8","Long-Distance Foam Propagation","Yu, G. (TU Delft Reservoir Engineering)","Rossen, W.R. (promotor); Voskov, D.V. (copromotor); Delft University of Technology (degree granting institution)","2021","Creating a gas-liquid foam means dispersing gas as individual bubbles in an aqueous solution, in which each gas bubble is separated by liquid films or lamellae. The most common form of liquid foam (as opposed to solid foams, like polymer sponges) seen in day-to-day life is bulk foam. This refers to a foam that rests in a large container (or flows in a free open space) that has a volume considerably larger than the bubble size. Foam in a porous medium, however, resides and flows in a network of narrow pore spaces. The behaviour of foam is therefore complicated by many complex capillary phenomena...","Foam generation; Foam propagation; Coreflooding experiment; Critical superficial velocity; Multiple steady-states; surfactant concentration; foam quality; Population-Balance model; Local steady-state model; experimental criteria; CMG-STARS simulation; limiting capillary pressure; relative gas mobility; EOR","en","doctoral thesis","","","","","","","","","","","Reservoir Engineering","","",""
"uuid:537cd39c-355d-4c78-aa7b-30f8d3471d20","http://resolver.tudelft.nl/uuid:537cd39c-355d-4c78-aa7b-30f8d3471d20","Understanding comfort and health of outpatient workers in hospitals, a mixed-methods study","Eijkelenboom, A.M. (TU Delft Indoor Environment)","Bluyssen, P.M. (promotor); Ortiz, Marco A. (copromotor); Delft University of Technology (degree granting institution)","2021","Against the backdrop of an increasing need for healthcare, staff shortages and relatively high rates of sick leave, understanding of wellbeing (comfort and health) of hospital workers is important. This research aims to provide a contribution, through a mixed-methods approach, with broad and in-depth insights into comfort and health. Therefore, data have been collected from questionnaires, building inspections, interviews, and photos, and analysed with several techniques. Personal, work, and building-related aspects were included in data collection, because a preliminary literature review identified mutual relations with comfort and health. As previous studies on outpatient workers were missing, while staff is generally less satisfied with comfort than patients, this research focuses on staff in outpatient areas. To gain insights into the outpatient workers’ comfort and health, four important aspects are highlighted: differences in comfort in relation to room types, occupant profiles differentiated by the individuals’ preferences and satisfaction, changes of preferences due to contextual changes, and associations of health with building-related aspects. This research builds on previous studies which identified indoor environmental quality (IEQ) profiles of home occupants and school children. New are social comfort profiles, comparison between room types and contextual influence on preferences, as well as the studied occupant group and building. 
The study enables academic and practical exploration of preferences and perceptions of comfort and their integration in the design process.","","en","doctoral thesis","","978-94-6366-465-3","","","","A+BE | Architecture and the Built Environment No 19 (2021)","","","","","Indoor Environment","","",""
"uuid:3fea6d14-73a7-4d05-8d3e-36f8a03fd699","http://resolver.tudelft.nl/uuid:3fea6d14-73a7-4d05-8d3e-36f8a03fd699","Supporting Adaptive Delta Management: Systematic Exploration of Community Livelihood Adaptation as Uncertainty","Kulsum, U. (TU Delft Policy Analysis)","Thissen, W.A.H. (promotor); Shah Alam Khan, M. (promotor); Timmermans, Jos (copromotor); Delft University of Technology (degree granting institution)","2021","Long-term planning in urbanizing deltas has to deal with deep uncertainties in socio-economic development and climate change. Adaptive Delta Management (ADM) has been developed as an approach that acknowledges these and similar uncertainties. The Bangladesh Delta Plan 2100 has, in principle, adopted the ADM approach, and it recognizes general uncertainties in (external) physical and socio-economic conditions. It does not, however, acknowledge uncertainties in the way local communities may adapt to uncertain conditions and policy measures. Historical analysis confirms that local adaptation may differ from policymakers’ expectations, and that ignoring this may seriously harm the effectiveness of such a planning approach. This research offers two novel approaches for systematic exploration of the uncertainties in community livelihood adaptation under a variety of uncertain future conditions. The first approach looks into the mental model that guides local actors’ decision making, while the second approach uses a model describing the impact of (external) triggers on actors’ motivation and abilities for a variety of adaptation actions. 
While both these approaches might be improved, case study applications in the polders of southwest Bangladesh illustrate their utility as instruments to create awareness of possible developments and to act as vehicles for participatory learning by both policymakers and local communities.","Uncertainty; Community; Livelihood; Adaptation; Adaptive Delta Management (ADM); Polders; Bangladesh","en","doctoral thesis","","978-94-6366-264-2","","","","","","","","","Policy Analysis","","",""
"uuid:e94a4c47-b3f7-45a7-a38c-9c319c38a51f","http://resolver.tudelft.nl/uuid:e94a4c47-b3f7-45a7-a38c-9c319c38a51f","Catalysis, chemistry, and automation: Addressing complexity to explore practical limits of homogeneous Mn catalysis","van Putten, R. (TU Delft ChemE/Inorganic Systems Engineering)","Pidko, E.A. (promotor); Jensen, K.F. (promotor); Delft University of Technology (degree granting institution)","2021","Catalysis is a critical enabling technology that directly contributes to higher standards of living. This is a remarkable feat for a technology that most consumers have no direct contact with and perhaps only vaguely know from their car’s catalytic converter. Even many technical users regard catalysts as ideal substances that promote a target transformation without being consumed in the reaction. Reality, however, is much more complex because catalysts can also produce undesirable side-products or stop working before the target reaction is complete. This dissertation explores such complexity.
The aim of this work was to study real catalytic systems under relevant reaction conditions. This challenge was approached from two different directions, and the dissertation is therefore divided into two parts. Part I describes our study of non-noble Mn complexes as catalysts for a variety of reduction chemistries. Particular attention was given to the study of deactivation phenomena to understand why some catalysts stop operating early in their lifetime. This work led to catalytic methods that enable improved operation at significantly lower catalyst loading. The second part of this dissertation focuses on the development and application of automation and data-rich experimentation methods for catalysis R&D. An automated reaction analysis platform was developed that simultaneously produces kinetic data and several representative data streams of operando spectroscopy. These tools were used to study and optimise the performance of a variety of chemistries, and are expected to provide the required dense data for contemporary data-driven methods.","","en","doctoral thesis","","978-94-6384-246-4","","","","","","","","","ChemE/Inorganic Systems Engineering","","",""
"uuid:d5d43ee7-10bc-4924-beec-1040dba4ac12","http://resolver.tudelft.nl/uuid:d5d43ee7-10bc-4924-beec-1040dba4ac12","Naturally fractured reservoir characterization: Advanced workflows for discrete fracture network modeling","Prabhakaran, R. (TU Delft Applied Geology)","Bertotti, G. (promotor); Smeulders, D.M.J. (promotor); Delft University of Technology (degree granting institution)","2021","Natural fractures in subsurface rocks are a source of heterogeneity that impacts flow and transport behaviour. The presence of fracture discontinuities needs to be modelled explicitly due to observed deviations from the continuum assumption of porous media. The departures are due to both individual properties (such as aperture, infill, and roughness) and global network properties (such as topological summary and length distribution). Understanding flow patterns due to the effects of rock fracture networks is essential for many applications such as exploiting hydrocarbons, geothermal heat extraction, subsurface nuclear waste storage, and water aquifer development. Assessing the impact of fractures in modelling studies requires fracture network data, which is difficult to sample from seismic data (due to image resolution issues) and borehole data (owing to sparse sampling). Outcrop analogue data provide a means to sample networks while honouring both spatial position and topological relationships.","Naturally fractured reservoirs; automatic fracture detection; graph theory; spatial networks; spatial network heterogeneity","en","doctoral thesis","","978-94-6384-257-0","","","","","","","","","Applied Geology","","",""
"uuid:1dda6e64-ae94-4fee-a5d1-d37473644d75","http://resolver.tudelft.nl/uuid:1dda6e64-ae94-4fee-a5d1-d37473644d75","Smart campus tools: Technologies to support campus users and campus managers","Valks, B. (TU Delft CRE Strategic Portfolio Management)","den Heijer, A.C. (promotor); Koutamanis, A. (copromotor); Arkesteijn, M.H. (copromotor); Delft University of Technology (degree granting institution)","2021","In recent years, the density on the Dutch university campus has increased substantially due to a continued growth of student populations. Campus managers face the challenge of accommodating the university’s students and employees mainly in the existing buildings, which are used ineffectively and inefficiently. In order to improve the space use on campus, campus managers need better information about space use. Therefore, this PhD dissertation proposes the use of Smart campus tools: a service or product with which information on space use is collected in real time to improve utilization of the current campus on the one hand, and to improve decision‑making about the future campus on the other hand. The main research question is: How can smart campus tools optimally contribute to the match between demand for and supply of space, both on the current campus and on the future campus? To answer the research question, this PhD dissertation explores the use of Smart campus tools in Dutch and international contexts, at universities and other organisations. Then, it researches how information from Smart campus tools can be properly connected to campus decision‑making processes. The results from this research are used to inform existing theories and draw lessons for practice.","","en","doctoral thesis","","978-94-6366-454-7","","","","A+BE | Architecture and the Built Environment No. 18 (2021)","","2021-10-11","","","CRE Strategic Portfolio Management","","",""
"uuid:1f87acc6-6905-4a1b-bebf-738791d3b915","http://resolver.tudelft.nl/uuid:1f87acc6-6905-4a1b-bebf-738791d3b915","Fate of Hydroxylamine in the Nitrogen Cycle","Soler Jofra, A. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Pérez, Julio (promotor); Delft University of Technology (degree granting institution)","2021","One of the least studied nitrogen compounds is hydroxylamine. Hydroxylamine is a highly reactive and toxic inorganic compound that some microorganisms use as an intermediate during nitrogen conversions. The aim of this thesis was to further understand the role of hydroxylamine in the nitrogen cycle.","Nitrogen cycle; Hydroxylamine; Ammonium oxidation; N2O emissions","en","doctoral thesis","","978-94-6421-465-9","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:1b81527f-c493-482c-af9f-cd46adb32729","http://resolver.tudelft.nl/uuid:1b81527f-c493-482c-af9f-cd46adb32729","Graphene-based neuromorphic computing: Artificial spiking neural networks","Wang, H. (TU Delft Computer Engineering)","Cotofana, S.D. (promotor); Wong, J.S.S.M. (promotor); Delft University of Technology (degree granting institution)","2021","The human brain is a natural high-performance computing system with outstanding properties, e.g., ultra-low energy consumption, highly parallel information processing, suitability for solving complex tasks, and robustness. As such, numerous attempts have been made to devise neuromorphic systems able to achieve brain-like computation abilities, which can aid in understanding the complex human brain functionality and can be utilized to solve complex problems, e.g., pattern recognition and data mining. However, the fact that the human brain comprises billions of neurons, which are the fundamental information processing units, and trillions of synapses that interconnect them makes the design and implementation of large-scale brain-inspired computing systems quite a challenging task. Graphene appears to be a promising candidate for scalable neuromorphic implementations as it exhibits a wealth of outstanding properties, e.g., ballistic transport, ultimate thinness, and flexibility, and graphene devices are capable of emulating complex nonlinear functions and can be readily tuned to provide various conduction dynamics while preserving low energy operation and a small footprint. Moreover, graphene is biocompatible, which offers perspectives for graphene-based neuromorphic bio-interfaces. This thesis aims to investigate graphene’s potential to enable scalable and energy-effective neuromorphic computing. To this end, we first introduce an atomistic-level simulation model for calculating graphene electronic transport properties that captures the hysteresis effects induced by interface charge trapping/detrapping phenomena. 
Second, we propose a generic graphene-based synapse, which can be tailored to emulate different synaptic plasticity types by properly modifying its Graphene NanoRibbon (GNR) shape and contact topology, as well as by applying external voltages. Subsequently, we introduce a compact graphene-based integrate-and-fire spiking neuron that mimics the basic spiking neuronal dynamics. We further propose a basic Spiking Neural Network (SNN) unit, which can be utilized to implement complex graphene-based SNN structures. Finally, we introduce a reconfigurable graphene-based SNN architecture and a training methodology for obtaining the initial SNN synaptic weight values. We demonstrate the feasibility of the synaptic weight training methodology and the practical capabilities of the proposed SNN architecture by applying them to solve character recognition and edge detection problems. Our experiments clearly indicate that the proposed graphene-based neuromorphic approach enables low-energy operation at a small chip footprint, which are enabling factors for the realization of scalable energy-efficient SNN implementations.","Neuromorphic Computing; Graphene; Spiking Neural Network; Synaptic Plasticity; Spiking Neuron","en","doctoral thesis","","978-94-6384-265-5","","","","","","","","","Computer Engineering","","",""
"uuid:4b770960-b810-4acc-abe9-d81c93823a91","http://resolver.tudelft.nl/uuid:4b770960-b810-4acc-abe9-d81c93823a91","Analogical Reasoning in Biomimicry Design Education","Stevens, L.L. (TU Delft Science Education and Communication)","de Vries, M.J. (promotor); Mulder, K.F. (promotor); Delft University of Technology (degree granting institution)","2021","“Teaching is both an art and a science” (Harrison & Coll, 2008, p. 1). Good teaching excites students and cultivates their curiosity to learn more than they are asked. But if students’ blank faces tell you that the teaching did not land, what can you do? Using an analogy or metaphor to explain a principle helps students visualize and comprehend difficult, abstract concepts by making them familiar. The National Academy of Engineering issued a report in 2008 emphasizing the need for design engineers to develop 21st-century skills, such as ingenuity and creativity, and to create innovative products and markets. However, designers have a hard time ignoring evident constraints on their concepts during their design process. This is especially difficult for novice designers when attempting to use analogical reasoning (Osborn, 1963; Hey et al. 2008). Hey et al. explain how the multitude of design considerations is even more difficult for novice designers than for expert designers, who are better able to focus on the important features of a problem. Kolodner (1997) describes how novice designers have difficulty sifting through the mass of information they encounter. They need help with the transfer of knowledge that analogical reasoning requires. When students can clearly extract and articulate what they have learned, this helps them to internalize it. Biomimicry education teaches this clear extraction and articulation while students learn to decipher and transfer function analogies from biology to design. 
This transfer can also improve reasoning when solving problems (Wu and Weng, 2013), reacting to the challenge in a more ‘out-of-the-box’ manner (Yang et al. 2015). However, the inability to fully and accurately understand this “conceptual leap between biology and design” is cited as a key obstacle in this field (Rowland, 2017; Rovalo and McCardle 2019, p. 1). Therefore, didactics on how to teach this analogical leap and overcome its hurdles are essential. There is insufficient research on the effectiveness of biomimicry education in design to help establish ‘best practices’. This thesis offers advice to fill this pedagogical gap and to find out how novice designers can overcome the obstacle of analogical reasoning while practicing biomimicry. The contribution to science is a previously untested methodology that leads to a clearer understanding of the translation of biological strategies and mechanisms found in scientific research. This translation from biology to design, in visual and textual form, is called the Abstracted Design Principle (ADP) and is introduced and explained in detail in chapters 4, 5 and 6 of this thesis. Together with the proposed instructions, we sketch the net gain of a positive mindset for novice designers on their path to design for a sustainable future.","Biomimicry; Analogical reasoning; Design Education; Systems Thinking; Art and science; STEM; Life's Principles; Drawing to learn; Nature Technology Summary","en","doctoral thesis","Laura Stevens","978-94-6366-446-2","","","","Dr. ir. Laura Stevens holds two MS degrees, in the fields of Architecture from Delft University of Technology and Biomimicry from Arizona State University. She is a biomimicry design educator in her role as a senior lecturer in the Industrial Design Engineering program at The Hague University of Applied Sciences in the Netherlands. 
A sustainable design instructor since 2007, she writes peer-reviewed articles and book chapters on the topic of Biomimicry Design Thinking as a methodology to enhance circular, systems-thinking solutions in design by learning from time-tested biological strategies and mechanisms found in nature. Her aim is to evolve together with the education of Industrial Design Engineering towards Regenerative Design Engineering, enabling students to take charge of the design of their future world. Biomimicry, the field that teaches us to translate biological strategies into design solutions, is the best of both worlds and can aid them to do this. Laura aspires to replicate strategies that work and cultivate cooperative relationships to offer a platform in which interdisciplinary design teams tackle the complex challenges of today. By incorporating the education from the bottom up and combining modular and nested components one at a time, she hopes to integrate the development of biomimicry with the growth of a passion to learn more.","","2021-10-09","","","Science Education and Communication","","",""
"uuid:8c624c7a-15c4-41a1-8cc6-7eb8a9b36a86","http://resolver.tudelft.nl/uuid:8c624c7a-15c4-41a1-8cc6-7eb8a9b36a86","Mathematical Aspects of Cell-Based and Agent-Based Modelling for Skin Contraction after Deep Tissue Injury","Peng, Q. (TU Delft Numerical Analysis)","Vermolen, F.J. (promotor); Vuik, Cornelis (promotor); Delft University of Technology (degree granting institution)","2021","Burns and other skin traumas occur at various intensities regarding the depth and area of the skin, as well as the involvement of the different skin layers. Worldwide, an estimated six million patients need hospitalisation for burns annually. Furthermore, most severe burn injuries lead to morbidity and unaesthetic scars such as contractures and hypertrophic scars, which have a significantly negative impact on patients’ lives. Contractures, which usually co-occur with disabilities and dysfunction of the joints, are recognized as excessive contractions. Contractions are caused by the pulling forces exerted on the extracellular matrix (ECM) by the (myo)fibroblasts in the proliferation stage. To gain a better understanding of and insight into the occurrence of contractions and other biological phenomena, mathematical modelling is a useful tool for visualization and prediction. Using mathematical models, it is possible to simulate important biological mechanisms and track the cellular activities and positions of each individual cell. The research described in the thesis is divided into three parts: (1) agent-based modelling for skin contractions after burn injuries; (2) the numerical treatment of point forces and their alternatives in cell-based models for skin contractions; (3) cell-based modelling for the evolution of cell geometry during migration. The skin contraction model is able to reproduce important trends that are observed in clinical settings. 
The Monte Carlo-based parameter sensitivity analysis reveals significant correlations between several stages in the contraction process. These correlations can be used by clinicians to predict scar characteristics on the basis of earlier observations. The flexibility in adjusting parameter values allows the model to be used as a patient-oriented simulation tool for the prediction of the evolution of skin after serious trauma.
To model the traction forces exerted by the (myo)fibroblasts, we use point forces that are described by Dirac Delta distributions, which is an important feature of the so-called immersed boundary approaches. For the case of linear elasticity, the superposition argument is used in the analysis of the solution to the linear set of partial differential equations. However, for dimensions higher than one, the Dirac Delta distributions result in singular solutions. Hence, we developed various alternatives to get around the singular behaviour of the solutions, which allows classical finite-element techniques to be applied to the current agent-based formulations. All the alternatives have been proven to be consistent with the immersed boundary approach. One of the alternatives is the smoothed particle approach, which is also proposed in this thesis. This approach is optimal regarding straightforward numerical treatment, since it allows classical solutions in the sense of smoothness, which makes it attractive from a computational point of view. Furthermore, this formalism is a bridge between the continuum (fully partial differential equations-based) approach and the agent-based approach.","Skin contractions; Agent-based model; Cellular traction forces; Morphoelasticity; Dirac Delta distributions; Cell geometry","en","doctoral thesis","","978-94-6384-253-2","","","","","","","","","Numerical Analysis","","",""
"uuid:3fdd3c36-38cc-4e38-a5ac-c6445c4857d5","http://resolver.tudelft.nl/uuid:3fdd3c36-38cc-4e38-a5ac-c6445c4857d5","Fouling in Membrane Processes for Water Treatment","Jafari Eshlaghi, M. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Picioreanu, C. (promotor); Verliefde, A.R.D. (promotor); Delft University of Technology (degree granting institution); Universiteit Gent (degree granting institution)","2021","Membranes are widely applied in water and wastewater treatment, as they provide an absolute barrier against contaminants. Membranes are offered in a wide pore-size range and are widely applied due to their versatile and cost-effective operation with a wide range of streams. However, membranes, like any other filtration systems, suffer from fouling. The fouling layer, the accumulation of rejected material on the membrane surface over time, is often called the main bottleneck of membrane processes. Fouling formation reduces water flux, increases energy consumption and leads to early membrane replacement. To better control and mitigate fouling layer formation, a better understanding of fouling mechanisms and properties is required. Fouling properties can be categorized into hydraulic, mechanical, structural, and chemical properties. These properties can be impacted by operational conditions, feed water quality and membrane properties. Moreover, they influence membrane performance parameters such as water flux, energy consumption and, eventually, plant expenses. Therefore, the fouling properties, their inter-relations, and their impacts on performance parameters should be further studied. We used novel modelling techniques and experimental measurements in laboratory and full-scale plants to study fouling properties and their impacts on membrane performance parameters. We also discussed the opportunities and challenges for future fouling research. 
In Chapter 2, to evaluate the relation between the structural, hydraulic and mechanical properties of the fouling layer in membrane systems, a novel method was developed to extract these properties non-destructively and in situ. The performance parameters of a dead-end UF system with integrated (in-situ) OCT imaging were coupled with a fully-coupled fluid-structure interaction (FSI) model. The dead-end UF was operated under a compression-relaxation cycle to evaluate how fouling properties change under different applied pressures. Several mechanical models were evaluated to find the most suitable one to explain the fouling layer behaviour under the compression-relaxation cycle in dead-end UF. The results indicate that the hydraulic resistance of homogeneous biofilms under UF was much more affected by changes in permeability than by fouling layer thickness. Interestingly, we also found that even a poroelastic model (a relatively simple model) can explain the behaviour of the fouling layer under different applied pressures fairly well. Compression of the fouling layer in UF systems can significantly increase the hydraulic resistance of membrane systems. In Chapter 2, a new technique was developed to extract fouling properties of smooth-surface biofilms. In Chapter 3, this technique was further expanded to extract the mechanical properties of rough-surface fouling layers under dead-end UF. For fouling layers fed with real surface water (i.e., river water), we observed that a dual-layer fouling structure with a thin, dense base layer and a thick, porous top layer could best explain the observed results. 
We also introduced a new fouling structure indicator, the fraction of exposed base layer, as a good predictor of water flux in UF systems. In Chapter 4, the chemical properties of the fouling layer (e.g., composition) and their impacts on chemical cleaning efficiency in Reverse Osmosis (RO) systems were evaluated. Chemical cleaning protocols (often referred to as CIP protocols) are usually developed under laboratory conditions (synthetic feed water, short-term experiments) and then applied in full-scale RO installations. This often leads to significant differences in CIP efficiency between lab and full-scale installations. Thus, we compared the fouling layer properties and CIP efficiency of RO under typical laboratory conditions with those of several full-scale RO plants. The results show that CIP efficiency in the full-scale RO plants is much lower than under lab conditions. We correlated these differences in CIP efficiency with significantly different extracellular polymeric substance (EPS) properties: the EPS extracted from lab RO had different composition and adherence properties than the EPS extracted from full-scale RO. Therefore, we concluded that CIP protocols should not be developed under lab conditions. In Chapter 5 we suggested a new method to develop CIP protocols and study fouling properties with more industrial applicability. We installed several new RO modules in the full-scale installation and operated them for 30 days under conditions identical to those of the full-scale installation. The fouling properties and CIP efficiency (in-situ measurement of permeability and pressure-drop recovery) were then compared between the new RO modules (after 30 days of operation) and old RO modules (>2 years of operation). The newly proposed fouling simulation method shows promising results in both CIP efficiency and fouling properties. 
Although fouling is an inevitable part of filtration processes, its economic impact on membrane systems is not well evaluated. In Chapter 6, the cost of fouling in several RO and NF systems in the Netherlands was calculated using plant performance data. All cost factors contributing to the cost of fouling, such as CIP costs, energy costs and downtime costs, were considered. We observed that for the RO plants around a quarter of OPEX is caused by fouling, as opposed to around 10% for the anoxic NF plants. The most important factor in the cost of fouling was early membrane replacement, followed closely by additional energy costs; CIP costs made only a minor contribution to the overall cost of fouling. Reuse of municipal wastewater effluent is part of the solution to water scarcity challenges. In Chapter 7, a fit-for-purpose approach to water reuse was proposed. We developed a full techno-economic analysis of a membrane-based water reuse plant for municipal wastewater treatment effluent in the Netherlands. The impact of fouling, its properties and its cost were integrated for all the membrane systems. A novel approach to the design of a water reuse plant was offered that not only inherently reduces the impact of fouling but also increases plant robustness and water recovery. Chapter 8 presents summarized and generalized conclusions from the previous chapters, together with our suggestions and opportunities for future membrane and fouling research.","","en","doctoral thesis","","978-94-6423-472-5","","","","This doctoral research has been carried out in the context of agreement on joint doctoral supervision between Ghent University, Belgium and Delft University of Technology, the Netherlands.","","","","","BT/Environmental Biotechnology","","",""
"uuid:fe758a56-8ccc-4087-81bf-a1d0ec90daba","http://resolver.tudelft.nl/uuid:fe758a56-8ccc-4087-81bf-a1d0ec90daba","Homicide investigation in the digital era: The development and evaluation of a case-specific elements library (C-SEL)","Sutmuller, A.D. (TU Delft Safety and Security Science)","van Gelder, P.H.A.J.M. (promotor); Delft University of Technology (degree granting institution)","2021","The homicide rate dropped globally over the last decades, while in the same period the percentage of unsolved homicide cases increased. At the same time, technological developments have drastically changed our lives, and little is known about how this has influenced decision-making within homicide investigations. The aim of this research is therefore to develop a methodology that supports homicide investigators in the collection, prioritization and elimination of persons of interest using pieces of evidence, in the new digital era, in such a way that the methodology is effective, reduces tunnel vision and complies with privacy laws. Currently used methodologies that use pieces of evidence to manage persons of interest were applied to three recent real-world homicide cases to evaluate whether these methodologies effectively collect, prioritize and eliminate persons of interest. The potential of a general approach, relying on big data and data science, to limit the number of persons of interest to be incorporated in a homicide investigation was explored. Subsequently, a methodology to incorporate and prioritize persons of interest was developed based on literature and on the knowledge and expertise of experts in criminal investigation. This case-specific elements library (C-SEL) consists of twenty-four elements and twelve underlying factors that can be used for the incorporation and prioritization of persons of interest. 
This newly developed methodology was evaluated and compared to the currently used methodologies, not only on effectiveness, but also on its tunnel-vision-reducing properties and its compliance with privacy laws. C-SEL effectively collects and prioritizes the perpetrator in all three real-world homicide cases and showed better results than the currently used methodologies in terms of reducing tunnel vision and complying with privacy laws. These results provide sufficient leads to further validate the newly developed methodology.","","en","doctoral thesis","","978-94-6416-794-8","","","","","","2026-09-14","","","Safety and Security Science","","",""
"uuid:6e8e2017-f2b9-4051-9f8b-32583180547b","http://resolver.tudelft.nl/uuid:6e8e2017-f2b9-4051-9f8b-32583180547b","Immediate Systems in Architecture: Continuous adaptability at the speed of human intention","Friedrich, H.C. (TU Delft Architectural Engineering)","Oosterhuis, K. (promotor); Bier, H.H. (copromotor); Delft University of Technology (degree granting institution)","2021","The presented research on Immediate Systems in Architecture (IS-A) is an attempt to afford a better human-technology match in architecture, pursuing a state of immediacy where humans can simultaneously use and design, apply and amend the technical system they engage with.
The thesis contains both theoretical and experimental contributions on IS-A. Initially, the notion of Immediate Systems (IS) is introduced and framed. IS offer interaction in the style of direct manipulation, embed design and implementation in situations of use, and overcome limitations of remote design. IS are related to psychological concepts and described through the lens of Gibson’s Theory of Affordances. Characteristics and conditions of IS are distilled from the presentation and discussion of a series of examples.
The application of IS in architecture is approached from three angles. First, from the lived perspective of a user-designer, as adhocist mode of action. Second, from the methodology and technology, as accelerated design transfer. Third, in an ecological perspective, as human-architecture symbiosis. Each of these readings is hypothesized to be more feasible by implementing IS-A as Human-in-the-Loop Cyber-Physical Systems (HiLCPS).
Following the method of research by design, prototypes were developed in a series of experiments. The experiments result in multiple tools for real-time, multi-directional volumetric design exploration that allow users to interactively model and reconceptualize ad hoc the parametric geometry, topology and components of architectural assemblies. A combination of these tools with digital fabrication and interactive building components leads to the most encompassing IS-A prototype, an attempt to realize an open-ended building system that joins simultaneous design, adaptation, construction and reconfiguration as interaction possibilities embedded in the built environment.","","en","doctoral thesis","","978-94-6366-469-1","","","","","","","","","Architectural Engineering","","",""
"uuid:acafe18b-3345-4692-9c9b-05e970ffbe40","http://resolver.tudelft.nl/uuid:acafe18b-3345-4692-9c9b-05e970ffbe40","Order from Disorder: Control of Multi-Qubit Spin Registers in Diamond","Bradley, C.E. (TU Delft QID/Taminiau Lab)","Hanson, R. (promotor); Taminiau, T.H. (copromotor); Delft University of Technology (degree granting institution)","2021","Electron spin qubits associated with individual solid-state defects can exhibit exceptional coherence and bright optical interfaces. Furthermore, their magnetic interactions with nuclear spins in the host material present a resource for multi-qubit registers. They have thus emerged as powerful systems with which to develop quantum technologies. In this thesis, we develop a toolbox for the precise control of multi-qubit spin systems associated with single nitrogen-vacancy centres in diamond. We utilise this platform to explore a number of avenues in quantum science: networks, computation, sensing, and simulation. Our findings provide new insights towards the goal of distributed quantum computation, and establish a programmable solid-state-spin quantum simulator for studying many-body physics.","","en","doctoral thesis","Delft University of Technology","978-90-8593-487-5","","","","","","2021-11-01","","","QID/Taminiau Lab","","",""
"uuid:3706e467-dd4b-400d-a08e-1a5c57635775","http://resolver.tudelft.nl/uuid:3706e467-dd4b-400d-a08e-1a5c57635775","Corrosion and Corrosion Inhibition Studies of Aerospace Aluminium Alloys at the Nanoscale using TEM Approaches","Kosari, A. (TU Delft Team Yaiza Gonzalez Garcia)","Mol, J.M.C. (promotor); Terryn, H.A. (promotor); Delft University of Technology (degree granting institution)","2021","For many decades, corrosion and corrosion inhibition of high-strength aluminium alloys have been studied indirectly, through traditional and separately performed electrochemical, spectroscopic and microscopic techniques. The approaches employed to date commonly lack sufficient lateral and temporal resolution to unravel early-stage events, which are controlled at the nanoscopic level at which microstructural heterogeneities steer local and dynamic electrochemical activity. Moreover, techniques with appropriate resolution, such as transmission electron microscopy (TEM), have been applied to the field, but only ex-situ, normally providing no detailed on-site time-resolved information with which to investigate the distinctive-but-consecutive stages of corrosion and corrosion inhibition phenomena. The relevant theories are therefore established by bridging and linking separately obtained information, and are described in stochastic rather than deterministic terms. This is particularly the case for the legacy alloy AA2024-T3, which is prone to complicated forms of local corrosion resulting from its extremely complex and heterogeneous local microstructure.
Local corrosion in AA2024-T3 is site-specific: complicated local degradation events predominantly take place at surface intermetallic particles (IMPs) dispersed in the alloy matrix and eventually lead to pitting and intergranular forms of corrosion. A detailed understanding of space- and time-resolved local corrosion mechanisms of engineered microstructures is thus of pivotal importance for developing reliable and active protection strategies. However, despite the high demand for time- and space-resolved mechanistic information on local corrosion, it has not yet been possible to unambiguously define the morphological and micro-electrochemical characteristics during local corrosion and corrosion inhibition, owing to extremely demanding experimental challenges. This thesis therefore develops dedicated TEM experimental approaches, including in-situ liquid-phase, quasi in-situ and ex-situ TEM, to provide time-resolved and direct nanoscopic evidence of local corrosion and corrosion inhibition processes, from early surface initiation to an advanced stage of propagation.
For UAVs to be considered for specific tasks, their use must clearly outweigh that of other established, conventional systems. A key feature for UAVs would be the capability to perform autonomous, onboard, real-time path planning. Path planning is defined as the process of automatically generating feasible and optimal paths to a predefined goal point in view of static and dynamic environmental and model constraints and uncertainties. This functionality allows UAVs to require minimal human intervention once their working environment and goals are defined. Autonomous and robust path planning is therefore fundamental for UAVs to be considered for indoor use in industrial, commercial, military and home applications.
The need for autonomous path planning arose with the introduction of robotics in repetitive industrial applications several decades ago. Since then, path planning has extended beyond factory floors, evolving from 2D to 3D and operating in both static and dynamic environments with a wide spectrum of constraints and uncertainties. Path planning algorithms for autonomous vehicles can be broadly grouped into three main categories: graph-based (or grid-based) algorithms, sampling-based algorithms and interpolation algorithms.
Although the use of UAVs has increased, their potential is far from reached. This can mainly be attributed to a number of challenges that have not been fully tackled and are hindering the use of small UAVs in indoor environments. This research focuses on path planning challenges in indoor, obstacle-rich environments with no UTM availability except for goal point definitions. In such scenarios, the UAV is expected to operate using only onboard facilities. In this regard, three challenges are identified, which can be summarised as follows:
Construct, in real time, non-colliding paths from the current UAV position to a goal position using only onboard UAV resources, in the presence of both static and dynamic obstacles and in the presence of uncertainties.
The following research goal is formulated to address these three challenges for the realisation of a path planning algorithm for UAVs in indoor environments.
Assess the performance of state-of-the-art path planning rationales in the context of UAVs operating in 3D real-time, dynamic indoor environments in the presence of uncertainty, and identify a customised configuration based on the application.
To tackle this research goal, five research questions are formulated:
Research Question 1: What is the state-of-the-art in the field of path planning for UAVs in 3D and how do these algorithms compare?
To investigate the potential of different path planning algorithms, the current state-of-the-art across all fields of engineering is considered. The literature review shows that graph-based and sampling-based methods are potential candidates for 3D UAV path planning. The most often utilised algorithms from each category, namely A* and the Rapidly-Exploring Random Tree (RRT), and their variants, RRT without step-size constraints and the Multiple RRT (MRRT), are tested in 3D scenarios of different complexity. A path smoothing interpolation algorithm is also developed to attenuate non-optimal paths, especially those of the sampling-based methods.
The same path smoothing algorithm is implemented on each path planning variant with the same parameters to offer a fair comparison. The algorithms are tested on the same set of 3D scenarios of varying complexity using the same computer. Path length and computational time are the performance measures considered.
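The summary does not specify which interpolation scheme the smoothing algorithm uses, so the following is only an assumed stand-in: a greedy line-of-sight shortcutting pass that removes intermediate waypoints whenever the straight segment between two non-adjacent waypoints is collision-free. The names `smooth_path`, `collides` and `path_length` are hypothetical, not from the thesis.

```python
import numpy as np

def smooth_path(path, collides, max_passes=10):
    """Greedy shortcut smoothing: repeatedly skip intermediate waypoints
    whenever the straight segment between two non-adjacent waypoints is
    collision-free, according to the user-supplied collides(a, b) check."""
    path = [np.asarray(p, dtype=float) for p in path]
    for _ in range(max_passes):
        changed = False
        smoothed = [path[0]]
        i = 0
        while i < len(path) - 1:
            # Try to connect to the farthest reachable waypoint first.
            j = len(path) - 1
            while j > i + 1 and collides(path[i], path[j]):
                j -= 1
            smoothed.append(path[j])
            if j > i + 1:
                changed = True
            i = j
        path = smoothed
        if not changed:  # converged: no shortcut found in this pass
            break
    return path

def path_length(path):
    """Total Euclidean length of a waypoint path."""
    return sum(np.linalg.norm(b - a) for a, b in zip(path, path[1:]))
```

In free space the detour through four intermediate waypoints collapses to a single straight segment, which is the behaviour one would expect such a post-processing step to have on the jagged paths RRT tends to produce.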
The A* algorithm with a spectrum of resolutions, the standard RRT with different step-size constraints, RRT without step-size constraints and the Multiple RRT (MRRT) with various seeds are implemented and their performance measures compared. For A*, tests show an inherent ripple in path length as the resolution changes, for all scenarios. This ripple results from the grid-based nature of the A* algorithm, which creates situations in which a small increase in resolution, which should in theory slightly decrease the path length, in fact generates longer or shorter paths. The ripple is mitigated by randomly shifting the environment in all three dimensions by a distance varying between zero and half the distance between adjacent graph points.
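The random-shift mitigation described above can be sketched in a few lines. The helper names `shifted_grid_origin` and `snap_to_grid` are hypothetical, introduced only to illustrate the idea of offsetting the discretisation by up to half the grid spacing in each dimension.

```python
import random

def shifted_grid_origin(spacing, rng=None):
    """Random offset in each of the three dimensions, between zero and
    half the grid spacing, used to de-correlate the A* discretisation
    from the obstacle layout and average out the path-length ripple."""
    rng = rng or random.Random()
    return tuple(rng.uniform(0.0, spacing / 2.0) for _ in range(3))

def snap_to_grid(point, spacing, origin):
    """Map a continuous 3D point onto the shifted grid."""
    return tuple(origin[d] + spacing * round((point[d] - origin[d]) / spacing)
                 for d in range(3))
```

Averaging path lengths over several such random shifts smooths the otherwise jagged path-length-versus-resolution curve.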
Results confirm that all algorithms are able to generate a path in all scenarios for all resolutions, step sizes and seeds considered. The A* algorithm generates shorter paths in less time than the RRT algorithms, since A* only explores the areas necessary for path construction while the RRT algorithms explore the environment evenly. A* outperformed RRT, both in path length and in path generation time, in offline situations with static obstacles, with a 100% success rate for both in all scenarios considered.
A* allows the environment to be discretised differently according to the different exigencies of different parts of the scenario, making optimal use of resources. Conversely, RRT and its variants are suited to generating paths efficiently in applications requiring even and focused 3D area exploration. Based on the results obtained and their implications for UAV path planning, the second research question is tackled.
Research Question 2: Can the selected path planning algorithms be applied in real-time static environments using the computational resources onboard small UAVs?
This research question assumes that all path planning computation, sensing, environmental modelling and actuator control must be performed onboard and in real time. Another implication is that the path planner can only visualise the environment within the sensing distance determined by the onboard sensing systems, and can therefore only construct, if possible, a path to an intermediate goal point.
For the scope of this research question, a sensing sphere with a radius equal to the sensing range of the UAV is considered, assuming that the sensing system has a 360-degree field of view (FOV) in all three dimensions. It is further assumed that static obstacles within the sensing range are known with certainty, while other obstacles are unknown and become visible only if the UAV moves in their direction. To simulate real-time path planning, the computational time must be less than or equal to the time needed by the UAV to move from the current position to a new position. The same test environment used for Research Question 1 is used, with the same performance measures.
Results show that the A* algorithm again outperforms the RRT algorithm in both path length and computational time for all scenarios considered, with the difference increasing with scenario complexity. A* is successful in 90% or more of all tests for all scenarios considered, provided the look-ahead distance is at least double the distance moved per iteration. In general, the RRT algorithm results in a lower success rate than A*, owing to the longer computational time it requires to construct intermediate paths.
The UAV speed, sensor range and computational power are defined based on studies that analyse these parameters onboard a range of UAVs [1-3]. The path planning results, based on these UAV parameters, show that 3D real-time path planning can be realised using only the UAV's onboard systems. The results outline the best empirical values for the different parameters; setting these parameters configures the 3D real-time path planning platform, optimising its performance for each particular indoor application.
Research Question 2 considered only static obstacles, but in real UAV applications obstacles can move and rotate, so a dynamic environment needs to be considered to assess the usability of the developed 3D real-time UAV path planning algorithm. This requirement is investigated in the following research question:
Research Question 3: What is the effect on path planning performance if static obstacles are replaced with dynamic obstacles?
The inclusion of dynamic environments is external to the path planning algorithm, but it can affect the path that the UAV will traverse. Dynamic obstacles within an indoor environment can be represented by symmetrical shapes. For the scope of this work, four scenarios of different complexity are constructed, incorporating rotating and non-rotating cubes, rotating V-shaped obstacles and static 2D planes with windows.
Both obstacle movement and orientation are considered in the dynamic environment modelling. The random obstacle movement speed is assumed to be smaller than or equal to the speed of the UAV, as otherwise obstacle avoidance is not possible.
A real-time environment with a limited sensing range creates situations where an intermediate goal point is not available. Two different rationales are developed to mitigate this situation. In the waiting rationale, the UAV waits in its current position until the defined intermediate goal position becomes available. In the moving rationale, the intermediate goal position is moved closer to the current UAV position, increasing the chances of the UAV moving closer to the final goal position. Both rationales are integrated within the A* and RRT path planning algorithms and tested in all scenarios with dynamic obstacles.
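A minimal sketch of the two rationales follows. The summary does not say how far the moving rationale pulls the goal back, so the halving step (`shrink=0.5`), the function name and the `occupied` predicate are all assumptions for illustration:

```python
import numpy as np

def resolve_blocked_goal(position, goal, occupied, rationale, shrink=0.5):
    """When the intermediate goal is unavailable (covered by a dynamic
    obstacle), either wait at the current position until it frees up,
    or move the goal closer to the UAV along the connecting line."""
    position = np.asarray(position, dtype=float)
    goal = np.asarray(goal, dtype=float)
    if not occupied(goal):
        return goal
    if rationale == "waiting":
        return position  # hold position; retry on the next iteration
    # "moving" rationale: pull the goal towards the UAV until it is free.
    candidate = goal
    while occupied(candidate) and np.linalg.norm(candidate - position) > 1e-9:
        candidate = position + (candidate - position) * shrink
    return candidate
```

The waiting branch trades progress for safety, while the moving branch trades safety margin for progress, matching the home-environment versus search-and-rescue trade-off discussed in the text.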
Results show that the moving option yields better overall results in path length, computational time and success rate than the waiting option, for both A* and RRT. The two algorithms produce similar results in relatively simple scenarios, with RRT recording slightly better path length, computational time and success rate. For complex scenarios, RRT is better if time is not limited, while the A* algorithm is less susceptible to time constraints. Also, as speed increases in complex scenarios, the success rate drops for both A* and RRT due to the lack of path planning time.
The results show that the developed 3D real-time path planning platform, with both the A* and RRT algorithms, has the potential to be used in dynamic scenarios with low obstacle density. The waiting variant is suited to situations where safety is paramount. In home environments this is usually the case, as the UAV cannot be allowed to collide with obstacles, especially if these are humans. The moving variant is ideal in situations where goal achievement is more important than safety, such as search and rescue.
Until now it has been assumed that no uncertainties are present within the UAV systems, whereas in real scenarios a range of uncertainties is present. In the next research question, uncertainty for a UAV operating in an indoor environment is investigated.
Research Question 4: Do uncertainties affect 3D path planning of UAVs? If yes, how can these uncertainties be modelled?
This research question queries whether uncertainties affect path planning of UAVs in indoor environments. This requires a thorough literature survey and consequently the identification and modelling of uncertainty sources that might affect path planning performance.
For the scope of this work, only uncertainties within the UAV model and the environment (perceived through the UAV's onboard sensing systems) are considered. Other uncertainties, such as communication with users, are out of scope for this analysis.
The literature identifies the need to consider uncertainty in real-time 3D UAV path planning, owing to the possible negative implications for path planning performance if uncertainties are neglected. The fidelity with which uncertainties can be predicted is essential in determining the usability of the proposed path planning algorithms. Furthermore, the literature portrays bounding shapes and probabilistic distributions as the key candidate methods for uncertainty modelling in UAV applications. After considering the characteristics of both methods, uncertainty is modelled using bounded shapes around the current UAV position and the obstacle volumes.
Once uncertainty sources are identified, estimated and modelled, the developed 3D real-time path planning algorithms are assessed in the presence of dynamic obstacles and uncertainties.
Research Question 5: Can uncertainties be mitigated to ensure collision-free 3D path planning of UAVs in real time in the presence of dynamic obstacles?
The same test environment constructed for Research Question 3 is used, with the same real-time 3D UAV path planning platform. Tests are performed using both the A* and RRT path planning algorithms with the moving method. Uncertainty bounds are quantified based on literature and varied between 2% and 20% for both the UAV position and the obstacles. Uncertainty is included by adding an offset to the respective actual parameter. The effect of each uncertainty source is analysed independently and jointly with dynamic obstacles, to identify how real-time path planning algorithms can operate safely.
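The bounded-shape uncertainty model with a fractional bound in the 2%-20% range can be sketched as follows. The assumption of axis-aligned box obstacles, and the function names, are illustrative only; the summary does not specify the obstacle geometry used:

```python
import numpy as np

def inflate_obstacle(center, half_extents, bound=0.10):
    """Bounded-shape uncertainty model: grow the obstacle's axis-aligned
    bounding box by a fractional bound (varied between 0.02 and 0.20
    in the tests described in the text)."""
    center = np.asarray(center, dtype=float)
    half = np.asarray(half_extents, dtype=float)
    return center, half * (1.0 + bound)

def collides_point(point, center, half_extents):
    """Check a (possibly uncertainty-offset) UAV position against the
    inflated box: inside or on the boundary counts as a collision."""
    diff = np.abs(np.asarray(point, dtype=float) - center)
    return bool(np.all(diff <= half_extents))
```

Planning against the inflated boxes makes the planner conservative: positions that would graze the true obstacle are rejected, at the cost of slightly longer paths, which is consistent with the performance deterioration reported below.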
Results show that both sources of uncertainty (UAV position and obstacles) deteriorate the path planning performance of both the A* and RRT algorithms, for all scenarios considered, with RRT exhibiting the larger effect. The concurrent inclusion of both uncertainty sources deteriorates path planning performance further. RRT yields the fastest and shortest paths, with approximately the same success rate as A*, in the relatively simpler scenarios, while A* performs better in the relatively complex case. Furthermore, RRT has a higher risk of collision than A*, as it approaches obstacles more often.
The results confirm that uncertainty must be considered, as it affects path planning performance. The accuracy with which uncertainty is modelled affects the path planning performance of both rationales considered.
In this thesis, each research question built on the previous one so as to reach the final research goal and tackle the research challenges. In assessing the path planning performance of each algorithm (the first part of the research goal), the response of each method to each additional complication could be analysed independently. This knowledge can guide future UAV designers in selecting the best configuration for their application, thereby reaching the second part of the research goal.
The implementation of the developed 3D real-time path planning algorithms to configure a real UAV for autonomous 3D navigation in indoor, obstacle-rich environments is the ultimate future goal, and can lead to the commercialisation of this system for domestic applications. This real-time 3D UAV path planning system can also be proposed for integration in outdoor UAVs. Finally, this dissertation aims to contribute to closing the gap that still exists in integrating UAVs into domestic environments, improving current and future services that rely on UAVs and, ultimately, supporting people in their daily lives.
REFERENCES
[1] Hrabar, S., “3D Path Planning and Stereo-based Obstacle Avoidance for Rotorcraft UAVs”, IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22-26 Sep. 2008, pp. 807-814.
[2] Yao, P., Wang, H. and Su, Z., “Real-time path planning of unmanned aerial vehicle for target tracking and obstacle avoidance in complex dynamic environment”, Aerospace Science and Technology, Vol. 47, pp. 269-279, 2015.
[3] Ferguson, D. and Stentz, A., “Using interpolation to improve path planning: The field D* algorithm”, Journal of Field Robotics, Vol. 23, No. 2, pp. 79-101, 2006.","UAV; Path Planning; 3-Dimension; Real-time; Uncertainty; Obstacle Avoidance; Dynamic Environments; Artificial intelligence","en","doctoral thesis","","978-94-6421-492-5","","","","","","","","","Control & Simulation","","",""
"uuid:4f1e8ff6-e910-4048-8a17-2a959c74f508","http://resolver.tudelft.nl/uuid:4f1e8ff6-e910-4048-8a17-2a959c74f508","Desulphurisation in 21st century iron- and steelmaking","Schrama, F.N.H. (TU Delft Team Yongxiang Yang)","Yang, Y. (promotor); Sietsma, J. (promotor); Boom, R. (copromotor); Beunder, E.M. (copromotor); Delft University of Technology (degree granting institution)","2021","In this PhD thesis, desulphurisation in 21st-century iron- and steelmaking is investigated. The current state of the art in sulphur removal in ironmaking and oxygen steelmaking is discussed (Part I of this thesis) and optimisation of the hot metal desulphurisation (HMD) slag, an important aspect of present-day desulphurisation, is investigated (Part II). Furthermore, since the steelmaking industry will change significantly as a result of global climate change mitigation, desulphurisation of hot metal from HIsarna, a new low-CO2 ironmaking process, is studied (Part III). Finally, an overview of the main conclusions of this work, as well as an outlook on desulphurisation in iron- and steelmaking for the coming decades, based on the research presented in this thesis, is given in Part IV. Sulphur is an unwanted impurity in steel that lowers the formability and weldability of steel and makes it more brittle. Therefore, steelmakers try to limit the concentration of sulphur in the steel. In 2021, roughly two-thirds of the world's steel was produced via the BF-BOF steelmaking route, in which iron ore is reduced by carbon (coal and coke) in the blast furnace (BF) and the hot metal from the BF is refined in the basic oxygen furnace (BOF, or converter). Sulphur can be removed at different steps in the steelmaking process chain, such as the HMD, the converter, or the secondary metallurgy processes. Because of the low oxygen activity in hot metal, sulphur is most efficiently removed at the HMD process. 
In Chapter 2, the different sulphur removal steps in the steelmaking process chain are discussed, together with the different HMD processes in use globally. The two most important HMD processes are the co-injection process (where desulphurising reagents, typically Mg and CaO or CaC2, are injected into the hot metal) and the Kanbara reactor (KR, where calcium-based reagents are mixed through the hot metal with an impeller). Currently, the co-injection process is the more commonly used of the two globally, and it is dominant in Europe and North America. Typically, magnesium and lime are used as reagents in the co-injection HMD process. Magnesium dissolves in the hot metal and reacts with the dissolved sulphur to form solid MgS. Although some lime reacts directly with the sulphur, its main task is to react with the MgS to form the more stable CaS, which moves to the slag phase. When the reagent injection is finished, the slag is removed with a skimmer. During the removal of the slag, some iron is lost with it. The amount of iron lost per heat is typically 0.5-2.5 wt% of the total hot metal weight, which is a major cost for the HMD process. In Chapter 3 it is explained that iron loss is governed by two mechanisms: colloidal loss (iron present in the slag in colloidal form, which is removed together with the slag) and entrainment loss (iron entrained with the slag during slag removal). Entrainment loss can be minimised by optimising skimming conditions, such as an experienced operator, a clean skimmer paddle and a well-controlled skimmer. Colloidal loss can be minimised by decreasing the apparent viscosity of the slag, which under typical HMD conditions means that the solid fraction of the slag should be minimised. This can be achieved either by increasing the slag temperature (in practice, minimising the temperature loss) or by decreasing the slag's basicity, which lowers the melting temperature of the slag. 
Furthermore, in Chapter 3 it is shown that the HMD slag also needs a B2 basicity (the ratio of the CaO and SiO2 concentrations) of at least 1.1, and enough lime to convert all the sulphur present to CaS, in order to have a sufficient sulphur removal capacity. The optimal HMD slag has a B2 basicity high enough to allow all the removed sulphur to stay in the slag (sufficient sulphur removal capacity), but low enough to keep the slag's melting temperature below the actual temperature of the slag (typically 1300-1450 °C), ensuring a mostly liquid slag and thus a low colloidal loss. In Chapter 4, these findings are evaluated and supported with a Monte Carlo simulation based on thermodynamic data from FactSage, with melting point and viscosity measurements on artificial HMD slags, and with plant data analysis. The temperature of the slag has the strongest influence on the colloidal loss and the total iron loss: a lower temperature leads to a slag with a higher solid fraction and, thus, a higher iron loss. Of the typical HMD slag components, MgO has the largest influence on the slag's melting temperature, a higher concentration of MgO leading to a higher melting temperature (and thus to a higher iron loss). In an industrial setting, it is difficult to increase the temperature of the slag (typically 1300-1450 °C). It is also difficult to influence the HMD slag composition, because 60-80 wt% of the HMD slag is carryover slag from the BF (changing that would require changing the BF process) and the rest is determined by the reagents injected to remove a certain amount of sulphur (resulting in a certain amount of CaO, MgO and CaS being added to the slag). A more practical way to change the HMD slag composition towards a lower viscosity is to add a slag modifier. In Chapter 5, fly ash and nepheline syenite are investigated as suitable slag modifiers for the HMD process. 
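The B2 basicity criterion quoted above is a simple concentration ratio, which can be written out as follows; the function names are hypothetical and only the ratio definition and the 1.1 threshold come from the text:

```python
def b2_basicity(wt_pct_cao, wt_pct_sio2):
    """B2 basicity as defined in the text: the ratio of the CaO and
    SiO2 concentrations (wt%) in the slag."""
    return wt_pct_cao / wt_pct_sio2

def sufficient_sulphur_capacity(wt_pct_cao, wt_pct_sio2, min_b2=1.1):
    """Minimum-basicity criterion for the HMD slag's sulphur removal
    capacity; the 1.1 threshold is taken from Chapter 3."""
    return b2_basicity(wt_pct_cao, wt_pct_sio2) >= min_b2
```

For example, a slag with 50 wt% CaO and 40 wt% SiO2 (B2 = 1.25) meets the criterion, while an equal CaO/SiO2 split (B2 = 1.0) does not.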
Fly ash contains SiO2 and Al2O3 and decreases the basicity of the slag and, thus, its melting temperature. Nepheline syenite contains SiO2 and Al2O3 as well, but it also contains Na2O, which is a basic network modifier that decreases the slag’s viscosity. Melting point and viscosity experiments with synthetic HMD slags show that both fly ash and nepheline syenite are viable slag modifiers and a good alternative to the fluoride-based slag modifiers that are common in industry. Fluoride-based slag modifiers lower the slag’s melting temperature and the viscosity of the liquid fraction. However, fluoride leads to health and environmental issues and also decreases the desulphurisation efficiency of magnesium. As a result of global climate change mitigation, the steel industry has to lower its CO2 emission. One new process that can contribute to a lower CO2 footprint of the steelmaking industry is the HIsarna process, which is being developed at Tata Steel in IJmuiden, the Netherlands. Like a BF, HIsarna produces hot metal, but with a 20 % lower CO2 emission. An 80 % lower CO2 emission can even be achieved when using carbon capture and storage or usage, due to the concentrated CO2 off-gas. Compared to a BF, HIsarna produces hot metal with a lower temperature and with lower carbon, manganese and phosphorus concentrations. HIsarna hot metal contains almost no silicon and titanium. However, compared to a BF, HIsarna produces hot metal with roughly 3-4 times more sulphur (the typical sulphur concentration in hot metal is around 0.1 wt%). This means that for HIsarna hot metal more sulphur needs to be removed than for typical BF hot metal. The consequences for desulphurisation of HIsarna hot metal are discussed in Part III of this thesis. Typically, due to cooling, hot metal from a BF is supersaturated in carbon by the time it arrives at the HMD. This carbon supersaturation leads to graphite formation, also known as kish. 
The formed graphite flakes can form a layer between the slag and the hot metal. Earlier research suggested that this graphite layer could hamper the HMD process, as it would block MgS formed in the hot metal, so that it cannot reach the slag phase and form the more stable CaS. This would result in a lower desulphurisation efficiency. Since HIsarna hot metal contains less carbon, this effect could be smaller or even non-existent for desulphurisation of HIsarna hot metal. However, as is explained in Chapter 6, this hampering effect of precipitated graphite on the efficiency of the HMD process is very small, for both HIsarna and BF hot metal. Analysis of plant data shows that there is only a small correlation between expected graphite formation and HMD efficiency. Only heats with a low initial sulphur concentration (below 225 ppm sulphur) showed a significant correlation. This means that there is no significant benefit for desulphurisation of low-carbon HIsarna hot metal, compared to carbon-saturated BF hot metal. The lower temperature and the higher initial sulphur concentration of HIsarna hot metal, compared to BF hot metal, do influence the HMD process, as is discussed in Chapter 7. A literature study, a thermodynamic analysis with FactSage and plant data analysis show that the lower temperature and higher initial sulphur concentration lead to a lower specific magnesium consumption. The lower temperature, typically 50 °C colder than BF hot metal, thermodynamically favours the desulphurisation reaction with magnesium. The higher initial sulphur concentration leads to a higher sulphur activity, which enhances the desulphurisation reactions. However, it should be noted that the higher efficiency caused by the initial sulphur concentration is only valid for the surplus of sulphur, compared to BF hot metal. The total amount of sulphur that has to be removed is still 3-4 times higher for HIsarna hot metal than for BF hot metal. 
Therefore, the total magnesium consumption for desulphurisation of HIsarna hot metal is higher as well. This also means that desulphurisation of HIsarna hot metal will take longer than desulphurisation of BF hot metal, which could lead to the HMD becoming the bottleneck in a steel plant. It is estimated that the oxygen concentration in HIsarna hot metal (~6 ppm) is roughly 5-10 times higher than in BF hot metal (0.5-1 ppm). A high oxygen concentration leads to a lower desulphurisation efficiency. However, since the oxygen concentration in HIsarna hot metal is still low, the expected extra magnesium consumption as a result of the higher oxygen concentration is limited to about 2-3 kg for a 300 t heat. The absence of silicon and titanium in the hot metal will not influence the efficiency of a magnesium-based HMD process. However, a lime-based HMD process, like KR, will have a lower efficiency, since silicon reacts with the oxygen in lime as the calcium reacts with sulphur to form CaS. Furthermore, since HIsarna produces hot metal without slag, an alternative for the carryover slag from the BF needs to be found. A slag based only on the injected magnesium and lime and the formed CaS would be solid at HMD temperatures (see Chapter 3). Therefore, the use of acidic slag additions (like SiO2 and Al2O3) is required, to keep the slag liquid and minimise the iron loss. Still, the higher slag volumes, resulting from the larger amount of sulphur that has to be removed, will lead to a higher iron loss, compared to desulphurisation of typical BF hot metal. Because HIsarna produces, and taps, hot metal continuously with a constant composition and temperature, it is ideal for continuous hot metal desulphurisation. Therefore, at Tata Steel in IJmuiden, the Netherlands, a new continuous hot metal desulphurisation (CHMD) process is being developed. In Chapter 8, this novel CHMD process is introduced. The CHMD process is based on the magnesium-lime co-injection HMD process. 
It uses several reactors in series (process simulations suggest three reactors in series are required to desulphurise typical HIsarna hot metal to typical post-HMD sulphur concentrations), to limit the total reactor volume. The desulphurisation efficiency of the process is increased by optimising the reactor dimensions to a height to diameter ratio of 5:1, whereas a typical hot metal ladle, used for the batch HMD, has a height to diameter ratio of 1.5:1. It is expected that this leads to a reduction in reagent consumption of ~20 %. Furthermore, the continuous nature of the process allows for a foxhole-type slag skimming (separating slag and hot metal by their density difference), which will lead to an estimated 60 % lower total iron loss, compared to the skimming method of the batch HMD process (a remote-controlled skimmer arm, raking off the slag). Based on the cost estimation for iron loss and reagent costs, the cost for desulphurising one tonne hot metal with the CHMD process will be approximately € 2 lower than with the state-of-the-art batch HMD process. However, according to the current calculations, the residence time of the hot metal in the CHMD is 3-4 times longer than in a batch HMD process, leading to a higher temperature loss and, possibly, a higher CO2 footprint, as a lower temperature allows for less scrap being charged at the converter. Given the already lower temperature of HIsarna hot metal, compared to BF hot metal, this is an issue that needs to be solved before the CHMD process can be used in industry. Currently the development of the CHMD process is still in the conceptual design phase. The changes in the global steel industry as a result of the climate change mitigation will not stop after 2030. Finally, in 2050, the steel industry should be CO2-neutral. In Chapter 10, an outlook is given for the expected changes in the steel industry between now and 2050 and its impact on sulphur removal in iron- and steelmaking. 
The amount of carbon used to reduce iron ore will gradually decrease and so will the demand for HMD, as the carbon sources coal and coke are the largest source of sulphur in hot metal. However, it is unlikely that carbon can be fully replaced by hydrogen or electricity. It is expected that in 2050 still a significant amount of steel will be produced via carbon-utilising smelting processes, like HIsarna, in combination with carbon capture and usage. Besides, scrap contains sulphur that needs to be removed as well. It is expected that the share of scrap as a source of iron in the steelmaking industry will increase in the coming decades. Therefore, steel desulphurisation will remain necessary in every steel plant and hot metal desulphurisation will be required as well for the carbon-utilising plants.","Hot Metal Desulphurisation; Steelmaking; Ironmaking; Slag; HIsarna","en","doctoral thesis","","9789464193015","","","","","","","","","Team Yongxiang Yang","","",""
"uuid:f8f6566e-e50a-47e2-b1f9-67503ca1d021","http://resolver.tudelft.nl/uuid:f8f6566e-e50a-47e2-b1f9-67503ca1d021","Integrated transport and energy systems based on hydrogen and fuel cell electric vehicles","Oldenbroek, V.D.W.M. (TU Delft Energy Technology)","van Wijk, A.J.M. (promotor); Blok, K. (promotor); Aravind, P.V. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis presents the design and analysis of future 100% renewable integrated transport and energy systems based on electricity and hydrogen as energy carriers, in which Fuel Cell Electric Vehicles (FCEVs) are used for transport, distributing energy and balancing electricity demand. Passenger cars in Europe are parked on average 97% of the time. They are used for driving only 3% of the time (<300 hours per year). So passenger car FCEVs can be used for energy balancing and electricity generation when parked and connected to the electricity grid, in the so-called Vehicle-to-Grid (V2G) mode. In Europe around 15.3 million passenger vehicles were sold in 2019 [1]. Using the “Our Car as Power Plant” analogy of Van Wijk et al. [2], and assuming 100 kW of future installed electric power per vehicle, this equals 1,530 GW of annually sold power capacity in passenger vehicles. This is more than the existing 950 GW of installed power generation capacity in Europe in 2019 [3]. The theoretical potential to use passenger FCEVs for power production, given the present low usage for driving, thus seems large. Commercially available FCEVs use proton exchange membrane fuel cell systems to generate electricity from oxygen from the air and the hydrogen stored in on-board tanks at 700 bar. In parallel to the fuel cell, a small high voltage (HV) battery pack is connected. The HV battery is used for regenerative braking and provides additional power for acceleration. 
This combination of fuel cell and HV battery can deliver almost every kind of electrical energy service, from balancing intermittent renewables to emergency power back-up. By using both the HV battery and fuel cell of a few up to tens of thousands of aggregated FCEVs in combination with large-scale hydrogen storage, kW to GW-scale power generation and energy storage from seconds to seasons can be achieved.","vehicle-to-grid; hydrogen; fuel cell electric vehicles; integrated transport and energy systems; techno-economic scenario modelling","en","doctoral thesis","","978-94-6384-241-9","","","","","","","","","Energy Technology","","",""
"uuid:98a7f072-7423-4a23-ac9b-8b88540c260d","http://resolver.tudelft.nl/uuid:98a7f072-7423-4a23-ac9b-8b88540c260d","High Accuracy Terrestrial Positioning Based on Time Delay and Carrier Phase Using Wideband Radio Signals","Dun, H. (TU Delft Mathematical Geodesy and Positioning)","Tiberius, C.C.J.M. (promotor); Hanssen, R.F. (promotor); Delft University of Technology (degree granting institution)","2021","Accurate position solutions are in high demand for many emerging applications. Global navigation satellite systems (GNSS), however, may not meet the required positioning performance, especially in urban environments, due to multipath and weak received power of the GNSS signal that can be easily blocked by surrounding objects. To achieve a high ranging precision and improve resolvability of unwanted reflections in urban areas, a large signal bandwidth is required. In this thesis, a terrestrial positioning system using a wideband radio signal is developed as a complement to the existing GNSS, which can provide a better ranging accuracy and higher received signal power, compared to GNSS. In the terrestrial positioning system presented in this thesis, a wideband ranging signal is implemented by means of a multiband orthogonal frequency division multiplexing (OFDM) signal. All transmitters are synchronized by time and frequency reference signals, which are optically distributed through the white-rabbit precision time protocol (WR-PTP). Like in GNSS, the to-be-positioned receiver is not synchronized to the transmitters. Positioning takes place through range measurements between a number of transmitters and the receiver. Time delay and carrier phase are to be estimated from the received radio signal, which propagated through a multipath channel. This estimation is done on the basis of the channel frequency response and using the maximum likelihood principle. 
To determine whether or not reflections need to be considered in the estimation model, a measure of dependence is introduced to evaluate the change of the precision (i.e., variance), and the measure of bias is introduced to assess the bias of the estimator when the reflection is not considered. Also, a methodology is proposed for sparsity-promoting ranging signal design in this thesis. Based on a multiband OFDM signal, ranging signal design comes to sparsely select as few signal bands as possible. Using fewer signal bands for ranging leads to less computational complexity in time delay and carrier phase estimation, while the ranging performance can still benefit from a large virtual signal bandwidth, which is defined by the entire bandwidth between the two signal bands at the spectral edges. It is proposed to use the Cramér-Rao lower bound (CRLB) of time delay estimation, the measure of dependence, and the measure of bias as constraints in ranging performance, and formulate an optimization problem to design a sparse multiband signal.","terrestrial positioning system; time delay estimation; carrier phase estimation; precise positioning; ranging signal design; Multipath","en","doctoral thesis","","978-94-6384-258-7","","","","","","","","","Mathematical Geodesy and Positioning","","",""
"uuid:d37db2c0-cf16-4edf-97ba-aebff35011b5","http://resolver.tudelft.nl/uuid:d37db2c0-cf16-4edf-97ba-aebff35011b5","Conversational Crowdsourcing","Qiu, S. (TU Delft Web Information Systems)","Houben, G.J.P.M. (promotor); Bozzon, A. (promotor); Gadiraju, Ujwal (copromotor); Delft University of Technology (degree granting institution)","2021","Crowdsourcing has become a standard approach for the collection of the human input required by scientists and practitioners alike to execute their experiments, or to train, control, and verify the behavior of their intelligent systems. Despite years of successful research and industrial application, how to improve the engagement and satisfaction of crowd workers with crowdsourcing tasks is still an open research question. In this thesis, we introduce conversational crowdsourcing – a novel crowdsourcing interaction paradigm based on conversational interfaces. We study conversational crowdsourcing, and experimentally evaluate its ability to foster workers’ engagement and satisfaction from four perspectives: conversational crowdsourcing design, improving worker engagement and satisfaction, analyzing the roles of worker mood and self-identification, and applying conversational crowdsourcing for conducting online studies. We describe the design of conversational crowdsourcing and show that conversational crowdsourcing can achieve similar output quality and execution time compared to traditional web-based crowdsourcing. To facilitate our research, we designed and developed TickTalkTurk, a web application that facilitates the design and development of conversational crowdsourcing tasks on popular crowdsourcing platforms. We demonstrate the feasibility of improving worker engagement and satisfaction and show that conversational crowdsourcing can improve worker retention and perceived engagement, which are significantly connected to satisfaction. 
We present a reliable conversational style estimation method and illustrate that style estimation can be a useful tool for facilitating outcome prediction and task assignment.","Crowdsourcing; Conversational agent; Chatbot; Engagement; Satisfaction; Mood; Self-identification; Avatar; Memorability; Health","en","doctoral thesis","","978-94-6423-487-9","","","","","","","","","Web Information Systems","","",""
"uuid:183c7014-a4cb-41b7-a107-0fd4b1a672d7","http://resolver.tudelft.nl/uuid:183c7014-a4cb-41b7-a107-0fd4b1a672d7","Adaptive distributed control of uncertain multi-agent systems in the power-chained form","Lv, Maolong (TU Delft Team Bart De Schutter)","Baldi, S. (promotor); De Schutter, B.H.K. (promotor); Delft University of Technology (degree granting institution)","2021","Power-chained form systems are a generalization of strict-feedback and pure-feedback systems, since integrators with positive odd powers can appear in the dynamics (a chain of positive-odd-power integrators). They are extremely challenging to deal with, as their linearized dynamics might possess uncontrollable modes whose eigenvalues lie in the right half-plane, making standard feedback linearization or standard backstepping methodologies fail. The adding-one-power-integrator technique was proposed to handle power-chained form systems. Progress made for power-chained form systems includes employing universal approximators to handle completely unknown nonlinearities. However, state-of-the-art results on power-chained form systems are mainly focused on the single-agent case, since a direct extension of the existing design to a distributed setting is not very meaningful on account of the following facts: i) the control gain of each virtual control is incorporated into the next virtual control law iteratively, possibly leading to high-gain issues; ii) state-of-the-art results rely on the assumption that the agents’ control directions are known a priori and are available for control design; iii) universal approximators often used in the adding-one-power-integrator procedure inevitably increase the complexity in the sense that extra adaptive parameters have to be updated (i.e. 
extra nonlinear differential equations need to be solved numerically), thus making their distributed implementation difficult.","Distributed control; multi-agent systems; power-chained form","en","doctoral thesis","","978-94-6419-299-5","","","","","","2024-10-01","","","Team Bart De Schutter","","",""
"uuid:1e63fd37-62af-4dcd-9650-6db0ebfa9ff5","http://resolver.tudelft.nl/uuid:1e63fd37-62af-4dcd-9650-6db0ebfa9ff5","Exploring Microbial Diversity: Extending the boundaries of biopolymer production using parallel cultivation","Stouten, G.R. (TU Delft BT/Environmental Biotechnology)","Kleerebezem, R. (promotor); van Loosdrecht, Mark C.M. (promotor); Delft University of Technology (degree granting institution)","2021","Quickly, the show is about to start. Date: 3.5 thousand million years ago, location: planet Earth, event: life. Naturally, life is starting small, even microscopically tiny. Life in the form of microorganisms endures eons of time in which the world changes. They survived, failed, adapted, thrived, and they actually changed the world. They have seen humankind step into the light of day, and they will be there when we see no more.
Microorganisms are the link between the inanimate, mineral planet and the living world. They facilitate the natural cycle of the elements. The CO2 we breathe out is transformed by phototrophic algae to oxygen. The nitrogen in proteins that we eat finds its way to the nitrogen gas in the air, and back into the roots of plants through countless microorganisms. And central to our life is carbon: it is the food that we eat, the oil that we burn, and the plastics that will immortalize humans’ existence.
We are life, we flourish, and like all living things, we are greedy. So greedy that we disrupted the circularity of nature. We are far from the first, nor the most successful, organism to change the face of the earth. Algae made the world aerobic, and the first trees covered the world in meters of indigestible wood for millions of years. And while nature seems to have found a new balance, those algae and trees now form the oil and coal that drive our manic existence.
What differentiates us from those earlier life forms is that we can appreciate that we are running on borrowed time, as we can see the world changing, fast. Over the past century, it has become clear that we are shaping a linear society, predominantly driven by fossil fuels. If we, by contrast, could manage to convert our waste streams back into resources at the same rate that we produce them, that would chime in a new era. And even more profound is that we are living in a world shaped and dominated by microorganisms. We need to start cooperating with them for our health and prosperity, which requires a better understanding of the microbial world. And although we are making significant progress; time is ticking and we could use all the help there is.
This thesis is on how we can explore and utilize 3.5 billion years of help. In the first chapter the vastness, complexity and wealth of the microbial world are introduced. It focusses on a fraction of that wealth, the specific topic of interest: the production of biopolymers by microbial communities. These biopolymers are important building blocks for a circular society, as they can serve as precursors to oil, plastics, food, and specialty materials. Of the many biopolymers in nature, the predominant ones within this thesis are polyhydroxyalkanoates (PHA), which are produced by microorganisms as their equivalent of human fat, and can be used by us to produce bioplastics.
In the second chapter our key contribution to the scientific field of microbial community research is made. A key aspect that is holding back research on microbial communities is the lack of experimental freedom to bring nature to the lab. In this work, we attempt to bring cultivation research into the 21st century with a more flexible biodiscovery cultivation platform. This chapter describes a part of the hardware and software that was developed to significantly assist parallel enrichment research in dynamic conditions. It elaborates on the bioreactor setups of 8 systems, the automation, on-line data processing, and process modelling. We demonstrate a generalized respiration rate reconstruction tool for dynamically operated bioreactors. The setup and tools described here have facilitated over twenty research topics that were conducted during and alongside this Doctoral research.
The third chapter demonstrates how the setup can be used to increase the research intensity of enrichment studies. We investigated the influence of temperature on the enrichment of PHA accumulating microbial communities, which yielded several noteworthy findings. Besides an explanation for the global temperature optimum of 30°C, we identified other competitive strategies in feast-famine enrichment systems, such as fast growth and decay with subsequent growth on cell lysis. Furthermore, we were able to align shifts in microbial function with microbial community shifts, and addressed important issues of reproducibility in microbial community enrichments. The results demonstrate that a rigorous experimental approach involving parallel cultivation allows for unambiguous identification of competitive strategies in microbial communities. And a major improvement with this approach is that we can pinpoint where our knowledge is lacking.
The fourth chapter follows a systematic investigation of a specific surprising observation that was made possible by the close monitoring of the enrichment systems. During a study investigating the influence of pH on the enrichment of PHA accumulating microbial communities (analogous to the temperature study), we noticed markedly different microbial community structure and behavior between enrichments, which seemed solely based on the type of acid used for pH control. We demonstrated that the observed changes were not directly caused by the change in acid used for pH control, but resulted from the difference in corrosive strength of both acids and the related iron leaching from the bioreactor piping. Neither system was iron deficient, suggesting that the biological availability of iron is affected by the leaching process. Our results demonstrate that microbial competition and process development can be affected dramatically by secondary factors related to nutrient supply and bioavailability, and are far more complex than generally assumed in a single-carbon-substrate-limited process.
In chapter five, we investigate a novel enrichment process for PHA accumulating microbial communities. The strict uncoupling in time of the supply of two growth nutrients is investigated. The setup was used to optimize the process by investigating the influence of (i) nitrogen or phosphorous uncoupling from carbon, (ii) increased carbon to nutrient ratios, and (iii) increased exchange ratios. The uncoupling strategy resulted in stable enrichments that achieved 89 wt% (gPHA/gDW) in eight hours, every operational cycle, making this the most PHA-rich production system to date. The proposed strict uncoupling strategy yields stable microbial communities with an unprecedented combination of PHA storing capacity, productivity, product yield, and general applicability for feed streams without nitrogen or phosphate.
Chapter six looks ahead to the future of microbial community research. It explores the collaborative efforts between Wageningen University and Delft University in the 24 million euro UNLOCK project, for which the work in this thesis laid a principal foundation.","Microbial diversity; Microbial cultivation; Enrichment cultures; Storage polymers; Polyhydroxyalkanoates; Dynamic system characterization","en","doctoral thesis","","978-94-91837-42-5","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:9dbe12b5-e77e-4a9a-9c6b-251f5c477e77","http://resolver.tudelft.nl/uuid:9dbe12b5-e77e-4a9a-9c6b-251f5c477e77","Pathogen removal in aerobic granular sludge treatment systems","Barrios Hernandez, M.L. (TU Delft BT/Environmental Biotechnology)","Brdjanovic, Damir (promotor); van Loosdrecht, Mark C.M. (promotor); Hooijmans, CM (promotor); Delft University of Technology (degree granting institution)","2021","This book describes pathogen removal processes in aerobic granular sludge (AGS) wastewater treatment systems. Faecal indicators (E. coli, Enterococci, coliforms and bacteriophages) were tracked in full-scale AGS facilities and compared to parallel activated sludge (CAS) systems. AGS showed similar removals as the more complex CAS configurations. Removal mechanisms investigated in laboratory-scale reactors showed that the AGS morphology contributes to the removal processes. By tracking E. coli and MS2, it was observed that organisms not attached to the granules are predated by protozoa during aeration. 18S RNA gene analyses confirmed the occurrence of bacterivorous organisms (e.g., Epistylis, Vorticella, Rhogostoma) in the system. Particulate material in the feeding stimulated their development, and a protozoa bloom arose when co-treating with (synthetic) faecal sludge (4 % v/v). An overview of the diverse eukaryotic community in laboratory reactors and real-life applications is also provided. The microbial diversity of the influent was different compared to AGS and CAS sludge samples. However, no clear differences were found between them on species level. This study contributes to a better understanding of the mechanisms behind pathogen removals in AGS systems.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","9781032139487","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:25111657-c744-49ef-84f9-3beba5a48e63","http://resolver.tudelft.nl/uuid:25111657-c744-49ef-84f9-3beba5a48e63","An Edgy Journey with Transition Metal Dichalcogenides: From Flakes to Nanopillars","Maduro, L.A. (TU Delft QN/Conesa-Boj Lab)","Kuipers, L. (promotor); Conesa Boj, S. (copromotor); Delft University of Technology (degree granting institution)","2021","The family of transition metal dichalcogenides offers a unique platform for electronic and optical tunability due to the sensitivity to their dimensional configuration, edge terminations, and varying crystal phases. In this thesis we focus on structures based on the transition metal dichalcogenides MoS2 and WS2. We study how different crystal phases and edge structures of these two transition metal dichalcogenides affect their optical, electronic, and structural behaviour, using electron energy loss spectroscopy, energy-dispersive X-ray spectroscopy, and high-resolution spatial imaging in the transmission electron microscope. When possible, we complement our experimental studies with ab initio calculations.","Transition Metal Dichalcogenides; Density Functional Theory; Nanofabrication; Electron Energy Loss Spectroscopy; Transmission Electron Microscopy; Scanning Transmission Electron Microscopy; MoS2 nanostructures","en","doctoral thesis","","978-90-8593-489-9","","","","","","","","","QN/Conesa-Boj Lab","","",""
"uuid:53a0f525-0131-424d-b610-7cd21cbca550","http://resolver.tudelft.nl/uuid:53a0f525-0131-424d-b610-7cd21cbca550","Towards an NMR multiphase flowmeter: Method development and performance evaluation for two-phase flow measurements","Daalmans, A.C.L.M. (TU Delft ChemE/Algemeen)","Mudde, R.F. (promotor); Portela, L. (copromotor); Delft University of Technology (degree granting institution)","2021","There is an increasing demand from the petroleum industry for high accuracy subsea multiphase flowmeters to make the development of difficult-to-access deep-water and ultra-deep-water fields economically viable. Nuclear Magnetic Resonance (NMR) is a very powerful measuring principle, and is one of the few techniques considered to be capable of meeting the required high performance. The work presented in this thesis is a first step in the development of a non-invasive, in-line, full-bore, nuclear magnetic resonance based, multiphase flowmeter without phase separation. The continued development of the investigated system has led to the commercial KROHNE M-PHASE 5000 flowmeter, which is based on an improved prototype. To keep maintenance as low as possible, the flowmeter consists of a permanent polarizing magnet with two measuring probes, operating at 14.1 MHz, at different streamwise positions. Pipelines with an inner diameter of up to 10 cm fit through the bore of the flowmeter prototype. Two NMR measurement concepts have been developed, each with a different underlying velocity measuring principle, to determine the liquid flowrate in a gas-liquid flow: (i) the T1 Relaxation Residence Time (T1RRT) method utilizes the longitudinal relaxation principle to derive the velocity from the intensity of the NMR signal, which is a function of the residence time in the polarizing magnetic field; (ii) the Pulsed Gradient Spin Echo (PGSE) method exploits pulsed magnetic field gradients, to obtain the fluid velocity from the phase shift of a spin echo signal. 
Both flowmetering concepts resolve the fractions of the liquid components identically, from the multi-exponential course of the signal intensity associated with the different polarization lengths of the measuring probes. The measurement concepts have been tested for single-phase flow in 9.85 mm, 34 mm and 98.6 mm diameter pipes and for two-phase flow in a 98.6 mm diameter pipe.","","en","doctoral thesis","","978-94-6332-787-9","","","","","","","","","ChemE/Algemeen","","",""
"uuid:39db65ee-6e96-4dc5-a654-22893e5c8a29","http://resolver.tudelft.nl/uuid:39db65ee-6e96-4dc5-a654-22893e5c8a29","Towards a heat transfer based distance sensor for measuring sub-micrometer separations","Bijster, R.J.F. (TU Delft Computational Design and Mechanics)","van Keulen, A. (promotor); Gerini, Giampiero (promotor); Delft University of Technology (degree granting institution)","2021","In this thesis, a proof-of-principle demonstration is developed that uses the heat flux between a probe and a sample as a proxy for their separation. The proposed architecture uses a probe that consists of a bilayer cantilever with an attached sphere at its free end. The deflection of the cantilever that is caused by the heat input is measured using the optical beam deflection method. To eliminate temperature-dependent effects, the temperatures of the probe and the sample are kept constant. Moreover, a total internal reflection microscope is included to provide an independent measurement of the separation between the probe and the sample. This architecture allows the measurement of the heat flux as a function of only the separation.
An equation is derived that relates the output signal of the instrument directly to the heat flux that is absorbed by the probe. It couples the top-level design parameters to the system output and is used to study and design the separate elements. In addition to the design of the instrument, the research contributes a detailed study of the influences of the microsphere and the microcantilever on the heat flux measurement.
The vast number of pioneering theoretical developments in the area of non-classical gas dynamics, together with the rising number of applications of organic Rankine cycle (ORC) technology, have shaped a new branch of fluid mechanics called non-ideal compressible fluid dynamics (NICFD). This field of fluid mechanics is concerned with flows of dense vapors, whose properties do not comply with the ideal gas model. These types of flows occur in dense vapors, supercritical fluids and liquid-vapor mixtures.
NICFD is encountered in a variety of industrial processes, the most relevant being in the power and propulsion sector, and is of interest for fundamental research. Chapter 2 documents an extensive literature review which covers the main developments in the numerical, theoretical and experimental areas. In particular, the review of current and past experiments was aimed at summarizing the lessons learned. Despite the relevance of NICFD applications, experimental information regarding this type of flow is scarce, due to the challenges inherent in the operating conditions of such experiments.
The main objective of the research documented in this dissertation was to perform accurate measurements of NICFD flows which can then be used to assess the predictive capabilities of state-of-the-art numerical tools. To this end, the design and commissioning of suitable and fully instrumented facilities capable of generating NICFD flows in a multitude of steady, controlled conditions is necessary in order to provide high-quality and well-characterised data. Arguably, state-of-the-art optical techniques are most suited for this goal, together with more conventional temperature, pressure and mass flow rate measurements.
Advanced laser diagnostic techniques such as particle image velocimetry (PIV) are possibly the measurement technique of choice when accurate measurements with a high spatial and temporal resolution are needed. However, the use of PIV or other optical techniques capable of providing local and instantaneous information within the flow is not documented. Therefore, a feasibility study was conducted by means of a simpler experiment: the planar PIV technique was applied to characterise the dense vapor of an organic fluid (D4, a siloxane) stirred within a transparent container. Chapter 3 documents the successful results of this experiment. The optical properties of the dense vapor make PIV possible. Titanium dioxide (TiO2) seeding particles were used to track the low-speed motion of the fluid around a rotating disk. Vector fields of the natural convection flow and of the superposition of natural convection and rotating flow were acquired and studied as exemplary cases. The particles adequately trace the flow, since the calculated Stokes number is 6.5×10⁻⁵. The quality of the experimental data was assessed by means of the particle seeding density and the particle image signal-to-noise ratio (S/N). The results are deemed acceptable in view of the envisaged high-speed flow experiments.
In order to obtain measurement data of high-speed vapor flows in the NICFD regime, new experimental facilities must be conceived, designed, realized and tested. Chapter 4 presents the organic Rankine cycle hybrid integrated device (ORCHID), which was designed, built and commissioned at the Delft University of Technology. The facility can operate continuously and over a wide range of operating conditions. The maximum operating pressure and temperature are 25 bara and 350 °C. The facility has been designed to operate with siloxane MM as the working fluid, but it was numerically verified that it may also be operated with other working fluids such as MDM, MD2M, D4, D5, D6, pentane, cyclopentane, NOVEC649, PP2, PP80, PP90, and toluene. Two test sections allow the ORCHID to be operated either as a supersonic/transonic vapor tunnel or as an ORC turbine test bed. Currently, a supersonic nozzle featuring a throat of 150 mm² with optical access allows gas dynamic experiments to be performed for the validation of numerical simulation codes. A second test section, a test bench for mini-ORC expanders, is being designed and will accommodate a fully instrumented 10 kWe machine; however, machines of any configuration and with a rated power of up to approximately 80 kWe can be tested.
Chapter 5 documents the achievements reached during the commissioning of the ORCHID. The successful commissioning of the setup with MM as the working fluid is detailed and discussed based on the recordings of several test runs, including the start-up and shut-down of the facility. Data were acquired during operation at steady state at the two main operating conditions typical of supersonic nozzle and ORC turbine tests. The operation of the facility is characterized with regard to process stability; moreover, the process variables are assessed for their uncertainties. The correct operation of the nozzle test section was verified with a mass flow rate of fluid of ṁ = 1.15 kg/s, and at a thermodynamic state at the nozzle inlet corresponding to T = 252 °C and P = 18.36 bara. The test section conditions typical of a turbine experiment were T = 275 °C and P = 20.8 bara, with a mass flow rate of ṁ = 0.17 kg/s. All the relevant process variables of the test section are affected by a relative uncertainty lower than 0.6 %.
Chapter 6 reports the results of the first supersonic nozzle experiments. Schlieren images of the MM flow through the two-dimensional converging-diverging nozzle, with the inlet in the NICFD regime, were recorded, together with the static pressure profile along the nozzle. A series of schlieren photographs displaying Mach waves in the supersonic flow were obtained and are documented at two operating conditions, namely, for inlet conditions corresponding to a stagnation temperature and pressure of T₀ = 252 °C and P₀ = 18.4 bara, and to a back pressure of 2.1 bara. Furthermore, static pressure values were measured along the expansion path for operating conditions given by T₀ = 252 °C and P₀ = 11.2 bara at the nozzle inlet and by a back pressure of 1.2 bara. The two inlet conditions of the fluid correspond to compressibility factors of Z₀ = 0.58 and Z₀ = 0.79.
These Mach number values, together with the values of the static pressure along the top and the bottom profile of the nozzle, were used for a first assessment of the capability of evaluating NICFD effects occurring in dense organic vapor flows by comparison with the results of CFD simulations. The outcome of this initial comparison was deemed satisfactory.
Chapter 7 introduces the first steps towards the validation of a CFD solver for non-ideal compressible flows. In particular, an industry-standard validation method was used together with synthetic experimental data (experimental data were not available yet) to illustrate, as an exercise, how the uncertainty of NICFD software can be quantified. The assessment is limited to determining how the uncertainties in model inputs, e.g., fluctuations in boundary conditions and thermodynamic property models, influence the overall accuracy of NICFD simulations. The assessment of the uncertainty of the other sub-models is left for a successive phase of this research program. The validation exercise confirmed the applicability of the proposed method, but also pointed out that the adopted validation metrics should be complemented with additional statistical indicators. The error sources in the designed experiment are identified and all the uncertainties are adequately quantified.
The final chapter summarizes the answers to the research questions listed in the first chapter, above all that the ORCHID facility was successfully commissioned and can generate stable dense vapor flows of organic fluids for both fundamental gas dynamic experiments and ORC turbine testing. The first experiments demonstrate that it can be used to obtain accurate optical and non-optical measurements of supersonic nozzle flows for the validation of CFD codes, including flows in the NICFD regime. An overview of the next phases of the research program is also provided.","Organic Rankine Cycle; transonic and supersonic flow; experiments; simulations; Verification; Validation and Uncertainty Quantification","en","doctoral thesis","","978-94-6366-450-9","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:1b1868a5-da6c-42c0-9bdc-24c4bbfdf7f3","http://resolver.tudelft.nl/uuid:1b1868a5-da6c-42c0-9bdc-24c4bbfdf7f3","The Fourier transform interrogator","Grillo Peternella, F. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); Adam, A.J.L. (copromotor); Delft University of Technology (degree granting institution)","2021","Photonic sensors have recently attracted much attention in both industry and academia. High accuracy, low weight and the possibility of building a large sensor network are key benefits of photonic sensors. Another benefit is the possibility of installing optical sensors in harsh environments where the use of electronic sensors is not feasible: aerospace applications where ionizing radiation is present and gas and oil pipelines are some examples.
Integrated photonics brings new challenges to the interrogation of multiplexed sensors in WDM. Unlike FBG sensors, whose resonance wavelength can be chosen to an accuracy better than 1.0 nm, the resonance wavelength of integrated micro-ring resonators cannot be chosen during the design stage. The main reason is imperfections in the manufacturing process. The fact that the resonance wavelength is unpredictable is a problem for interrogators based on interferometry. Such interrogators perform the demultiplexing and demodulation in different stages: first, a spectrometer separates the optical channels; subsequently, the outputs of the spectrometer are conveyed to interferometers. From the voltages of the photo-receivers connected to the MZI outputs, the signal from the sensors can be demodulated. As the resonance value of the sensors cannot be determined during design, two sensors may have their resonances in the same spectrometer channel. As a result, the demultiplexing step fails, compromising the interrogator's operation.
In Chapter 4 of this thesis, a new interrogation method is proposed. Much of the effort of interferometric interrogators is devoted to separating the spectra of the sensors correctly. In the Fourier Transform Interrogator, the spectrum of all sensors is sent to an array of Mach-Zehnder interferometers (MZI) with different OPDs. Using the output voltages from the photo-receivers attached to the MZIs, we derive a system of non-linear equations whose solution provides the signal from each sensor. The demodulation and demultiplexing steps are performed simultaneously in the Fourier interrogator, which gives the interrogator its unique flexibility. On the other hand, the computational cost is high, since the system of non-linear equations is solved using Newton's method. For each set of voltages sampled over time, a different system of equations is obtained. Chapter 4 leaves some unanswered questions:
Does the system of non-linear equations have a unique solution?
How many solutions are there?
What is the physical meaning of each of the solutions?
Is it possible to solve non-linear systems of equations for fast sensors in real-time?
All these questions are answered in Chapter 5. As a consequence of the new algebraic formulation, it is possible to solve about 1 000 000 algebraic systems of equations in about 10 ns, i.e., allowing the real-time interrogation of high-speed sensors. The interrogator is a candidate for interrogating arrays of ultrasound ring resonator sensors in the tens of MHz range.","Interrogators; Photonic Sensors; Fourier transform spectroscopy","en","doctoral thesis","","9789464234664","","","","","","","","","ImPhys/Optics","","",""
"uuid:4ed9de9b-f351-4319-ba64-5114f78c5df4","http://resolver.tudelft.nl/uuid:4ed9de9b-f351-4319-ba64-5114f78c5df4","Spin dynamics with the interplay of elasticity and radiation in hybrid systems","Zhang, Xiang (TU Delft QN/Blaauboer Group)","Blanter, Y.M. (promotor); Blaauboer, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Originating from the electron’s intrinsic angular momentum, magnetism has endowed various manipulations in both macroscopic and microscopic setups with another degree of freedom. Beyond traditional uses such as storage and sensing, there are numerous applications based on engineering and integrating magnetism into heterostructures and exploiting their susceptibility to external stimuli. The emergent fields of nano-scale spintronics and spin caloritronics, with novel properties, have been intensively studied both theoretically and experimentally. Within those developments, the interactions of atomic spins with electromagnetic waves (photons) and with elastic dynamics (phonons) are of fundamental importance. This thesis is devoted to investigating the interplay of magnetism with electrodynamics and lattice elasticity in several hybrid systems.","Magnetism; Magnon-phonon coupling; Magnon-photon coupling; Spin waves; phase transition.","en","doctoral thesis","","","","","","","","","","","QN/Blaauboer Group","","",""
"uuid:36295f09-923e-4c89-aecc-55994deb2e65","http://resolver.tudelft.nl/uuid:36295f09-923e-4c89-aecc-55994deb2e65","Crumb rubber modified bitumen: Experimental characterization and modelling","Wang, H. (TU Delft Pavement Engineering)","Erkens, S. (promotor); Scarpas, Athanasios (promotor); Liu, X. (copromotor); Delft University of Technology (degree granting institution)","2021","A sustainable pavement, which can minimize environmental impacts through the reduction of energy consumption, natural resources and associated emissions while meeting all performance conditions and standards, is urgently needed to combat climate change. The current scenario of depleting crude oil, reduced quarry zones, and stringent environmental regulations has driven the use of waste materials and by-products in pavement applications. The utilization of crumb rubber from scrap tires for bitumen modification has become a common engineering practice since the last century...
This research aims to address these problems by developing a coherent set of methods for spatiotemporal evaluation of water reuse. This dissertation presents and demonstrates an appropriate framework of concepts and indicators, as well as a number of complementary procedures for quantifying these indicators based on innovative data sources and newly developed algorithms.","water reuse; remote sensing; water accounting; water saving","en","doctoral thesis","","","","","","","","","","","Water Resources","","",""
"uuid:dddca4bd-d083-4138-b835-c9c84d8a99d1","http://resolver.tudelft.nl/uuid:dddca4bd-d083-4138-b835-c9c84d8a99d1","Detecting Single Photons with Superconducting Nanowires","Chang, J. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); Zwiller, Val (promotor); Delft University of Technology (degree granting institution)","2021","In the past decades, generating single photons on demand with well defined quantum states and detecting them after photon-photon or photon-matter interaction are central to the area of quantum optics and quantum information science. The ability to detect light efficiently at the single photon level offers unprecedented opportunities for a wide range of applications, including long-distance quantum key distribution, light detection and ranging, photonic quantum computing, weak light detection for astronomy and bio-imaging.","","en","doctoral thesis","","978-94-6384-216-7","","","","","","","","","ImPhys/Optics","","",""
"uuid:ef4bae0f-0263-472e-a868-f0c4a2fdb908","http://resolver.tudelft.nl/uuid:ef4bae0f-0263-472e-a868-f0c4a2fdb908","On the Painterly Depiction of Materials: An Interdisciplinary Study on the Depiction and Perception of Materials within Paintings","van Zuijlen, M.J.P. (TU Delft Human Information Communication Design)","Pont, S.C. (promotor); Wijntjes, M.W.A. (copromotor); Delft University of Technology (degree granting institution)","2021","The world around us is filled with materials. Our ability to perceive materials visually informs us how to navigate and interact with our environment. It tells us, for example, whether food is fresh, if a chair is strong enough to sit on, how much force to use to pick up a glass, etc. Painters have studied how to depict the world and the materials therein for thousands of years. We believe that the material depictions within paintings can be leveraged into insights for the scientific understanding of material perception. In this thesis, we studied the perception of painterly depictions of materials and aimed to make the study thereof accessible to other researchers with the release of the Materials In Paintings (MIP) dataset. We collected a large set of paintings from museums and galleries. Then, we used an online crowd-sourcing approach to annotate material identity (fabrics, stone, etc.) and gather spatial material segmentations (i.e., “cutting out” the pieces of the painting that depict a material). In the first study, we measured the perception of material attributes (soft, rough, fragile, etc.) across a range of materials and found that painterly materials trigger distinct distributions of perceived attributes; we furthermore compared these distributions to those for photographic materials. In the second study, we continued crowd-sourcing annotations on material identity and material segmentations and combined these into the Materials In Paintings dataset.
In a number of cross-disciplinary demonstrations we presented novel findings across art history, human perception, and computer vision. While these demonstrations are useful in their own right, the main focus here was the release of the dataset. Next, we used the dataset as a source of stimuli for two studies into specific materials. First, for fabrics, we studied the perception of satin and velvet and the effect of presenting either only local or both local and global information, and found that the perceptual distinction between these two fabrics becomes more ambiguous when global information is removed. Furthermore, we showed that local image cues can affect perceptual responses for shininess but not for softness. Lastly, we studied the perception and depiction of pearls by identifying three image features that might trigger the perception of pearliness. In a series of experiments, we confirmed the role of these image features but found that the presence of only one of them, highlights, is already sufficient to trigger the perception of pearliness in naive participants. Conversely, expert participants (art historians or pearl experts) perceived depictions with all three features as more pearly, which implies the existence of visual expertise for pearl perception. All in all, in this thesis we show the benefits of studying material perception through painterly depictions of materials and enable further study with the release of the MIP dataset.","Material perception; material properties; art history; crowdsourcing","en","doctoral thesis","","978-94-6419-317-6","","","","","","","","","Human Information Communication Design","","",""
"uuid:01bc5442-00a4-45ca-ae48-1440ef16f833","http://resolver.tudelft.nl/uuid:01bc5442-00a4-45ca-ae48-1440ef16f833","Winning Data: Designing and testing a game to change civil servants' attitudes towards open governmental data provision","Kleiman, F. (TU Delft Information and Communication Technology)","Janssen, M.F.W.H.A. (promotor); Meijer, S.A. (promotor); Delft University of Technology (degree granting institution)","2021","Data is needed for a government to function, and civil servants generate data that can be opened. However, this data is not always publicly available. Governments open their data to meet societal needs: to increase transparency and accountability and to stimulate participation and innovation. The opening of governmental data can be seen as a source of uncertainty for public servants, or it can even be legally prohibited, depending on how the regulation is interpreted. For instance, open data might be experienced as a burden or as difficult to put into practice, even though opening data can create societal relevance. This research focuses on overcoming behavioral barriers for civil servants to manage data release at the individual level by using a serious game. Open data refers to any data produced by any device or person which is publicly shared for free or at a minimal cost and which can be accessed by anyone. These behavioral barriers for civil servants influence governments’ decisions to make data available to the public. Behavioral barriers are the impediments to governments releasing open data that originate from human behaviors. The literature suggests that behaviors are difficult to measure; therefore, we focus on attitudes, which are measurable through declared perception. Attitude refers to a set of beliefs and feelings and is a common predictor of behavior.
In this research, we use civil servants’ behavioral intention to support open data as a measure of their attitudes, and the change in behavioral intention as a proxy for attitude change. Serious games are game-based interventions designed for goals other than (only) entertaining the players. They offer a safe and controlled environment for experimentation and experiential learning. The research objective of this thesis is to develop and test a game to influence the attitudes of civil servants towards the release of open data by governments, by enabling them to experience the positive and negative sides of open data in the game. Design science research was used to develop prototypes and test a game in a quasi-experimental set-up. Four research questions guided the study: RQ1. What are the behavioral barriers for civil servants to support the opening of governmental data? RQ2. What are the requirements to design a game to change civil servants’ attitudes towards supporting the opening of governmental data? RQ3. Which game design mechanisms enable the change of civil servants’ attitudes towards opening governmental data? RQ4. What are the effects of the open data game on civil servants’ attitudes towards supporting the opening of data? Each research question demanded the application of specific research methods. As the first step, systematic literature reviews were performed in the fields of 1) behavioral barriers to open data provision, 2) games for civil servants, 3) games for open data, and 4) games designed for attitude change. The first literature review was used to answer RQ1, whereas the others aimed at RQ2. For RQ1 (What are the behavioral barriers for civil servants to support the opening of governmental data?), the literature review identified a list of 38 behavioral barriers for civil servants influencing the opening of data.
The behavioral barriers discussed in this thesis should be considered when attempting to change civil servants’ attitudes towards supporting the opening of governmental data. For RQ2 (What are the requirements to design a game to change civil servants’ attitudes towards supporting the opening of governmental data?), three literature reviews were conducted to find game design requirements from previous research. They targeted specific aspects of proven serious games: 1) games for civil servants, to better understand the audience characteristics which could influence gameplay; 2) games using open data content, to inspire metaphors and an operational representation of data release in the game; and 3) games designed to change the attitudes of players, targeting the successful use of games for attitude change. For civil servants, many games exist, whereas for open data provision no games were found. Even though many mechanisms exist in the literature, they did not prescribe an operationalization for an open data game. To evolve towards the most suitable game, we followed an iterative process to better understand how the game could be realized. Games are context-dependent, particularly for our specific case, open data governmental provision. Likewise, the iterative process enabled testing the operationalization of such requirements into game mechanisms. Four prototypes resulted from this game design process. Each designed prototype was evaluated, updating the lists of requirements and mechanisms for the final version of the game. • Prototype 1: Cards for open data debriefing showed that engaging mechanics could help connect players to the open data challenges, but a card game resulted in lower levels of knowledge transfer about open data; • Prototype 2: Solvd, a group debate play-setting, resulted in interactive content from group interactions. However, the game was not entertaining, resulting in a loss of engagement; • Prototype 3:
Job-matching simulator, a decision-making labor-market digital game, helped to map real-life public service data production and use routines. This prototype highlighted the need to represent situations encountered by public servants in reality, including risks and ways to prevent them; and • Prototype 4: Open data office, a role-playing game aimed at engagement and learning for attitude change. Still, it lacked a more precise metaphor for the routines and the office environment. Likewise, playing roles with human players was found to be important for our learning goals, in addition to adjusting the number of players and rounds. The prototypes resulted in the following main requirements for a serious game to influence civil servants’ support for the opening of data: • Requirement 1: Open government data content used in the game should be highlighted; • Requirement 2: The focus should be on a game experience that enables experiential learning; • Requirement 3: Civil servants’ practical knowledge should be reflected in the game; • Requirement 4: The game should be used as a safe environment for experimentation; • Requirement 5: The game setting should be realistic; • Requirement 6: Game dynamics should be organized as a role-playing game; and • Requirement 7: The number of roles, players, and rounds should be limited. Additionally, the literature findings combined with the outcomes of the iterative design cycles, pilot testing, and debriefing made it possible to answer RQ3 (Which game design mechanisms enable the change of civil servants’ attitudes towards opening governmental data?). The final version of the game, named WINNING DATA, operationalized the requirements into mechanisms that enabled players to change their attitudes towards open data. These mechanisms emerged from the design process, in which each prototype debriefing informed the next iteration and prototype.
For instance, the needs for open data content and realism are represented through assets such as forms, files, and demand cards; demand cards express pre-defined routines: service requests. Demands are identified by specific card codes, which enable an automatic scoring system for the game facilitation; service delivery, processed by rolling sets of dice, results in the creation of datasets. Depending on the dice combinations, privacy and security crises can occur, affecting the challenges of the game. The following final list of mechanisms resulted from this process: • Mechanism 1: Dataset description and labeling; • Mechanism 2: Card codes; • Mechanism 3: Pre-defined demands (not random); • Mechanism 4: Forms, files and demand cards; • Mechanism 5: Service delivery goal; • Mechanism 6: Upgrades; • Mechanism 7: Facilitation; • Mechanism 8: Crisis board; • Mechanism 9: Dice as processing machine; • Mechanism 10: Multi-player (with different roles); and • Mechanism 11: Time-limited rounds. Based on these requirements and mechanisms, WINNING DATA was designed as an in-person four-player role-playing game that can be played in a two-hour session. The game was evaluated for its effects on the attitudes of civil servants towards supporting the opening of governmental data. Playing the game consists of five rounds in which participants switch roles. The roles are a citizen, two civil servants, and a manager. The player in the role of citizen requests services from a player in the role of civil servant; the civil servant has to work together with the colleague and the boss to deliver the service. Each service delivered results in a dataset, which is discussed by the team and labeled by the boss. Labeling decisions influence the chances of having a privacy or security crisis in the coming rounds, resulting from specific dice combinations.
Lastly, gameplay, data collection, and statistical analysis were used to answer RQ4 (What are the effects of the open data game on civil servants’ attitudes towards supporting the opening of data?). Our main hypothesis is that the attitudes of civil servants can be changed by using a serious game. From the list of behavioral barriers (RQ1), an initial list of factors influencing civil servants’ attitudes emerged. Four factors were defined as influencing Behavioral Intention, the dependent variable representing civil servants’ attitudes: lack of knowledge, performance expectancy, effort expectancy, and social influence. Explorative testing was conducted to determine which factors are at work and how the game affected them. These factors were hypothesized for testing the game’s effects on civil servants’ attitudes towards supporting open data. All factors were measured using a 33-item 7-point Likert scale questionnaire. The survey was used to measure the players’ attitudes before and after the game was played. The comparison enabled the assessment of the change in their attitudes. In a quasi-experimental set-up, 77 civil servants played the game and filled in the pre- and post-test survey. Another 35 civil servants filled in the survey on two different occasions, without the gaming intervention, as a control group. The data were then analyzed: first, the internal reliability of the factors was checked, followed by explorative testing of the factors that did not load. The resulting factors were organized into a model which included Behavioral Intention as the dependent factor, measuring multiple dimensions of civil servants’ attitudes towards open data. Seven other factors were defined: Data Management Knowledge (DK), Performance Expectancy (PE), Risks (RK), Social Influence (SI), Knowledge of Data Production (DP), Data Sharing Knowledge (DS), and Data Costs (DC).
The eight resulting hypotheses were tested using the 112 completed surveys: Hypothesis 1: Behavioral intention increases after playing the game; Hypothesis 2: The game results in more knowledge about ways to open data; Hypothesis 3: The game results in a better understanding of the expected benefits of opening data; Hypothesis 4: The game decreases expectations of the risks related to making data available; Hypothesis 5: The game reduces civil servants’ perceptions of open data practice difficulties, as exerted by hierarchies and legal frameworks; Hypothesis 6: The game increases civil servants’ knowledge of data production; Hypothesis 7: The game increases civil servants’ knowledge of the possibility of sharing data; and Hypothesis 8: The game increases civil servants’ perception of data provision costs. Through a Wilcoxon signed-rank test, we assessed the main hypothesis and concluded that the game is likely to have a statistically significant effect on the dependent variable of Behavioral Intention. As we did not find significant effects on behavioral intention in the control group, this strengthened our conclusion that civil servants who played the game are likely to have had their attitudes towards open data improved by the game. After that, the seven additional hypotheses about WINNING DATA’s gameplay were tested. The game had a significant positive effect on Risks and Performance Expectancy. Although there were differences in the pre- and post-test scores of Data Management Knowledge, Social Influence, Knowledge of Data Production, Data Sharing Knowledge, and Data Costs, none of them were statistically significant. Our research has limitations resulting from (1) the limited number of participants and the characteristics of their distributions; (2) the absence of alternative strategies to which our results could be compared; and (3) the limited feasibility of more complex statistical analyses given the available sample.
Furthermore, this research (4) could not explore other diverse outcomes, such as a more complex model discussion on the factors influencing civil servants’ attitudes to support the opening of governmental data, which is needed and remains to be done. Additionally, these limitations shed light on possible improvements for new versions of the game. Future research is recommended to test the game with larger samples and with players from more diverse backgrounds and different countries. Applying the same survey questions to passive interventions, such as texts and lectures, can also contribute to comparing the results. The long-term effects of the game were not investigated and are recommended as a further research direction. Another further research direction is the digitalization of the game. Particularly in light of the recent COVID-19 crisis, this is needed, as playing the game with many people in one room is not a good option. Likewise, advancing the model discussions, including more open data elements, and extending the topics to other fields is also recommended by this thesis. In conclusion, the game developed and tested during this project has demonstrated its effect on changing civil servants’ attitudes towards the opening of governmental data. This thesis’s results can be used to design better interventions to make more governmental data available to the public.","Open data; Attitude; Attitude change; Serious Gaming; Serious Game; Open Government; OGD; Game Design","en","doctoral thesis","","978-903610662-7","","","","","","","","","Information and Communication Technology","","",""
"uuid:67cefd79-5a66-46ff-982c-1d0885bf8d2c","http://resolver.tudelft.nl/uuid:67cefd79-5a66-46ff-982c-1d0885bf8d2c","Transport and Morphodynamics in a Fine Sediment Estuary: From Conceptual Understanding to Numerical Modeling","Mathew, R. (TU Delft Environmental Fluid Mechanics)","Winterwerp, J.C. (promotor); Delft University of Technology (degree granting institution)","2021","This dissertation presents a study of fine sediment transport and morphodynamics in estuarine settings using data from the Lower Passaic River (LPR), located in New Jersey, USA. Originally a relatively shallow system, it has been dredged and deepened for navigation purposes from the late-1800s onwards, along with other modifications such as wetland reclamation, shoreline armoring, construction of bridges, etc. The last such dredging occurred several decades ago, and although the subsequent long-term morphological trend has been one of infilling, morphological trends over the short term (inter-annual durations) are more variable, with some years experiencing erosion and others experiencing infilling. Therefore, this dissertation seeks to understand the processes driving the long- and short-term morphological trends and the processes controlling the long-term morphodynamic equilibrium of the estuary. The dissertation approaches this problem by first assessing the small-scale (spatial and temporal) transport processes responsible for morphological evolution over the short term. Subsequently, it assesses the large-scale system dynamics from a morphodynamic perspective and the processes driving the variations thereof. Finally, the information gained from the small- and large-scale assessments is used to support the development and application of a morphodynamic model.
Sediment transport, and consequently morphodynamics, in starved-bed or erosion-limited fine sediment systems is a non-equilibrium process related to the availability of mobile sediment. This defines one time-scale of transport in such systems, that of the tidal period. During such conditions, transport is associated with the dynamics of a thin layer (2-4 mm thick in the LPR) of easily-erodible surficial sediments termed the fluff layer. Based on variations in suspended sediment concentrations that follow the oscillatory tidal currents, an analytical method referred to as the entrainment flux method for quantifying fluff layer erodibility (specifically, the critical shear stress for erosion and the erosion rate coefficient) was formulated and applied. The results of the entrainment flux method are analogous to the erosion data used to formulate the well-known standard linear erosion formulation; the inferred erosion properties are also comparable to direct measurements of erodibility on sediment samples using a Gust Microcosm. The favorable comparison with the direct measurements suggests that the entrainment flux method can be used to quantify the erodibility of the fluff layer in such systems.
Further to the various time-scales of transport in fine sediment systems, another time-scale is that spanning episodic scouring events. In the LPR, such scouring conditions are primarily associated with high river-flow events occurring every few years. During such conditions, depending on river flow-rate, erosion can extend beyond the fluff layer and up to tens of centimeters in the bed; consequently, sediment dynamics during such conditions is dependent on the fluvial forcing. However, during non-event conditions, sediment dynamics are controlled by barotropic and baroclinic circulation. In order to understand and quantify the dynamic impact of the various forcings on transport, an extensive dataset consisting of suspended sediment fluxes, inter-annual morphological change, sediment erodibility, and a numerical hydrodynamic model was analyzed. The former two datasets were used to develop an understanding of sediment dynamics over the full range of hydrologic conditions, and the latter two datasets were used to interpret the system behavior. Subsequently, a conceptual picture was developed, one that classifies the instantaneous morphological status of the system into three regimes dependent on river flow --- under Regime I the system imports sediments, under Regime II the system exports sediments by flushing the fluff layer, and under Regime III the system exports sediments by scouring the less-erodible strata underneath the fluff layer. Regime III is relevant for the long-term morphodynamic equilibrium of the estuary by providing a mechanism that scours and exports sediment accumulated under Regime I conditions. Limited information from the literature suggests that such a conceptualization of sediment dynamics may be common to estuaries characterized by starved-bed transport. 
These regimes also imply that transport in such systems depends on the time-history of river flow and the long-term morphological progression of the system, i.e., the system develops a memory (represented by the availability of mobile sediment) that influences subsequent morphological response.
The conceptual and quantitative information on transport and sediment dynamics in the LPR was used as the basis for the development of a process-based morphodynamic model. Key processes of relevance in fine sediment settings were formulated and parameterized in the model. Specifically, these include sediment mobility considerations that lead to erosion-limited transport, either due to armoring effects or decreasing sediment erodibility with depth in the bed. The model framework also includes morphological upscaling using the Morfac approach, with specific formulations and considerations relevant for morphodynamics in fine sediment settings. Model performance was assessed against various metrics including suspended sediment concentrations and fluxes, and short- and long-term morphological change. Although the model does not capture measured morphological response at local scales over the short term, it predicts the large-scale spatial and temporal (river flow-dependent) short- and long-term morphological trends of the system. The model was subsequently applied to assess the long-term morphodynamic evolution of the estuary in response to changes in various forcings, with results that are conceptually and theoretically explainable. The results support the application of the morphodynamic model using Morfac for studying the long-term morphodynamic evolution of such fine sediment systems.
The overall conceptual findings and the analytical and numerical methods developed in this dissertation are generally applicable to fine sediment systems characterized by starved-bed conditions. For instance, features such as the presence of a fluff layer and its relatively high erodibility, and transport dynamics modulated by river flow, have been observed in other systems as well. Similarly, concepts of sediment mobility and erosion-limited transport are also well known in the literature. This dissertation seeks to add to the body of knowledge for such systems by formulating a new method for quantifying the erodibility of the fluff layer, by presenting a conceptualization of sediment dynamics over the full range of hydrological conditions, by presenting a morphodynamic model framework that accounts for sediment mobility and erosion-limited transport, and by extending the applicability of the Morfac approach to fine sediment settings.","Fluff layer; erodibility; transport regimes; morphodynamics; MORFAC; morphological upscaling","en","doctoral thesis","","978-0-578-97186-5","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:1bf74319-3be4-410e-a5d1-c8c8060e1957","http://resolver.tudelft.nl/uuid:1bf74319-3be4-410e-a5d1-c8c8060e1957","The interplay between wind and clouds in the trades","Helfer, K.C. (TU Delft Atmospheric Remote Sensing)","Siebesma, A.P. (promotor); Nuijens, Louise (copromotor); Delft University of Technology (degree granting institution)","2021","Cumulus clouds ('fair-weather clouds') form as a result of atmospheric convection and have vertical extents between a few hundred metres (humilis species) and several kilometres (congestus species). They are a major source of uncertainty in the estimation of climate sensitivity by climate models. In order to reach more agreement in cloud changes due to global warming as predicted by different climate models, a better understanding of the physics of these clouds is needed. Shallow cumulus clouds are particularly common over the oceans of Earth's trade-wind regions, which are situated roughly between the 10° and 30° parallels on both hemispheres and are characterised by steady easterly surface winds. These winds are part of the Hadley cell, a large-scale circulation system in which air flows away from the equator at high altitudes and towards the equator near the surface. As a consequence, vertical shear (i.e. vertical differences in wind speed and direction) is common in this region. While recent studies have shown that (surface) wind speed is an important predictor of cloudiness in this region, little work has been done to elucidate how shear affects clouds. Vice versa, clouds also affect the wind by vertically transporting momentum. While this convective momentum transport (CMT) undoubtedly plays an important role in the force balance that sets the trade winds, little is known about the details of how CMT sets the vertical structure of the wind and about the spatial scales of the momentum-transporting eddies. In this thesis, more light is shed on the bidirectional interaction between shallow cumulus convection and the wind.
Particular focus is put on the effect of wind shear on convection and on the different spatial scales (convective and turbulent) at which convection affects the wind at different heights. To this end, results from numerical large-eddy-simulation (LES) experiments are utilised in this thesis. Due to its fine horizontal resolution (hundreds of metres and less), LES is able to resolve clouds and the largest turbulent eddies explicitly. This leads to a high degree of realism in the simulation. Together with the possibility of artificially simplifying the experimental set-up, as well as the completeness of the output (in terms of time, space and quantities), this makes LES the ideal tool for understanding physical mechanisms in the atmosphere. To identify and understand the effect of wind shear on cumulus convection, LES experiments were carried out in which typical conditions of the trades were simulated, while the amount of wind shear was systematically varied. In these idealised LES, vertical wind shear effectively limits the deepening of trade-wind convection. Several mechanisms are responsible for this, which depend on the direction of the shear vector (vertically decreasing or increasing wind speed) as well as the altitude at which shear is present. A situation with easterly surface winds that weaken with height and eventually turn westerly is referred to as backward shear, and the opposite situation with easterlies that strengthen with height is called forward shear. Different directions of wind shear cause different surface winds due to CMT, which in turn affect the surface evaporation: Faster surface winds occur in the presence of forward shear and lead to stronger evaporation of sea water, resulting in deeper convection. Forward shear in the subcloud layer also leads to a spatial separation of precipitative downdrafts and emerging updrafts, as clouds move faster than their subcloud-layer roots; this is favourable for convective development.
Conversely, under backward shear, the surface evaporation is weakened and precipitative downdrafts interfere with updrafts, hindering convective deepening. However, once clouds grow to sufficient depths, they may produce precipitation so strong that the associated downdrafts spread out laterally near the surface, forming a distinct circular region of cold air, a cold pool. The spreading of this cold pool can cause uplift at its edges, triggering new convection. Backward shear limits the triggering of such secondary convection at cold-pool fronts, while forward shear facilitates it. Finally, shear of any direction in the cloud layer weakens cloud updrafts through an enhanced downward-oriented pressure perturbation force. The limiting effect of wind shear on cloud depth also affects the thermodynamic properties of the cloud layer: The relative humidity is larger and its decrease near the trade-wind inversion is more distinct if clouds are shallower. Large-domain LES hindcasts of specific days during the NARVAL measurement campaigns (which took place in December 2013 and August 2016 in the North Atlantic trades) give a uniquely realistic and complete view of the momentum balance of the trade winds. The combined effect of advection resolved by the model — which here is interpreted as CMT — and unresolved small-scale turbulence is to decelerate the wind in a layer that extends from the surface up to a height of about 2 km in winter and 1 km in summer. However, the role of each term in the balance depends on the altitude. CMT itself acts to accelerate near-surface winds; it is only due to strong small-scale turbulence that there is still an overall frictional force at this height. Halfway into the subcloud layer, CMT starts to act as a frictional force. This friction is strengthened by small-scale turbulence from cloud base upwards and quickly diminishes with height.
Thus, the cumulus clouds themselves do not introduce significant friction at the altitude where the zonal trade-wind jet resides, which coincides with cloud base. In fact, combined with momentum transport against the wind gradient (counter-gradient momentum transport), they may help to sustain this jet. Overall, wind shear appears to be an important player in setting the typical structure of the trade-wind atmosphere by modulating the depth of convection and may thus even affect cloud-radiative effects. Conversely, convection and turbulence give rise to an overall frictional force on the trade winds. CMT alone acts to accelerate the winds near the surface, which may weaken the Hadley circulation, while in the cloud layer, CMT hardly affects the wind.","shallow convection; cumulus; wind shear; trade winds; momentum budget; convective momentum transport; large-eddy simulation","en","doctoral thesis","","978-94-6416-719-1","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:64a8aef7-91ad-420a-8c72-2766afe5c276","http://resolver.tudelft.nl/uuid:64a8aef7-91ad-420a-8c72-2766afe5c276","Challenges of prefabricated housing in China: Supply chain, Stakeholders, and Transaction costs","Wu, H. (TU Delft Housing Quality and Process Innovation)","Visscher, H.J. (promotor); Straub, A. (promotor); Qian, QK (copromotor); Delft University of Technology (degree granting institution)","2021","Recently, the implementation of prefabricated housing (PH) has become prevalent in China to achieve sustainability while ensuring green construction, innovative products, and higher quality. However, numerous challenges arise, such as cost overruns, inexperienced workers, and inefficient management processes. High transaction costs (TCs) occur in the PH project supply chain, since additional effort is required to overcome these challenges. This study aims to gain insights into the TCs of PH and investigate strategies for minimizing them, thus smoothing the development process of PH projects. Three key elements have been addressed throughout the thesis: supply chain, stakeholders, and transaction costs. A four-step research approach is employed to uncover the TCs in the PH supply chain, collect the stakeholders’ perceptions, investigate the causes of TCs, and explore decisions for reducing TCs. This thesis identifies three types of TCs in Chinese PH projects by their nature: due diligence costs, negotiation costs, and monitoring and enforcement costs. Private stakeholders in China’s PH industry focus their attention mainly on TCs related to the specificity of prefabrication. Simple and joint strategies are provided to reduce the benefits they lose to unexpected TCs.
Additionally, the value of governmental TCs for reducing the TCs of PH has been revealed, which inspires and supports policymakers in developing a healthy policy environment.","prefabricated housing; Transaction costs; Stakeholders' perceptions; Supply chain","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-462-2","","","","","","2023-09-18","","","Housing Quality and Process Innovation","","",""
"uuid:48e47c01-34f3-41e3-88e3-1180aecfec4b","http://resolver.tudelft.nl/uuid:48e47c01-34f3-41e3-88e3-1180aecfec4b","Decision-making Support for Opening Government Data","Luthfi, A. (TU Delft Information and Communication Technology)","Janssen, M.F.W.H.A. (promotor); Crompvoets, Joep (promotor); Delft University of Technology (degree granting institution)","2021","Government institutions collect and produce an extraordinary number of datasets to conduct and execute their programs and agendas. The various types of datasets collected by governments can increase transparency and accountability, improve citizen engagement, and create value-added services for the public. Through Open Government Data (OGD) initiatives, Non-Government Organisations (NGOs), private agencies, business enablers, data analysts, researchers, civil societies, and other open data stakeholders can take advantage of disclosed government datasets. Despite its significance, the decision-making process to disclose government datasets has received limited attention and encounters several challenges. Although numerous datasets have been published, many remain undisclosed. Government institutions face several challenges in deciding to open datasets. First, governments have not systematically analysed datasets to identify the benefits and disadvantages of opening them. Decision-makers, policy-makers, civil servants, and administrative officers do not know how to balance the advantages and disadvantages of opening datasets. Second, stakeholders with various backgrounds may have different objectives and interests in analysing and disclosing datasets. Third, because the possible disadvantages of opening datasets are easier to grasp than the potential benefits, the risk-avoiding culture in government shifts attention away from those benefits. As a result, datasets are kept undisclosed.
Furthermore, the stakeholders involved in the decision-making process to open data, such as politicians, executive boards, decision-makers, civil servants, data analysts, and societies, all play essential roles and have different objectives for opening and using the datasets. For example, some decision-makers might have the authority to publish or keep a dataset closed. Some public servants might be risk-averse, whereas others might open datasets without considering possible negative consequences. As a result, the decision-making process becomes fuzzy, and the objectives of disclosing data are not reached. The different roles and interests of the heterogeneous actors within the internal government organisation might create uncertainty and delay the decision-making process. Although there are guidelines, there are no decision-making tools to help governments decide to open their datasets. On the governments’ side, the potential disadvantages might easily dominate the advantages. It is much easier for decision-makers to keep a dataset closed than to accept the disadvantages of releasing it. The lack of insight and expertise in estimating the potential advantages and disadvantages of opening data can also lead to uncertainty, which might result in avoiding the disclosure of datasets. Therefore, this research aims to develop Decision-making Support for Opening Government Data (DSOD). The DSOD provides a systematic approach to deciding whether to open datasets. To achieve the objective of this research, we followed the Design Science Research (DSR) approach.
The DSR approach resulted in a prototype of the DSOD as a design artefact, which was demonstrated to the stakeholders.","decision-making support; decision tree analysis; open data; open government data; stakeholders; advantages; disadvantages; costs; benefits; risks; Bayesian-belief networks; fuzzy multi-criteria decision making","en","doctoral thesis","","978-94-6384-244-0","","","","","","","","","Information and Communication Technology","","",""
"uuid:6ced7129-362c-4a05-91eb-4e9b0e3924b3","http://resolver.tudelft.nl/uuid:6ced7129-362c-4a05-91eb-4e9b0e3924b3","Reworking land reform: A credibility approach to property rights in China's forest sector","Krul, K. (TU Delft Organisation & Governance)","Correljé, A. (promotor); Hermans, L.M. (copromotor); Delft University of Technology (degree granting institution)","2021","Why do land reforms rarely achieve their desired effects? This dissertation posits that a key to answering this question lies in a closer understanding of the specific workings of property rights. The idea is developed empirically in China’s forest sector, where one of the world’s largest land-reform undertakings in modern times was initiated under the Collective Forest Tenure Reform. The study offers a credibility approach to focus on the relations between property rights and their embedded political, legal, and social structures. Three phases of reform are selected for further empirical investigation: The establishment, enforcement, and exercising of property rights. The dissertation empirically demonstrates how each phase is critical for the functioning and credibility of reform objectives, and ultimately in influencing socioeconomic development in the Chinese forest sector and beyond.","Land reform; Property rights; Institutional economics; Credibility; Natural resource management","en","doctoral thesis","","978-94-6384-255-6","","","","","","","","","Organisation & Governance","","",""
"uuid:933b37d4-7f00-4070-becc-9c462bf9d8df","http://resolver.tudelft.nl/uuid:933b37d4-7f00-4070-becc-9c462bf9d8df","Cavity-enhanced quantum network nodes in diamond","Ruf, M.T. (TU Delft QID/Hanson Lab)","Hanson, R. (promotor); Taminiau, T.H. (copromotor); Delft University of Technology (degree granting institution)","2021","With their ability to process and transfer quantum information, large-scale entanglement-based quantum networks could be at the heart of a new age of quantum information, enabling fundamentally new applications such as distributed quantum computation, quantum communication, and quantum enhanced sensing. Due to their long spin coherence, controllable local qubit registers and optically active spins, color centers in diamond are prime candidates for nodes of such a network, and have enabled some of the most advanced quantum network demonstrations to date. These demonstrations include the distribution of a 3-node GHZ state across a quantum network, entanglement swapping, entanglement distillation, and memory-enhanced quantum communication. To move beyond current proof-of-principle networks, a further increase of entanglement generation rates is crucial. This thesis presents theoretical and experimental work on enhancing the spin-photon interface of color centers in diamond to achieve this goal, making use of the Purcell effect. The discussed work follows two main directions: embedding of color centers in open, tuneable Fabry-Perot micro-cavities, and in all-diamond photonic crystal cavities.
First, we describe theoretical and experimental progress towards cavity-enhanced quantum networks based on nitrogen-vacancy (NV) centers in diamond. Due to their first order sensitivity to electric fields, we choose to embed NV centers in microns-thin diamond membranes that can be integrated into open fiber-based micro-cavities. We develop analytical methods to optimize the design of such open cavity systems for maximum Purcell enhancement of embedded color centers. We demonstrate a method to fabricate optically coherent NV centers in microns-thin diamond membranes, and use such structures to demonstrate the resonant excitation and detection of coherent, Purcell enhanced NV emission. A theoretical model in excellent agreement with our results suggests our system can improve entanglement rates between distant NV centers by two orders of magnitude with near-term improvements to the setup.
Second, we describe progress towards an efficient spin-photon interface of group-IV color centers in diamond by coupling them to photonic nanostructures, allowing for large-scale integration. Due to their first-order insensitivity to electric fields, group-IV color centers can be brought in close proximity to surfaces (~ 100 nm), allowing for sub-wavelength mode volumes and thus very high Purcell factors. We numerically optimize photonic crystal cavity designs to maximize the Purcell enhancement of embedded emitters, test the robustness of our designs to real-world fabrication imperfections, and devise a method to efficiently interface nanophotonic structures. We then proceed to fabricate all-diamond photonic crystal cavities, making use of a dry etching technique that is selective to different crystallographic directions, and characterize the resulting optical quality factors. Finally, we marry the developed fabrication methods to fabricate microns-sized diamond platelets that can be transferred from a holding frame in a controlled fashion. This capability could be crucial for the realization of hybrid photonic circuits that we expect to be at the heart of future large-scale quantum networks.","","en","doctoral thesis","","978-90-8593-483-7","","","","","","","","","QID/Hanson Lab","","",""
"uuid:6a9954ba-1a15-4aaa-93f8-b3b49aa55f96","http://resolver.tudelft.nl/uuid:6a9954ba-1a15-4aaa-93f8-b3b49aa55f96","Single-cell Analysis from the perspective of how to Interact, Identify and Integrate cells","Abdelaal, T.R.M. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Mahfouz, A.M.E.T.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Single-cell technologies have emerged as powerful tools to analyze complex tissues at single-cell resolution, resolving the cellular heterogeneity within a tissue through the discovery of different cell populations. Over the past decade, single-cell technologies have greatly developed, allowing the profiling of various molecular features including genomics, transcriptomics and proteomics. These high-throughput technologies produce datasets containing thousands to millions of cells in a single experiment. These large, high-dimensional datasets pose several challenges for data analysis. These challenges can be divided into three categories: interaction, identification and integration. Interaction refers to the visual exploration and interactive analysis of the data, identification refers to the definition of the identity of each single cell, while integration deals with the combination of different molecular information from different datasets. In this thesis, we introduce several computational methods addressing these three challenges to improve the analysis of single-cell data. Regarding interaction, we focused on developing scalable methods that can analyze datasets of millions of cells and thousands of features within workable time frames. We improved the scalability of both clustering and visualization of single-cell data by summarizing the data using a hierarchical representation.
To improve the identification of cells, we make use of the large number of annotated datasets available nowadays, and identify cell populations present in a single-cell dataset using classification methods instead of clustering the data. These classification methods can be trained using the previously annotated datasets. We benchmarked a large number of different classification methods and, based on this analysis, propose to use simple linear classifiers, since they have better performance and scale better to larger datasets. We applied this linear classification to single-cell mass cytometry data to automatically identify cell populations when comparing two cohorts of colorectal cancer patients. To integrate single-cell multi-omics data, we focused on extending the number of measured features to overcome current technological limitations. For single-cell mass cytometry, we integrated different panels measured from the same biological sample, resulting in an extended number of protein markers per cell. Downstream analysis of this data revealed new cell subpopulations showing a more fine-grained cellular heterogeneity. We also extended spatial single-cell transcriptomic data by integrating it with scRNA-seq data that lacks the spatial localization of the cells. Our proposed integration generates whole-transcriptome spatial data, which makes it possible to predict spatial expression patterns of genes (in silico) that are not originally measured in the spatial data. Taken together, this thesis presents several computational methods that aid and improve single-cell data analysis, increasing our insight into molecular heterogeneity.","Bioinformatics; Machine Learning (ML); Single-cell; Interactive data analysis; Cell type identification; Data integration","en","doctoral thesis","","978-94-6423-384-1","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:acac18a9-0dc1-46dc-8abb-045140a07b51","http://resolver.tudelft.nl/uuid:acac18a9-0dc1-46dc-8abb-045140a07b51","The Architecture of CubeSats and PocketQubes: Towards a Lean and Reliable Implementation of Subsystems and Their Interfaces","Bouwmeester, J. (TU Delft Space Systems Engineering)","Gill, E.K.A. (promotor); Menicucci, A. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis provides an innovative architecture for CubeSats and PocketQubes to improve their performance and reliability. CubeSats and PocketQubes are standard satellite form factors composed of one or more cubic units of 10 cm and 5 cm respectively. It is found that the current modular subsystem approach and the electrical interfaces are not optimal in terms of performance and use of technical resources. Reliability is also a concern for CubeSats: only 35% achieve full mission success. The available literature provides no comprehensive studies on these matters, and there is thus a gap in knowledge to be filled. The objective of this study is to identify and quantify the performance and reliability issues related to the physical arrangement of subsystems and the electrical interfaces, and to develop an innovative bus architecture which is reliable, flexible and allows for increased performance. The overarching research question is: “Which satellite bus architecture provides a reliable solution to the needs and constraints of a CubeSat and a PocketQube mission?”","CubeSat; PocketQube; interface; architecture; satellite subsystem; reliability; redundancy; testing","en","doctoral thesis","","9789463664325","","","","","","","","","Space Systems Engineering","","",""
"uuid:e0fa7522-0380-4ec9-9bc6-7568564b72bf","http://resolver.tudelft.nl/uuid:e0fa7522-0380-4ec9-9bc6-7568564b72bf","A thousand needles in a haystack: The search for invading DNA sequences by the CRISPR immune system","Vink, J.N.A. (TU Delft BN/Stan Brouns Lab)","Brouns, S.J.J. (promotor); Hohlbein, Johannes (copromotor); Delft University of Technology (degree granting institution)","2021","Viruses are, as we have seen over the past year, very proficient at invading a host and subsequently reproducing rapidly. Our adaptive immune system, after a first encounter with a virus, can store information about the outside protein shell and use this information to destroy the virus in later encounters. Bacteria have an adaptive immune system that works on the nucleic acid level (DNA/RNA). The right functioning of the system demands that a fragment of a virus is incorporated into the bacterial genome correctly (adaptation). Subsequently, proteins carrying copies of this fragment have to find and destroy the virus quickly enough after it enters the cell (interference). An obstacle in the interference process is that the cell is filled with host DNA, which has to be scanned to differentiate it from viral DNA. How bacteria are still able to find these viruses fast enough is the subject of my thesis.","","en","doctoral thesis","","978-90-8593-486-8","","","","","","","","","BN/Stan Brouns Lab","","",""
"uuid:09d84cc1-27e2-4327-a8c7-207a75952061","http://resolver.tudelft.nl/uuid:09d84cc1-27e2-4327-a8c7-207a75952061","Internal processes in hydrological models: A glance at the Meuse basin from space","Bouaziz, L.J.E. (TU Delft Water Resources)","Hrachowitz, M. (promotor); Savenije, Hubert (promotor); Delft University of Technology (degree granting institution)","2021","Contemplating the Meuse or any other river of the world, one may wonder about the journey of rain in becoming river. This fascinates hydrologists, as they develop theories to understand movement, storage and release of water through the landscape across climates. These theories are translated to hydrological models, which describe the complex reality in a simpler way. Models are then used to predict the hydrological cycle for the nearby or long-term future. This thesis aims to assist the Dutch Ministry of Infrastructure and Water Management in improving the reliability of hydrological modeling of the Meuse basin for operational and policy applications. Using in-situ and remote-sensing data, the value of representing additional processes in models is explored, as well as the creative use of additional data to improve hydrological predictions. First, water balance data is used to identify the potential presence of intercatchment groundwater flows (Chapter 3). These underground flow paths cross topographic catchment boundaries and mainly play a role in headwater catchments (< 500 km2) of the Meuse basin, which are underlain by productive aquifers. Representing this flux as a preferential threshold-initiated process improves low and high flow model performance and increases the consistency between modeled and remote-sensing estimates of actual evaporation. 
Besides the importance of quantifying the long-term hydrological partitioning of precipitation into streamflow, evaporation and potentially intercatchment groundwater flows, another key element of the hydrological response is the amount of water available in the root zone of vegetation. The temporal dynamics of root-zone soil moisture control how much more water can be stored in the soil and how much water is available for transpiration. In Chapter 4, meaningful estimates of root-zone soil moisture are inferred from satellite observations of near-surface soil moisture, by establishing a link between the catchment-scale root-zone storage capacity and the Soil Water Index. Interestingly, hydrological models with different internal process representations of root-zone soil moisture, evaporation, snow and total storage at the catchment scale may lead to a similar aggregated streamflow response (Chapter 5). This implies that models are not necessarily providing the right answers for the right reasons, as they cannot simultaneously be close to reality and different from each other. To circumvent the uncertainty of process representation, which is inherent to hydrological science, the use of multiple model structures is advocated for operational and policy applications. Nonetheless, testing the consistency between modeled hydrological behavior and independent remote-sensing data can foster model development and lead to better models. Finally, we move beyond the use of historical in-situ and remote-sensing data to predict the long-term hydrological behavior of the Meuse basin under projected global warming (Chapter 6). If environmental conditions change, it is reasonable to also assume ecosystem adaptation in response to climate change and a potential natural and/or anthropogenic shift in dominant species across the landscape.
Non-stationarity in the representation of hydrological systems is introduced in a process-based model with three hydrological response units to account for the spatial variability of hydrological processes. More specifically, we adapt the root-zone storage capacity parameter using the information contained in the projected climate data. This is an important step forward in the great challenge of hydrological predictions under change. Despite data uncertainties and a lack of data at the required temporal and spatial resolutions, many possibilities are at hand with what is currently available to develop new theories, test and improve hydrological models. Requiring creativity, this is a beautiful challenge to further unravel the mysteries of the hydrological landscape.","hydrological modeling; Meuse basin; root-zone storage capacity; remote sensing; states and fluxes; intercatchment groundwater flow; Climate change; Soil Water Index","en","doctoral thesis","","978-94-6421-419-2","","","","","","","","","Water Resources","","",""
"uuid:015bbf35-5e29-4630-b466-1a29d4c5bfb3","http://resolver.tudelft.nl/uuid:015bbf35-5e29-4630-b466-1a29d4c5bfb3","Accelerating finite element analysis using machine learning","Ghavamian, F. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Simone, A. (promotor); Delft University of Technology (degree granting institution)","2021","We study the acceleration of the finite element method (FEM) simulations using machine learning (ML) models. Specifically, we replace computationally expensive (parts of) FEM models with efficient ML surrogates. We develop three methods to speed up FEM simulations. The primary difference between these models is their degree of intrusion into the FEM source code. Here, we enumerate them from the most to the least intrusive. In the first contribution, we tackle two bottlenecks of a FEM model equipped with a viscoplastic constitutive equation namely, solving the linear system of equations and evaluating the force vector. To tackle the former, we use a proper orthogonal decomposition (POD) method. And we tackle the latter with a discrete empirical interpolation method (DEIM).We observe that DEIM does not effectively speed up such a highly nonlinear FEM model. As a remedy, we divide the time domain into subdomains using a clustering algorithm. Then we construct a set of DEIM points for each cluster. By doing so, we manage to increase the efficiency of the POD-DEIM scheme. We, however, observe that the POD-DEIM scheme is sometimes unstable. The source of this instability is, to the extent of our knowledge, an open question. In the second contribution, we consider a FE2 scheme. The micro model is a FEM model equipped with a viscoplastic constitutive equation. The evaluation of the micro model is the computational bottleneck in this framework. Therefore, we develop a recurrent neural network as a surrogate for the micro model. 
In this contribution, we also propose a simple but effective sampling technique to collect stress-strain data points, on which the RNN model is trained. We also discuss how the RNN model becomes inaccurate when extrapolating; for these scenarios, we discuss how to improve the RNN by collecting more data and retraining. In the third contribution, we develop a surrogate for the entire FEM simulation of a multi-physics problem. Specifically, we consider the FEM simulation of the electrochemical-mechanical interactions in a Li-ion battery. We propose a variant of the convolutional neural network (CNN), namely the HydraNet. The HydraNet takes the geometry of the battery and predicts all solution fields of the FEM model. Solution fields are outputs either of the solver or of the post-processing. The HydraNet accepts inputs in the form of image-like fields, and we discuss how to encode the geometry of the battery into a set of image-like fields. We argue that the degree of intrusion of these methods into the FEM source code is inversely related to their industrial applicability. As a result, we believe that the first method (POD-DEIM) will mostly remain an academic contribution, while the other two could have potential industrial applications. The central use case of these methods is in multi-query applications such as uncertainty quantification, design, and real-time simulation.","machine learning; deep learning; finite element analysis","en","doctoral thesis","","","","","","","","","","","Applied Mechanics","","",""
"uuid:0943e030-7486-4ee6-8e7e-b35d02d528b0","http://resolver.tudelft.nl/uuid:0943e030-7486-4ee6-8e7e-b35d02d528b0","Algorithms for Efficient Inference in Convolutional Neural Networks","Zhu, B. (TU Delft Computer Engineering)","Al-Ars, Z. (promotor); Hofstee, H.P. (promotor); Delft University of Technology (degree granting institution)","2021","In recent years, the accuracy of Deep Neural Networks (DNNs) has improved significantly because of three main factors: the availability of massive amounts training data, the introduction of powerful low-cost computational resources, and the development of complex deep learning models. The cloud can provide powerful computational resources to calculate DNNs but limits their deployment due to data communication and privacy issues. Thus, computing DNNs at the edge is becoming an important alternative to calculating these models in a centralized service. However, there is a mismatch between the resource-constrained devices at the edge and the models with increased computational complexity. To alleviate this mismatch, both the algorithms and hardware need to be explored to improve the efficiency of training various feedforward and recurrent neural networks and inferring using a DNN.","Convolution neural network; Efficiency; Approximation; Architecture design; Reconstruction; Feature reuse; Attention; Search","en","doctoral thesis","","","","","","","","","","","Computer Engineering","","",""
"uuid:76bd48f2-9a5e-4b59-9631-6e34fede8c82","http://resolver.tudelft.nl/uuid:76bd48f2-9a5e-4b59-9631-6e34fede8c82","Overcoming the Valley of Death in a Service Organisation: Designing Innovation Implementation","Klitsie, J.B. (TU Delft Marketing and Consumer Research)","Santema, S.C. (promotor); de Lille, C.S.H. (copromotor); Price, R.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Large and mature organisations, with their access to knowledge, capital and customers, are perfectly positioned to walk the road from invention to innovation; to turn promising breakthrough technologies and creative concepts into profitable and scalable business opportunities. However, these organisations rarely generate winds of creative destruction and instead start-ups disrupt them at an increasing pace (Anthony et al., 2018; Elsbach & Stigliani, 2018). Large and mature organisations struggle to innovate sustainably, in part because of their rigid organisational structures and processes that maintain the status quo (O’Reilly & Binns, 2019). To overcome this, organisations increasingly deploy ‘innovation hubs’. Innovation hubs are partially independent physical and managerial spaces intended as safe havens for exploratory activities. Examples of hubs are Xerox's’ PARC and Google X ‘the Moonshot Factory’. These are spaces where innovators find freedom to challenge the status quo and where there is space to consider alternatives, to experiment and to learn. Innovation hubs fuel the discussion of “what might be”.However, if organisations want to transform their business, they need to go beyond generating thought-provoking concepts. They need to implement promising concepts and integrate them with the rest of the organisation. Scholars call this gap that exists between concept generation and implementation the ‘Valley of Death’ (from heron: VoD) (Markham et al., 2010). 
It is crucial that organisations resolve issues related to the VoD if they want to reap the benefits of innovation. However, innovation implementation is a relatively under-examined field (Baer, 2012). Innovation implementation scholars predominantly focus on the proposed concepts. Questions arise, such as: are the ideas ‘good’ enough? Are they ‘radical’? Do they serve an actual need? Alternatively, the innovator becomes the focal point of the study. There are stories (in both popular and academic writing) in which one well-connected, headstrong champion heroically shepherds an innovative concept into realisation, in resistance to incumbent forces. But it is risky for organisations to bet their future survival on the presence, capabilities and, ultimately, success of lone champions who succeed despite organisational circumstances, not because of them (Dougherty & Hardy, 1996), especially since failing to implement innovations often stems from factors beyond the control of champions (Goepel et al., 2012). Thus, in this research, I explore what organisational conditions help innovators to mitigate the VoD and achieve implementation. As a designer, I particularly focus on the relationship between design practices in innovation and the VoD. The Design Council states that design practices can mitigate the VoD (Kolarz et al., 2015). Others suggest they may actually aggravate the issue (Carlgren et al., 2016a). Recently, scholars have noted that designers need to consider implementation issues if they want to contribute to resolving organisational and society-level challenges (Dorst, 2019b; Norman & Stappers, 2015).
In this thesis, I consider different conceptualisations of design in an innovation context (as problem solving and as inquiry) and shed light on the role of design in mitigating the VoD. Research Design: I performed this study using an action research approach (Reason & Bradbury, 2008a) in collaboration with a large heritage airline, ‘FlyCo’ (kept anonymous for privacy reasons). FlyCo finds itself in a competitive landscape. Weighed down by large labour forces, considerable long-term capital investments, and legacy management structures, FlyCo faces a battle to remain airborne while competing with both low-cost entrants (e.g., EasyJet) and high-quality ‘Gulf’ behemoths (e.g., Emirates). It operates in a highly regulated (for safety and security) and increasingly commoditised industry, which makes achieving innovation difficult yet rewarding. In response, FlyCo started an ambitious ‘architectural transformation’ (Safrudin et al., 2014) in which ‘design thinking’ was a central pillar to deliver a more customer-centred and cost-efficient service. This transformation required that FlyCo adjust its organisation to implement innovation projects more effectively. This situation provided a solid launching pad for this study. The research objectives, combined with the needs of FlyCo, informed the following main research question: How can design catalyse innovation implementation at a service organisation? Over a 14-month period, I was embedded as an action researcher at FlyCo. I engaged employees from different levels of FlyCo to conduct actions as part of reflective, collaborative research cycles. The research contained three action research cycles (ARCs). Each ARC was performed in collaboration with a distinct set of stakeholders and with different research aims. In the first ARC, my efforts focussed on building a network and an understanding of FlyCo and the VoD phenomenon.
In ARC 2, the focus moved towards investigating conditions that contribute to a VoD, with a focus on the role of design practices. In ARC 3, the focus again shifted, towards how design interventions in an organisational context could contribute to implementation success. Over the research period, I became increasingly immersed in FlyCo as my role shifted from being an outsider to obtaining increasingly influential positions (I became an interim manager in ARC 3, for example), which provided an opportunity to gather a rich dataset. During the embedded period, I employed multiple data-gathering methods. Predominantly, I took part in and observed corporate activities, resulting in 231 temporal observations (events). I captured observations and reflections in field notes, resulting in 426 pages of notes and drawings. Additionally, I gathered internal documents (such as strategies, project proposals, and training manuals). Finally, 48 interviews were conducted at multiple intervals during the study. Of these interviews, 17 were semi-structured, audio-recorded, and transcribed, whilst 31 were conversational and recorded via hand-written notes. I initially analysed the data using a visual mapping strategy. Subsequently, a thematic analysis was performed using NVivo software. A comparison of identified themes with existing literature finally informed a narrative analysis strategy. Together, this data collection and analysis strategy helped to observe nuances in FlyCo's innovation and implementation processes that can evade detection by other ‘outside-in’ research designs. Insights: The data inform four sets of insights. Extant research on innovation implementation has focussed on product/manufacturing organisations (with historically large R&D departments) that aim to reach additional customers through new/improved products. In this context, managers and scholars noticed that R&D output did not reach controlled stage-gate New Product Development (NPD) processes.
But innovation hubs are also increasingly popular at service organisations (Blindenbach-Driessen & Van Den Ende, 2014), which have different (and less structured) innovation processes. The first set of insights describes an exploration and re-conceptualisation of the VoD phenomenon in a service organisation context. I identify three organisational unit types that contribute to innovation: exploration hubs, support partners and operational units. In this context, the metaphor of a singular ‘valley’ between two contributing units appears erroneous, as implementation challenges exceed the dichotomous relationship between design and production. A deeper investigation into the mechanism that drives the VoD shaped the second set of insights, which highlights the role of institutions, specifically organisational logics. At FlyCo, a constellation of three organisational logics and the absence of a recombination strategy foster an environment that inhibits resource pooling between organisational units. The three logics inform conflicts on three issues: innovation priorities, innovation processes and problem frames. As logics guide legitimacy judgement, conflicts between logics lead to a Not-Invented-Here attitude from gatekeepers towards concepts from ‘foreign’ logics. Consequently, champions cannot gather the resources needed for implementation and their concepts end in a VoD. The third set of insights describes how 10 barriers contribute to the VoD. I identify four barriers related to organisational properties of FlyCo: a complex and siloed organisation, the absence of a shared service vision, decentralised innovation portfolio management, and a competing internal innovation marketplace stimulate a VoD. Two barriers describe project characteristics related to the VoD: founding problem frames in an inferior domain and proposed solutions with a weak fit with the existing service system.
Two process-related barriers highlight how engaging stakeholders late in the innovation process and inadequate communication of project decisions contribute to a VoD. Finally, two barriers describe how the organisational set-up of an exploration hub contributes to a VoD: when there is no ‘Shadow of the Future’ and when hubs have limited access to resources, they struggle to mitigate the VoD. The fourth set of insights explores the relation of design practices with innovation implementation. When viewed as a problem-solving approach, I exhibit how design practices contribute to mitigating implementation issues by fostering more holistic concepts and an innovation process with engaged and aligned stakeholders. However, as an inquiry process, design practices contribute to a VoD when projects are reframed such that the aspired value shifts. A VoD then appears in two situations: if the new working principle requires new stakeholders (not part of the founding problem frame) to become involved, or if not all involved stakeholders accept the new frame. In addition, I deployed design practices to create new organisational infrastructure which fosters innovation implementation success. These practices inform a sense of shared ownership and novel organisation designs, but they also introduce challenges that require further investigation. Contributions and Guidelines: One principal contribution to literature is the reconceptualisation of service innovation implementation. Instead of three sequential phases, ‘elaboration’, ‘championing’ and ‘production’ (Perry-Smith & Mannucci, 2017) are conceptualised as three reiterating micro-processes. These micro-processes constitute two innovation-to-implementation process streams. In one process stream, innovation teams solve ‘innovation challenges’ (Dougherty & Hardy, 1996) through concept elaboration and production.
In the other stream, championing in the organisational sphere aims to solve ‘innovation-to-organisation challenges’ (Dougherty & Hardy, 1996). In line with this conceptualisation, I propose to define the VoD in this context as ‘when concept development terminates because champions fail to gather the required resources for further development because of innovation-to-organisation challenges’. Second, I propose a classification of three types of organisational units involved in innovation. In service organisations, achieving innovation requires mitigating gaps between (1) explorative units, (2) support resources, and (3) operational units. I challenge whether the dichotomous conceptualisation of a VoD does justice to the complexities of achieving alignment for reform within service organisations. The findings add to a growing body of knowledge that considers the role of institutions in realising (service) innovation. I add that, besides at the ecosystem level, ‘logics matter when coordinating resources’ (Edvardsson et al., 2014) at the organisational level in service innovation. I identify three issues where misalignment between organisational logics hampers innovation implementation: innovation priorities, innovation processes and problem frames. I propose that, besides contextual, spatial, and organisational boundaries (Antons & Piller, 2015), organisational logic boundaries can trigger a Not Invented Here attitude. Insights from this study suggest a complicated relationship between design innovation and the successful implementation of these innovations, which I call the ‘Design Implementation Paradox’. Design principles and practices related to experimentation, experiential learning, and embracing diversity contribute to implementation success. Practices related to embracing diversity, user-centricity and materialisation contribute to resolving innovation-to-organisation challenges and mitigating logic conflicts, and thus to implementation success.
However, design can also contribute to a VoD when reframing leads to a shift in the stakeholder field or when champions cannot convince involved stakeholders of a new frame. This study represents an initial exploration into this relation, but more research is needed. The final contribution to theory is the 10 organisational barriers identified as contributing to the VoD in a service organisation, for example by exhibiting how an internal innovation ‘marketplace’ encourages competing rather than collaborative behaviour, which hinders innovation implementation. The insights inform six guidelines for managers, specifically for those who shape organisational conditions, to design organisational infrastructure that promotes innovation implementation. These guidelines describe organisational infrastructure that contributes to mitigating the VoD: 1. To resolve innovation-to-organisation problems, service organisations can use innovation hubs, because this infrastructure facilitates the required social dynamics. 2. To avoid a Not Invented Here attitude, the infrastructure of these innovation hubs can promote institutionalisation and legitimisation of innovation concepts. 3. To motivate aligned innovation processes and ‘implementable’ concepts, the infrastructure of these hubs must act as a ‘shadow of the future’. 4. To align decision making across organisational units, a service vision - which describes what value the organisation wants to create in the future - should be formulated and shared. 5. To ensure alignment between resource allocation and the innovation vision, and to spot potential VoD issues, centralised innovation portfolio management can be applied. 6. 
To align the innovation portfolio with the current technological and organisational system, the service system-fit framework can be applied. (An example of such infrastructure is when incentives of innovation hubs relate to implemented innovations, not merely proposed concepts.) This research emphasises the need to study implementation in design research, if designers aim to realise societal impact. Design education needs to adjust to fit the more strategic role that design is assuming. If design is indeed going ‘beyond design’ (Dorst, 2019b) to contribute to solving the world's challenges, then we need to go beyond teaching future designers how to generate innovative interfaces, products, and systems. We need to teach them how to contribute to implementation and, ultimately, impact. This implies assuming a broader understanding of design, offering students tools and skills to become more sensitive to organisational context and helping them understand what influences implementation and what strategies they may pursue to achieve implementation. This requires a realisation that the road to implementation is paved with team players and that, besides being great pitchers, designers need to learn how to knock the ball out of the park. Above all, this research emphasises the limits of the ‘rogue innovator’ narrative and provides principles for organisational leaders of service organisations that face transformation, to mitigate their dependence on innovation champions and instead design organisational infrastructure that facilitates innovation implementation.","Service Innovation Implementation; Valley of Death; Service Organisation; Design Innovation; Action Research; Airline; Organisational Logics; Innovation Champions","en","doctoral thesis","","9789493026537","","","","","","","","","Marketing and Consumer Research","","",""
"uuid:052aa674-20a1-4af5-a543-52acf5c4f90b","http://resolver.tudelft.nl/uuid:052aa674-20a1-4af5-a543-52acf5c4f90b","Defect Identification through Partial Discharge Analysis on HVDC: Partial Discharge Fingerprinting","Abdul Madhar, S. (TU Delft DC systems, Energy conversion & Storage)","Mor, A. R. (promotor); Mraz, P. (promotor); Ross, Robert (promotor); Delft University of Technology (degree granting institution)","2021","The electricity grid spanning hundreds of thousands of kms is one of the most complex man-made network built in human history. Today, after a century of growth, progress and innovation, the electricity grid is in the process of undergoing another landmark shift in its operation. The introduction of renewable energy sources, especially, offshore wind connected to the load centres through >80 kms of underground subsea cable has caused a shift from AC transmission towards DC transmission. This is because AC cables suffer from high charging currents that reduce the useful current carrying capacity for long cables. On the contrary, High Voltage DC (HVDC) is acclaimed with higher current capacity for the same conductor dimension as AC and as a result the more sustainable alternative. Therefore, the infrastructure developed around the AC grid is now under pressure to adapt itself to the DC technology. This implies a dramatic change in a cascade of procedures and processes, beginning from designing new DC components, its testing and qualification, its validation and up to its commissioning, control and operation. Every step in the process is expected to be crucial and challenging given the newness of the technology and lack of experience.In the current scenario, this research rests itself in the testing and qualification phase of these DC components. The field of HV testing has not been exclusive tothis pressure to adapt and improvise its processes to accommodate the newest DC technological trends. 
New requirements are being defined to determine the quality of HVDC components, and new methodologies are being developed to fulfil them. One of the most widespread test methodologies, which has become part of several tests such as factory acceptance tests (FATs), site acceptance tests (SATs), routine tests and type tests, is the measurement of Partial Discharge (PD). Partial discharge is a dielectric phenomenon that, when measured, serves as a proven marker for insulation quality. The inherent differences in the performance of the insulation under AC and DC operation have not allowed a direct adaptation of PD analysis techniques from AC to DC. This research investigates the possibilities of defect identification through PD measurements under DC. With increasing HVDC installations such as GIS/GIL, cable links, converters, etc., validating their design and fitness through partial discharge measurement is gaining popularity. This is only expected to rise with the introduction of renewable energy, electric vehicles (EVs) and their related infrastructure, lowered dependency on fossil fuels and an international policy shift towards the reduction of greenhouse gases. Moreover, given the remarkable success of partial discharge measurements in defect identification under AC, expectations of a similar prospect under DC conditions are mounting. Therefore, as a first step towards characterizing PD defects under DC conditions, this thesis studies in detail the physics of discharge progression of three common defect types, namely corona, floating electrode and surface discharge, in order to recognize minor if not major differences that will enable defect recognition. With this investigation, a comprehensive procedure is devised, enabling the identification of the three defects that were studied under DC conditions.
The research also proposes the novel WePSA (Weighted Pulse Sequence Analysis) patterns, discussed in chapter 7, section 7.2.2, as a prospective defect fingerprint that will allow identification of defects under DC. The simplicity and robust nature of these patterns make them self-explanatory and easy to interpret. Several other unique defect behavioural features discovered during the study add value to this research and bring it closer to accomplishing the final goal of PD defect identification under DC stress conditions. This research could serve as a starting point for the scientific community to investigate the other defect models further and to extend the defect discrimination strategy proposed in this thesis (chapter 7, section 7.5).","partial discharge; dielectric testing; HVDC; pattern recognition; corona; Defect identification","en","doctoral thesis","","978-94-6366-448-6","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:b082d92b-ebdc-403c-a647-1eef4b2abe33","http://resolver.tudelft.nl/uuid:b082d92b-ebdc-403c-a647-1eef4b2abe33","Macroscopic Characteristics of Bicycle Traffic Flow: A bird's-eye view of cycling","Wierbos, M.J. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Knoop, V.L. (promotor); Delft University of Technology (degree granting institution)","2021","This dissertation studies the dynamics of bicycle traffic flow. The research focuses on busy situations such as bicycle queues at intersections and congestion upstream of a bottleneck. The cycling movements are analyzed on the aggregated scale in terms of density, speed and flow. Furthermore, cyclists are included in a traffic flow model by introducing a mode-specific speed function, which enables the modeling of a mixed traffic situation.","Bicycle Traffic Flow; Macroscopic Flow Model; Fundamental Diagram; Jam Density; Bicycle Dynamics","en","doctoral thesis","TRAIL Research School","978-90-5584-299-5","","","","TRAIL Thesis Series no. T2021/24, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:328a62be-28f0-4f7c-bc91-ee74479adb34","http://resolver.tudelft.nl/uuid:328a62be-28f0-4f7c-bc91-ee74479adb34","Ion Transport Mechanisms in Bipolar Membranes for Electrochemical Applications","Blommaert, M.A. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Smith, W.A. (promotor); Vermaas, D.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Electrochemistry can be a useful technology to enable the transition toward renewable energy sources and to prevent further climate change. Some electrochemical applications are already industrially implemented, like water electrolysers, while others, like CO2 reduction, need further development to be industrially competitive. Here, a bipolar membrane can provide the conditions for a step forward. It will not only separate (gaseous) products from the two electrodes in an electrochemical cell, but it will also allow the use of a different pH and electrolyte on either side. This enables optimization of the electrode compartments. As it is composed of a cation and an anion exchange layer, theoretically no ion transport can occur across the entire membrane. To still provide the required charge transport, the BPM can dissociate water at its interface layer, where a catalyst is often deposited to enhance this reaction. An ideal bipolar membrane has highly conductive membrane layers, fast kinetics at the interface layer and therefore high water permeability to the interface, a long lifetime, and a low ion crossover. As the BPM can be implemented in various electrochemical energy applications, like water and CO2 electrolysis, batteries, fuel cells and resource recovery, different specific requirements exist per application. For batteries and resource recovery, a low ion crossover is crucial, while for fuel cells and water and CO2 electrolysis, fast kinetics are essential. Hence, a specifically developed BPM is required for each application. 
The development should be based on knowledge gained by studying the performance of BPMs in these conditions with techniques like electrochemical impedance spectroscopy.","bipolar membrane; CO2 reduction; Water electrolysis; ion transport","en","doctoral thesis","","978-94-6384-251-8","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:917ba84e-a013-4141-a9e4-71494b2c48e8","http://resolver.tudelft.nl/uuid:917ba84e-a013-4141-a9e4-71494b2c48e8","Microstructural Evolution in Medium Manganese Steels during Quenching and High-Temperature Partitioning Process: A combined experimental and modelling approach","Ayenampudi, S. (TU Delft Team Maria Santofimia Navarro)","Santofimia, Maria Jesus (promotor); Sietsma, J. (promotor); Delft University of Technology (degree granting institution)","2021","An effective way for the automotive industry to tackle the growing concern of CO2 emissions from automobiles is to reduce the overall weight of the vehicle, without compromising its performance and passenger safety. With the increasing demand for steels with enhanced properties in the last decade, the development of advanced high strength steels (AHSSs) has been focused on the design of complex microstructures leading to exceptional combinations of strength and ductility. One such steel is Quenching and Partitioning (Q&P) steel, which is typically composed of a high strength phase, martensite, and a softer phase, austenite, which contributes to the ductility of the material. The main strategy in developing Q&P steels involves partitioning of carbon, an interstitial alloying element, from supersaturated martensite (α′, formed in an initial quenching step from the austenitisation temperature) into austenite (γ) during an isothermal holding (partitioning stage) to enhance the thermal and mechanical stability of austenite. If the partitioning step is carried out at higher temperatures, substitutional austenite-stabilising alloying elements, such as manganese, may partition to the austenite and significantly enhance the stability of austenite in the final microstructure. Keeping this in mind, experimental and modelling approaches are employed in this Ph.D. 
thesis to investigate the microstructural evolution and the mechanisms involved during the quenching and high-temperature partitioning process in five different medium manganese steels.","","en","doctoral thesis","","978-94-6423-427-5","","","","","","","","","Team Maria Santofimia Navarro","","",""
"uuid:386c834b-a534-4c0e-8710-1234cfd6bd9e","http://resolver.tudelft.nl/uuid:386c834b-a534-4c0e-8710-1234cfd6bd9e","Temporal Noise Reduction in CMOS Image Sensors","Ge, X. (TU Delft Electronic Instrumentation)","Theuwissen, A.J.P.A.M. (promotor); Delft University of Technology (degree granting institution)","2021","In pursuit of achieving the noise condition of single-photon imaging, system-level and circuit-level innovations and optimizations for CMOS image sensor (CIS) noise reduction are called for. Stimulated by this motivation, this thesis focuses on reducing the temporal noise generated in the pixels and the readout electronics.","","en","doctoral thesis","","978-94-6416-807-5","","","","","","","","","Electronic Instrumentation","","",""
"uuid:afa3dcd6-1dc7-41c7-ab10-775c3f04222e","http://resolver.tudelft.nl/uuid:afa3dcd6-1dc7-41c7-ab10-775c3f04222e","Modelling of Ultrasonic Array Signals in Anisotropic Media","Anand, C. (TU Delft Structural Integrity & Composites)","Groves, R.M. (promotor); Benedictus, R. (promotor); Delft University of Technology (degree granting institution)","2021","There has been a steady increase in the use of composites and anisotropic materials in primary aircraft structures over the years. This increase is driven by the high strength-to-weight ratio of such materials, leading to lighter and more efficient aircraft. As the uptake of such materials keeps increasing, so does the complexity in geometry and manufacture of the parts which use these materials. As in the case of any structural component, these structures suffer from defects during manufacture and from damage in service, and have to be tested regularly using nondestructive methods. A plethora of NDT techniques exist for testing aircraft structures, with ultrasonic NDT being a staple in the industry. Testing using single-element transducers is being replaced by phased arrays, which can be used in different testing configurations, such as beam steering, or to capture the signals and then post-process the data to form an image. Phased array testing of isotropic materials has been carried out for a number of years, with a lot of research being devoted to the testing of such materials. The testing of isotropic materials is less complicated than that of anisotropic materials due to the constant material properties throughout the material, the types of defects or failures which such structures suffer, and the effect of the material properties on the ultrasonic beam propagating through it or on the output signals. This leads to a simpler interpretation and easier understanding of results when such structures are tested. 
On the other hand, testing of anisotropic materials is complicated by the fact that the material properties are not the same in every direction. The anisotropic nature of such materials has an effect on the ultrasonic beam propagation and output signals, which makes the interpretation and understanding of the output more difficult. The layered structure of the composite materials also leads to multiple reflections and reverberations of the layers during inspection, which depend on the properties of the laminate, the array parameters, etc., leading to noise in the output signals and noise in the image. Due to this, ultrasonic NDT remains a bottleneck in the further implementation of composites in aircraft structures. Understanding the effect of these various parameters experimentally would require dozens of experiments with different isolated parameters. To overcome the need for this enormous experimental campaign, modelling and simulations can be carried out to help understand these effects. There has been progress in the NDT community on the adoption of modelling methodologies to simulate and predict the response of the inspected material to the wave passing through it and the output signals which are generated. The numerical models already developed have been applied to a variety of scenarios and to different complex geometries, but they become quite computationally expensive as the material and inspection procedure complexity increases, taking hours of runtime when run on personal computers. The analytical and semi-analytical models which have been applied are either restricted in geometry, require the numerical evaluation of multiple integrals, are computationally expensive, do not take the array parameters into consideration, or become singular when interacting with different geometries.","","en","doctoral thesis","","978-94-6421-445-1","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:e39ef7a3-7756-4ab4-a55b-92bce28a76fa","http://resolver.tudelft.nl/uuid:e39ef7a3-7756-4ab4-a55b-92bce28a76fa","In-pixel temperature sensors for dark current compensation of a CMOS image sensor","Abarca Prouza, A.N. (TU Delft Electronic Instrumentation)","Theuwissen, A.J.P.A.M. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis describes the integration of temperature sensors into a CMOS image sensor (CIS). The temperature sensors provide the in-situ temperature of the pixels as well as the thermal distribution of the pixel array. The temperature and the thermal distribution are intended to be used to compensate for dark current affecting the CIS. Two different types of in-pixel temperature sensors have been explored. The first type of temperature sensor is based on a substrate parasitic bipolar junction transistor (BJT). The second type of temperature sensor that has been explored is based on the nMOS source follower (SF) transistor of the same pixel. The readout system that is used for the temperature sensors and for the image pixels is based on low noise column amplifiers. Both types of in-pixel temperature sensors (IPTS) have been designed implementing different techniques to improve their accuracy. The use of the IPTSs has been demonstrated by measuring three prototype chips. Also, a novel technique to compensate for the dark current of a CIS by using the IPTS has been proposed.","CMOS image sensor; temperature sensor; Tixel; In-Pixel; dark current","en","doctoral thesis","","978-94-6419-302-2","","","","","","","","","Electronic Instrumentation","","",""
"uuid:48b0221c-534f-4bdd-8b0f-b529375ec94a","http://resolver.tudelft.nl/uuid:48b0221c-534f-4bdd-8b0f-b529375ec94a","A free wake vortex model for floating wind turbine aerodynamics","Dong, J. (TU Delft Wind Energy)","Watson, S.J. (promotor); Viré, A.C. (copromotor); Delft University of Technology (degree granting institution)","2021","In order to significantly increase the share of wind energy produced worldwide, wind energy technology is moving from onshore to offshore and from shallow water to deep water. Floating offshore wind turbines (FOWTs) are expected to be economically better than bottom-mounted turbines when placed in water deeper than 60 metres. Despite key initiatives such as the installation of the world’s first floating wind farm off the coast of Scotland in 2017, many design and operational challenges need to be solved to make floating offshore wind turbines economically attractive.","wind energy; floating offshore wind turbine; vortex ring method; vortex ring state; BEM","en","doctoral thesis","","978-94-6384-250-1","","","","","","","","","Wind Energy","","",""
"uuid:e700d127-e035-4fc0-9fc0-9afa95186391","http://resolver.tudelft.nl/uuid:e700d127-e035-4fc0-9fc0-9afa95186391","Disclosing Interstices: Open-ended Design Transformation of Urban Leftover Spaces","Luo, S. (TU Delft Situated Architecture)","Avermaete, T.L.P. (promotor); Havik, K.M. (promotor); de Wit, S.I. (promotor); Delft University of Technology (degree granting institution)","2021","Leftover spaces are neglected and obsolete spaces within the city. As they are temporarily unoccupied by defined urban functions, leftover spaces provide unique “interstitial conditions” that are open to wild species as well as to different informal social activities, offering crucial complements to the formal and defined urban spaces. In this context, the design of leftover spaces poses a paradox between the practice of design, which projects a set of definitions onto the site, and the indeterminacy of leftover spaces, which opens them to appropriation and interpretation. By recognizing this paradox within the design of leftover spaces, this thesis strives to explore open-ended design approaches that engage leftover spaces without losing their essential qualities of indeterminacy. Three case studies—Valby Smedestræde 2 in Copenhagen, Le Jardin Du Tiers-Paysage [the Garden of the Third Landscape] in Saint-Nazaire, and the Dalston Eastern Curve Garden in London—are scrutinized with a uniform framework consisting of four lenses: the morphological, social, ecological and material lens. The plan, the section, the perspective and axonometric drawings are used as tools to examine the cases and, further, to represent the results of reading through each lens. The study delivers four general modi operandi—disclosing, selecting, founding, and sustaining—for engaging with the interstitial condition of leftover spaces. 
This thesis further invites an exploration of the role of “gardeners”, who nurture and balance diverse social and ecological practices in the ongoing transformation of the site.","leftover space; interstitial; landscape architecture; site specific; indeterminacy","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-447-9","","","","A+BE | Architecture and the Built Environment No 16 (2021)","","","","","Situated Architecture","","",""
"uuid:fd04588a-842b-4851-959e-4f1d24fd0bc3","http://resolver.tudelft.nl/uuid:fd04588a-842b-4851-959e-4f1d24fd0bc3","Identification and elimination of biosynthetic oxygen requirements in yeasts","Dekker, W.J.C. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Mans, R. (copromotor); Delft University of Technology (degree granting institution)","2021","Saccharomyces cerevisiae is a natural producer of ethanol and industrial strains can produce ethanol at high volumetric rates and near-theoretical yields. In addition to its fast fermentative metabolism, its GRAS (generally recognized as safe) status, ease of genetic engineering, tolerance to low pH and high ethanol concentrations contribute to the popularity of S. cerevisiae as an industrial platform organism. Ethanolic fermentation is, however, not unique to S. cerevisiae. Other facultatively fermentative yeast species share many performance characteristics with S. cerevisiae and may even hold additional advantages for industrial application. However, they typically lack one key distinctive phenotype of S. cerevisiae: its capability to grow fast in the absence of oxygen on simple media with minimal addition of vitamins and anaerobic growth factors. This characteristic is essential for industrial application, as aeration of large bioreactors is expensive and near-theoretical yields of fermentation products can only be achieved in the absence of respiratory dissimilation of sugars.","","en","doctoral thesis","","978-94-6423-422-0","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:b49b0f3f-3f23-4179-a17e-2a0c754c53a5","http://resolver.tudelft.nl/uuid:b49b0f3f-3f23-4179-a17e-2a0c754c53a5","Hydraulic modelling of liquid-solid fluidisation in drinking water treatment processes","Kramer, O.J.I.","van der Hoek, J.P. (promotor); Padding, J.T. (promotor); Delft University of Technology (degree granting institution)","2021","In drinking water treatment plants, multiphase flows are a frequent phenomenon. Examples of such flows are pellet-softening and filter backwashing where liquid-solid fluidisation is applied. A better grasp of these fluidisation processes is needed to be able to determine optimal hydraulic states. In this research, models were developed, and experiments performed to gain such hydraulic knowledge. As a result, treatment processes can be made more flexible. In a rapidly changing environment, drinking water production must be flexible to ensure robustness and to tackle challenges related to sustainability and long-term changes. In the hydraulic models, the voidage in the fluidised bed and the particle size of the suspended granules are crucial variables. Voidage prediction is challenging as the fluidised bed is a dynamic environment showing highly heterogeneous behaviour that is hard to describe with an effective model. And particle size causes a conundrum due to the irregular shapes of the applied granules. Through the combination of hydraulic dimensionless Reynolds and Froude numbers, an accurate voidage prediction model has now been developed. With a straightforward pseudo-3D image analysis for non-spherical particles measuring particle mass and density, the dimensioned shapes of, for instance, ellipsoids can be determined. Particle shape factors included in models are not constant as is commonly believed, but dynamic. Applying advanced computational fluid dynamics simulations confirmed significant heterogeneous particle-fluid patterns in fluidised beds. 
Comprehensive sedimentation experiments showed that the average drag coefficient and terminal settling velocity of individual grains can be estimated reasonably well, but with a significant degree of data spread around the mean values. For engineering purposes, this is relevant information which should be taken into consideration. A new soft-sensor was designed to determine the voidage gradient and particle size profile in a fluidised bed. The expansion degree of highly erratic, polydisperse and porous granular activated carbon grains can be predicted with a model, but in full-scale processes the grains are subject to change, and therefore it is most likely that the prediction accuracy will deteriorate rapidly. For reliable drinking water quality, smart models provide solutions to complex challenges, but they are only effective when they are calibrated and validated in advanced pilot plants and are applied in full-scale processes with diligence and commitment on the part of multidisciplinary teams.","Drinking water treatment; Fluidisation; Voidage prediction; Carman–Kozeny; Data-driven modelling; Drag relations; Fluidised bed reactors; Full-scale water softening; granular activated carbon; Hydraulic models; Hydraulic state; Hydrostatic soft sensor; Hydrometer; Liquid-solid fluidisation; CFD; Multiphase computational fluid dynamics; Multiphase flows; particle orientation; Pellet softening; Porosity prediction; modelling and experimentation; Reactor performance; Richardson–Zaki; Symbolic computation; terminal settling velocity; Unsteady behaviour; Void fraction distribution; Minimal fluidisation; Hydraulics drag relations; Filter backwashing; ETSW; Dynamic particle shape factors; Sphericity; drag coefficient; calcium carbonate pellets; Data spread; Water; Expansion column; symbolic regression; Education","en","doctoral thesis","","978-94-6366-436-3","","","","","","2021-09-10","","","Complex Fluid Processing","","",""
"uuid:a0fbc6f9-60e0-4cba-930f-9471911d48a4","http://resolver.tudelft.nl/uuid:a0fbc6f9-60e0-4cba-930f-9471911d48a4","Reducing the need for communication in remote, natural waveform co-simulations of electrical power systems: An adaptive approach","López, Claudio (TU Delft Intelligent Electrical Power Grids)","Palensky, P. (promotor); Cvetkovic, M. (promotor); Delft University of Technology (degree granting institution)","2021","Electrical power systems are becoming more interconnected and technologically diverse to accommodate ever increasing shares of non-dispatchable generation. These changes are imposing new requirements on the simulation of electrical power systems. One of these requirements is that simulations integrate models of different subsystems, developed by different experts, from different organizations, which may not wish to disclose the information embedded in their models. This allows the study of interactions between neighboring, interconnected grids, or between existing grids and new devices. Another requirement is that they reproduce phenomena in a wider range of timescales, to study the interactions between subsystems with slow and fast dynamic behavior. One way for electrical power system simulations to comply with these requirements is with remote, natural waveform co-simulation. Co-simulation is a model integration approach in which each subsystem is simulated in a different simulator. These simulators exchange interface variables, at runtime, to represent interactions between the subsystems. Since the simulators can interact remotely, over a communication network, co-simulation has the advantage that the organization that owns the model need not disclose it. It also has the advantage that each model can be simulated with the simulator for which it was intended, so organizations that use different simulators can collaborate without having to translate their models. 
If such a co-simulation is performed using natural waveform models, at a high time resolution, then it is also possible to reproduce a wide range of timescales, from slow to very fast phenomena. But the fact that such a co-simulation is performed remotely, with the communication delays this entails, and that its high time resolution translates into a high communication rate, makes it rather slow. Thus, it is desirable to reduce the need for inter-simulator communication. In this thesis I explore a solution to this communication challenge, based on the hypothesis that slower phenomena are easier to predict. If the co-simulated phenomena can be classified as predictable according to some criterion, it should be possible to find expressions that predict interface variables, and that each simulator can use to compute its own inputs instead of expecting inputs to be communicated. I propose a criterion for classifying phenomena as predictable or unpredictable, as well as methods for finding these expressions based on an interpolated Fourier transform and Taylor-Kalman filters. Additionally, I propose a co-simulation algorithm where the simulators compute their own inputs while the co-simulated phenomena are predictable. After applying these ideas to the co-simulation of two different test systems, I was able to reduce the need for communication by up to 60%. A co-simulation framework with these characteristics is a step towards more descriptive models and better performing simulations, and a tool that increases our ability to take better advantage of existing energy infrastructure, as well as to develop it further.
This research is focused on investigating the opportunities that floating services implemented on water spaces can bring to address some of the constraints faced in slum upgrading projects dedicated to improving living conditions in slums located close to or on water bodies. To do so, a research-by-design approach is conducted. This approach addresses three elements: (1) analyzing upgrading projects and investigating slums located in flood-prone areas, (2) developing and testing, through an iterative process, a design proposition for services based on floating technology to improve living conditions in these locations, (3) understanding the governance and how to implement these propositions in these locations.","","en","doctoral thesis","","","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:b0661996-af2d-4bd2-9127-48c678435f68","http://resolver.tudelft.nl/uuid:b0661996-af2d-4bd2-9127-48c678435f68","Architectural Design Performance Through Computational Intelligence: A Comprehensive Decision Support Framework","Chatzikonstantinou, I. (TU Delft Design Informatics)","Sariyildiz, I.S. (promotor); Turrin, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Identification of design solutions for a built environment that caters to human needs at all levels, and more specifically, to the needs of the clients and the society, is the main task addressed by architectural design. Architectural design is a prime example of a design task that is characterized by a high degree of complexity. Architectural design problems by definition entail relationships between decisions and objectives that are all but transparent. For the decision-maker to be able to guide design towards fulfilling objectives, a ‘closed-loop’ approach where variations in design solutions are generated and evaluated in an iterative process is employed. Due to the sheer number of alternative solutions to problems of even a moderate scale (due to combinatorial explosion), it is only feasible to iterate over a minuscule fraction of possible solutions. Design intuition of the professionals involved in design is a strong driving force behind the identification of design direction, in which alternatives are explored as part of the preliminary design process. This is an approach that depends on the human cognitive capabilities to navigate the design space and identify potentially promising solutions. Nevertheless, the complexity associated with architectural design often poses significant challenges to human cognition. 
Human cognition, while formidable in its ability to flexibly and efficiently navigate challenging environments, is faced with difficulties in addressing the complexity factors outlined previously, namely: the excessive (combinatorially explosive) number of potential solutions to architectural problems, the complex and non-linear relations between objects and their properties and the conflicting nature of design goals that architectural design entails. Thus, design professionals are often faced with the real threat that their decisions may be biased due to the natural limitations of human cognition acting in complex environments.
For the reasons highlighted above, a systematic approach to design space exploration must be undertaken, to maximize the potential for discovering optimal solutions to design problems. Due to the nature of such problems, which entail multiple conflicting objectives, a single best solution is generally not attainable. Nonetheless, best-tradeoff solutions are distinguished and highly desirable for such multi-objective design problems. The field of Computational Intelligence, and in particular Evolutionary Computation-based (EC) intelligent approaches, offers an attractive option as decision-support tools in design, as these approaches are able to efficiently address the aforementioned sources of design complexity. EC approaches are able to navigate the design space efficiently and systematically, considering multiple conflicting objectives and hard constraints, and dealing with arbitrary relations between design decision variables and design objectives.
In today's setting, products of architecture must lead the way to a sustainable and environmentally friendlier society. As such, the performance of buildings has become the main driving force behind the design process, being referred to as “performance-driven design”. This initiative emphasizes the quantitative evaluation of a design's function in accordance with established design objectives, related to aspects such as energy performance, visual and thermal comfort, cost and environmental footprint, etc. Simulation-based tools that enable accurate design evaluation are gaining ground and offering valuable insight into the performance of buildings. Nonetheless, making decisions in this multi-objective environment is not trivial, and, as stipulated above, may be challenging to human cognition. Thus, in today’s setting where the quantitative performance of buildings keeps gaining ground, the research on the application of EC in architectural design is high on the scientific agenda.
Recognizing the impact design complexity has on architectural design and the potential that EC-based approaches offer in addressing it, this thesis proposes a comprehensive computational intelligence decision support system that combines components based on intelligence with ones based on cognition, with the ultimate aim of enabling decision-makers to manage design complexity and improve decision making. In particular, this thesis adopts the theoretical standpoint that efficient navigation of an unknown environment assumes a fusion of intelligence and cognition. In this sense, and given the already widespread adoption of intelligent approaches (such as EC mentioned above), the main contribution of this thesis is to endow the intelligent approach with cognitive facilities, so as to improve its efficiency to the point that it is readily applicable to the early stages of the architectural design process.
Fusion of intelligent with cognitive approaches, as outlined in the approach proposed by this thesis, offers the unique advantage of a decision support approach that is both powerful, owing to the extensive capabilities of intelligent search algorithms, and flexible, owing to the extensive knowledge modeling capabilities of cognitive approaches. As such, it is uniquely suited to the early conceptual design stage where the need to explore large design spaces, flexibly redefine the design problem, and satisfy preferences that are not included in the primary design goals, are all paramount.
Thus, the word “comprehensive” as it appears in this thesis’s title obtains a twofold meaning: on the one hand, comprehension as in the combination of computational intelligence and cognition in a single approach; on the other hand, comprehension of the environment, the result of an intelligent and cognitive approach to understanding.
Firstly, it seeks to address the excessive computational burden associated with the use of modern high-fidelity simulation software in architecture, to render computational optimization more approachable. There is a clear trend in modern design practice to employ accurate simulation-based performance assessment tools from the very early stages of design. The use of such tools provides a valuable advantage to the decision-maker, by providing objective awareness regarding the performance of a design solution. On the other hand, such tools are associated with a heavy computational burden, which may limit their application to the conceptual design stage. There exist methods to alleviate the computational burden through the use of computational cognitive machine learning tools, also known as surrogate modeling. However, training of surrogate models can be time-consuming itself, thus limiting their application. This thesis proposes a surrogate model that is modular in that it considers each space of the building in question as a separate entity, encoded through generic variables, and as such promotes model reuse in different design cases.
Secondly, it seeks to advance the state of the art in post-Pareto decision support by proposing a cognitive, machine-learning-based approach that enables the decision-maker to combine near-optimality with preferences regarding concrete features of the design solution. Post-Pareto decision making is an important step of the decision-making process, which seeks to identify, among the possible best-tradeoff solutions, the one that best matches the decision-maker's preferences in terms of performance. Such preferences are termed second-order because they follow the design objectives in importance. Nonetheless, in architectural design preferences are often expressed in terms of design properties rather than performance. Due to the non-linearity between the objective function space and the decision variable space that dictates object properties, it is challenging to exercise decision making using second-order preferences. Here the contribution of this thesis is a machine cognitive approach that learns the underlying relationships between object properties, distinguishing those that are relevant when the object is optimal with respect to the design objectives. In other words, by imposing only those relations that are relevant to achieving optimality, it enables the decision-maker to express preferences that are minimally constrained.
The main output of this thesis is a comprehensive decision support framework: a framework in the sense that it comprises a set of methods and implemented tools that seek to augment decision making in architectural design, and comprehensive in that it employs computational cognition and machine learning to augment intelligent decision support capabilities throughout the design decision support process. It is also generic and applicable as-is to a wide spectrum of architectural design problems. In the context of this thesis, validation of the proposed approach is performed mainly in case studies relevant to facade design, recognizing this design topic as a complexity-exhibiting exemplar in architectural design practice.","Computational intelligence; Complexity; Architectural design; decision support; Evolutionary computation; Neural Networks","en","doctoral thesis","","978-94-6384-247-1","","","","","","","","","Design Informatics","","",""
"uuid:d6fa8b1f-0f9b-4ed3-aedc-b4e93c83b2fc","http://resolver.tudelft.nl/uuid:d6fa8b1f-0f9b-4ed3-aedc-b4e93c83b2fc","Sensor-based quality inspection of secondary resources: Laser-induced breakdown spectroscopy","Xia, H. (TU Delft Resources & Recycling)","Rem, P.C. (promotor); Bakker, M.C.M. (copromotor); Delft University of Technology (degree granting institution)","2021","The quality control of materials from waste is an important step for acceptance in new high-quality products. Given the large volumes and complexity of waste streams, this requires advanced sensor technology such as the Laser Induced Breakdown Spectroscopy (LIBS) investigated in this thesis. Recyclers often rely solely on manual sorting and visual inspection, which is far less reliable and accurate than sensor technology that enables automated inspection. The main source of waste in this thesis is demolition concrete. It is characterized by large amounts of moist granular material, varying amounts and types of impurities, and a dusty environment. This presents a major challenge for introducing an optical inspection technique such as LIBS. The research starts with a literature study into the state of the art of LIBS as a material identification technique. This is followed by a more physically oriented literature review of the transient processes and parameters involved in ablation and plasma formation by a high-power pulsed Nd:YAG 1064 nm laser. From this follows the development of models to account for the different processes and phases of the matter. To this end, the different transient processes are separated and analysed by making use of local equilibrium conditions. Subsequently, the influence of the optics and associated hardware that is necessary to obtain good optical data is investigated. 
This knowledge is supplemented by a study of light collection and data processing techniques to further improve the quality and reproducibility of the optical data, given the challenging conditions in a recycling factory. Subsequently, to better understand the potential and limitations of experimental LIBS data, the useful information about the chemical composition and technical properties of the material sample is tested under different conditions. The aforementioned separate physical models have been compiled into a complete plasma model (MLIBS for short). This model can explain the properties and emissions of a laser-induced plasma. It can also be used in reverse to determine the composition of sampled elements in a laser-induced plasma. The focus is not on averaging data to eliminate noise, but on usefully exploiting the data from each laser shot for identification. This is called single-shot data, and it is the most efficient way to implement LIBS in practice, because the price of LIBS hardware increases rapidly with the number of shots a laser has to deliver per second. The last part of the research is on statistical techniques that can use all relevant information in LIBS data to make the best decision when it comes to identifying different complex materials. By accumulating statistics, the material composition of contaminated concrete waste flows can be determined more reliably. Based on the entire research in this thesis, a prototype LIBS platform has been developed and integrated with a conveyor belt for the inspection of demolition concrete in a recycling plant. The platform withstood harsh outdoor conditions and successfully demonstrated the capabilities of automated inspection with LIBS. 
Development of the prototype included the design and integration of a laser system, optics, electronic control equipment, real-time data acquisition system, spectral data processing software, and a weatherproof protective frame.","LIBS; Quality inspection; Recycling; Demolition Concrete; Classification","en","doctoral thesis","","978-94-6366-437-0","","","","","","","","","Resources & Recycling","","",""
"uuid:49212445-a434-447f-95cc-bedd2167a0fd","http://resolver.tudelft.nl/uuid:49212445-a434-447f-95cc-bedd2167a0fd","On the Demand for Flexible and Responsive Freight Transportation Services","Khakdaman, M. (TU Delft Transport and Logistics)","Tavasszy, Lorant (promotor); Rezaei, J. (copromotor); Delft University of Technology (degree granting institution)","2021","Several innovations during the last decades have impacted freight transportation as one of the major drivers of global economic development. Both demand and supply of freight transportation services have been transformed through major recent logistics innovations such as globalization dynamics, freight network integration, mass-individualized logistics services, digitalization and advanced transportation technologies. Changes in the service deployment of logistics service providers (LSPs) will ultimately transform the characteristics of their logistics service packages, resulting in the emergence of new service features for their customers, i.e., shipper firms. These new features not only influence the business operations of the customer firms directly or indirectly, but also change shipper firms’ expectations in the long run. While a deep understanding of customers’ needs is one of the key factors in sustaining firms’ competitive advantage, in freight transportation less attention is paid to understanding beforehand whether an innovation is appreciated and will be utilized by customers. The innovations in freight transportation services are offering shippers new choices and service offerings that claim big advantages. However, no one has yet tried to measure these advantages. This lack of knowledge makes it difficult for LSPs to compose multi-dimensional service packages and to set prices in the market. 
Fulfilling this practical need requires a major research effort to formulate and empirically model the demand for these services....","","en","doctoral thesis","","978-90-5584-300-8","","","","","","","","","Transport and Logistics","","",""
"uuid:d467c061-ef88-4b70-aaf2-5233923282eb","http://resolver.tudelft.nl/uuid:d467c061-ef88-4b70-aaf2-5233923282eb","Incentives and Cryptographic Protocols for Bitcoin-like Blockchains","Ersoy, O. (TU Delft Dataintensive Systems; TU Delft Cyber Security)","Lagendijk, R.L. (promotor); Erkin, Z. (copromotor); Delft University of Technology (degree granting institution)","2021","Bitcoin is a widely acknowledged digital currency that is designed in a decentralized manner. The recognition of Bitcoin has introduced the notion of cryptocurrencies and, in general, blockchain technology. Blockchain, within less than a decade, has become one of the most exciting technological developments. Among several exciting use cases and projects, there has been an inevitable hype in industry as well. In the research community, it has opened an interdisciplinary research field among cryptography, distributed systems, and economics.
Notwithstanding the interest and great effort, blockchain is still a new and evolving technology, and numerous challenges need to be addressed.
To name a few: security, privacy, scalability, smart contracts, and economic aspects, each with their manifold sub-challenges. Among these research challenges, this thesis investigates three that are crucial for the long-term functionality of Bitcoin-like blockchains: security, scalability, and economic aspects. Our work can be divided into two subjects: transaction propagation and payment channel networks.
Transaction propagation, or advertisement, refers to the dissemination of newly created client transactions in the mining network. In this thesis, we investigate the lack of incentives for transaction propagation and provide an incentive mechanism for peer-to-peer mining networks. Moreover, we focus on the inefficient routing of transactions and propose a smart routing mechanism.
Payment channel networks (PCNs) are promising layer-2 protocols aiming to improve the scalability of blockchains. In this thesis, we present three works on PCNs. Firstly, we investigate the incentives to participate in multi-hop payments and propose a profit strategy that would encourage the use of PCNs.
Secondly, we propose the first Bitcoin-compatible virtual channel constructions on payment channels, which improve the efficiency and availability of multi-hop payments. Finally, we introduce the first post-quantum PCN, utilizing our post-quantum adaptor signature scheme. Our work focuses mainly on Bitcoin and its PCN, the Lightning Network, yet it can be applied to blockchains and cryptocurrencies with similar characteristics.","","en","doctoral thesis","","978-94-6384-248-8","","","","","","","","","Dataintensive Systems","","",""
"uuid:db85753a-8d32-4a0b-b220-780fed198672","http://resolver.tudelft.nl/uuid:db85753a-8d32-4a0b-b220-780fed198672","Parameter estimation in single molecule microscopy","Thorsen, R.Ø. (TU Delft ImPhys/Computational Imaging)","Rieger, B. (promotor); Stallinga, S. (promotor); Delft University of Technology (degree granting institution)","2021","","photometry; aberrations; structured illumination; dipole emitters; point-spread-function engineering; polarization","en","doctoral thesis","","","","","","","","","","","ImPhys/Computational Imaging","","",""
"uuid:c4e6a875-4117-4552-88f6-ce0abeb80c54","http://resolver.tudelft.nl/uuid:c4e6a875-4117-4552-88f6-ce0abeb80c54","Early observability of EOR process effectiveness","Fatemi, S.A. (TU Delft Reservoir Engineering)","Rossen, W.R. (promotor); Delft University of Technology (degree granting institution)","2021","This dissertation focuses on polymer flooding, as an example of an EOR process. Chemical floods such as polymer floods are EOR techniques intended to increase sweep and/or displacement efficiency. Even though the compatibility and the efficiency of the injected chemicals are thoroughly tested and validated in the laboratory, uncertainty still remains regarding their actual performance in the reservoir. These uncertainties can result from the differences in the scale of investigation (core scale to field scale), lack of adequate understanding of geological, mineralogical and petrophysical properties of the formation, and the long-term performance of the chemical slug in the reservoir. Therefore, in addition to thorough laboratory tests, practitioners should compare the uncertainty surrounding the performance of the EOR agent in-situ to that arising from geological uncertainty, because, as noted, a process that did not succeed in one formation might succeed in another field if it achieves its technical objectives. In this dissertation, the effects of polymer rheology, mixing with different brines in-situ, temperature, pressure, adsorption, permeability reduction, inaccessible pore volume and non-Newtonian behavior on chemical-flood effectiveness are represented here indirectly as a simple loss of polymer viscosity in situ from that projected for the process. To discern the performance of the EOR agent in-situ in the midst of geological uncertainty, we propose a general workflow and present three case studies for this challenge. 
This workflow could be extended to another EOR process by including mechanisms or manifestations of technical failure corresponding to that process.","Petroleum; uncertainty analysis; polymer flood; waterflooding","en","doctoral thesis","","","","","","","","","","","Reservoir Engineering","","",""
"uuid:791a858e-c194-47bc-b57c-43346001b41b","http://resolver.tudelft.nl/uuid:791a858e-c194-47bc-b57c-43346001b41b","Distance metrology using optical frequency comb a step closer to industrial applications","Hei, K. (TU Delft ImPhys/Medical Imaging)","Bhattacharya, N. (promotor); Carroll, E.C.M. (copromotor); Delft University of Technology (degree granting institution)","2021","Length is one of the fundamental physical quantities and its precise measurement is very important for science and technology. Laser-based distance metrology is a powerful tool, widely applied in geodetic monitoring, environmental monitoring and precision measurement engineering, including space applications. Since the invention of the optical frequency comb (OFC) at the beginning of this century, it has been used as a versatile tool for many applications, such as frequency metrology, spectroscopy, etc. The OFC has played an important role in distance metrology, enabling the developed techniques to achieve high accuracy in long-distance measurement. In the time domain, an OFC is a pulse train, which can measure distance with the time-of-flight method or the correlation detection method. In the frequency domain, the spectrum of an OFC consists of a series of discrete lines with equal frequency spacing. Dispersive interferometry, multi-heterodyne interferometry, and multi-wavelength interferometry are the techniques proposed and demonstrated in spectral-domain distance metrology. Though distance metrology techniques based on the OFC have been developed for decades, there are still challenges that need to be overcome. In most applications the measurements are completed in air, where the results are influenced by the refractive index of air. As the OFC is a multi-wavelength laser source, the refractive index of air is difficult to determine. The other challenge is the complexity and high cost of the measurement system. Until
now, to my knowledge, there is no commercial rangefinder which is based on OFC. The large size, complex configuration and high price are the bottlenecks restricting OFC’s entrance into industrial applications.","Frequency Comb; Integrated Optics; Distance Metrology","en","doctoral thesis","","978-94-6421-414-7","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:735fe56b-c6ae-4432-ad80-6da39d3666ed","http://resolver.tudelft.nl/uuid:735fe56b-c6ae-4432-ad80-6da39d3666ed","Highway Traffic Congestion Patterns: Feature extraction and Pattern retrieval","Nguyen, T.T. (TU Delft Transport and Planning)","van Lint, J.W.C. (promotor); Vu, H.L. (promotor); Calvert, S.C. (copromotor); Delft University of Technology (degree granting institution)","2021","Traffic congestion occurs daily, which can have negative effects on not only the quality of mobility, but also other important aspects of life like economic growth, health and environment. Both understanding and efficiently managing traffic are therefore crucially important tasks. Vast amounts of data are collected daily to gain insights into the dynamics of traffic. However, these data are typically stored in the form of raw measurements, which might limit their potential benefits to both researchers and practitioners. A more informative and compact way to store traffic data is in the form of spatio-temporal maps, which have been shown to have advantages in intuitively observing traffic states. However, collecting, managing and retrieving such 2D patterns of congested traffic on large networks are challenging tasks. Accordingly, this dissertation is dedicated to developing methodologies and tools to advance the utilisation of traffic data, in particular, congestion patterns. A conceptual framework for an intelligent search engine for congestion patterns (so-called CoSI) is designed. It covers all the requirements necessary to develop such a system, ranging from processing raw data to searching through the resulting database of congestion patterns. Overall, the framework consists of two parts: database construction and search application (or so-called pattern retrieval). Their designs and relations are comprehensively presented in this research. 
The database construction is responsible for preparing a database of patterns of congested traffic, which is carefully designed for the convenience of a search application. Its conceptual design consists of three layers (or phases): pattern collection, feature extraction and pattern annotation. Regarding the search application, several possibilities for retrieving patterns are identified in association with the aforementioned steps of constructing the underlying database.","","en","doctoral thesis","","978-90-5584-297-1","","","","TRAIL Thesis Series no. T2021/22, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:a0d9289b-9f24-4805-86ab-09f12714a946","http://resolver.tudelft.nl/uuid:a0d9289b-9f24-4805-86ab-09f12714a946","Investigation of limestone-calcined clay-based cementitious materials for sustainable 3d concrete printing","Chen, Y. (TU Delft Materials and Environment)","Schlangen, E. (promotor); Veer, F.A. (promotor); Copuroglu, Oguzhan (copromotor); Delft University of Technology (degree granting institution)","2021","Extrusion-based 3D concrete printing (3DCP), as one of the emerging techniques, has received considerable attention from both academia and industry, due to its numerous benefits for concrete construction, through enhancing the freedom of architectural design, eliminating formwork, optimizing material use, and decreasing waste, labor and costs. However, in most proposed 3D-printable cementitious materials, ordinary Portland cement (PC) still occupies a relatively high content, which partially neutralizes the sustainable benefits of 3DCP in terms of formwork-free and material-efficient design. To date, considerable attempts have been made to develop sustainable cementitious materials in the context of 3DCP. Common supplementary cementitious materials (SCMs), i.e., fly ash, silica fume, and slag, are utilized as an ingredient of the binder in 3D-printable cementitious materials, which is the most generic and applicable strategy for reducing the use of PC. Nevertheless, these common SCMs, which are industrial by-products, are gradually being depleted. For longer-term development, limestone and calcined clay appear to be suitable alternatives to SCMs, considering the worldwide abundance of the raw materials and their low CO2 footprint in material production. The main goal of this thesis is to develop limestone-calcined clay-based cementitious materials for 3DCP. In order to develop such printable mixtures, investigations into the effect of different material and printing parameters on fresh and hardened properties were conducted. 
In Chapter 1, the subject of this research,","3D concrete printing; Limestone; Calcined clay; Sustainability; Viscosity modifying admixture; Fresh properties; Mechanical performance; Interlayer bonding; Air void","en","doctoral thesis","","978-94-6421-404-8","","","","","","","","","Materials and Environment","","",""
"uuid:faf85ca4-9020-469f-a79f-f1c7d6719337","http://resolver.tudelft.nl/uuid:faf85ca4-9020-469f-a79f-f1c7d6719337","Bi-modal solar thermal propulsion and power system: Modelling and optimisation for the next-generation of small satellites","Leverone, F.K. (TU Delft Space Systems Engineering)","Gill, E.K.A. (promotor); Cervone, A. (copromotor); Pini, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Exploring and utilising space to benefit humankind is undeniably a costly venture where the higher the mass of the satellite, the higher the cost to launch. However, the rewards of space-related achievements are indisputable. These achievements range from increasing our understanding of the universe using satellites, such as the Hubble Space Telescope, to improving the predictive ability of weather, climate, and natural disaster phenomena with the help of Earth observation satellites.","Solar thermal propulsion; Micro-Organic Rankine Cycle; Small satellites; Latent heat","en","doctoral thesis","","978-94-6421-408-6","","","","","","","","","Space Systems Engineering","","",""
"uuid:5bea521a-d6ac-4f6a-b45f-d2a87fec53ed","http://resolver.tudelft.nl/uuid:5bea521a-d6ac-4f6a-b45f-d2a87fec53ed","The Zoom ADC: An Energy Efficient ADC for High Resolution","Gonen, B. (TU Delft Electronic Instrumentation)","Makinwa, K.A.A. (promotor); Delft University of Technology (degree granting institution)","2021","Analog-to-digital converters (ADCs) are an indispensable part of the digital age we are living in, as they form the interface between physical reality and virtual reality. Higher ADC energy efficiency is the dominant focus of ADC design research, due to the high impact of ADC energy consumption on the total energy consumption of the systems in which they are employed. The energy consumption of an ADC increases with its resolution within a given signal bandwidth, which makes the efficiency of high-resolution ADCs even more important. Although the average energy efficiency of ADCs has improved by orders of magnitude over the last two decades, the high energy consumption of high-resolution ADCs remains restrictive for a large range of applications. This thesis investigates how the zoom ADC architecture can achieve both high
resolution and high energy efficiency.","zoom; adc; zoom adc; data conversion; analog; integrated circuits","en","doctoral thesis","","978-94-6384-229-7","","","","","","","","","Electronic Instrumentation","","",""
"uuid:fd3d84a7-c162-4cd4-8b19-bee53e00505f","http://resolver.tudelft.nl/uuid:fd3d84a7-c162-4cd4-8b19-bee53e00505f","Innovative Permeable Materials for Broadband Trailing-Edge Noise Mitigation","Rubio Carpio, A. (TU Delft Wind Energy)","Snellen, M. (promotor); Ragni, D. (promotor); Avallone, F. (copromotor); Delft University of Technology (degree granting institution)","2021","Permeable trailing edges are experimentally investigated in this thesis as a means to mitigate aeroacoustic noise. The first part of the study is devoted to deciphering the mechanisms of noise generation involved in these devices. To this aim, trailing-edge inserts are manufactured employing commercially produced open-cell metallic foams with different material properties, i.e., permeability and pore size.","Trailing-edge Noise; Aeroacoustics; Porous materials","en","doctoral thesis","","978-94-6384-240-2","","","","","","2021-07-14","","","Wind Energy","","",""
"uuid:ab767694-02ee-49a4-a35d-f3eb7daaf538","http://resolver.tudelft.nl/uuid:ab767694-02ee-49a4-a35d-f3eb7daaf538","Characterisation of Sludge Rheology and Sludge Mixing in Gas-mixed Anaerobic Digesters","Wei, P. (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); de Kreuk, M.K. (promotor); Uijttewaal, W.S.J. (promotor); Delft University of Technology (degree granting institution)","2021","Excess sludge handling is as important as treating the wastewater in determining the operational performance and costs in modern municipal wastewater treatment plants (WWTPs). In practice, excess sludge handling is widely implemented using anaerobic digestion (AD), in which the organic matter inside the sludge is not only stabilised and partly degraded, but also converted to methane gas as alternative fuel. Thus, optimal AD performance aims for maximising both sludge reduction and energy recovery, which is sometimes difficult to achieve in practice. Troubleshooting and optimisation are challenging to implement, due to the limited accessibility of anaerobic digesters and uncertainties of key influencing factors, including mixing behaviour and feed sludge characteristics. This thesis is focused on the possible enhancement of AD performance, by characterising the sludge rheology and evaluating its impact on sludge mixing inside the digesters. A full-scale digester, applying the gas-mixing mode, was selected for detailed investigations. Rheological properties of waste activated sludge (WAS) and digestate (DGT) from the selected WWTP were characterised, using both rheological measurements and rheological model fitting. DGT showed yield-pseudoplastic behaviour well characterised by the Herschel-Bulkley model. However, WAS demonstrated complex yield-pseudoplastic behaviour, which was better characterised by hybrid model fitting. 
The rheological instability, characterised by the distinct flow status and transitions, and considerable correlations to solids content, could give more insight into the viscoelasticity and thixotropy of concentrated WAS. Moreover, recommendations for developing a proper rheological measurement protocol were also formulated. The thixotropic behaviour was further explored by involving the rheological impacts of shearing duration and temperature. Under long-term shearing conditions, the complex thixotropic behaviour was well characterised by two limitation states: Initial and Stable, which showed distinct rheological properties for concentrated WAS, but only a small rheological difference for DGT. These distinct rheological properties of concentrated WAS were clearly reflected in its pipe flow behaviour, which was well assessed using the computational fluid dynamics (CFD) model with effective rheological data integration. Temperature had a striking impact on the sludge rheology, and strongly correlated with the solids content and degree of sludge stabilisation. The discrepancy in impact between long-term shearing and temperature implied different mechanisms responsible for shifting the equilibrium of hydrodynamic and non-hydrodynamic interactions for sludge structural deformation and recovery. Moreover, the gas-sludge flow and mixing were characterised in detail, firstly using a refined lab-scale CFD model. Bubble size, phase interaction forces, and liquid rheology significantly impacted predictions of the two-phase flows. A more reliable and complete model validation was obtained by performing a critical comparison. The mixing performance approximated a laminar-flow reactor (LFR) that distinctly deviated from the expected continuously stirred tank reactor (CSTR) design. The results underline the importance of a proper phase-interaction description for reliable flow and mixing characterisation. 
The developed model setup was further applied to the full-scale digester. Results revealed a considerable dependency of the flow and mixing characterisation on the rheological input data. The predicted dominant shear rate level was out of the effective shear rate range of the Ostwald model. This finding limited the model application, since the apparent viscosity overestimation at low shear rates led to flow and mixing overestimation. However, the Herschel-Bulkley model better fitted the low shear rates, and predicted large gradients of apparent viscosity in the poor flow regime. Distinct flow and mixing compartments were obtained based on the gas-sparging height, including a plug-flow compartment above, and a dead-zone below. Although inducing insufficient mixing, the applied gas-sparging may still be useful to mitigate short-circuiting, accumulative sedimentation and effective volume reduction…","","en","doctoral thesis","","978-94-6421-407-9","","","","","","2022-06-30","","","Sanitary Engineering","","",""
"uuid:49102b45-50fc-4d17-a38b-eb9de43f6c1e","http://resolver.tudelft.nl/uuid:49102b45-50fc-4d17-a38b-eb9de43f6c1e","Image Acquisition and Attenuation Map Estimation for Multi-pinhole Clinical SPECT","Chen, Y. (TU Delft RST/Biomedical Imaging)","Beekman, F.J. (promotor); Goorden, M.C. (copromotor); Delft University of Technology (degree granting institution)","2021","Single-photon emission computed tomography (SPECT) is a well-established nuclear imaging modality for studying functional and pathological properties of the brain. Conventional general-purpose SPECT systems typically offer a spatial resolution of about 10 mm with a sensitivity of 0.01-0.02%. A few dedicated brain SPECT scanners have been proposed, but resolutions and sensitivities are no better than 7 mm and 0.03% respectively, and some of these scanners are not manufactured anymore. This limited resolution hampers detection of localized brain abnormalities, while the low sensitivity requires a long scanning time that limits fast dynamic studies. Besides a compromised resolution and sensitivity, conventional SPECT systems require rotation of heavy detectors to obtain sufficient angular projections, which hampers fast dynamic imaging.","brain SPECT; pinhole collimation; DaTscan; sufficient sampling; convolutional neural network; Monte Carlo simulation","en","doctoral thesis","","978-94-6384-232-7","","","","","","","","","RST/Biomedical Imaging","","",""
"uuid:b4c67cd7-a5ca-4ee4-b2a5-6f61bb25f66a","http://resolver.tudelft.nl/uuid:b4c67cd7-a5ca-4ee4-b2a5-6f61bb25f66a","CBRN Threats, Counter-Terrorism, and Collective Moral Responsibility: Partnerships in preventing and preparing for terrorist attacks using common-use toxic and radiological substances as weapons","Feltes, J. (TU Delft Ethics & Philosophy of Technology)","Miller, S.R.M. (promotor); van de Poel, I.R. (promotor); Delft University of Technology (degree granting institution)","2021","The terrorist use of chemical, biological, radiological, and nuclear (CBRN) weapons is a worst-case scenario for most security agencies. Yet traditionally, the risk of CBRN-terrorism is characterized as a so-called “high impact – low probability” threat. Academics and analysts consider it challenging for terrorists to acquire these weapons and, hence, assign a low probability to the terrorist use of impactful CBRN weapons such as nuclear devices or weaponized microorganisms. Most researchers, however, assess the impact of a terrorist weapon solely based on its capability to physically destroy structures or harm organisms. This one-dimensional assessment rules out those toxic substances that are commonly considered CBRN-agents, but only possess limited destructive capabilities. Hence, these agents are not considered a priority for most security agencies. Rather, most resources in CBRN-defense are allocated to subjects like international non-proliferation efforts, whereas toxic substances that are openly available in hardware stores are often overlooked. The present study focuses on three of these toxic substances: ricin, phosphine, and americium. It will be shown that, while arguably having limited physical impact in the hands of terrorists, these and other toxic substances exhibit characteristics that could be of high value to the strategic and tactical goals of terrorist groups. 
For example, attacks with phosphine have the potential to inflict massive amounts of fear and disruption and are capable of causing political damage and damage to security institutions. The potential to inflict substantial amounts of non-kinetic damage as well as the availability and ease of use of these substances need to be properly acknowledged and met with a multi-layered web of countermeasures (web of prevention) by security institutions. However, as this thesis will show, this web of prevention ought to include not only Government agencies, but also other stakeholders such as manufacturers and vendors of these products, the press, researchers, and citizens. It will be argued that all of these stakeholder groups share a joint moral responsibility to combat terrorist attacks with toxic substances. This joint moral responsibility can be translated into specific actions of individuals in these groups of stakeholders that include, for example, the reporting of suspicious purchases in hardware stores or the flagging and deletion of weapon manuals on the internet. It will be shown that most of the current cooperative measures against these substances suffer from issues that can be traced back to the inability of the stakeholder groups to identify their respective responsibilities within the web of prevention. Furthermore, security institutions miss opportunities to operationalize the moral responsibilities of stakeholder groups such as vendors of toxic products. Based on this assessment, this thesis will give recommendations on how to improve the current CBRN security architecture. It will be shown that the responsibilities and actions of each stakeholder group have to be defined, discussed, and coordinated by all relevant stakeholder groups jointly. In order to do so, the theoretical concept of the web of prevention has to be turned into an institutionalized web in the form of a joint center. 
Such a joint center gives the stakeholder groups the opportunity to (1) assess the threat, (2) define tasks and actions of each group, and (3) equip each group with the means to perform these actions in an efficacious and ethically sustainable manner.","","en","doctoral thesis","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:25bd462e-5d62-4b9d-9817-692d62a36eab","http://resolver.tudelft.nl/uuid:25bd462e-5d62-4b9d-9817-692d62a36eab","Qubit arrays in germanium","Hendrickx, N.W. (TU Delft QCD/Veldhorst Lab)","Vandersypen, L.M.K. (promotor); Veldhorst, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Spin quantum bits (qubits) defined in semiconductor quantum dots have emerged as a promising platform for quantum information processing. Various semiconductor materials have been studied as a host for the spin qubit. Over the last decade, research focussed on the group‐IV semiconductor silicon, owing to its compatibility with semiconductor manufacturing technology and the ability to eliminate magnetic noise through isotope purification. However, to this end, hole states in germanium can be considered as well. Furthermore, their low effective mass and high carrier mobility allow for well‐controlled devices, the lack of valley states ensures a well‐defined qubit manifold and the intrinsic spin‐orbit coupling enables all‐electric control. In this thesis, we study strained planer germanium quantum wells, with a focus on applications for quantum information processing.In Chapter 5, we discuss the material platform growth and properties. We show that starting from a silicon wafer, using a reverse grading process, defect‐free, undoped, strained, and shallow germanium quantum wells can be grown, as confirmed by transmission electron microscopy, secondary ion mass spectrometry, and x‐ray measurements. Using heterostructure field‐effect transistors, we characterise the transport properties of the material and find a carrier mobility of μ > 500,000 cm2/Vs. 
Furthermore, we study the effect of the quantum well depth on the quantum mobility and charge noise sensitivity (Chapter 6) and observe an improvement in both parameters when the quantum well depth is increased from 20 nm to 60 nm. The spin qubit is defined by a hole spin confined in a gate‐defined quantum dot. In Chapter 7 we study the properties of a quantum dot in planar germanium. We describe the nanofabrication process we use to define gate‐controllable quantum dots, contacted by metallic ohmic leads. A nearby quantum dot is used as a charge sensor, which can be read out using high‐bandwidth reflectometry measurements. This allows us to deplete a two‐by‐two quantum dot array to the single‐hole charge occupation, as a host for the spin qubits. Having established a fabrication integration scheme to define quantum dots and ohmic regions, we move to qubit operation in Chapter 8. We measure a double quantum dot in transport and observe a blockade of the transport current for certain hole occupation numbers. This is found to be caused by Pauli spin blockade and can be used to perform the spin‐to‐charge conversion. When a microwave tone resonant with the magnetic field induced Zeeman splitting is applied, the blockaded transport current recovers. This is the result of an induced spin flip, mediated by electric dipole spin resonance (EDSR). Using a tailored measurement technique to increase the signal‐to‐noise ratio of the transport measurements, we demonstrate coherent rotations of the spins in both quantum dots at a Rabi frequency of up to 100 MHz. By operating at the point of the lowest charge noise sensitivity, we find qubit dephasing times beyond 800 ns and a single qubit control fidelity above 99 %. To form a universal quantum gate set, an entangling operation is needed as well. We implement a two‐qubit conditional rotation gate, mediated by the exchange interaction between the qubits. 
Using the dedicated tunnel barrier gate, we can set the exchange interaction as high as 60 MHz, enabling fast and coherent two‐qubit rotations. Transport measurements only allow for sampling of the average measurement outcome over an ensemble of individual shots. In Chapter 9 we establish single‐shot measurements of a single‐hole spin qubit by making use of a separate radio‐frequency charge sensor. This allows us to isolate the qubits from their hole reservoirs, and we find increased spin relaxation times of over 1 ms. Furthermore, we observe a strong electric modulation of the hole g‐factor that can be attributed to the spin‐orbit coupling and ensures individual qubit addressability. Practical quantum computing applications require large numbers of qubits and many proposals rely on two‐dimensional (2D) layouts to achieve this. As a first step towards 2D grids of spin qubits, we operate a two‐by‐two qubit array in Chapter 10. A latched readout process is implemented to increase the readout visibility and overcome spin relaxation during spin‐to‐charge conversion. Fast single‐qubit gates are achieved using EDSR, with control fidelities of over 99 % for all four qubits. By implementing dynamical decoupling sequences, low‐frequency noise can be mitigated and the phase coherence of the qubit can be increased by several orders of magnitude, up to 100 μs. Harnessing the electric control over the quantum dot coupling, we show the gate‐controlled isolation and coupling of all four qubits, enabling one‐, two‐, and threefold conditional qubit rotations. The large range of control over the exchange interaction also allows performing a controlled phase (CZ) two‐qubit gate in only 10 ns. 
Implementing a quantum circuit based on CZ gates between all qubits, we coherently entangle and disentangle the four qubits in a Greenberger‐Horne‐Zeilinger (GHZ) state. Finally, in Chapter 11 we study the integration of superconductors into the platform and define gate‐controlled Josephson junctions. We observe a supercurrent through the quantum well over a length up to 6 μm. The critical current of the junction can be modulated using the top gate, up to a maximum IcRN of 17 μV. We demonstrate the Josephson nature of the supercurrent by showing the presence of both the dc and ac Josephson effect. From multiple Andreev reflection and excess current measurements, we extract a characteristic superconducting gap size of 0.2 meV and a junction transparency of 0.6. Finally, we define a superconducting quantum point contact and observe discretisation of the supercurrent, showing superconducting transport restricted to individual channels.","quantum bits; quantum dots; spin qubits; hole spins; germanium","en","doctoral thesis","","978‐90‐8593‐482‐0","","","","","","","","","QCD/Veldhorst Lab","","",""
"uuid:2f965fc6-c8df-4f4c-af20-702067901c91","http://resolver.tudelft.nl/uuid:2f965fc6-c8df-4f4c-af20-702067901c91","Energy Effectiveness and Operational Safety of Low-Powered Ocean-going Cargo Ship in Various (Heavy) Operating Conditions","Sui, Congbiao (TU Delft Ship Design, Production and Operations)","Hopman, J.J. (promotor); de Vos, P. (copromotor); Delft University of Technology (degree granting institution)","2021","The shipping industry, which remains the backbone of international merchandise trade, is striving to reduce its operational cost and more importantly its environment impact. New ships need meet the EEDI (Energy Efficiency Design Index) requirements of IMO (International Maritime Organization). However, the current EEDI is not able to accurately evaluate the real lifetime carbon emissions of the ship. Under the guidance of current EEDI regulation, the ship designers, owners and policymakers could be misled to adopt the configurations that are underperforming or even leading to an increase of CO2 emissions in reality. A technically easy and effective solution to meet the EEDI requirement is to lower the installed engine power and thus the ship design speed. However, reducing the installed engine power could lead to an underpowered ship, which could have insufficient power for propulsion and steering in adverse weather conditions.
The main research question addressed in this dissertation is:
What is the transport performance of ocean-going cargo ships with small EEDI when sailing in realistic operating conditions; are these ships safe when sailing in heavy operating conditions; and, how to improve both the transport performance and operational safety of ocean-going cargo ships by using the short-term applicable ship propulsion options?
The ship transport performance investigated in this dissertation includes the energy conversion performance, fuel consumption performance and emissions performance. The influences of the operational ship speed reduction, propulsion control, PTO (power-take-off)/PTI (power-take-in), and using LNG (liquefied natural gas) as the fuel as well as the combination of these measures on the ship transport performance have been systematically investigated.
The operational safety investigated in this dissertation includes both engine operational safety and ship operational safety. The engine dynamic behaviour during ship acceleration, deceleration, crash stop, and turning in normal sea conditions has been investigated. The ship propulsion and manoeuvring performance when sailing in head seas, accelerating in head seas and turning to head seas in adverse sea conditions has been investigated. The influences of propeller pitch and PTO/PTI on the ship thrust limit and engine behaviour have also been investigated.
As a reflection of the research in this dissertation, suggestions on amendments to IMO’s current EEDI have been provided. The proposed amendment to the current EEDI formula aims to make the EEDI calculation more realistic and representative when evaluating ship transport performance at the design stage. Moreover, it can also partly solve another weakness of the current EEDI, with respect to the issue of underpowered ships.","Ship propulsion system; hybrid propulsion; power take off/in; energy conversion effectiveness; diesel engine; mean value first principle engine model; engine dynamic behaviour; low-powered ship; ship operational safety; fuel consumption; emissions","en","doctoral thesis","","978-94-6421-410-9","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:98eb22bf-6d0b-4cfb-8bd2-b3f0d316e316","http://resolver.tudelft.nl/uuid:98eb22bf-6d0b-4cfb-8bd2-b3f0d316e316","Time Use and Travel Behaviour with Automated Vehicles","Pudane, B. (TU Delft Transport and Logistics)","Chorus, C.G. (promotor); van Cranenburgh, S. (copromotor); Delft University of Technology (degree granting institution)","2021","Automated vehicles (AVs) have been a dream for a long time. From science fiction in the 1930s to countless prototypes, extensive road testing, and first use cases at present, the technology has clearly come a long way. So too has the vision of practitioners and academics matured to recognise the various potential benefits (e.g., accessibility, traffic safety, productivity, well-being) as well as threats (e.g., safety and security risks, induced travel demand, urban sprawl) of automation. The task at hand is to comprehensively assess these impacts in preparation for the AV future. In order to perform such assessment, the analyst needs to anticipate the travel behaviour and aggregate travel patterns of the future AV users. This is not a trivial task: letting go of the steering wheel may mean more than making travel more pleasant for some travellers (or perhaps less so for others who prefer to stay in control). For current car drivers, this may mean gained time and energy in a day that could let them re-optimise their activity schedule. For instance, they may choose to perform work tasks during commute, and spend less time at work as a result. That would let them increase the time spent – and potentially, trips made – for leisure. The schedule changes may be even larger for those who may become new car users with the introduction of AVs. In the aggregate, such individual-level transitions will likely form complex and significant trends in the transport system, in terms of, for example, changing person- and vehicle-kilometres, modal split, spatial and temporal distribution of travel demand and land-use patterns. 
How can the policy makers anticipate such complex developments? The answer to such queries has, for a long time, been provided by the coupling of travel behaviour and (large-scale) transport models. However, these models have so far been developed, successfully applied and fine-tuned for predicting travel patterns with the current, non-AV travel modes. The question that needs to be answered before using them to predict transport system developments with AVs is evident: can they reliably describe the travel behaviour of future AV users? This PhD is, for the largest part, inspired by my conviction that the answer to this question is ‘no’. In particular, I argue that the time-use dimension of travel demand models – that is, the effects of time-use in AVs on daily time-use – has not been sufficiently developed. Even state-of-the-art models commonly assume that on-board activities in AVs will lower the so-called travel time penalty or the value of travel time. In the prediction context, this inevitably leads to a prediction of more person- (and vehicle-) travel. In the evaluation context, this approach gives an illusion that the benefits from travel time savings will accrue gradually and not step-wise, due to, for example, discrete schedule re-arrangements. A simplified modelling approach such as this can bias the predictions of aggregate travel patterns, which can lead to misguided policy decisions. This thesis aims to narrow the gap between the expected travel and time-use behaviour of AV users on the one hand and the models that describe it on the other. Throughout the chapters, it, first, provides intuition that such gap indeed exists. Second, it analyses empirical evidence that partially supports this intuition. Third, it develops three time-use and travel behaviour models that incorporate some of the missing behavioural elements. 
Lastly, this thesis provides first insights into how these model updates make a difference for the predictions of aggregate travel patterns – a crucial input for transport policy making for the AV era.","Automated vehicles; Travel behaviour; Time use; Transport model","en","doctoral thesis","Trail / TU Delft","978-90-5584-296-4","","","","TRAIL Thesis Series no. T2021/21, the Netherlands Research School TRAIL","","","","","Transport and Logistics","","",""
"uuid:ac6113f5-9777-49a3-b559-f14554ffc210","http://resolver.tudelft.nl/uuid:ac6113f5-9777-49a3-b559-f14554ffc210","To fold or not to fold?: An exploration of deployable porous biomaterials for the treatment of large bone defects","Bobbert, F.S.L. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Delft University of Technology (degree granting institution)","2021","Without our musculoskeletal system, which consists of bones, joints, and muscles, we would not be able to live. Our bones are responsible for the protection of our organs, the support of our body, and they enable our mobility. Therefore, it is important to keep them healthy. This is done by cells who repair small cracks and fractures caused by our daily activities through continuous remodeling of the skeleton. However, severe bone damage and defects can occur, for example, due to trauma (e.g., car accidents) and bone tumor resection. In this case, the defects are too large for the cells to repair and surgical intervention is required to support the bone regeneration process. Bone substitutes or porous biomaterials are used to fill these defects to help the cells to regenerate the bone. Bone substitutes require implantation via open surgery due to their large dimensions and rigidity. This causes great damage to the body, which results in a long recovery time for the patient and increases the risk of infections. To reduce the invasiveness of the implantation process, minimally invasive surgery (MIS) could be used. MIS techniques make it possible to perform surgical treatments through specific minimally invasive tools that are inserted into the body through small incisions. In order to make minimally invasive implantation possible, the dimensions of porous biomaterials should be reduced to fit through these small incisions. 
In addition, it has been demonstrated that the bone regeneration process can be optimized and infections could be prevented by applying precisely controlled nanopatterns to the surface of bone substitutes. However, surface patterning techniques can only be applied to flat surfaces. Therefore, it is not possible to apply surface patterns to the inner surfaces of three-dimensional porous structures, such as those fabricated through 3D printing techniques. To resolve","Bone tissue engineering; Deployable structures; Biomaterials; Origami; Kirigami","en","doctoral thesis","","978-94-6419-248-3","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:2b2a4f0a-81d9-4e5a-9a6b-807a73d617d0","http://resolver.tudelft.nl/uuid:2b2a4f0a-81d9-4e5a-9a6b-807a73d617d0","Your Car Knows Best","van Gent, P. (TU Delft Transport and Planning)","Farah, H. (promotor); Nes, Nicole Van (promotor); van Arem, B. (promotor); Delft University of Technology (degree granting institution)","2021","Reducing congestion through persuasive in-car advice.","","en","doctoral thesis","TRAIL Research School","978-90-5584-295-7","","","","TRAIL Thesis Series no. T2021/20, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:fb8c99cc-24d6-4718-8986-95833ffc1f49","http://resolver.tudelft.nl/uuid:fb8c99cc-24d6-4718-8986-95833ffc1f49","Balancing and redispatch: the next stepping stones in European electricity market integration: Improving the market design and the efficiency of the procurement of balancing and redispatch services","Poplavskaya, K. (TU Delft Energie and Industrie)","De Vries, Laurens (promotor); Weijnen, M.P.C. (promotor); Delft University of Technology (degree granting institution)","2021","Balancing and redispatch are essential services for the security and stability of the electricity network. Balancing refers to continuously maintaining a balance between supply and demand through activating flexible resources. Redispatch refers to changing the dispatch of generators to remedy network congestion. The need for flexibility resources for balancing and congestion management is ever more pressing due to several policy, market and technological aspects.
In a time of the fast-paced, massive transformation that is the energy transition, the electricity system and network are becoming more vulnerable to disturbances, requiring more flexibility. In this dissertation, we test the hypothesis that the efficiency of procurement can be improved with the help of market design adjustments. Thus, the author explores the following main question:
How can market design changes help transmission system operators procure balancing and redispatch services in a more economically efficient manner?
The answer to the main research question is subdivided into two parts: the first studies a well-defined and well-established balancing market; the second, building upon the analysis of the first, addresses issues related to redispatch. For this, market modelling was combined with analytical and empirical approaches to study the procurement of the two services.
Market harmonization and network integration are developing rapidly in the EU, creating new challenges for the electricity system. This dissertation addresses key issues that system operators, regulators, policymakers and market participants face in the electricity markets today and provides practical recommendations as to how market design can be improved and what other measures are required to ensure economic efficiency. The developed tools provide new means of decision support for energy system stakeholders.
This study not only contributes to improving network security through market design but, by helping reduce system costs, also contributes to overall economic welfare and the achievement of EU policy goals. Finally, it provides the scientific community with insights and methodological know-how, in particular in the field of agent-based modelling and machine learning, for the study of numerous future questions in the area of electricity market design, bidder incentives and market integration.
Energy-efficient design is therefore often studied. Architectural space layout can also affect building energy performance (BEP). However, only a few of the numerous studies on energy-efficient design have considered the effect of space layout, and within these studies, the isolated effect of space layout on BEP has hardly been analysed systematically.
The framework of Performative Computational Architecture (PCA) has been proven effective in improving BEP. PCA includes form generation, performance evaluation, and computational optimisation. With this framework, the building’s geometry and material properties are parametrised, and the performance is assessed for different combinations of design parameters. It aims to find the parameters that best satisfy the defined objectives. This method can include the generation and assessment of space layouts. However, only a few studies have tried to combine the automatic generation of space layouts with energy performance optimisation, or to systematically analyse the effects of and relations between space layout and energy performance.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-435-6","","","","","","","","","Climate Design and Sustainability","","",""
"uuid:710856a6-4f0e-49f4-a9f2-b3b75cb72570","http://resolver.tudelft.nl/uuid:710856a6-4f0e-49f4-a9f2-b3b75cb72570","Sensing and data fusion opportunities for raw material characterisation in mining: Technology and data-driven approach","Desta, F.S. (TU Delft Resource Engineering)","Buxton, M.W.N. (promotor); Jansen, J.D. (promotor); Delft University of Technology (degree granting institution)","2021","The rising demands for mined products lead to the extraction of materials in geologically complex regions. This calls for mining process changes and interventions driven by technology and advanced data analytics. The dynamic development of state-of-the-art sensor technologies and their potential use in mining is projected to significantly reduce costs in the industry. However, despite rapid advances in sensor technologies, there is still a demand for novel data analytical approaches to enable accurate characterisation of material along the mining value chain, as advanced data analytics is key to gain knowledge from the complex sensor-derived data. Therefore, sensor technology, coupled with advanced data analytics is crucial for the rapid and accurate characterisation of material in mining operations. Access to rapid and accurate data on the key geological attributes (e.g., mineralogy and geochemistry) along the mining value chain has significant implications for the production process efficiency in commercial mines. Such data would greatly assist the improvement of deposit models, optimise ore processing, specify product quality and improve operational decision-making. Sensor technologies operate over a specific range of the electromagnetic spectrum and provide information on certain aspects of material properties that are of potential interest for mining extraction. However, a single sensor might not provide a sufficiently comprehensive description of a material’s composition. 
This introduces uncertainty into both resource estimation and requirements definition for mineral processing. Thus, it is necessary to utilise strategic sensor combinations to improve accuracy, minimise uncertainty, and enhance specific insights into material composition. Combinations of sensors can be implemented using a data fusion approach. The fusion of sensed data can be realised at different levels: low-, mid-, and high-level, when the integration occurs at the data level, feature level and decision level, respectively. This research aims to develop methods for the characterisation of raw materials using multiple sensor technologies and the sensor-combination concept (data fusion at different levels) that can potentially be applied to mining operations. The study involved multispectral and hyperspectral imaging techniques, such as red-green-blue (RGB) imaging, visible and near-infrared (VNIR) and short-wave infrared (SWIR) hyperspectral imaging, and point spectroscopic techniques, such as mid-wave infrared (MWIR), long-wave infrared (LWIR) and Raman spectroscopy, to acquire spectral information over a wider range of the electromagnetic spectrum. First, an investigation was conducted on the usability of the individual sensor technologies coupled with data analytics for the characterisation of a polymetallic sulphide deposit at different levels. The different levels of material characterisation aimed to allow mineral mapping, ore–waste discrimination, fragmentation analysis, and semi-quantitative analysis of elements and minerals. The positive outcomes of the use of the individual techniques led to the development of a data fusion framework that enables data integration (including multi-scale and multi-resolution data) at different levels (e.g., low-level and mid-level). 
The developed data fusion concept was implemented and validated using different test scenarios...","","en","doctoral thesis","","978-94-6423-318-6","","","","","","","","","Resource Engineering","","",""
"uuid:f4dac086-ca05-4c49-acc2-748dcc4fcd42","http://resolver.tudelft.nl/uuid:f4dac086-ca05-4c49-acc2-748dcc4fcd42","Dynamic Polymer Hydrogels through Reversible Thiol Conjugate Addition Crosslinks","Fan, B. (TU Delft ChemE/Advanced Soft Matter)","Eelkema, R. (promotor); van Esch, J.H. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis describes the experimental development of new dynamic hydrogels based on reversible thiol conjugate additions. Redox-controlled hydrogels and self-healing injectable hydrogels have been achieved by introducing reversible thiol conjugate additions to crosslink polymers, leading to hydrogel formation. The overall objective in this thesis was to develop a new fuel-driven transient polymeric hydrogel formation system. Although this final aim was not entirely met, we developed several important concepts along the way, which are described in Chapters 2-5. Chapter 2 describes a new chemical reaction network for fuel-driven transient formation of covalent S-C bonds, based on redox-controlled conjugate addition and elimination. We found that the formation and breaking of covalent bonds in the reaction cycle can be realized in separate reactions, but side reactions hindered the operation in full cycle. If such problems would be solved, this CRN could have potential to be used to form fuel-driven polymer materials. Chapter 3 investigates the formation of a self-healing injectable hydrogel by introducing dynamic thiol-alkynone double addition crosslinks in a polymer network. Such dynamic hydrogels show self-healing and shear thinning properties, confirmed by rheological measurements, macroscopic self-healing, and injection tests. Good cytocompatibility of these hydrogels opens an opportunity for future biomedical applications such as tissue engineering and drug delivery. Chapter 4 describes a redox-controlled reversible thiol-alkynone double addition. 
First, we created a redox-responsive hydrogel by using such reversible addition for the formation of crosslinks in hydrogels. Second, based on this thiol-alkynone double addition, we developed a fuel-driven transient formation of a thiol-alkynone double adduct on small molecules. Chapter 5 explores coupling and decoupling reactions of thiols to an azanorbornadiene bromo sulfone. A self-healing hydrogel can be formed by using azanorbornadiene bromo sulfone to couple two thiol groups together. Such hydrogels are also degradable, triggered by glutathione. Glutathione-triggered dye release experiments suggest this self-healing hydrogel is a potential carrier of drugs, cells or vaccines for biomedical applications.","Dynamic hydrogels; Thiol addition; Fuel-driven; Self-healing; Dynamic covalent chemistry","en","doctoral thesis","","","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:fac93ccf-7e0b-4971-a797-d2617e378a1d","http://resolver.tudelft.nl/uuid:fac93ccf-7e0b-4971-a797-d2617e378a1d","Simultaneous Optimisation of Composite Wing Structures and Control Systems for Active and Passive Load Alleviation","Binder, S. (TU Delft Aerospace Structures & Computational Mechanics)","De Breuker, R. (promotor); Bisagni, C. (promotor); Delft University of Technology (degree granting institution)","2021","Future composite aircraft wing designs will exploit anisotropic material properties by aeroelastic tailoring and include active control methods for manoeuvre and gust load alleviation. The research focuses on the simultaneous optimisation of aeroelastically tailored wing structures with active manoeuvre and gust load alleviation as well as the analysis of their interaction.
The development of the framework allowing the rapid integrated preliminary design of aeroservoelastically tailored wing structures includes the formulation of a suitable model order reduction method for the aerodynamic models. The approach establishes reduced-order models that have high robustness against structural modifications and thereby can be used throughout the entire optimisation process. Besides passive structural tailoring facilitated by exploiting the anisotropic properties of composite materials, active aeroelastic control is implemented by scheduled control surface deflections redistributing the aerodynamic loads during manoeuvres to achieve manoeuvre load alleviation and a feed-forward control law for gust load alleviation. The panel-based aerodynamic modelling of spoiler deflections is improved by a correction of the spatial distribution of the boundary condition derived from higher fidelity simulation data. Rate and deflection saturation is considered in a nonlinear manner. Various structural weight optimisations are performed, with the individual technologies being activated or deactivated. Besides the use of different material allowables, alternative approaches such as simultaneous, separate and iterative optimisation are investigated. Also, the influence of configurational changes on the optimisation results is analysed, considering the addition of winglets and a modification of the control surface layout.
The results of the individual and combined optimisations reveal significant design differences. A substantial shift of effectiveness from active aeroelastic control to passive structural tailoring is observed with increased allowables, resulting in more flexible, and hence less stiff, wing designs. While the results confirm that simultaneous optimisation is the only way to find the optimal solution, separate and iterative approaches offer the possibility to separate certain subspaces of the optimisation and still reach results close to the optimum. The investigated configurational changes influence the prevailing load hierarchy and active constraints. As a result, the optimiser reacts to these changes by adapting the mechanism of structural tailoring to optimally fulfil the respective active constraints.","Aeroelasticity; Aircraft Design; Active Load Control; Aeroelastic Tailoring; Model Order Reduction","en","doctoral thesis","","978-94-6421-412-3","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:a6a5f8a8-e328-44d8-a265-dfe2b75a9bf8","http://resolver.tudelft.nl/uuid:a6a5f8a8-e328-44d8-a265-dfe2b75a9bf8","Calibration Techniques for Power-efficient Residue Amplifiers in Pipelined ADCs","Sehgal, R.K. (TU Delft Electronic Instrumentation)","Bult, K. (promotor); Makinwa, K.A.A. (promotor); Delft University of Technology (degree granting institution)","2021","Residue amplification plays a key role in determining the energy efficiency, area and performance of high-speed pipelined ADCs. At its core, a residue amplifier simply consists of four transistors that transfer a differential input voltage to a capacitive load with the desired gain. However, in order to ensure gain accuracy over PVT, the core amplifier has to be augmented with extra circuitry to achieve high settling accuracy, at the expense of area and power dissipation. In this dissertation, techniques to improve the power efficiency of residue amplifiers were investigated, by adopting a two-pronged approach -
1. Developing new amplifier topologies that can achieve high resolution without relying on high DC gain and settling accuracy, with the help of linearization in the analog domain, and
2. Developing a deterministic calibration architecture that allows the calibration of linear gain error and distortion in the background while achieving fast convergence.
Load flow (LF) problems and optimal flow (OF) problems have been widely studied for single-carrier (SC) systems. However, conventional LF models for the separate single-carrier networks (SCNs) are not able to capture the full extent of the coupling in multi-carrier energy systems (MESs). Recently, different LF models for MESs have been proposed, either using the energy hub (EH) concept or using a case-specific approach. Yet, they do not state how the graphs of the SCNs can be combined into one multi-carrier network (MCN). A good description of integrated networks of multiple energy carriers is very important: some couplings between energy systems, while possible in practice, can lead to modeling problems. Although the EH concept can be applied to a general MES, it is unclear in the existing models how the EH should be represented in the graph of the MES. On the other hand, the case-specific approaches are not easily applicable to general MESs. Moreover, the effect of the coupling on the solvability and well-posedness of the system of nonlinear LF equations for a MES has received little attention in these models.
Operational optimization requires the detailed LF equations to be incorporated into the optimization problem. Nonlinearities of these equations cause issues with the convexity and solvability of the OF problem. Hence, the formulation of the LF equations, and the way they are incorporated in the OF problem, greatly influence the solvability of the OF problem and the convergence of the optimization algorithms.
In this thesis, we address some of the existing issues and possibilities to improve on the available models. We present a graph-based framework for steady-state load flow analysis of general MESs that consist of gas, electricity, and heat. The framework is based on connecting the SCNs to heterogeneous coupling nodes, using homogeneous dummy links, to form one connected MCN. Load flow equations are associated with each network element, including the coupling nodes; these are combined with boundary conditions to form one integrated system of nonlinear equations that needs to be solved to find the solution to the LF problem. This is the integrated approach to formulating the LF problem of a MES.
Alternatively, the model of the connected MCN can be reformulated, such that a MES is represented by a disconnected MCN that consists of the SC networks and a coupling network. This allows for a more decoupled approach to the LF problem, in which the system of nonlinear equations, now consisting of interface conditions connecting the coupling network with the SC networks and the LF equations per SC network, can be solved making use of individual solves for each SC network.
The model framework is validated using a small example MES. Using the integrated approach, we formulate the LF problem of various example MESs, of varying size, with various coupling models and topologies, and various formulations in the single-carrier parts, and solve their LF problems using the Newton-Raphson method (NR). Using these examples, we investigate the effect of coupling on the system of LF equations and discuss the problems arising due to the coupling of SC networks on the solvability of the LF problem. Based on numerical experiments, we compare the convergence behavior of NR for the various single- and multi-carrier systems. Finally, we formulate and solve the LF problem of MESs using the integrated approach and using the decoupled approach. We compare the systems of equations, and we compare the convergence of the solution methods for the two approaches.
Furthermore, in this thesis, we consider two ways to include the LF equations in the OF problem for general MESs, called formulation I and formulation II. In formulation I, optimization is over the combined control and state variables, with the LF equations included explicitly as equality constraints. In formulation II, optimization is over the control variables only, and the LF equations are included as a subsystem, which is solved to obtain the state variables for given control variables. We compare the two formulations theoretically, and we illustrate the effect of the two formulations on the solvability of the OF problem by optimizing two MESs.
This study shows that the graph-based framework can be used to formulate and solve the steady-state LF problem for general MESs that consist of gas, electricity, and heat, both with the integrated approach and with the decoupled approach. Moreover, the framework can be used with different components and models, both in the SCNs and for the coupling units. Therefore, our framework includes and extends the currently available LF models for MESs. Furthermore, the model framework provides guidelines to obtain a solvable steady-state LF problem for MESs. We find that using the decoupled approach to perform LF analysis is slower than using the integrated approach. For the LF problem of an example MES with a tree-like structure, NR is independent of the size of the network and of the coupling, and NR requires at most as many iterations as the slowest single-carrier network.
Both formulation I and formulation II result in a solvable OF problem. For the two example MESs, the optimization algorithms require significantly fewer iterations with formulation II than with formulation I.","Adjoint approch; Decoupled load flow; Fixed-point method; Gas networks; Heat networks; Integrated energy systems; Load flow analysis; Load flow problem; Multi-carrier energy systems; Multi-carrier energy networks; Natural gas; Newton-Raphson (N-R) method; Newton's method; Nonlinearly constrained optimization problem; Numerical analysis/modeling; Optimal Power Flow problem; Power flow analysis; Power grids; Scaling","en","doctoral thesis","","978-94-6366-419-6","","","","","","","","","Numerical Analysis","","",""
"uuid:5e5eee1b-3ed8-4479-944b-0b5e35a05047","http://resolver.tudelft.nl/uuid:5e5eee1b-3ed8-4479-944b-0b5e35a05047","Unravelling the hydrolytic activity of sludge degrading aquatic worms","de Valk, S.L. (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); de Kreuk, M.K. (promotor); Delft University of Technology (degree granting institution)","2021","The overall objective of this thesis was to investigate ways to improve the extent and rate of waste activated sludge (WAS) hydrolysis by researching the WAS degrading activities and mechanisms of the aquatic worm Tubifex tubifex (T. tubifex) as a starting point. The WAS degrading aquatic worms were taken as a model “biochemical reactor” whose conversion processes still need to be unravelled. Because the worms are known for their excellent performance in WAS-solids reduction, i.e., up to 45% volatile solids (VS) reduction in 4–5 days, the focus was on worm-based enzymatic processes for improving WAS hydrolysis.
Generally, T. tubifex predation shows significantly higher WAS conversion rates compared to anaerobic and aerobic digestion processes. However, information on the effect of WAS predation on the overall WAS biodegradability was lacking. To this end, experiments were conducted to assess the ultimate WAS biodegradability potential, after which the results were used as a reference to compare the biodegradability potential of different combinations of worm predation and anaerobic digestion. Interestingly, worm predation combinations showed superior solids removal rates and superior overall conversion rates compared to conventional anaerobic digestion alone. However, the overall WAS biodegradability potential was similar in both experimental set-ups, reaching 58% and 49% removal for chemical oxygen demand (COD) and VS, respectively.
The improved WAS conversion rates during worm predation were related to the efficient removal of protein-like and, to a smaller extent, polysaccharide-like substances from the sludge matrix. Additionally, alginate-like exopolysaccharides (ALE) were partly consumed during worm treatment of WAS. The removal of protein, polysaccharide and ALE-like substances resulted in the disintegration of sludge flocs and the release of fulvic and humic substances, as well as the cations Mg2+, Al3+ and Fe3+, from the sludge matrix. The cations and the humic and fulvic substances have a known structural function in the extracellular polymeric substances (EPS) of sludge flocs and are therefore most likely tightly associated with the removed protein-like fraction.
Consistent with the removal of a protein-like fraction, an increased protease activity was observed in the predated WAS. The improved protease activity was likely related to T. tubifex-based enzymes and/or enzymes excreted by intestinal proteolytic bacteria. More specifically, a maximum of 73% of the proteolytic activity, related to the conversion of the model substrate casein, was due to the activity of the worms, while the remaining activity could be linked to the intestinal proteolytic bacteria.
The synergy between bacteria and worms was further investigated using microbial community analysis. We showed that the worm faeces produced through WAS predation shared more similarities in microbial structure with predated protein-rich substrates than with the WAS itself. The microbial change towards a microbiome apparently related to protein degradation was probably due to favourable conditions in the worm gut that facilitated a protein-degrading microbial community. It was further found that the genera Burkholderiales, Chryseobacterium and Flavobacterium were associated with predation by T. tubifex and are likely related to protein degradation.
Overall, the research demonstrated that the key aspects of efficient WAS hydrolysis are related to the removal and conversion of protein- and alginate-like substances, as well as elevated protease activity. The type of proteases and possibly other mechanisms, such as the lytic capabilities of the aquatic worms, are yet to be investigated.
extended the traditional signal processing tools to the graph domain. Under these circumstances, the emergence of graph signal processing has offered a brand-new framework for dealing with complex data. In particular, the graph Fourier transform (GFT) lets us analyze the spectral components of a graph signal in the graph frequency domain. Based on the GFT, graph filters provide useful tools to modify or extract spectral parts in terms of different objectives, e.g., using a low-pass graph filter to construct graph signals without noise. This thesis mainly focuses on designing and implementing graph filters. Similar to traditional signal processing, we investigate two types of graph filters: finite impulse response (FIR) and infinite impulse response (IIR) graph filters. Moreover, this thesis takes both undirected and directed graphs into account for the design methods and implementations.
The prospect of stricter national and international emission standards for the shipping industry is a driving force in the search for alternative shipping fuels, such as liquefied natural gas (LNG). However, new challenges arise with the widespread use of LNG. For example, there is a desire to use LNG cargo containment systems at lower filling levels. These filling levels are strictly limited to prevent the movement of liquid inside the containment system, which is known as sloshing. Sloshing inside a cargo containment system can result in extreme wave impact events with the potential to cause structural damage. Therefore, a fundamental understanding of these extreme wave impact events is required before studying increasingly complex phenomena. The study of wave impacts on a wall has been an active area of research for decades. Moreover, the impact of waves upon structures is relevant for many fields such as ocean, coastal, and maritime engineering. The generation of repeatable waves in a laboratory environment is not trivial (Bagnold, 1939). Small changes in the experimental conditions, such as the water depth and the wave generation method, result in significant impact pressure variability. The impact pressure variability is even observed in carefully repeated wave impact experiments with minimal variability of the input parameters. For these measurements, the source of the impact pressure variability is thought to be the instability development on the wave crest. However, the mechanism that is responsible for the formation of these instabilities is still largely unknown. The aim of this work is to gain insight into the sources of wave impact pressure variability. This is accomplished using direct measurements of the liquid free surface and particle image velocimetry of the surrounding air.
The measurements are limited to a single wave (i.e., with a fixed steering signal and water depth) at atmospheric conditions, because of the complexity of the experimental measurements. A plunging breaking wave with a large gas pocket is generated that impacts on a vertical wall. The compression of the large gas pocket induces a significant gas flow between the wave crest and the vertical impact wall, which results in the formation of instabilities on the wave crest. Quantitative measurements of the liquid free surface are obtained with an extension of the planar laser induced fluorescence (PLIF) method. The newly developed scanning stereo-PLIF measurement technique uses a stereo-camera set-up with a self-calibration procedure adapted for free surface flows. Thereby, the stereo-PLIF technique enables measurements of a free surface over a two-dimensional domain (e.g., y=f(x,z,t)). The system is versatile with a minimal influence on the fluid properties and the measurement domain can be scaled as needed. A repeatable plunging breaking wave is created in the wave flume of the Hydraulic Engineering Laboratory at the Delft University of Technology. The wave encloses a gas pocket as it approaches the vertical impact wall. Initially, the plunging breaking wave is globally comparable to waves that do not impact on a vertical wall. The aspect ratio of the cross-sectional area of the gas pocket remains relatively constant at Rx/Ry = 1.6 ( ∼√3). Furthermore, the wave velocity (√gh0) and wave tip velocity (1.2√gh0) are initially similar to that of a plunging breaking wave. On the other hand, the trajectory of the wave tip is altered compared to that of a typical plunging breaking wave. The trajectory of the wave tip is globally similar over repeated wave impact measurements. However, moments before impact the wave tip is deflected by the gas expelled from the gas pocket. The deflection of the wave tip introduces significant variation between the repeated measurements. 
On close inspection, the wave tip resembles a liquid sheet that is destabilized by an initial Kelvin-Helmholtz instability (Villermaux et al., 2002). The flapping liquid sheet accelerates the wave tip, which triggers the development of a Rayleigh-Taylor instability. This results in approximately equally spaced liquid filaments (i.e., liquid fingers) over the spanwise direction of the wave. The spanwise wavelength depends on the density ratio (ρa/ρl) and surface tension, which was previously shown to be a source of wave impact pressure variability. Additionally, particle image velocimetry measurements are performed to determine the interaction between the liquid and gas during a wave impact event. The global gas flow is similar to that of a plunging breaking wave, where a vortex develops on the leeward side of the wave. The vortex consistently separates from the breaking wave and lingers at the back of the breaking wave in the stagnant air. The development of circulation is typical for a vortex that eventually separates at a universal time scale denoted by the formation number (Gharib et al., 1998). However, a typical formation number cannot be defined in this particular case, due to the simultaneous change of both the length and velocity scales. The velocity profile between the wave tip and the vertical impact wall resembles that of a flow past a bluff body. A fit of the measured velocity profile agrees well with the velocity derived from mass conservation. Interestingly, the velocity close to the wave tip is approximately 2 times higher than the bulk velocity estimate. The high velocity close to the wave tip can thus result in an earlier onset of instability development compared to estimates based on the bulk velocity. Furthermore, the flow tends to separate close to the tip just before impact on the vertical wall. The effect of this flow separation on the impact pressure variability depends on the global wave shape prior to impact.
For the case with a disturbance on the wave crest, the secondary vortex that forms close to the wave tip tends to break up. On the other hand, if the wave crest is smooth the secondary vortex remains attached. The attached secondary vortex increases the lift on the wave tip, which results in a significant deflection of the wave tip. Consequently, the development of secondary vortices close to the wave tip results in wave impact pressure variability, as the typical deflection is larger than the membrane diameter of a contemporary pressure transducer. Local phenomena such as flow separation and the development of instabilities define the variability of the peak pressure during wave impacts. On the other hand, the global characteristics of an air-water wave impact on a vertical wall can be retrieved with pressure impulse models (Cooker et al., 1995). The maximum wave impact pressure is relevant for LNG containment systems and wave energy converters. Consequently, numerical models that aim to quantify wave impact pressure variability require accurate models of both the gas phase and the development of free surface instabilities.","Variability; Wave impacts; Free surface waves; PIV measurements; Laser-induced fluorescence; LNG; Wave dynamics; Stereo Image","en","doctoral thesis","","978-94-6384-226-6","","","","","","","","","Multi Phase Systems","","",""
"uuid:8b23ce5e-fd8c-4475-972d-bc50d7a2df1a","http://resolver.tudelft.nl/uuid:8b23ce5e-fd8c-4475-972d-bc50d7a2df1a","Sub synchronous oscillations in modern transmission grids: Design and validation of novel concepts for mitigating adverse dfig-ssr interactions","Sewdien, V.N. (TU Delft Intelligent Electrical Power Grids)","van der Meijden, M.A.M.M. (promotor); Rueda, José L. (promotor); Delft University of Technology (degree granting institution)","2021","The ongoing energy transition results on the one hand in a proliferation of power electronics interfaced devices and on the other hand in a decreasing availability of conventional synchronous generation. These developments pose important challenges for transmission system operators to operate a low inertia power system. As part of my research I have created a list of 28 related challenges, validated by industry, that are grouped into three categories: (i) Reduced Voltage and Frequency Support, (ii) New Operation of the Power System and (iii) New Behaviour of the Power System. The focus of this research is on category (iii) and addresses the sub synchronous resonance (SSR) phenomenon between a doubly fed induction generator (DFIG) and a series compensated transmission line. This phenomenon is denoted as DFIG-SSR in this thesis. Failing to adequately address resonances results in, among others, degradation of the power quality, protection tripping, physical damage to power system equipment and ultimately instability in the power system. The main objective of this research is to investigate and validate the degree of effectiveness of the existing phase imbalance compensation concept, as well as to design and validate a new prediction gain scheduling control concept for mitigating DFIG-SSR. For these investigation, design, and validation activities, electromagnetic transient (EMT) simulation models of the DFIG wind turbine are developed using Power System Computer Aided Design (PSCAD).
In line with common practice, the topology of the IEEE First Benchmark Model is used as a small-size study model, whereas the larger IEEE 39-Bus Model is used for validation of the obtained results. The impedance-based stability method is used to quantify the impact of potential mitigation solutions on DFIG-SSR...","","en","doctoral thesis","","978-94-6384-215-0","","","","","","2022-06-24","","","Intelligent Electrical Power Grids","","",""
"uuid:81e2fcf1-dbde-4814-8337-06f757348d6e","http://resolver.tudelft.nl/uuid:81e2fcf1-dbde-4814-8337-06f757348d6e","Kirkwood–buff integrals from molecular simulation","Dawass, N. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Moultos, O. (copromotor); Delft University of Technology (degree granting institution)","2021","The Kirkwood–Buff (KB) theory is one of the most rigorous solution theories that connects molecular structure to macroscopic behaviour. The key quantities, the so-called Kirkwood–Buff integrals (KBIs), are defined either in terms of fluctuations in the number of molecules or as integrals of radial distribution functions over open subvolumes. In the grand-canonical ensemble, KBIs of infinitely large and open systems are directly related to thermodynamic properties such as partial derivatives of chemical potentials and partial molar volumes. Using molecular simulations, it is only possible to study small systems with a finite number of molecules, and therefore finite-size effects should be considered.","Molecular Thermodynamics; Kirkwood-Buff theory","en","doctoral thesis","","978-94-6384-224-2","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:a586c76e-ff97-4f24-9b0f-f62417539495","http://resolver.tudelft.nl/uuid:a586c76e-ff97-4f24-9b0f-f62417539495","Prognostics and health management of safety relevant electronics for autonomous driving","Prisacaru, Alexandru (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Gromala, P.J. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis describes a series of experiments, algorithms, and methodology development for implementing Prognostics and Health Management (PHM) in the field of automotive electronics. Furthermore, a new PHM framework is proposed, explicitly tailored for harsh-environment electronics. In addition, the entire apparatus is built, including the sensing capabilities of electronic packages and control units. A central PHM ECU is also developed to acquire the signals from the sensors, process them, and perform calculations.","prognostics; piezoresistive stress sensor; diagnostics; machine learning; reliability; electronic packages; virtual twin","en","doctoral thesis","","978-94-6421-400-0","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:510dd3e1-e5eb-4032-a785-c59df38f8c58","http://resolver.tudelft.nl/uuid:510dd3e1-e5eb-4032-a785-c59df38f8c58","Modeling Human Spatial Behavior Through Big Mobility Data","Wang, Y. (TU Delft Transport and Planning)","van Arem, B. (promotor); Timmermans, Harry (promotor); Correia, Gonçalo (copromotor); Delft University of Technology (degree granting institution)","2021","People are engaged in a variety of activities through space every day. The choice of type and location of activities is known as human spatial behavior. Urban decision makers need to understand how land use and transportation systems can shape human spatial behavior in order to design better systems. In the past, they have collected mobility data through travel surveys to understand human spatial behavior. Today, a wide range of automatically collected data have become available as alternative data sources.
Big mobility data versus traditional travel survey data has been a long-standing debate in human mobility and travel behavior research. Big data are intuitively assumed to be better, but this is not always the case. Big mobility data relate to a large number of travelers and trips, but little is known about each individual traveler and trip, not to mention that their information sometimes has to be aggregated for privacy reasons. On the other hand, travel survey data, despite covering only a small group of respondents, tend to include abundant features about each traveler, such as age and attitudes, and each trip, such as trip purpose. Assuming that each row represents one traveler and each column represents one feature, big mobility data can be described as long and thin, and “small” survey data (Chen et al., 2016) as short and wide.","","en","doctoral thesis","","978-90-5584-293-3","","","","","","","","","Transport and Planning","","",""
"uuid:106952cc-6ac9-4c5c-9b2b-e0d07b3bd8df","http://resolver.tudelft.nl/uuid:106952cc-6ac9-4c5c-9b2b-e0d07b3bd8df","Development of high-resolution ex vivo single-photon and positron emission tomography","Nguyen, M.P. (TU Delft RST/Biomedical Imaging)","Beekman, F.J. (promotor); Goorden, M.C. (copromotor); Delft University of Technology (degree granting institution)","2021","Molecular imaging aims for the visualisation, characterisation, and quantification of biological processes in humans and other living systems at the molecular and cellular level. For today’s patient care, molecular imaging allows for (early) detection and characterisation of disease, efficient planning and assessment of treatments, and contributes to improved patient care in ten-thousand clinics across the globe. In clinical molecular imaging, planar scintigraphy, single-photon emission computed tomography (SPECT), and positron emission tomography (PET) are among the most commonly used modalities. This thesis focuses on preclinical SPECT and PET, which are applied to image small animals such as mice and rats in basic and translational research.","ex vivo; SPECT; PET; pinhole; small animal; molecular imaging; collimator; system matrix; Monte Carlo simulation","en","doctoral thesis","","978-94-6423-317-9","","","","","","","","","RST/Biomedical Imaging","","",""
"uuid:62e7441c-72c7-4445-bf72-fb8ef047308e","http://resolver.tudelft.nl/uuid:62e7441c-72c7-4445-bf72-fb8ef047308e","Inzicht in de praktijk van het toezicht: Een empirisch onderzoek naar het verloop van operationele inspectieprocessen in de luchtvaart en zeevaart","Goosensen, H.R. (TU Delft Organisation & Governance)","de Bruijn, J.A. (promotor); van Bueren, Ellen (promotor); Delft University of Technology (degree granting institution)","2021","‘Failing supervision’, ‘enforcement as a problem’, ‘supervision under fire’ and ‘supervision in crisis’. Statements such as these show that regulatory supervision in the Netherlands is often under pressure. This attracts considerable attention from administrators and policymakers, and has led to a multitude of initiatives to improve supervision. This book is about supervision and focuses on its most operational level: the interactions between inspectors and inspectees. The researcher followed inspectors of the Inspectie Verkeer en Waterstaat during inspections and audits in the aviation and maritime sectors, opening the ‘black box’ of day-to-day practice. The underlying assumption was that insight into these operational processes is necessary to make good policy: insight into operations reduces the chance of unexpected and undesired effects of policy. This book provides in-depth insight into supervision at its most operational level. It is full of observations that paint a rich picture of the interaction between inspectors and inspectees. It shows that inspectors weigh a multitude of considerations when carrying out inspections and audits, and it offers valuable insights for policymakers and supervisory authorities.","","nl","doctoral thesis","","978-94-6419-247-6","","","","","","","","","Organisation & Governance","","",""
"uuid:477007fb-bfa5-4284-8705-b7644cc0b248","http://resolver.tudelft.nl/uuid:477007fb-bfa5-4284-8705-b7644cc0b248","Towards Personalised Dementia Care: Approaches, Recommendations and Tools from Design","Wang, G. (TU Delft Applied Ergonomics and Design)","Delft University of Technology (degree granting institution)","2021","According to Person-Centred Care, as far as possible, people with dementia should be cared for in a way that takes into account their personality, life experiences and preferences. Personalisation is hence the core of Person-Centred Care, yet approaches, recommendations and tools for this purpose are lacking. Therefore, the author investigated how this personalisation could be facilitated by design. Specifically, the author explored how to personalise the care for Behavioural and Psychosocial Symptoms of Dementia (BPSD), because BPSD contributes to the most stressful, complex, and costly aspects of dementia care. Non-pharmacological interventions for BPSD care have been developed, which offer ample room for personalisation. From the field of healthcare, the author drew on Person-Centred Care, and from there, she looked at BPSD through the lens of the Need-driven Dementia-compromised Behaviour (NDB) model, in which BPSD is interpreted as a way for people with dementia to express their unmet needs. This model categorises the factors contributing to BPSD, which can be unique to each person with dementia. From the field of design, she approached the challenge through the lens of Human-Centered Design and explored three design approaches that are most relevant in designing for personalised BPSD care, namely, Ergonomics in Ageing, Co-design and Data-enabled Design. The author hypothesised that a combination of these three design approaches could reveal insights into the factors contributing to BPSD, as mentioned in the NDB model, for each person with dementia exhibiting BPSD symptoms.
She further hypothesised that gaining insights about these factors could facilitate the design of personalised dementia care. The author implemented a series of steps to evaluate these hypotheses from the literature and from the field. The insights gained throughout the literature and field research enabled the integration of the three design approaches into Knowme, a toolkit for designing for personalised dementia care. The author concludes with a summary of the research findings, a reflection on the research approach, and recommendations for future work.","Human-Centered Design; Dementia; Person-centred care; Ergonomics; Co-design; Data-enabled design; Design tools; Research through design; Personalisation; Personalised care","en","doctoral thesis","","978-94-6366-409-7","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:cd993e69-6310-4e64-87ab-1ac1a1fcb149","http://resolver.tudelft.nl/uuid:cd993e69-6310-4e64-87ab-1ac1a1fcb149","4d open spatial information infrastructure: Participatory urban plan monitoring in indonesian cities","Indrajit, A. (TU Delft GIS Technologie)","van Oosterom, P.J.M. (promotor); van Loenen, B. (promotor); Delft University of Technology (degree granting institution)","2021","An urban plan contains a set of agreements from all stakeholders that may directly impact livelihood. However, many cities show a ‘plan and forget’ behavior by not monitoring and evaluating their urban plans. While local citizens are often excluded after the urban plan is enacted. Gibbs (2016) warned of the risk of this behavior by saying, “local communities are given the impression that the risk is being managed, when in fact it is not.” Therefore, as the affected party, local citizens should be included in the development of the plan and the monitoring, evaluating, and reporting of urban plan implementation. However, in reality, a collaboration between authorities and local citizens in monitoring land development is rare. In some cases, cities do not share urban plans with society. This situation motivates this research by developing a framework to make urban plans interoperable and accessible to the broader community by determining four particular objectives: (i) to identify what type and specification of spatial data are required to support participatory monitoring of the implementation of the urban plan; (ii) to design information interoperability of land-use plans for participatory urban plan monitoring; (iii) to construct spatial data governance that allows two-way information flows between stakeholders in participatory urban plan monitoring; and (iv) to develop a prototype for PUPM that enables two-way information flows and multidimensional spatial representation to support participatory urban plan monitoring. 
This study was built upon the four functions of land management: land tenure, land valuation, land-use planning, and land development. Information interoperability is essential for allowing interaction between these functions, particularly in PUPM. This study supports the revision of ISO 19152 on the Land Administration Domain Model (LADM) by developing the Spatial Plan Information Package (SP Information Package) to accommodate information from land-use planning and land development planning. In recent years, cities have adopted the digital twin concept to represent physical urban objects, exploiting 3D spatial information to improve the spatial thinking of all stakeholders. A common interest of urban planners in using updated 3D spatial information for Rights, Restrictions, and Responsibilities (RRRs) was identified for further analysis. Therefore, this study proposes the digital triplets concept for representing the legal situation of the land in a four-dimensional representation (3D geometry with the temporal aspect managed as an attribute). This thesis presents the development of a prototype using 4D spatial representation to support PUPM. The prototype enables two-way information flows between urban planners and citizens, enabling the co-production of urban information. This study also proposes user-centered and data governance aspects in a holistic approach to implementing the proposed standard and technology, particularly for sharing RRRs with all stakeholders through an Open Spatial Information Infrastructure. The results of this study were implemented with actual urban plan data in the two biggest Indonesian cities: Jakarta and Bandung. A usability test was conducted to assess the implementation of participatory urban plan monitoring using RRRs.
The result shows that our approach can accommodate RRRs from the spatial planning process, providing a complete overview of the legal situation of the land or urban space to all stakeholders to monitor the implementation of urban plans to support the Sustainable Development Goals: ‘plan and progress’.","open spatial information infrastructure; urban plan; participatory monitoring; information interoperability; multidimensional representation","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-433-2","","","","A+BE | Architecture and the Built Environment (2021)","","","","","GIS Technologie","","",""
"uuid:5b19a1ef-8c82-41f0-a6ee-c16402daa110","http://resolver.tudelft.nl/uuid:5b19a1ef-8c82-41f0-a6ee-c16402daa110","Design and processing of silicon and silicon carbide sensors","el Mansouri, B. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Delft University of Technology (degree granting institution)","2021","Downscaling of transistors, also known as Moore’s law, has been the main propelling force behind the microelectronics industry. This trend will eventually come to an end due to physical limitations, hence an alternative is required to further drive technological progress. This gave an incentive to evolve in other directions as well, also known as More than Moore (MtM). This concerns all technologies adding functionality to integrated circuits (IC), all packaged as a single system. This can be done by combining digital and non-digital elements, e.g. analog/RF, passives, microelectromechanical devices (MEMS), and so on. The non-digital elements are not necessarily scalable according to Moore’s law. Therefore, in this thesis we investigated the possibilities of using silicon and silicon carbide (SiC) to fabricate a number of sensors, capable of measuring weak signals. This is done by having a multidisciplinary investigation spanning from electrical, to mechanical and optical domains.","","en","doctoral thesis","","","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:cef506da-3ec7-40b8-9a2c-30cbfbcd9f5c","http://resolver.tudelft.nl/uuid:cef506da-3ec7-40b8-9a2c-30cbfbcd9f5c","Biofouling in open recirculating cooling systems: Characterization and control of biofilms and Legionella pneumophila","Pinel, I.S.M. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Vrouwenvelder, J.S. (promotor); Delft University of Technology (degree granting institution)","2021","Open recirculating cooling systems have been vital elements in industry since the early 20th century. Their purpose is to release excess heat from processes through water evaporation so that production can be carried out at optimal temperature. With the global development of industrial activities such as manufacturing, electricity and chemical production, the demand for cooling capacity keeps increasing. Biofouling is one of the main phenomena negatively affecting the performance of wet cooling systems. This phenomenon leads to: (i) loss of heat transfer efficiency, (ii) clogging, (iii) microbiologically influenced corrosion, and (iv) health risk associated to the development of pathogens. Controlling bacterial growth in open recirculating cooling systems is very challenging, and is generally performed via dosage of disinfectants such as sodium hypochlorite. Due to biofouling and the increasing concerns linked to chemical consumption and water discharge, attention has been given to investigate more sustainable approaches to cooling systems operation. Knowledge on the bacterial communities, disinfection impact and biofilm composition is however limited and a more in-depth characterization of biofouling is required to effectively establish new control strategies.
The objectives of this thesis were to contribute to the available knowledge on biofouling in cooling systems and to investigate sustainable and predictable alternative operations, providing results of direct relevance to practice. Attention was given to the identification of factors selecting the microbiome of cooling water subjected to conventional operation, the characterization of cooling tower biofilms grown without disinfection, and the assessment of the impact of temperature on biofilm composition. Alternative approaches to biofouling control were then tested at pilot scale: phosphorus depletion was investigated as a solution for limiting biofilm growth, and high pH was considered as a Legionella pneumophila control method. These studies were performed using recent analytical methods such as next-generation amplicon sequencing (NGS), quantitative polymerase chain reaction (qPCR), and flow cytometry, which allowed the collection of valuable data and a better characterization of biofouling...","","en","doctoral thesis","TU Delft OPEN Publishing","978-94-6366-426-4","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:a7c34b83-8e01-4c54-a27b-bb202500abfd","http://resolver.tudelft.nl/uuid:a7c34b83-8e01-4c54-a27b-bb202500abfd","Data Assimilation in High Dimensional Systems Using Local Particle Filters: Overcoming the curse of dimensionality in hydrology","Wang, Z. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); Hut, R.W. (copromotor); Delft University of Technology (degree granting institution)","2021","This dissertation's ultimate goal is to provide solutions to two problems that the promising data assimilation method, called the Particle Filter, has when applied to high dimensional non-linear models, such as those often used in hydrological research and forecasting. Two local particle filters have been proposed to overcome three major issues. Firstly, the curse of dimensionality caused by high dimensional models. Secondly, the uncertainty brought by the data assimilation method itself and finally the problem of nonlinearity in observation operators that link model states to observations. Both newly introduced data assimilation algorithms have been assessed using the Lorenz model (1996), a toy model that provides a perfect evaluation environment for such methods because it is a one-dimensional discrete chaotic model, which can simulate the behavior of changes of atmosphere. One local particle filter has been used in a practical application in hydrology to improve discharge accuracy in the Rhine river basin by assimilating satellite soil moisture into the PCR-GLOWB hydrological model.
The curse of dimensionality is well known in particle filters. It occurs in high dimensional models because, to remain accurate, the number of particles needs to increase exponentially with the model scale (i.e., model dimension). One possible solution to avoid this curse is to apply localization in particle filters, and both proposed particle filters are based on a localization method. Uncertainty sources in data assimilation are many, and it is not easy to separate all of them clearly and directly. The two variants of the particle filter proposed in this thesis focus on different issues.
The localization used in the first particle filter divides the whole data assimilation analysis into small batches, one for each model state. Each local analysis is independent and only assimilates observations within the localization scale. In the process, it quantifies the uncertainty introduced by the data assimilation process itself. The localization method for the second local particle filter variant uses another strategy: all observations are assimilated one by one, and each observation only affects nearby model states within the localization radius. When all observations have been assimilated sequentially, all model states are updated. In addition, the second particle filter variant addresses the problem caused by nonlinear observation operators. To overcome this problem, the nonlinear observation operator is replaced by a surrogate model, namely a Gaussian process regression model. To calculate the weights for each particle, model states need to be mapped into observation space. A Gaussian process regression surrogate model makes this mapping more straightforward in the nonlinear case because it provides the mean and standard deviation of its estimates. Both local particle filter variants introduced in this thesis were evaluated thoroughly, and all results demonstrated that they perform satisfactorily in the specific nonlinear case and can be applied in high dimensional systems.
In addition to testing both local particle filters in the controlled Lorenz model, LPF-GT has also been verified as beneficial in a case study with the hydrological model PCR-GLOBWB, focused on the Rhine river basin. The local particle filters have been applied to assimilate satellite soil moisture from the SMAP mission into the PCR-GLOBWB model to improve discharge estimates. Results show that the local particle filter performed well and significantly improved discharge accuracy by assimilating SMAP soil moisture. The new LPF-GT requires only a handful of particles to reach better performance in the Rhine river basin, which is particularly useful and practical for the large-scale models often used in hydrology. Requiring only a small number of particles is the primary advantage of this data assimilation method because it greatly reduces computational cost. In addition, the localization in this particle filter makes the updates for each model state independent of each other, so they can be conducted in parallel; thus, the efficiency of this data assimilation method can be improved further.
In conclusion, the new additions to the particle filter proposed in this thesis are stable and can provide satisfactory accuracy in nonlinear cases and for high dimensional models. Both have been shown to perform well in a toy model with many dimensions, where they have direct value in overcoming the curse of dimensionality and nonlinearity. More importantly, they are valuable data assimilation methods that give direct insights into how to cope with uncertainty in nonlinear cases and offer data assimilation frameworks for developing new particle filters in the future. The successful hydrological application of data assimilation using local particle filters in this research shows its considerable potential in hydrology.","Data Assimilation; Particle filters; Hydrology; Localization; PCR-GLOBWB 2.0 model; Satellite soil moisture","en","doctoral thesis","","978-94-6384-228-0","","","","","","2022-12-21","","","Water Resources","","",""
"uuid:9a296d85-2a48-4425-9a82-cfa617a0ef3a","http://resolver.tudelft.nl/uuid:9a296d85-2a48-4425-9a82-cfa617a0ef3a","Adaptive Disaster Risk Assessment: Combining Multi-hazards with Socioeconomic Vulnerability and Dynamic Exposure","Medina Pena, N.J. (TU Delft BT/Environmental Biotechnology)","Brdjanovic, Damir (promotor); Vojinovic, Zoran (promotor); Delft University of Technology (degree granting institution)","2021","Climate change, combined with the rapid and often unplanned urbanisation trends, is associated with a rising trend in the frequency and severity of disasters triggered by natural hazards. Among the weather-related disasters, floods and storms (i.e. hurricanes) account for the costliest and deadliest in the last decades. The situation is of particular importance in Small Islands Developing States (SIDS) because their relative higher vulnerability to the impacts of climate change, due to their location, fragile economies, limited resources, and more vulnerable habitats. Therefore, SIDS must implement adaptation measures to face the impacts of climate change and those of the urbanisation growth; for which is necessary to have an appropriate Disaster Risk Assessment (DRA), which should include the hazard itself, the intrinsic socio-economic vulnerability of the system and the exposure of infrastructure and humans to the hazard. Traditional DRA approaches for disaster risk reduction (DRR) have focused mainly on the natural and technical roots of risk, this is the modelling of the hazard and implementation of physical and structural defences, for which the hazard component is the centre. Traditional DRA methods pay no or little attention to the other dimensions of disaster risk, and do not often investigate the spatial and temporal relationships between the hazard, the vulnerability and the exposure components. 
A better alternative when dealing with DRA is a holistic risk assessment, which considers risk as a whole, looking into its components and seeking to understand the interactions, interrelatedness and interdependences between different processes and parts of the whole.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-032-11617-4","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:57651a9a-5dab-4459-a320-302b6c680b8e","http://resolver.tudelft.nl/uuid:57651a9a-5dab-4459-a320-302b6c680b8e","Biological Production of Spatially Organized Functional Materials","Yu, K. (TU Delft BN/Marie-Eve Aubin-Tam Lab)","Aubin-Tam, M.E. (promotor); Lin, Y. (copromotor); Delft University of Technology (degree granting institution)","2021","Catastrophic breakage of a material might bring severe accidents in aerospace engineering, construction, and transportation field. Therefore, engineering material with high toughness values is very important for these special applications. Many biological materials in nature, such as nacre, silk, and wood, possess high toughness values because of their highly organized micro- and nanostructure. Inspired by these natural materials, many scientists tried to build tough materials by improving their orientation of the micro- and nanostructure. However, most of the current fabrication methods are either energy-consuming or labor-intensive, the mild and scalable production of engineering tough materials remains challenging.","Bioinspired materials; Living materials; 3D bioprinting; Bacterial cellulose; Nacre","en","doctoral thesis","","978-90-8593-479-0","","","","","","","","","BN/Marie-Eve Aubin-Tam Lab","","",""
"uuid:025cfbf8-75bd-4273-b29b-eae15010c6c2","http://resolver.tudelft.nl/uuid:025cfbf8-75bd-4273-b29b-eae15010c6c2","Atomic and Molecular Layer Deposition for Controlled Drug Delivery","La Zara, D. (TU Delft ChemE/Product and Process Engineering)","van Ommen, J.R. (copromotor); Delft University of Technology (degree granting institution)","2021","The majority of pharmaceutical products is made of solid powders. The morphology and surface characteristics of drug particles affect both their bulk behaviour, e.g., flowability, dispersibility and tabletability, in the manufacturing process of dosage forms as well as their bioavailability upon administration into the human body. For instance, in pulmonary drug delivery, particles with an aerodynamic diameter <5 µm are required to reach the action sites of the lungs. Surface modification provides the means to tailor crucial functionalities of pharmaceutical particles, such as dissolution, wettability, flowability and dispersibility, based on the desired formulation design. Atomic layer deposition (ALD) and molecular layer deposition (MLD) are gas-phase film technologies that enable atomic-level control over surface properties through the fabrication of nanoscale films on individual particles, which impact the powder performance. The benefits of ALD and MLD for pharmaceuticals compared to existing surface modification techniques include (i) gas-phase and fully solventless nature of the process, (ii) wide range of process conditions, including low temperature and atmospheric pressure, (iii) control over the amount of deposited material and film thickness in the sub-nanometer and low-nanometer range, (iv) high drug loadings due to the nanoscale films, (v) uniform and conformal films, crucial for tailored functional properties. 
Moreover, the possibility to carry out ALD and MLD in fluidized bed reactors offers scalable processing and manufacturing of bulk quantities of nano-engineered powders, relevant for pharmaceutical applications. This thesis deals with the development of ALD and MLD processes on excipient and drug particles, especially for pulmonary delivery, to control their release and enhance their dispersibility and flowability.","Atomic layer deposition; Molecular layer deposition; Pharmaceuticals; Drug delivery; Inhalation; Budesonide; Controlled release; Wetting; Flowability","en","doctoral thesis","","978-94-6421-383-6","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:62f0968e-d0e3-4dcd-89c8-0ba6a615c239","http://resolver.tudelft.nl/uuid:62f0968e-d0e3-4dcd-89c8-0ba6a615c239","The Interplay between Land Use, Travel Behaviour and Attitudes: a Quest for Causality","van de Coevering, P.P. (TU Delft Transport and Planning)","Maat, C. (promotor); van Wee, G.P. (promotor); Delft University of Technology (degree granting institution)","2021","Governments increasingly embrace land-use policies to promote sustainable travel behaviour. However, the causality of this relationship, and in particular the role of travel-related attitudes, is not clear. This thesis takes a longitudinal approach and explores the directions of causality. It shows that the built environment influences travel behaviour and that travel-related attitudes play an important intervening role. Implications for land-use policies and alignment with accompanying measures are discussed.","","en","doctoral thesis","","978-90-5584-290-2","","","","TRAIL Thesis Series no. T2021/18, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:87c8e518-2abf-476b-a482-9a31049367d1","http://resolver.tudelft.nl/uuid:87c8e518-2abf-476b-a482-9a31049367d1","Barium Disilicide for Photovoltaic Applications: Thin-Film Synthesis and Characterizations","Tian, Y. (TU Delft Photovoltaic Materials and Devices)","Zeman, M. (promotor); Isabella, O. (promotor); Delft University of Technology (degree granting institution)","2021","Energy and materials have been assigned with great significance for the development of society over the past centuries. For the sake of environmental sustainability, earth-abundant and eco-friendly materials for energy utilization have been gaining increasing attention. Among them, barium disilicide (BaSi2) possesses attractive optical and electrical properties, enabling its potential for achieving low-cost and high-efficiency thin-film solar cells. This research provides a systematical investigation on sputtered BaSi2 ranging from thin-film fabrication to properties characterizations. Chapter 1 gives a general introduction about solar energy and photovoltaics. The prospects and challenges of thin-film solar cell technology are discussed. Chapter 2 is a literature review of BaSi2, including material structure, optical and electrical properties, thin-film fabrications, and recent advancements in BaSi2-based solar cell development. Chapter 3 lists experimental methods used in this research including deposition techniques and material characterization methods. Chapter 4 exhibits the fabrication of poly-crystalline BaSi2 films via sputtering with subsequent high-temperature annealing in N2 atmosphere. The film thickness uniformity is determined by the target-to-substrate distance. The surface oxidation during high-temperature annealing results in the inhomogeneous structure of sputtered BaSi2 films. An oxidation-induced structural transformation mechanism of BaSi2 is proposed, which describes the complex reactions and elemental diffusion within the BaSi2 film at high-temperature conditions. 
Chapter 5 explores the effects of vacuum annealing conditions on sputtered BaSi2 film properties. The vacuum annealing method enables BaSi2 crystallization at 600 °C and decreases the thickness of the surface oxide layer from ∼200 nm (in an N2 atmosphere) to ∼100 nm. In Chapter 6, a face-to-face annealing (FTFA) approach is applied for the post-growth treatment of sputtered BaSi2 films, which improves the surface composition homogeneity and crystal quality of sputtered BaSi2. By employing various covers for FTFA, including BaSi2, silicon, and glass, a transition of conductivity type from n- to p-type is observed. A thermal resistance analysis is carried out to understand the mechanism of the FTFA method and its impact on the film crystallization process and properties. Chapter 7 investigates the interface properties of Si/BaSi2/Si hetero-structures, serving as the foundation for the development of BaSi2/Si heterojunction solar cells. The effects of the Si layer thickness on the composition and structure of Si/BaSi2/Si under high-temperature conditions are analyzed. A thick Si layer (dSi > 20 nm) can effectively suppress surface oxidation and elemental diffusion during high-temperature annealing. The structure and composition variations of the Si/BaSi2/Si samples consist of oxidation of the deposited Si layer, growth of the oxide layer, Ba diffusion and depletion, as well as Si isolation and crystallization. These interfacial phenomena lead to the complex structure and composition of the Si/BaSi2/Si heterostructures. Conclusions of this thesis and an outlook for the future development of the material and devices are presented in Chapter 8. Recommendations are given for high-quality BaSi2 film fabrication and solar cell development. This thesis provides insights into BaSi2 films from the perspectives of thin-film deposition via sputtering and property characterization. 
These results and this knowledge shed light on the fabrication of BaSi2 films toward the goal of efficient BaSi2-based solar cells.","Barium Disilicide; Sputtering; Thin Films; Photovoltaics","en","doctoral thesis","","978-94-6421-380-5","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:811faec9-9688-4f60-829e-3b073fc6fe59","http://resolver.tudelft.nl/uuid:811faec9-9688-4f60-829e-3b073fc6fe59","Setting Africa’s rainfall straight: A warping approach to position and timing errors in rainfall estimates","le Coz, C.M.L. (TU Delft Mathematical Physics; TU Delft Water Resources)","van de Giesen, N.C. (promotor); Heemink, A.W. (promotor); Delft University of Technology (degree granting institution)","2021","There is an increasing number of rainfall products available over Africa and globally. Rainfall has considerable socio-economic impacts in sub-Saharan Africa, and the sparse gauge and radar networks make such estimates particularly valuable. They are used in many important applications such as drought/flood forecasting, water management or climate monitoring. The choice of which one to use has a significant influence on the output and performance of such applications. The large number of available rainfall products makes it difficult to select the “best” one for one’s need. Among the rainfall products, there is an increasing number of satellite-based estimates with ever finer resolution. They are particularly valuable in Africa where the gauge network is not dense enough to represent the high variability of the rainfall during the monsoon season. However, there are substantial differences between them. Rainfall events are moving systems which can be described by their positions and timings beside of their intensity. A position or timing error will also lead to mismatches in the rainfall occurrence or intensity. This is especially true for localized rainfall events such as the convective rainstorms occurring during the rainy season in sub-Saharan Africa. 
However, rainfall is mainly evaluated with respect to its intensity or occurrence, while position and timing errors are rarely studied.","African rainfall; precipitation estimation; field displacement; warping","en","doctoral thesis","","978-94-6421-387-4","","","","","","","","","Mathematical Physics","","",""
"uuid:caeb7b8e-1d0c-4a10-8af3-cb6662267243","http://resolver.tudelft.nl/uuid:caeb7b8e-1d0c-4a10-8af3-cb6662267243","CRISPR's little helpers: CRISPR-Cas Proteins involved in PAM selection","Kieper, S.N. (TU Delft BN/Stan Brouns Lab)","Brouns, S.J.J. (promotor); Joo, C. (promotor); Delft University of Technology (degree granting institution)","2021","For millennia, humanity has been plagued by pathogenic bacteria. Until the advent of antibiotic treatments, seemingly harmless bacterial infections could have fatal consequences. However, in the microcosm that these single celled organisms inhabit, the line between being the invader or being invaded is a thin line. Bacteria and archaea are con¬stantly targeted by their viruses (bacteriophages – from Greek “to de¬vour”-bacteria). Without mechanisms in place to protect the prokar¬yotic cell from infection, bacteriophages would drive whole species to almost extinction. This thesis presents the work in which we applied techniques of molecular biology and biochemistry to investigate the mechanism certain bacterial species use to develop immunity against bacteriophages.","CRISPR Adaptation; Spacer Integration; PAM Selection; Cas4","en","doctoral thesis","","978-90-8593-480-6","","","","","","","","","BN/Stan Brouns Lab","","",""
"uuid:a4f750b6-5ac5-4709-80c5-71eb71ac7b35","http://resolver.tudelft.nl/uuid:a4f750b6-5ac5-4709-80c5-71eb71ac7b35","Decentralization and Disintermediation in Blockchain-based Marketplaces","de Vos, M.A. (TU Delft Dataintensive Systems)","Epema, D.H.J. (promotor); Pouwelse, J.A. (promotor); Delft University of Technology (degree granting institution)","2021","Marketplaces facilitate the exchange of services, goods, and information between individuals and businesses. They play an essential role in our economy. The standard approach to devise digital marketplaces is by deploying centralized infrastructure, entirely operated and managed by a market operator. In such centralized marketplaces, trusted intermediaries often provide various services to traders, such as managing market information, processing payments, and providing arbitration services when a dispute arises. Advancements in information technology have challenged the need for both authoritative market operators and trusted intermediaries. In particular, blockchain technology is increasingly being applied to deploy digital marketplaces. Blockchain-based marketplaces facilitate trade directly between peers while reducing the dependency on both authoritative parties and trusted intermediaries. The role of blockchain in such marketplaces is to replace social trust with cryptographic primitives. This enables the decentralization and disintermediation of different components in digital marketplaces. In the context of this thesis, decentralization refers to the concept of delegating decision-making and activities away from a central authority. Disintermediation reduces or removes the involvement of trusted intermediaries when trading on a digital marketplace. This thesis introduces innovative approaches to decentralize and disintermediate all aspects of blockchain-based marketplaces. 
We first identify the five aspects of blockchain-based marketplaces: information management, matchmaking, settlement, fraud management, and identity management. We then design, implement, evaluate, and deploy five decentralized mechanisms, each focusing on one or two aspects of blockchain-based marketplaces. For each mechanism, we consider feasibility and real-world deployment as crucial requirements for successful adoption.","decentralization; disintermediation; electronic markets; e-commerce; blockchain; decentralized exchanges; matchmaking; settlement; fraud; information management; identity management; decentralized finance; trading; money","en","doctoral thesis","","978-94-6384-225-9","","","","","","","","","Dataintensive Systems","","",""
"uuid:806e1965-8320-4a40-b0fc-0b060ac62799","http://resolver.tudelft.nl/uuid:806e1965-8320-4a40-b0fc-0b060ac62799","Improving Capabilities in Modeling Aircraft Noise Sources","Vieira, A.E. (TU Delft Aircraft Noise and Climate Effects)","Simons, D.G. (promotor); Snellen, M. (promotor); Delft University of Technology (degree granting institution)","2021","Today's globalized world depends on civil air transportation, which has been continuously growing over the last decades. Nevertheless, the sustainability of this expansion is a challenge due to environmental problems. Along with greenhouse gas emissions, noise represents a severe hazard for human health, and consequently, noise regulations limit the airport capacity and impose night curfews.
Noise is therefore an important design driver for future aircraft, and accurate noise predictions are essential at an early design stage. The total noise emission of an aircraft poses a complex problem, as the distinct
components emit noise with different characteristics. High-fidelity methods are computationally demanding and time-consuming at an early design phase, and less complex solutions, such as semi-empirical methods, are often considered more suitable. This thesis focuses on aspects that can improve noise predictions for a new generation of silent aircraft.
The concept of noise shielding is present in many future aircraft designs, in which engine noise is partially shielded by the airframe, resulting in a noise reduction on the ground. The noise shielding predictions presented in this work use a theory based on the Kirchhoff integral and the Modified Theory of Physical Optics. This method was extended to consider noise source radiation patterns other than the monopole and to calculate the creeping rays originating from smooth edges.
Experiments in the wind tunnel were used to validate these methods and showed that the values of noise shielding are strongly dependent on the source directivity and the shape of the obstacle.
Flyover measurements of rear-engined aircraft were compared with predictions of noise shielding. The good agreement obtained considering a sharp-edged wing in the predictions was further improved by considering the curvature of the leading edge.
A low-noise variation of the Boeing 747-400 was explored using noise shielding predictions, and the optimal engine positions were found to differ depending on whether the wing leading edge was considered sharp or curved. This analysis shows how the design of an aircraft is affected by the approximations adopted in the noise shielding predictions, therefore also affecting its performance.
For conventional aircraft, the noise emission is commonly estimated using semi-empirical methods. These models are based on experimental data and require detailed input of the aircraft geometry and engine settings. This work uses experimental data to test the limitations of such empirical methods during take-off and landing.
The efforts to reduce aircraft noise are only meaningful when resulting in a decrease of annoyance. Traditional metrics such as the Effective Perceived Noise Level are used to assess the annoyance caused by aircraft flyovers but do not provide information about the sound characteristics, such as tonal content and fast and slow amplitude oscillations. Sound quality metrics provide a more complete characterization of a sound and can be combined in psychoacoustic annoyance metrics. Flyover measurements of different aircraft types during take-off and landing were used to investigate the correlation between the sound quality metrics and the aircraft geometry and propulsion system. Strong correlations were found between the sound quality metrics and a number of aircraft characteristics, indicating that psychoacoustic metrics can be used to drive the design process, similarly to existing methods that apply traditional metrics for the same purpose. The variability of the sound quality metrics and psychoacoustic annoyance within the same aircraft type was also investigated. This variability was attributed to the aircraft operating conditions.","Aircraft noise; Noise shielding; Noise semi-empirical methods; Sound quality metrics","en","doctoral thesis","","","","","","","","","","","Aircraft Noise and Climate Effects","","",""
"uuid:71f02179-f4e9-4c51-a2a4-945b9679a857","http://resolver.tudelft.nl/uuid:71f02179-f4e9-4c51-a2a4-945b9679a857","Towards estimation of optical and structural ophthalmic properties based on optical coherence tomography","Ghafaryasl, B. (TU Delft ImPhys/Computational Imaging)","van Vliet, L.J. (promotor); Vermeer, K.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Early diagnosis of retinal diseases such as glaucoma will benefit from unbiased and precise estimation of both optical and structural properties of the RNFL, as they provide a better understanding of the tissue characteristics. The main objective of this thesis was first to improve the estimation of the attenuation coefficients of layered samples and, second, to estimate the structural properties of the RNFL. Unbiased estimation of optical tissue properties such as the attenuation coefficient requires a model of the recorded OCT signal. To study the characteristics of the OCT signal, in Chapter 2, two simulation methods were presented for homogeneous samples. In both methods, single scattering of the OCT light was assumed and the effect of the shape of the OCT beam was taken into account. The more complex simulation also takes into account the interference of the electrical fields in the sample and reference arms and several post-processing steps. Later in this thesis, the simpler model was used to model the OCT signal, since both simulation methods generated similar results. In Chapter 3, we improved an existing depth-resolved method to estimate the attenuation coefficients. The existing method does not handle noise at larger depths, where the OCT light is fully attenuated, which results in a variation of the estimated attenuation coefficient values. We introduced a technique to detect and exclude the noise regions from the OCT scans to improve the accuracy and reduce the A-line-by-A-line variation of the estimated attenuation coefficients. 
The results show a better accuracy of the estimated attenuation coefficient, especially in sub-RPE regions, and a better quality of the attenuation coefficient images. In Chapter 4, a method was presented to estimate the attenuation coefficients of a homogeneous medium accounting for the shape of the focused light beam. For this, the model presented in Chapter 2 was fitted to the measured OCT signal of a homogeneous sample to estimate the model parameters. The presented method was first implemented for semi-infinite samples and was tested for different concentrations of TiO2 in silicone for different locations of focus. In addition, a statistical and numerical analysis was performed to evaluate the presented method under various experimental conditions. The estimation result shows a reasonable correlation between the TiO2 weight-concentration and the estimated attenuation coefficient. While the method could estimate the attenuation coefficients of uniform samples, most biological tissues such as the retina are layered, hence the method was extended in Chapter 5 to estimate the attenuation coefficients of multi-layer samples. This method was tested on simulations and measurements of a multi-layer phantom with different concentrations of TiO2 in silicone with two systems: one with a small (40 μm) and one with a larger (300 μm) Rayleigh length. The numerical results show an acceptable estimation of the attenuation coefficients for Rayleigh lengths less than 0.5 mm in air, which is acceptable for clinical application using clinical OCT systems. For both single- and multi-layer samples, a linear relation between the estimated attenuation coefficients and the particle concentration of the respective layer was observed while using single and multiple B-scans. In Chapter 6, an automatic technique was developed to estimate the orientation of RNFBs from volumetric OCT scans. 
The RNFB orientations of six macular scans from three subjects were used to evaluate the results. We observed a good correlation between the manual tracing and the estimated orientations of the RNFBs. RNFB orientation in combination with other techniques such as VF and SAP can assist ophthalmologists in obtaining more reliable measurements for an early diagnosis of retinal diseases such as glaucoma.","","en","doctoral thesis","","","","","","","","","","","ImPhys/Computational Imaging","","",""
"uuid:0e4d9d7f-f79b-41ce-9c20-0db218603ddf","http://resolver.tudelft.nl/uuid:0e4d9d7f-f79b-41ce-9c20-0db218603ddf","Achieving Sustainable Rural Water Services in Uganda: Collaborative Model-based Policy Analysis for Collective Reflection and Action","Casella, D.C. (TU Delft Research Data and Software; TU Delft Energie and Industrie)","Herder, P.M. (promotor); Nikolic, I. (promotor); Delft University of Technology (degree granting institution)","2021","In this empirically driven, practice-oriented research, the rural water sector in sub-Saharan Africa is examined from a systems perspective. In the face of rapidly changing and uncertain futures, the need for policy makers and practitioners to identify and respond to the multiple, seemingly intractable problems that give rise to stagnating water service levels despite decades of national and international efforts to achieve universal water service coverage has never been more pressing. The aim of the research is to establish a way to consistently conceptualise the dynamics among actors in multi-level, multi-actor socio-technical systems involved in the implementation of national policies and strategies to deliver nationally determined water service levels. The domain of inquiry is rural water services in the Republic of Uganda. It was conducted under the auspices of the Triple-S action research programme in which the researcher took part as a team member in the period 2008-2014. 
This study seeks practical, actionable insights into the extent to which the complexity sciences, and specifically agent-based modelling, can provide a useful policy analysis and planning approach for examining promising policy, technological and learning mechanisms for achieving universal water services in a given context.","rural water services; Uganda; agent based modelling; collaborative methods; Complexity science; systems thinking; Interdisciplinary research","en","doctoral thesis","TU Delft OPEN Publishing","978-94-6366-428-8","","","","","","","","","Research Data and Software","","",""
"uuid:06d15862-66ba-4872-95c8-76c2a3361a72","http://resolver.tudelft.nl/uuid:06d15862-66ba-4872-95c8-76c2a3361a72","Influence of Microstructure on Mechanical Properties and Damage Initiation of Bainitic Steels in Railway Applications","Hajizad, O. (TU Delft Railway Engineering)","Li, Z. (promotor); Dollevoet, R.P.B.J. (promotor); Delft University of Technology (degree granting institution)","2021","In this PhD thesis, we investigated possible steel candidates for use in railway crossings in order to reduce the damage in them. Pearlitic R350HT together with bainitic grades including CrB, B1400 and carbide free B360 were investigated for their mechanical properties such as ultimate strength, yield strength, ductility and hardness. The influence of their microstructure on these mechanical properties was studied using microscopy techniques such as light optical microscopy (LOM), scanning electron microscopy (SEM) and electron backscatter diffraction (EBSD). The effect of an isothermal heat treatment on the bainitic steels, which were mostly manufactured using continuous cooling, was also investigated. Carbide free bainitic steel B360 was found to have the highest strength, ductility and toughness among all the steels. These properties became even better after the isothermal heat treatment. It was decided to investigate this grade further in detail regarding its damage initiation properties. Micromechanical modelling and in-situ experiments with micro Digital Image Correlation (μDIC) were used to measure local strain maps during tensile loading. Microscopic strain partitioning was used to investigate the damage initiation behavior of this steel before and after the isothermal heat treatment. The deformation localization in the Continuously Cooled Carbide Free Bainitic Steel (CC-CFBS) (B360) was modelled using elastic-plastic and crystal plasticity material models. Both models were validated using the in-situ tensile experiment. 
A 2D real geometry was used as the micromechanical Representative Volume Element. The blocky retained austenite (BRA) was considered as martensite from the beginning of the loading, since during the experiments it was confirmed that a large portion of the BRA transforms into martensite via a strain-induced transformation mechanism. The main damage mechanism in this steel was observed to be the strain localization in narrow bainitic channels between martensitic islands and the large BRA (which turn into martensite) and in the interfaces of bainite with martensite. The initiated micro cracks can later fracture the martensitic islands. Other factors such as the interface of martensite/bainitic ferrite, the orientation of this interface and the phase morphology also influence the damage initiation in the continuously cooled B360 steel. An isothermal heat treatment was performed on this steel in order to remove/reduce the main damage-initiating factors such as martensitic islands and the large BRA, which was proved to improve the mechanical properties and damage characteristics. The deformation localization in isothermally heat treated CFBS (B360-HT) was modelled and the modelling results were validated using the in-situ experimental tensile tests. The effect of the isothermal heat treatment on B360 was to remove martensite, form a finer bainitic microstructure and remove the unstable large BRA. As a result, small and homogeneously distributed BRA was observed in the B360-HT. The combination of numerical simulation and in-situ testing revealed that the new proposed microstructure of carbide free bainitic steel has less strain localization compared to the continuously cooled B360 steel. The maximum local strain was reduced from 35% to 25% using the isothermal heat treatment. In the B360-HT, the strain bands usually form at 45° to the tensile axis. 
This new proposed microstructure of carbide free bainitic steel could be a good candidate to be used in the crossing nose.","Bainitic steel; Pearlitic steel; Isothermal heat treatment; Microstructure; Mechanical properties; Carbide free bainitic steel; Damage initiation; Microstructural modelling; Crystal plasticity finite element method (CPFEM); Crystal plasticity fast Fourier transform (CPFFT)","en","doctoral thesis","","","","","","","","2022-07-12","","","Railway Engineering","","",""
"uuid:579b7c68-8c13-493c-87d9-0f77d5fba34c","http://resolver.tudelft.nl/uuid:579b7c68-8c13-493c-87d9-0f77d5fba34c","Energy Invariant Mechanisms","Kuppens, P.R. (TU Delft Mechatronic Systems Design)","Herder, J.L. (promotor); Bessa, M.A. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis presents energy invariant mechanisms that can be scaled to micro size. They are also called statically balanced, because all static forces are balanced against each other. They enable effortless suspension of weight, and flexible devices without stiffness, seemingly defeating gravity and elasticity. In large-scale applications such as movable bridges and ship lifts, gravitational forces dominate elastic forces and counterweight balancing is the only practical approach. At small length scales, roles reverse: gravity becomes insignificant and elastic forces dominate. While other conservative forces such as magnetics or electrostatics also become relevant, focus will be placed on elastic forces. Static balancing is investigated at various scales, working our way down as we progress through the chapters. This investigation starts with a method for the analysis and synthesis of energy invariance in rigid body mechanisms, which are often used to aid the more difficult design of compliant mechanisms. It allows symbolic derivation of balancing conditions for serial kinematic chains with any number of links and zero-free-length springs. Virtual transmissions are introduced to temporarily constrain the chain to a single degree of freedom and ease solving. Examples are provided in 2D and 3D. Static balancing is commonly used in civil engineering, robotics and large-scale compliant mechanisms, where preloading is relatively easily applied by hand or by a preloading assembly. However, preloading becomes difficult at small scales and is identified as the primary reason the state of the art is not down-scalable. 
It is addressed by the design of various monolithic and planar architectures. The fully compliant mechanisms with linear and rotary motion incorporate a bistable mechanism that sustains the required preload in a reversible way. In one case, opposing constant force and torque achieves stiffness reduction by 98.5% and 90.5%, respectively, with increased relative range of motion (from 3.3% to 6.6%) and reduced complexity compared to the state of the art. In the second case, a new V-shape plate spring is used that minimizes and maximizes stiffness when preloaded and when not preloaded, respectively. An unprecedented stiffness reduction of 99.9% and 98.5% is achieved under large deformations in fused deposition modeling prototypes made of polylactic acid. In all four cases, switching between soft and hard modes can be done reversibly by toggling the bistable switch directly. Alternatively, the soft mode can be entered by actuating the shuttle over a force threshold. In addition, preloading is addressed by making it part of the manufacturing process by exploiting residual stress from thin film deposition. A stiffness reduction by a factor of 9 to 46 is achieved over a range of 380 μm by thermal oxidation of silicon. The stiffness reduction is achieved in a passive way, allows for parallelized manufacturing, is space efficient and is scalable, since surface effects become more dominant at smaller scales. Miniature statically balanced mechanisms will be an enabling technology for low frequency sensors, mechanical energy harvesting, mechanical watch oscillators, mechanical logic and computing, microrobotics, compliant transmission mechanisms, and make small scale compliant mechanisms more energy efficient.","","en","doctoral thesis","","978-94-6384-227-3","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:ed3ea41a-3b1a-4a90-a49f-4307be342edf","http://resolver.tudelft.nl/uuid:ed3ea41a-3b1a-4a90-a49f-4307be342edf","Solar Geometry in Performance of the Built Environment: An Integrated Computational Design Method for High-Performance Building Massing Based on Attribute Point Cloud Information","Alkadri, M.F. (TU Delft Design Informatics)","Sariyildiz, I.S. (promotor); Turrin, M. (copromotor); Delft University of Technology (degree granting institution)","2021","As part of the passive design strategy, the development of computational solar envelopes plays a major role in constructing a cooperative environmental performance exchange between new buildings and their local contexts. However, the state-of-the-art computational solar envelopes pose a great challenge in understanding site characteristics from a given context. Existing methods predominantly construct 3D context models based on basic architectural geometric shapes, which are often isolated from the surrounding properties of local contexts (i.e., vegetation, materials). Thus, they only focus on context-oriented buildings and energy quantities that unfortunately lack a contextual solar performance analysis. It is clear that this condition may result in a fragmented understanding of the local context during the design and simulation process. With the potential application of attribute point cloud information, it is necessary to consider relevant parameters such as surface and material properties of existing contexts during the simulation of solar geometries, which are currently absent in computational frameworks. As such, a new method is required to enable architects not only to measure specific performances of the local context but also to identify vulnerable areas that may affect the proposed design. This research focuses on exploring an integrated computational design method for solar geometry based on solar and shading envelopes, and geometric and radiometric information from point cloud data. 
In particular, two computational models consisting of SOLEN (Subtractive Solar Envelopes) and SHADEN (Subtractive Shading Envelopes) are developed, which are applied to temperate and tropical climates, respectively. In design practice, these models help architects to produce informed design decisions towards high-performance building massing.","Solar envelopes; Point cloud data; solar access; Material properties; Computational design method; Passive design strategies","en","doctoral thesis","A+BE | Architecture and the Built Environment","9789463664219","","","","A+BE | Architecture and the Built Environment No 21 (2021)","","","","","Design Informatics","","",""
"uuid:299183f0-77cc-43ab-a425-60bc2fe4cecc","http://resolver.tudelft.nl/uuid:299183f0-77cc-43ab-a425-60bc2fe4cecc","Illuminating the highly dynamic on-cell target search of bacteriophage and phage-like particles","Dreesens, L.L. (TU Delft BN/Bertus Beaumont Lab)","Aubin-Tam, M.E. (promotor); Beaumont, H.J.E. (copromotor); Delft University of Technology (degree granting institution)","2021","Phages are nanomachines composed of a protein coat encapsulating a genome. Since they are metabolically inert, they depend on a bacterial host for replication. They are abundantly present in all kinds of environments, patiently awaiting their target. The nanoscopic mechanical details of how phages or phage-like particles find the correct target and commit to infection remain unresolved. Traditionally, bulk methods have been used to investigate the molecular properties of phages and phage-host interaction dynamics, which do not provide the detailed information required to understand how a phage moves on the cell prior to the decision to commit to infecting it. More detailed insights into this process have been gained by single-particle EM studies, providing information on the main structural configurations near atomic level that occur during the initial binding process (i.e. interaction between tail-fibers and host) up to commitment (i.e. sheath contraction followed by penetration of the cell membrane and DNA ejection). These static EM snapshots imply that phages might use their tail-fibers to walk over the cell surface. However, dynamical information on this process is scarce. Within this thesis we optimized a method based on fluorescence microscopy to study the fast dynamics of the on-cell motion and decision-making process of phages and evolutionary related structures, phage-like particles, at single-particle level with high temporal resolution and implemented a control for detecting possible artefacts due to cell movement. 
Here we revealed, for the first time, the detailed interaction dynamics between labeled bacteriophage T4 and host Escherichia coli B, as well as R2-type pyocin and Pseudomonas aeruginosa 13s. We showed that both T4 and R2-type pyocins have a preference for irreversible binding to the cellular poles. Further, we showed that this method is capable of discriminating different motion regimes corresponding to the different search states for both T4 phage and R2-type pyocin. Most importantly, we provided direct evidence of step-wise near/on-cell motion for T4 phage. We believe this discrete near/on-cell motion is facilitated by either a tethered walk through binding and subsequent unbinding of individual long tail-fibers with host cell receptors and/or hopping through repeated brief attachment and detachment of the phage to host receptors. Together, these findings provide the first step towards an in-depth understanding of the mechanism behind the target-finding and decision-making process.","phage-host interactions; phage-like particles; fluorescence microscopy; single-particle tracking","en","doctoral thesis","","978-90-8593-472-1","","","","","","","","","BN/Bertus Beaumont Lab","","",""
"uuid:02596b75-8108-444f-957d-dff4bf9226fa","http://resolver.tudelft.nl/uuid:02596b75-8108-444f-957d-dff4bf9226fa","Operational Control Solutions for Traffic Management on a Network Level","Landman, R.L. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Hegyi, A. (copromotor); Delft University of Technology (degree granting institution)","2021","Due to the ever-growing traffic demand, urban areas all over the world are
facing serious congestion problems. To mitigate the negative impacts of congestion, for many years more roads were built and traffic management measures were implemented locally. Despite all the extra asphalt and local control solutions, nowadays demand often still exceeds the network capacity at more and more locations within a network. This has led to the following situation: by solving one bottleneck, others may easily be activated elsewhere. The focus of traffic management has therefore shifted to realizing collaboration between traffic management measures that deal with traffic problems from a network perspective.
However, in the operational field of traffic management there are more objectives to satisfy than just improving the overall network performance. This is related to the many different stakeholders involved in the process of formulating a vision upon the functional use of a road network, as well as the spirit of our times, which emphasizes the value of the livability of our environment. Therefore, when planning for the improvement of network performance, it is increasingly important to take the many different stakeholder interests into account. Once the stakeholders agree on a common vision, systems are needed that are able to operationalize the vision based on real-time conditions on the freeways and urban roads involved.","","en","doctoral thesis","","978-90-5584-292-6","","","","","","","","","Transport and Planning","","",""
"uuid:8d149766-f4b1-4b6c-b73e-faadfd5deae6","http://resolver.tudelft.nl/uuid:8d149766-f4b1-4b6c-b73e-faadfd5deae6","High temperature turbulence: Coupling of radiative and convective heat transfer in turbulent flows","Silvestri, S. (TU Delft Energy Technology)","Roekaerts, D.J.E.M. (promotor); Pecnik, Rene (promotor); Delft University of Technology (degree granting institution)","2021","Radiative heat transfer has a large influence on engineering systems, especially when the temperatures involved are elevated. For this reason, a correct assessment of heat transfer in the presence of radiation is of great importance in high temperature and pressure equipment such as combustion chambers and heat exchangers, as well as re-entry vehicles and rockets. This thesis presents the results of innovative coupled radiative heat transfer and turbulence simulations, which aim at investigating turbulence-radiation interactions and the effect of radiation in the turbulent heat transfer process. The simulations are performed employing heterogeneous high performance computing systems in which the radiative heat transfer is solved on graphical processing units while the fluid flow is solved on CPUs.","Turbulence; Heat transfer; Radiation","en","doctoral thesis","","","","","","","","","","","Energy Technology","","",""
"uuid:4ada8565-2efe-472b-a182-b94c935048ae","http://resolver.tudelft.nl/uuid:4ada8565-2efe-472b-a182-b94c935048ae","Frequency-Domain Analysis of ""Constant in Gain Lead in Phase (Cglp)"" Reset Compensators: Application to precision motion systems","Ahmadi Dastjerdi, A. (TU Delft Mechatronic Systems Design)","Herder, J.L. (promotor); Hassan HosseinNia, S. (copromotor); Delft University of Technology (degree granting institution)","2021","Proportional Integral Derivative (PID) controllers dominate the industry and are used in more than 90 percent of machines in this era. One of the reasons for the popularity of these controllers is the existence of easy-to-use frequency-domain analysis tools, such as loop-shaping, for this type of controller. Due to the advancement of technology in recent decades, industry needs machines with higher speed and precision. Thus, an advanced industry-compatible control capable of a simultaneous increase in precision and speed is needed. Unfortunately, linear controllers, including integer and fractional order controllers, cannot satisfy this requirement of industry because of fundamental limitations such as the “water-bed” effect. In other words, precision and speed are conflicting demands in linear controllers, and designers should consider a proper trade-off between them when they tune these controllers. The reset control strategy, which is one of the well-known non-linear controllers, has shown a great capacity to overcome the limitations of linear controllers. In our group, a new type of reset compensator, which is termed “Constant in gain Lead in phase (CgLp)”, has been proposed as a potential solution for this significant challenge. Considering the first harmonic of the steady-state response of the CgLp compensator, which is called the Describing Function (DF) analysis, this compensator has a constant gain while providing a phase lead. 
As a result, this novel compensator can improve the precision of the control system, while simultaneously maintaining the high quality level of the transient response (throughput of the system). As mentioned before, industry favours designing controllers in the frequency-domain because it provides an easy-to-use tool for performance analysis of control systems. Therefore, in order to interface this compensator well with the current control design in industry and broaden its applicability, it is important to study this type of reset compensator in the frequency-domain. So far, CgLp compensators have been studied in the frequency-domain using the DF method. However, there are some major drawbacks which have to be solved in order to make these compensators ready for industry utilization. First, there is a lack of knowledge about the closed-loop steady-state performance of reset control systems. In addition, since the high order harmonics generated by CgLp compensators are neglected in the DF method, this method by itself is not an appropriate method for predicting open-loop and closed-loop steady-state performance, particularly for precision motion applications. Second, it is necessary to develop an intuitive frequency-domain stability method to assess the stability of CgLp compensators, similar to the Nyquist plot for linear controllers. Finally, to achieve a favourable dynamic performance, a systematic frequency-domain tuning method for this type of reset compensator is highly needed.","Constant in gain Lead in phase (CgLp); Frequency-Domain Analysis; Loop-Shaping; Pseudo-Sensitivities; Reset Control Systems; Stability Analysis; Tuning Method","en","doctoral thesis","","","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:5a458cb4-b825-4fb1-a212-391891b4eda6","http://resolver.tudelft.nl/uuid:5a458cb4-b825-4fb1-a212-391891b4eda6","Exploration and engineering of acetyl‑CoA and succinyl‑CoA metabolism in Saccharomyces cerevisiae","Baldi, N. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Mans, R. (copromotor); Delft University of Technology (degree granting institution)","2021","In the past decades, biotechnology has become ever more prominent in our society. On supermarket shelves, biotechnological staple products such as beer, bread and wine are accompanied by enzyme-containing detergents and soaps with biotechnologically produced fragrances, all of which are possibly packaged with bio-derived plastics. In the medical field, new molecules aimed at treating the most debilitating diseases known to humanity, from diabetes to malaria, have been developed and commercialized using biotechnology, saving thousands of lives in the process.","","en","doctoral thesis","","978-94-6423-270-7","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:52c5c8c7-edc7-4cf8-9945-bcd94e30f9d8","http://resolver.tudelft.nl/uuid:52c5c8c7-edc7-4cf8-9945-bcd94e30f9d8","Simulation of Electron-Matter Interaction in Electron Beam Lithography and Metrology","Arat, K.T. (TU Delft ImPhys/Microscopy Instrumentation & Techniques)","Hagen, C.W. (promotor); Kruit, P. (promotor); Delft University of Technology (degree granting institution)","2021","Integrated circuit (IC) technology lies at the heart of today’s digital world. The immense amount of computational power that came along with the downscaling of circuits allowed us to place faster and smaller chips almost everywhere. However, with today’s requirements, even a one-nanometer error on those chips can drastically change the performance of the chip. Therefore, maintaining product quality is challenging, and more accurate techniques are needed to manufacture future generation chips. Electron beam based techniques are known to provide very high resolution both in production and testing of these chips. However, the ongoing trend is also challenging the established e-beam technologies. Two common problems in both imaging and lithography are addressed in the thesis. One of them is the emerging importance of the 3rd dimension (3D) in imaging and lithography. The other one is the notorious charging effect when the samples involved, such as gate oxides, are not sufficiently electrically conductive. The experimental trial and error approach to understand and solve these problems is too time-consuming and can also be very expensive. Therefore, tools such as Monte Carlo simulations are needed that aid in getting a better fundamental understanding of these issues. The development of a simulator that can help to find solutions for the problems above is one of the primary objectives of this thesis.","","en","doctoral thesis","","978-94-6384-222-8","","","","","","","","","ImPhys/Microscopy Instrumentation & Techniques","","",""
"uuid:6b3801df-3151-4f7e-a04e-419d5460e84a","http://resolver.tudelft.nl/uuid:6b3801df-3151-4f7e-a04e-419d5460e84a","High-resolution focal-plane wavefront sensing for time-varying aberrations","Piscaer, P.J. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2021","Phase aberrations in optical systems, which occur in various applications such as astronomy, microscopy and ophthalmology, degrade the quality of obtained images. The exact cause and nature of the aberrations depend on the application. In astronomy, turbulence within the Earth’s atmosphere creates fluctuations of the refractive index, leading to phase aberrations. In order to compensate for the distorting effect of these aberrations, adaptive optics (AO) systems are used to correct for phase aberrations in real time. A deformable mirror (DM) is often used to apply the necessary corrections to improve the image quality. Due to the temporally dynamic nature of atmospheric turbulence and the corresponding phase aberrations, estimation errors caused by delays within the AO control loop are a significant part of the total estimation error. Accurate prediction of the phase aberrations is therefore an important aspect when aiming to improve the performance of AO systems. Reconstructing the phase aberrations from focal plane images only is known as focal plane wavefront sensing. Many focal plane sensing methods are based on solving the phase retrieval problem, which is the problem of reconstructing the phase aberrations from the point spread function (PSF). Due to the non-linear optimization problem that underlies phase retrieval, developing real-time solvers is very challenging, and a wavefront sensor (WFS) is often included to avoid the phase retrieval problem. Due to the linear relation between the phase aberration (i.e. wavefront) and the WFS signal, WFS-based AO is often preferred over the wavefront sensorless (WFSless) AO systems that use focal plane sensing. There are, however, also a number of disadvantages. First, the addition of extra hardware components, including the WFS and a beam splitter, makes the system more complex and expensive than WFSless systems. Second, splitting the light between the focal plane camera and WFS results in non-common path aberrations (NCPAs), which can be a limiting factor in high-resolution imaging systems.","","en","doctoral thesis","","978-94-6419-234-6","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:f85769e2-c1ff-4404-a59f-552773ddb7b4","http://resolver.tudelft.nl/uuid:f85769e2-c1ff-4404-a59f-552773ddb7b4","Designing Tactful Objects for Sensitive Settings","D'Olivo, P. (TU Delft Design Aesthetics; TU Delft Human Information Communication Design)","Giaccardi, Elisa (promotor); Grootenhuis, Martha A. (promotor); Rozendaal, M.C. (copromotor); Delft University of Technology (degree granting institution)","2021","Childhood cancer is a disruptive life event that creates high levels of stress and anxiety in families. It turns everyday routines upside down and can block the child’s psychosocial development when families have difficulty coping emotionally with this potentially traumatic event. D’Olivo developed three interactive objects aimed at preserving space for quality time and stimulating interpersonal communication between family members. These objects were deployed in the homes of children receiving cancer treatment in order to better understand how families responded to them, and whether they were appropriate for supporting their situation. The broader question addressed by the work is ‘how can vulnerable users be empowered by design in sensitive settings?’. Tactfulness was found to be a critical expressive design quality of such objects, leading to the idea of Tactful Objects as a design perspective on interactive artefacts that function in sensitive settings. According to this perspective, designing tactful objects for sensitive settings means designing objects that behave like sensitive partners, establish a balanced collaboration with people, resemble familiar characters and maintain a discreet presence in the context where they are introduced.
The thesis discusses the practical value of Tactful Objects in healthcare as well as the methodological implications of conducting Research-through-Design in sensitive settings.","Tactfulness; Sensitive Settings; Childhood Cancer; Interactive Artefacts; Tactful Objects; Intelligence; Research-through-Design","en","doctoral thesis","","978-94-6421-365-2","","","","","","","","","Design Aesthetics","","",""
"uuid:9ed1cff8-55b9-4be3-abc3-3ba05ec0bce4","http://resolver.tudelft.nl/uuid:9ed1cff8-55b9-4be3-abc3-3ba05ec0bce4","Deactivation pathways in methane upgrading catalysis","Franz, R.P.M. (TU Delft ChemE/Inorganic Systems Engineering)","Pidko, E.A. (promotor); Urakawa, A. (promotor); Delft University of Technology (degree granting institution)","2021","The abundance of methane has led to strong interest in using methane as a feedstock in the chemical industry. One of the main challenges is the initial activation of the methane molecule. In this thesis, heterogeneous catalysts for two different methane conversion processes are investigated. Firstly, the deactivation of supported Ni catalysts during the dry reforming of methane is studied. Secondly, the applicability of Metal-Organic Frameworks in the selective partial oxidation of methane to methanol is evaluated.","Methane conversion; heterogeneous catalysis; Catalyst deactivation; Dry reforming of methane","en","doctoral thesis","","978-94-6384-212-9","","","","","","","","","ChemE/Inorganic Systems Engineering","","",""
"uuid:f495fd3f-d131-40c9-a51e-e4a8bcc12c84","http://resolver.tudelft.nl/uuid:f495fd3f-d131-40c9-a51e-e4a8bcc12c84","Statistical Analysis in Cyberspace: Data veracity, completeness, and clustering","Roeling, M.P. (TU Delft Cyber Security)","Verwer, S.E. (promotor); van den Berg, Jan (promotor); Lagendijk, R.L. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis presents several methodological and statistical solutions to problems encountered in cyber security. We investigated the effects of compromised data veracity in state estimators and fraud detection systems, a model to impute missing data in attributes of linked observations, and an unsupervised approach to detect infected machines in a computer network.","Cybersecurity; Unsupervised learning; Imputation; Data veracity","en","doctoral thesis","","978-94-6423-299-8","","","","","","","","","Cyber Security","","",""
"uuid:7fd6d0af-1618-4edf-9947-1897a53c4e15","http://resolver.tudelft.nl/uuid:7fd6d0af-1618-4edf-9947-1897a53c4e15","Hybrid Intelligence in Architectural Robotic Materialization (HI-ARM): Computational, Fabrication and Material Intelligence for Multi-Mode Robotic Production of Multi-Scale and Multi-Material Systems","Mostafavi, Sina (TU Delft Architectural Engineering; TU Delft Digital Architecture)","Oosterhuis, K. (promotor); Bier, H.H. (copromotor); Biloria, N.M. (copromotor); Delft University of Technology (degree granting institution)","2021","With increasing advancements in information and manufacturing technologies, there is an ever‑growing need for innovative integration and application of computational design and robotic fabrication in architecture. Hybrid Intelligence in Architectural Robotic Materialization (HI-ARM) provides methods and frameworks that target this need. HI-ARM introduces methodologies and technologies that incorporate computational, fabrication and material intelligence in integrated design-to-robotic-production workflows. The intelligence is explored at multiple architectural scales (Macro, Meso, Micro) through hybridization of building processes or multi-mode robotic production and multi-materiality.
Porosity, Hybridity, and Assembly are introduced as main constituents for materialization frameworks relying on computational design and robotic production. These are tested in a series of original experiments that are presented in this thesis together with four peer-reviewed published papers discussing the process of developing integrated design-to-production methodologies in detail. The contributions show how both architectural materialization processes and building products can be customized in different phases and scales. Moreover, the developed discourse and definitions address the impacts of this research through the lenses of computation and automation in research, education, and practice in the fields of Architecture, Engineering, and Construction.","HI-ARM; Hybrid Intelligence; Architectural Robotics; Robotic Fabrication; Computational design; Fabrication Intelligence; Material Intelligence; Computational Intelligence; Porosity; Hybridity; Assembly; Multi-mode robotic fabrication; Robotic 3D printing; Architectural Materiality; Automation in Construction; Digital Design; Material Architecture; Performance Driven Design; Advanced Manufacturing","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-430-1","","","","","","","","","Architectural Engineering","","",""
"uuid:a5689f32-6eed-4949-9527-60723e16c8b5","http://resolver.tudelft.nl/uuid:a5689f32-6eed-4949-9527-60723e16c8b5","Context-based Cyclist Path Prediction: Crafted and Learned Models for Intelligent Vehicles","Pool, E.A.I. (TU Delft Intelligent Vehicles)","Gavrila, D. (promotor); Kooij, J.F.P. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis addresses the problem of path prediction for cyclists.
Instead of solely focusing on how to predict the future trajectory based on previous position measurements, this thesis investigates how to leverage additional contextual information that can inform on the future intent of cyclists.
This thesis does this with the application of intelligent vehicles in mind.
That means all measurements come from the point of view of a vehicle on the road.
Additionally, the resulting predictions must be usable by a motion planner.
In practice, this means the predictions are a probability distribution over the future position rather than a single point in space.
This thesis starts with an investigation of one of the modules that allow path prediction in the first place: 3D object detection.
Two existing state-of-the-art 3D object detectors that exploit Lidar data are evaluated beyond the standard metrics of 3D object detection.
3D object detectors predict an oriented 3D bounding box. The standard metric determines a correct detection based on the accuracy of the position, extent, and orientation of the bounding box all at once.
By loosening the requirements for when a detection is considered correct, the accuracy of the estimated position, extent, and orientation can be evaluated separately.
The results show that a large number of detections are considered incorrect largely because of inaccurate bounding box extent rather than bounding box position, even though position is arguably the more important aspect for path prediction.
As a result, the performance of these 3D object detectors when used for path prediction can be considered to be higher than what the common metrics suggest.
After this, this thesis investigates how knowledge of the road topology can be used to improve the accuracy of cyclist path prediction.
The trajectories of cyclists near an intersection are extracted from a naturalistic cyclist detection dataset.
These are categorized and grouped based on the action taken by each cyclist (hard left/right, slight left/right, or straight).
A Linear Dynamical System (LDS) is fitted on each group. These LDSs are used together to create a Mixture of Linear Dynamical Systems (MoLDS).
During online inference, the relative probability of each underlying LDS allows the MoLDS to evaluate which direction the cyclist is most likely to take.
This chapter demonstrates that the highest prediction accuracy is obtained when this model is additionally given prior knowledge on which directions are available for the cyclist to take.
Next, context cues related to a specific scenario are considered.
In the scenario, a cyclist in front of the ego-vehicle approaches an intersection and has the option to either continue straight or turn left.
The three context cues considered are the distance of the cyclist to the intersection, whether the cyclist is raising their arm, and the criticality of the situation.
This last context cue is based on the time it will take the ego-vehicle to overtake the cyclist: the lower this is, the more risk a left turn brings.
This scenario is first modeled with a Switching Linear Dynamical System (SLDS) with two motion models that represent ""cycling straight"" and ""turning left"", respectively.
This model does not yet use any context cues.
Still, the SLDS is shown to outperform a baseline model that represents the scenario with a single motion model.
By letting the context cues inform the SLDS whether switching from one motion model to the other is likely to happen, the performance is increased even further. The resulting model is referred to as a Dynamic Bayesian Network (DBN).
The context-based path prediction methods described so far have been designed with specific motion models and interplay of context cues in mind: the overall state representation has been hand-crafted.
The advantage of this approach is that the state representation is then interpretable, making it easy to understand why a model predicts what it does, even when it fails to predict something correctly.
However, methods with a learned state representation often attain higher performances.
The next point of investigation of this thesis is then to compare a model with a crafted state representation to a model with a learned one. Specifically, the DBN is compared to a Recurrent Neural Network (RNN), using the cyclist scenario from before.
To level the playing field as much as possible, two actions are taken. First, the contextual cues are supplied to the RNN as well, and experiments confirm that the performance of the RNN does in fact improve when it incorporates these cues.
Secondly, the optimization method used in the RNN is applied to the DBN as well, but in such a way that the interpretation of its crafted state representation remains the same.
Of the two methods, the RNN attains the highest performance. Still, optimizing the DBN largely closes the performance gap between the two.
Finally, this thesis determines whether the DBN is not only performant but also useful in practice: it is integrated in an intelligent vehicle.
The cyclist scenario is performed live, in which the intelligent vehicle extracts the relevant context cues directly from sensor data.
The resulting predictions are used to create an early warning system for the driver, to warn them if the cyclist intends to turn left.
The model is also used for predictions in an autonomously driving intelligent vehicle, but, for safety reasons, in a different scenario that contains comparable contextual cues.
An automated dummy plays the role of a pedestrian on the sidewalk who walks towards the curbside in order to cross the road.
The intelligent vehicle is driving on this road towards the pedestrian and has right of way.
In this scenario, a pedestrian is only expected to cross the road if they are unaware of the approaching vehicle.
Furthermore, if they stop, they are expected to do so only at the curbside.
The intelligent vehicle determines whether the pedestrian is aware of it by estimating the head orientation of the pedestrian.
Additionally, it measures the distance between the pedestrian and the curbside, and predicts the future trajectory of the pedestrian accordingly.
With the model in place, the vehicle can autonomously follow a planned trajectory and evade the pedestrian if the pedestrian does indeed cross the road.
The real-world experiments confirm the feasibility of the system. By evaluating the entire pipeline at once, from detections to motion planning, this chapter is able to propose future work that bridges these various disciplines and shows what intelligent vehicles can already realistically achieve.","Context modeling; Predictive models; Intelligent Vehicles","en","doctoral thesis","","978-94-6416-489-3","","","","","","","","","Intelligent Vehicles","","",""
"uuid:c2d667d7-09a2-435f-9142-4368b06a0631","http://resolver.tudelft.nl/uuid:c2d667d7-09a2-435f-9142-4368b06a0631","From waste to products: Valorizing food side streams to recover natural products","Moreno Gonzalez, M. (TU Delft BT/Bioprocess Engineering)","Ottens, M. (promotor); van der Wielen, L.A.M. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis presents novel ways of recovering valuable compounds from food industry side streams. It does so by evaluating two case studies for the valorization of such streams. These types of streams are often referred to as waste and discarded. However, they are rich sources of nutraceuticals (e.g. polyphenols), proteins and dietary fiber. Recently, the use of these side streams has gained significant interest in the scientific community, and different alternatives to recover high-value products are being investigated that may contribute to the transition from the current linear economy to a more circular economy.","Polyphenols; Adsorption; Continuous chromatography; Off-flavor removal; Proteins; Cation exchange; Conceptual process design","en","doctoral thesis","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:46ef8e00-d465-42c3-aeaf-0f11c814f304","http://resolver.tudelft.nl/uuid:46ef8e00-d465-42c3-aeaf-0f11c814f304","Understanding of Crack Growth in Single- and Bi-Material Bonded Joints with “Extra-Thick” Adhesive Bond-Lines","Lopes Fernandes, R. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Teixeira De Freitas, S. (copromotor); Delft University of Technology (degree granting institution)","2021","","adhesives joints; ""extra-thick"" bond-line; fracture behaviour","en","doctoral thesis","","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:7ea21785-c7ec-49db-85c4-d2e2f6ce6e9b","http://resolver.tudelft.nl/uuid:7ea21785-c7ec-49db-85c4-d2e2f6ce6e9b","Fatigue analysis of wind turbine blade materials using a continuum damage mechanics framework","Bhangale, J.A. (TU Delft Structural Integrity & Composites)","Alderliesten, R.C. (promotor); Benedictus, R. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis concerns the fatigue analysis of various material types used in wind turbine blades. For the analysis, a framework from the thermodynamics of irreversible processes with internal variables and Continuum Damage Mechanics (CDM) is used. Thermodynamic principles provide a generic framework that is valid for the entire fatigue phenomenon. The CDM framework is then applied to characterize a specific mechanism under consideration. As the fatigue phenomenon consists of many mechanisms and their interactions, the scope of work is limited to setting up the generic framework and characterizing only a few mechanisms and their interactions to demonstrate the framework's potential. The thesis consists of four main sections: introduction, theory, mathematical formulation, and validation. Before constructing the framework, a broad sense of the wide range of fatigue analysis methodologies adopted by the research community is required. Hence, chapter 1 gives readers not a detailed account but a helicopter view of the field. This overview allows drafting an achievable scope and methodology for this research while keeping in mind the ultimate goal of analysing a full-scale wind turbine blade sustaining fatigue throughout its operational life.","Wind turbine blade; Material fatigue analysis; Cyclic deformation; Continuum damage mechanics; Thermodynamics","en","doctoral thesis","","978-94-6423-259-2","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:79489c85-e9be-41ff-b79a-10d2b974fc94","http://resolver.tudelft.nl/uuid:79489c85-e9be-41ff-b79a-10d2b974fc94","Towards High Energy Density Anode-less Lithium Metal Batteries: A Study of Lithium Dendrites Suppression and Elimination","Wang, C. (TU Delft RST/Storage of Electrochemical Energy)","Wagemaker, M. (promotor); Brück, E.H. (promotor); Delft University of Technology (degree granting institution)","2021","Demand for rechargeable Li-ion batteries in the markets for electric vehicles and portable equipment for entertainment, computing and telecommunication has surged over the past decades, but the increasing demands pose great challenges for future battery systems, which require higher energy and power density, improved safety and a longer lifespan. Lithium metal batteries can deliver higher energy densities than commercialized LIBs, but their practical application has been hindered by the growth of lithium dendrites in liquid-electrolyte lithium metal batteries. Uncontrollable dendrite growth leads to the repeated formation of solid electrolyte interphase, irreversible capacity loss, short circuits, and safety hazards with liquid electrolytes. Compared to liquid electrolytes, solid-state electrolytes might be a better choice, but their reliance on ionic diffusion across the contacts between solid particles presents a major challenge. Moreover, the effective use of high-capacity cathodes in combination with Li metal in a solid-state battery is another big challenge for future battery development. Therefore, to unlock the full potential of LMBs with high energy density and safe operation, it is imperative to devote effort to solid-state battery design.
This thesis aims to find effective methods for enabling safe, high-energy-density solid-state Li metal batteries, starting from the development of new concepts in liquid-based batteries and moving step by step towards an anode-less Li metal solid-state battery configuration.","Lithium metal batteries; high dielectric; dendrite suppression and elimination; high reversibility; high energy density","en","doctoral thesis","","978-94-6423-289-9","","","","","","2023-06-02","","","RST/Storage of Electrochemical Energy","","",""
"uuid:26692245-3fb7-460e-8d3a-419301eef8e7","http://resolver.tudelft.nl/uuid:26692245-3fb7-460e-8d3a-419301eef8e7","Future DC Smart Homes: Key Power Electronics Development","Bandyopadhyay, S. (TU Delft DC systems, Energy conversion & Storage)","Qin, Z. (promotor); Bauer, P. (promotor); Delft University of Technology (degree granting institution)","2021","The advent of rooftop photovoltaics (PV), battery energy storage systems (BESS) and high-capacity electric vehicles (EVs) is rapidly changing the landscape of electrical distribution. Traditional electricity consumers, such as households and commercial and residential buildings, are being augmented with these emerging technologies, which are turning them into ""Prosumers"" (producers and consumers). This is beneficial to the current radial grid structure, as Prosumers are inherently more self-sufficient in terms of energy and thus relieve grid congestion, potentially delaying or obviating the need for expensive grid upgrades. To fully realize the potential of these multiple sources and loads as a single dispatchable prosumer, this thesis focuses on developing electronic hardware to accurately control these individual elements, resulting in an intelligent, flexible and safe grid.","","en","doctoral thesis","","978-94-6384-220-4","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:cfa75a61-32e8-40df-946e-5f022940cdd0","http://resolver.tudelft.nl/uuid:cfa75a61-32e8-40df-946e-5f022940cdd0","On the build-up of storm water solids in gully pots","Rietveld, M.W.J. (TU Delft Sanitary Engineering)","Clemens, F.H.L.R. (promotor); Langeveld, J.G. (promotor); Delft University of Technology (degree granting institution)","2021","A substantial part of urban surfaces is to some extent impermeable. Rainfall on these areas turns into runoff, which mobilises solids present on these surfaces. This runoff is removed from urban built areas by different types of drainage systems via gully pots. The objective of a gully pot is twofold, namely 1) to convey runoff to the drainage system with minimal hydraulic losses, and 2) to remove entrained solids to protect the downstream system. The continuous removal of suspended solids results in a growing sediment bed in the gully pot. This sediment bed can eventually reduce the hydraulic capacity of the gully pot and increase the probability of urban flooding due to rainfall. The increasing sediment bed also reduces the removal efficiency, which implies that more solids are transported to the downstream drainage system. Therefore, the objective of this study is to quantify the related processes, namely the build-up of solids on the street, the transport of solids to gully pots, and the removal of solids in gully pots, which would assist in deciding the optimal cleaning interval of the sand trap. Four research questions have been formulated to meet the study objective:
1. What is the solids loading to gully pots in terms of mass and composition?
2. Does street sweeping reduce the solids loading to gully pots?
3. What is the removal efficiency of solids of a gully pot?
4. How do the in-gully-pot hydraulics influence the removal efficiency?
This dissertation is the result of a long search into why and how free-form architecture is designed, drawn and made, and what its philosophical and phenomenological meaning is. Initially, the doctoral student looked at the strict architectural theory of the Benedictine monk Hans van der Laan, with whom he had intensive contact for many years. At times, it proved an impossible task to contrast or compare Van der Laan's strictly orthodox world of architecture with its opposite, the curved world of free-form architecture.
In order to acquire a balance in knowledge between the hand-drawn ideas of Van der Laan and the computerised ideas of free-form architecture, the research initially concentrated on the design and drawing programmes with which free-form architecture is designed, drawn and made. The knowledge of free-form architecture was only really gained by actually designing, drawing, redesigning and (playfully) redrawing the free-form buildings: 'Research by Redrawing', 'Research by Redesigning' and 'Research by Playing'.
Gradually it became clear that the research comprised two angles of approach. A perspective from the point of view of the practising architect and a perspective from the point of view of the contemplative architect. The practicing architect who designs, draws and makes (or has made) and the contemplative architect who searches for philosophical and phenomenological meaning. The main question of this research is: 'How is free-form architecture designed, drawn and made and what is its philosophical and phenomenological meaning?","","nl","doctoral thesis","A+BE | Architecture and the Built Environment","","","","","","","","","","Building Product Innovation","","",""
"uuid:dd3c8524-413c-405b-98f2-9dba974965db","http://resolver.tudelft.nl/uuid:dd3c8524-413c-405b-98f2-9dba974965db","Hybrid Josephson junction-based quantum devices in magnetic field","Uilhoorn, W. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); DiCarlo, L. (promotor); Delft University of Technology (degree granting institution)","2021","Quantum computing is believed to solve certain computational problems significantly faster than classical computers, enabling classically inaccessible problems to be tackled. However, the technology is still in its infancy and it is too early to conclude which physical system(s) will form the basis of tomorrow's quantum computer. Small-scale quantum processors, built on the superconducting transmon qubit, have already demonstrated the anticipated quantum speed-up. Despite this tremendous milestone, scaling up to a full-fledged quantum computer is far from trivial: with these qubits, the flux control causes crosstalk between qubits and heating, the room-temperature microwave control is hardly scalable and comes with high energetic radiation, and most importantly, the loss of quantum information over time leads to computational errors. Recent advances in various hybrid semiconductor materials have enabled novel voltage-controlled transmons, gatemons, which are less susceptible to heating and crosstalk. Even more exciting, gatemons can be designed to host Majorana zero modes in a way that renders the qubit inherently protected against decoherence. In order to induce Majorana zero modes in such nanowire systems, strong magnetic fields are required. Problematically, magnetic fields destroy the superconductivity that these microwave circuits rely on. In this thesis we integrate three types of hybrid Josephson junctions in magnetic-field-compatible microwave devices. Chapter 4 demonstrates the first graphene-based transmon.
Due to the mono-atomic thickness of graphene in combination with magnetic field resilient coplanar waveguide resonators, we are able to operate the transmon circuit at an in-plane magnetic field of 1 Tesla. Chapter 5 embeds an InAs-Al semiconducting nanowire Josephson junction in a high quality factor superconducting transmission line resonator, demonstrating on-chip microwave generation. The gate-controllable semiconducting platform lends itself to fast pulse control, providing a perspective for the coherent on-chip manipulation of artificial two-level systems, in particular superconducting qubits such as transmons. Chapter 6 continues the development of InAs-Al nanowire transmons. The offset-charge-sensitive regime, additional plunger gates and magnetic field compatibility prepare the platform for the detection of coherent coupling between Majorana zero modes, a phenomenon which unfortunately remains unobserved. Additionally, we realise the first InSb-Al gatemon. The higher spin-orbit coupling makes InSb a preferred material in the search for Majorana signatures. Chapter 7 reports on the dynamics of quasiparticle tunneling events in real time across the InAs-Al nanowire junction in a transmon architecture. The magnetic field compatibility of our device up to 1 Tesla, together with in-situ voltage-controlled quasiparticle trap engineering, allows us to measure the survival of the charge-parity lifetime up to strong magnetic fields, a result which is extremely important in the research field of topological quantum computing, where the qubit space is defined by the charge parity.","Hybrid devices; Josephson junctions; Superconducting nanowires; cQED; Magnetic fields; Quasiparticle dynamics","en","doctoral thesis","","978-90-8593-477-6","","","","","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:e57ae88c-19eb-4bac-be3a-903fa68d319b","http://resolver.tudelft.nl/uuid:e57ae88c-19eb-4bac-be3a-903fa68d319b","Effect of nanoparticles and pre-shearing on the performance of water-soluble polymers flow in porous media","Mirzaie Yegane, M. (TU Delft Reservoir Engineering)","Zitha, P.L.J. (promotor); Boukany, P. (promotor); Delft University of Technology (degree granting institution)","2021","Polymer flooding is a commercially viable chemical enhanced oil recovery (cEOR) method. It involves the injection of water-soluble polymers into a reservoir to increase the viscosity of the drive water and consequently its sweep and displacement efficiency. Despite the success in both laboratory and field applications, there are still some challenges associated with the application of polymers for cEOR. The first challenge is that implementing the conventional polymers used for cEOR in reservoirs with high salinity, hardness, and temperature is difficult and costly because of viscosity loss and polymer precipitation under these harsh conditions. The second challenge concerns the injectivity of the polymers. Long polymer chains tend to block the small pores, which leads to a time-dependent injectivity decline. This thesis investigates potential solutions to address these two challenges in order to improve the performance of water-soluble polymers.","Water-soluble polymers; nanoparticles; pre-shearing; flow in porous media; enhanced oil recovery","en","doctoral thesis","","978-94-6366-420-2","","","","","","","","","Reservoir Engineering","","",""
"uuid:38a95ca3-6986-4723-8231-2c0bb11c12fc","http://resolver.tudelft.nl/uuid:38a95ca3-6986-4723-8231-2c0bb11c12fc","A Dynamic and Integrated Approach for Modeling and Managing Domino-effects (DIAMOND)","Chen, C. (TU Delft Safety and Security Science)","Reniers, G.L.L.M.E. (promotor); Yang, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Process and chemical industrial areas consist of hundreds and even thousands of installations situated next to each other, where quantities of hazardous (e.g., flammable, explosive, toxic) substances are stored, transported, or processed. These installations are mutually linked in terms of the hazard level they pose to each other in the system. As a result, a primary undesired disruption (e.g., an accidental event, intentional attack, or natural disaster) may escalate to nearby installations, triggering a chain of accidents. This phenomenon is well known as the potential for “knock-on effects” or so-called “domino effects”. This dissertation is devoted to modeling the spatial-temporal evolution of domino effects, preventing escalation, and mitigating the consequences, thereby helping to develop a safer, more secure, and more resilient chemical industrial area.","Process industry; Domino effect; Dynamic risk assessment; security; Resilience; Integrated safety and security; Vapor cloud explosion; Fire; Toxic release; Safety economics; Dynamic graphs; Monte Carlo (MC); Dynamic event tree","en","doctoral thesis","","978-94-6384-221-1","","","","","","","","","Safety and Security Science","","",""
"uuid:4b70aa0a-8c13-421d-9043-6274311df2aa","http://resolver.tudelft.nl/uuid:4b70aa0a-8c13-421d-9043-6274311df2aa","Simulating Human Routines: Integrating Social Practice Theory in Agent-Based Models","Mercuur, R.A. (TU Delft Information and Communication Technology)","Dignum, M.V. (promotor); Jonker, C.M. (promotor); Delft University of Technology (degree granting institution)","2021","Our routines play an important role in a wide range of social challenges such as climate change, disease outbreaks and coordinating the staff and patients of a hospital. Studying these systems via agent-based simulations (ABS) enables researchers to gain insight into complex aspects of these challenges such as human interaction, learning, heterogeneity, feedback loops and emergence. Current agent frameworks do not integrate social and psychological evidence on human routines: humans make habitual decisions, interconnect these habits throughout the day and use these interconnected habits as a blueprint for social interaction. This thesis provides the domain-independent SoPrA (Social Practice Agent) framework, which integrates theories on social practices to support the simulation of human routines. Social practice theory is a socio-cognitive theory well suited to modelling human routines, as it aims to describe our ‘daily doings and sayings’. The first part of the thesis identifies the aspects of social practice theory that are relevant for agent-based simulation, distils requirements from the literature, reviews current agent models and provides the SoPrA framework that satisfies said requirements. The second part describes applications of SoPrA to the value-alignment problem in AI, to identifying social bottlenecks in hospitals, and to comparing theories on how habits break. This results in an agent framework that has a clear relation to current evidence and that, due to its modularity and focus on domain independence, is usable for a wide range of ABS studies that involve human routines.
As such, SoPrA is relevant for scientific work (1) in ABS, by enabling a new way to know, explore and improve the world, grounded in evidence on human routines; (2) in multi-agent systems, by enabling agents that understand and interact with human routines; and (3) in the social sciences, by crystallizing theories on human routines and enabling exploration of these theories via simulation. Furthermore, this thesis shows the societal relevance of SoPrA for understanding and improving the role of routines in AI safety, emergency rooms, commuting behaviour and consumption behaviour.","Social Practice Theory; Agent-based modelling; Agents; Agent Architecture; Simulation; Habits; Social agents; Human values","en","doctoral thesis","","978-94-6384-213-6","","","","","","","","","Information and Communication Technology","","",""
"uuid:1b713961-4e6d-4bb5-a7d0-37279084ee57","http://resolver.tudelft.nl/uuid:1b713961-4e6d-4bb5-a7d0-37279084ee57","Rearranging Phylogenetic Networks","Janssen, R. (TU Delft Discrete Mathematics and Optimization)","Aardal, K.I. (promotor); van Iersel, L.J.J. (copromotor); Delft University of Technology (degree granting institution)","2021","","Graph theory; Mathematical biology; Phylogenetics; Rearrangement moves","en","doctoral thesis","","978-94-6332-758-9","","","","","","","","","Discrete Mathematics and Optimization","","",""
"uuid:90c588b1-ebef-4ab8-b51f-b3221d8ad6cc","http://resolver.tudelft.nl/uuid:90c588b1-ebef-4ab8-b51f-b3221d8ad6cc","Coupling Harmonic Oscillators to Superconducting Quantum Interference Cavities","Corveira Rodrigues, I.C. (TU Delft QN/Steele Lab)","Steele, G.A. (promotor); Groeblacher, S. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis explores the coupling of microwave cavities containing a Superconducting QUantum Interference Device (SQUID) to a mechanical resonator by means of a flux-mediated optomechanical coupling scheme, and to a Radio-Frequency (RF) circuit via a photon-pressure interaction. While flux-mediated optomechanical systems open the door for the exploration of optomechanical single-photon effects, ultimately allowing for the generation of non-classical states in mechanical oscillators and for the creation of optomechanical qubits, the realization of photon-pressure systems brings rich possibilities for quantum-limited sensing and quantum signal processing, representing a first step towards radio-frequency quantum photonics. Besides presenting the experimental work done with the systems described above, this thesis also provides a theoretical description of their working principle, details on their fabrication, and insights into maximizing their single-photon coupling strengths, as well as an overview of the experimental challenges associated with SQUID cavities.","SQUID cavities; Optomechanics; Superconducting circuits; Photon-pressure coupling","en","doctoral thesis","","978-90-8593-474-5","","","","","","","","","QN/Steele Lab","","",""
"uuid:8d278e8f-0972-4d96-8acf-1dcd1cd0e358","http://resolver.tudelft.nl/uuid:8d278e8f-0972-4d96-8acf-1dcd1cd0e358","Non-collocated methods to infer deformation in steel structures: The magnetomechanical effect in cylindrical structures subjected to impact loads","Meijers, P.C. (TU Delft Dynamics of Structures)","Metrikine, A. (promotor); Tsouvalas, A. (copromotor); Delft University of Technology (degree granting institution)","2021","Increasing demand for energy from renewable sources has resulted in ambitious plans to construct a large number of offshore wind farms in the coming years. In relatively shallow water depths, the preferred support structure for wind turbines is the steel monopile, which is a thin-walled cylindrical structure. To decrease the cost of the generated electricity, larger wind power generators are commissioned, which has led to a significant increase in the size of the foundation piles. Currently, monopiles are most frequently driven into the seabed by means of hydraulic impact hammering. Aided by the compressive stress wave generated by each hammer blow, the pile gradually progresses to the desired penetration depth. The stress generated by each hammer blow can inflict plastic deformation at the pile head, which can jeopardise the delicate alignment required for the bolted connection between the superstructure and the monopile. Furthermore, the repeated elastic deformation of the pile leads to material fatigue, which reduces the service life of the structure. Hence, monitoring the deformation and stress resulting from the hammer blows is vital to assess the structural health. Offshore, however, dedicated sensors are seldom employed, due to time constraints and the harsh marine environment. In addition, contact sensors can easily be damaged by hammer-induced high-amplitude strains. To this end, this thesis develops several alternative methods to monitor the deformation in a monopile during installation.
These methods are non-collocated (i.e. a quantity is measured at a certain location to infer the structural quantity of interest at another position), and, preferably, non-contact. By considering the propagation of elasto-plastic waves, a non-collocated method to quantify the amount of plastic deformation inflicted by a hammer blow is first proposed. As part of the energy contained in the stress wave excited by the hammer blow is used to permanently deform the structure, the stress wave becomes distorted. At a certain distance below the pile head, the energy flux carried by the stress wave through a cross-section of the pile is determined. The difference between the measured value and the expected energy flux from a linear-elastic simulation with the same hammer forcing provides an upper bound for the amount of plastic deformation inflicted by a hammer blow. The main benefit of this proposed method is that the sensors are employed outside the region where the highest strains occur, reducing the risk of damaging the sensors. However, data is collected with sensors which are attached to the pile, leaving the aforementioned restrictions on the sensor deployment in place. To enable the widespread monitoring of steel structures subjected to dynamic loads, non-contact methods are needed. For the development of a non-contact method to infer the hammer-induced deformations, the magnetic stray field of the steel structure, which permeates the space around it, is analysed. As the structure's magnetisation depends on elastic and plastic strains through the magnetomechanical effect, it is expected that the magnetic stray field, which is generated by the magnetisation, conveys information about the present strain state of the structure to the sensor.
In contrast to the specimens used in experiments on the magnetomechanical response of structural steel reported in the literature so far, a steel cylinder has a significant demagnetising field due to its geometry, creating a non-uniform spatial distribution of the magnetisation. Additionally, magnetomechanical data under dynamic loads are scarce. Hence, a unique laboratory-scale experiment was designed, in which a steel cylinder was repeatedly impacted by a free-falling concrete mass, providing the first insights into the magnetomechanical effect in dynamically-loaded structures with a substantial demagnetising field. In between impacts, the magnetic stray field was mapped to analyse the evolution of the remanent stray field, i.e. the stray field when the structure is unloaded. Under repeated impacts which only generate elastic strains in the structure, the remanent stray field evolves towards a metastable magnetic equilibrium. When a new peak strain is introduced, the stray field converges towards a new equilibrium, displaying a tendency towards a global magnetic equilibrium. However, as soon as plastic deformation forms, the evolution of the remanent field deviates from this trend as a result of the increased dislocation density, which, in turn, reduces the material's ability to remain magnetised. This behaviour serves as a basis for a non-contact method to detect and localise regions of plastic deformation in a steel structure subjected to repeated impact loads. This novel method is the first non-contact technique to infer structural deformation proposed in this dissertation. In the lab-scale experiment, strain gauges and a magnetometer registered the transient magnetomechanical response during each impact. When the magnetisation is at a magnetic equilibrium, a strong correlation is found between the axial strain and the magnetic field variation around the remanent state.
The amplitude and direction of the transient magnetic stray field vary with the circumferential position of the magnetometer, indicating that the response is partly determined by the magnetisation in the vicinity of the sensor. To simulate the measured response, an isotropic magnetomechanical model was developed in this thesis that, for the first time, accounts for the demagnetising field of the structure. The capability of this model to reproduce the measurement results is limited, though. It is envisaged that the model may be improved by accounting for anisotropy and by including the remanent magnetisation. To date, limited data have been published on the in-situ magnetomechanical response of large-scale steel structures in a weak ambient magnetic field. Consequently, an in-situ measurement campaign was performed to measure the magnetomechanical response of a monopile installed onshore with a hydraulic impact hammer. During the campaign, several magnetometers were employed using different sensor configurations. Similar to the lab-scale experiment, the position of the magnetometer relative to the pile determines the amplitude and direction of the transient magnetic field. In addition to a good correspondence between the strain and magnetic signals, a polynomial relation was found between the peak strain and the maximum deviation from the remanent field expressed along the major principal axis. Using the inverse of this relation and a magnetometer which retains its position with respect to the pile, a novel method to infer the elastic strain from the transient stray field is proposed, which shows a promising correspondence between the inferred and measured strain signals. Additionally, the working principles for a new alternative technique to monitor the pile penetration using non-contact sensors are proposed. For each of the four non-collocated methods introduced in this work, directions for improvements and steps to generalise the techniques are discussed.
The main benefit of the non-contact methods in particular is that they eliminate the onerous process of attaching the sensors, enabling swift deployment and providing the opportunity to reuse the sensors. Although the new methods in this dissertation have mainly been applied to the installation of monopiles, the potential application of these non-collocated methods is much wider. Ultimately, they could be used to monitor any large-scale steel structure subjected to dynamic loads.","non-contact measurement; magnetomechanical effect; impact pile driving; large-diameter monopile; plastic deformation; structural health monitoring","en","doctoral thesis","","978-94-6384-217-4","","","","","","","","","Dynamics of Structures","","",""
"uuid:0c301dea-605b-47a1-a8f5-13b61c4e52ee","http://resolver.tudelft.nl/uuid:0c301dea-605b-47a1-a8f5-13b61c4e52ee","Privacy Threats and Cryptographic Solutions to Genome Data Processing","Ugwuoke, C.I. (TU Delft Cyber Security)","Lagendijk, R.L. (promotor); Erkin, Z. (copromotor); Delft University of Technology (degree granting institution)","2021","The genome is the blueprint of life and contains a detailed genotype and phenotype description of any organism. This in itself makes genetic data sensitive, be it in biological or electronic format. The possibility of sequencing the genome has opened doors to further probing of the data in its electronic form. After sequencing of the biological genome sample, the electronic genome is stored, processed, and transmitted for a variety of purposes, including but not limited to medical care, research, crime solving, and entertainment. However, due to the sensitivity of the genome data, the security and privacy of the electronic data are considered imperative.
Owing to the privacy and security concerns associated with sharing genome data with third-party entities for processing, various secure and privacy-preserving solutions have been considered. Such scenarios include a researcher obtaining research data which includes the genomes of individuals, or a healthcare institution outsourcing the genomes of its patients to a cloud environment for storage and processing. In all of these scenarios, it is important that the utility (accuracy and efficiency) of the data is maintained while privacy (confidentiality and unlinkability) is preserved simultaneously.
In this thesis, we focus on maintaining data utility when processing electronic genome data as well as preserving the privacy of the individuals whose data are analysed. We apply privacy-enhancing techniques such as secure multi-party computation and homomorphic encryption to existing problems and develop provably secure cryptographic protocols that are fit for purpose in each scenario.
MOFs are considered soft or flexible materials, a characteristic that includes structural dynamics or large amplitude deformations. This flexibility can usually be attributed to the framework’s topology and the degrees of freedom of some of its bonds. However, the linkers themselves may also have degrees of freedom allowing independent molecular dynamics, in particular in the form of rotation. This type of dynamics is particularly common in MOFs because their porous architectures often provide enough space for the rotation of a molecular fragment to occur. It is this type of dynamics that this thesis is centered on, starting from the fact that, although it is an intriguing phenomenon that occurs in MOFs, it has remained relatively unexplored.
Nevertheless, the past four years have seen an increase in researchers’ interest in rotational dynamics in MOFs. This may be due to two main reasons: First, linker rotation influences MOF properties, not only when guest molecule interactions are involved, but also in optical and mechanical properties. Developing our knowledge of linker rotation is therefore essential for a more complete understanding of these materials’ properties and how they may be modified to enhance a specific trait. Second, the exploitation of linkers’ rotational freedom could potentially lead to important technological advances. The latter category includes various innovative ideas, such as the design of ferroelectric MOFs by means of controllable dipolar rotors, or the realization of crystalline molecular machines able to produce useful work.","metal-organic framework; molecular dynamics; Crystal engineering; Rotation behaviour","en","doctoral thesis","","978-94-6366-414-1","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:be3935cb-1e7a-401d-a5a8-cd484648fff1","http://resolver.tudelft.nl/uuid:be3935cb-1e7a-401d-a5a8-cd484648fff1","Weyl Points In Superconducting Nanostructures","Chen, Y. (TU Delft QN/Nazarov Group)","Nazarov, Y.V. (promotor); Blanter, Y.M. (copromotor); Delft University of Technology (degree granting institution)","2021","Topological band theory has contributed to some of the most astonishing developments in solid-state physics. The unique attributes that arise from topological effects are at the focus of modern experimental and theoretical research. A Weyl point, a topological defect at the Fermi surface, enables topological transitions and transport phenomena. Its existence in natural materials is considerably restricted by tuning and dimensionality constraints. Recently, Weyl points have been predicted to occur within superconducting nanostructures, in the spectrum of Andreev bound states. Theoretically, one can easily manipulate the dimensionality and the tuning process through elementary approaches with specially designed structures. This opens up a new window for explorations of higher dimensions, higher-order topological effects, Majorana states, and other phenomena, even though it may still be experimentally challenging. One realization of such structures is the multi-terminal Josephson junction. The parameters are the superconducting phase differences of the terminals, and the Weyl points reside at low energies within the superconducting gap. Chapter 2 of this thesis investigates the topological effect in the quantized transconductance of such a structure, considering the presence of the continuous spectrum that is intrinsic to superconductors. This research is based on scattering formalism and relates the Landauer conductance to the continuous spectrum as a background field in the regular topological charge picture. Chapter 3 is based on a very generic superconducting nanostructure setup, so long as it hosts Weyl points.
The research proposes a unit that tunnel-couples such a setup to a quantum dot. The distinct feature of the spectrum, especially the distinction between its spin-singlet and spin-doublet states due to spin-orbit coupling, leads to an exploration of state manipulation. Eventually, through adiabatic and diabatic approaches, one can feasibly realize a full unitary transformation of the spectrum. Because of this, the unit could find promising application in entangling qubits. Chapter 4 also relies on the generic low-energy Weyl point setup in the superconducting nanostructure, but instead, it is weakly tunnel-coupled to regular metallic leads. Spintronics explores the intrinsic spin degree of freedom and is usually realized in magnetic materials. In the setup of this research, the energy spectrum contains a natural spin-orbit coupling that creates a minimalistic magnetic state in the vicinity of the Weyl point. The spin structure of the spectrum allows fine control over the spin and switching between magnetic and non-magnetic states. Hence this chapter’s research focuses on the possible spintronics features based on master equations. Chapter 5 furthers the research of Chapter 4. It considers a universal energy scale set up by the tunnel coupling strength. In the language of Green’s functions, this chapter studies the topological effect through the response function. This setup is a suitable example of low-energy Weyl points situated in the presence of a low-energy continuous spectrum brought by electrons in the leads. We have seen in Chapter 2 how the continuous spectrum above the gap modifies the topology, leading to a non-quantized contribution to the transconductance. The peculiarity of coupling Weyl points to a low-energy continuous spectrum is that the dissipation gives rise to a redefinition of the Berry curvature, which manifests as a continuous density of topological charge instead of a pointlike one.
This unusual characteristic can be captured by the tunnel current and thus can assist the experimental detection of Weyl points.","Weyl Point; Topology; Superconducting","en","doctoral thesis","","978-90-8593-478-3","","","","","","","","","QN/Nazarov Group","","",""
"uuid:9385fec0-f79d-49de-b524-9b73eb248cdd","http://resolver.tudelft.nl/uuid:9385fec0-f79d-49de-b524-9b73eb248cdd","Convincing stuff: Disclosing perceptually-relevant cues for the depiction of materials in 17th century paintings","Di Cicco, F. (TU Delft Human Information Communication Design)","Pont, S.C. (promotor); Dik, J. (promotor); Wijntjes, M.W.A. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis explores convincing stuff depicted in 17th century paintings, with the primary aim of understanding their visual perception. “Stuff” is the term first introduced by Edward Adelson in 2001 to differentiate materials from objects, and to call attention to the research gap in material perception. In an interesting parallel, the representation of materials in paintings constitutes a knowledge gap in art history as well. Both gaps have only recently been recognized and started to be addressed in their respective research fields. In this thesis, representation and perception come together to create a virtuous circle in which the knowledge of painters about the representation of materials is used to understand the mechanisms of the visual system for material perception, and this is in turn used to explain the pictorial features that make the representation of materials so convincing. The common thread used here to connect representation and perception is “The big world painted small”, a long-forgotten booklet of pictorial recipes written by the Dutch painter Willem Beurs in 1692. We argue that this book represents an index of key features for material perception, that is, an index of image features that always work as perceptual cues regardless of the illumination and the viewing conditions of the depicted scene.
The main research objective of this dissertation is: To understand the convincing depiction and perception of materials in 17th century paintings, connecting the image features found in paintings and listed by Beurs to their role as perceptual cues. In order to achieve this objective, we employed a novel, interdisciplinary research approach, merging science of human and computer vision, technical art history, and the historical textual source of Beurs.","Material perception; Willem Beurs; paintings; image features; perceptual cues","en","doctoral thesis","","","","","","","","","","","Human Information Communication Design","","",""
"uuid:939f6f53-67c1-4bf8-8df7-14d92c7c0cc5","http://resolver.tudelft.nl/uuid:939f6f53-67c1-4bf8-8df7-14d92c7c0cc5","Ceramic nanofiltration-based treatment of NOM-rich ion exchange brine","Caltran, I. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); Heijman, Sebastiaan (promotor); Delft University of Technology (degree granting institution)","2021","Natural organic matter (NOM) in drinking water sources causes several problems in water consumption and distribution, and decreases the efficiency of water treatment steps. Ion exchange (IEX) with anion resin can be used to remove NOM in combination with or as an alternative to other techniques, such as coagulation and activated carbon. IEX resins require periodic regeneration with an electrolyte solution that is usually made of sodium chloride. A crucial problem of IEX for NOM removal is related to waste management of the regenerant electrolyte. The regenerant solution is reused several times before disposal, which increases the concentrations of NOM and anions like sulfate. The resulting spent IEX brine is a pollutant and is expensive to dispose of, which hampers full-scale applications. In this research, we proposed a spent IEX brine treatment that is based on ceramic nanofiltration. Ceramic membranes have potential advantages over polymeric membranes, such as higher fluxes and lower fouling characteristics. The treatment aims to recover a permeate that can be reused as IEX regeneration salt solution, typically sodium chloride, by removing NOM and other anions from the spent IEX brine. Also, concentrated NOM could be used in agriculture and industry, due to the presence of humic substances.","","en","doctoral thesis","","978-90-6562-453-6","","","","","","","","","Sanitary Engineering","","",""
"uuid:01ba927d-28e7-4abd-8193-e4ebef3b8218","http://resolver.tudelft.nl/uuid:01ba927d-28e7-4abd-8193-e4ebef3b8218","Increasing trust in complex machine learning systems: Studies in the music domain","Kim, Jaehun (TU Delft Multimedia Computing)","Hanjalic, A. (promotor); Liem, C.C.S. (copromotor); Delft University of Technology (degree granting institution)","2021","Machine learning (ML) has become a core technology for many real-world applications. Modern ML models are applied to unprecedentedly complex and difficult challenges, including very large and subjective problems. For instance, applications towards multimedia understanding have advanced substantially. Here, it is already prevalent that cultural/artistic objects such as music and videos are analyzed and served to users according to their preferences, enabled through ML techniques.
One of the most recent breakthroughs in ML is Deep Learning (DL), which has been widely adopted to tackle such complex problems. DL allows for higher learning capacity, making end-to-end learning possible, which reduces the need for substantial engineering effort while achieving high effectiveness. At the same time, this also makes DL models more complex than conventional ML models. Reports in several domains indicate that such more complex ML models may have potentially critical hidden problems: various biases embedded in the training data can emerge in predictions, and extremely sensitive models can make unaccountable mistakes. Furthermore, the black-box nature of DL models hinders the interpretation of the mechanisms behind them. Such unexpected drawbacks have a significant impact on the trustworthiness of the systems that employ these ML models as their core apparatus.
In this thesis, a series of studies investigates aspects of trustworthiness for complex ML applications, namely reliability and explainability. Specifically, we focus on music as the primary domain of interest, considering its complexity and subjectivity. Due to this nature of music, ML models for music are necessarily complex to achieve meaningful effectiveness. As such, the reliability and explainability of music ML models are crucial in the field.","Trustworthy Machine Learning; Music Information Retrieval; Transfer Learning; Recommender Systems","en","doctoral thesis","","978-94-6366-418-9","","","","","","","","","Multimedia Computing","","",""
"uuid:c4d3cee2-6287-4a62-b425-063caa4868db","http://resolver.tudelft.nl/uuid:c4d3cee2-6287-4a62-b425-063caa4868db","Computational Design of Indoor Arenas (CDIA): Integrating multi-functional spaces and long-span roof structures","Pan, W. (TU Delft Design Informatics)","Sariyildiz, I.S. (promotor); Sun, Y. (promotor); Turrin, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Indoor arenas are important public buildings catering for various activities (e.g., sports events, stage performances, assemblies, exhibitions, and daily sports for the public) and serving as landmarks in urban contexts. The multi-functional space and long-span roof structure of an indoor arena are highly interrelated, which impact the multi-functionality and structural performance and mainly define the overall form of the building. Therefore, it is crucial to integrate the multi-functional space and long-span roof structure to formulate proper forms for indoor arenas, in order to satisfy various design requirements during the conceptual design.
This thesis aims at formulating a computational design method, ‘Computational Design of Indoor Arena (CDIA)’, to support the conceptual design of indoor arenas by using the computational techniques of parametric modelling, Building Performance Simulations (BPSs), Multi-Objective Optimizations (MOOs), surrogate models based on Multi-Layer Perceptron Neural Network (MLPNN), and clustering based on Self-Organizing Map (SOM clustering). In the formulation of CDIA, these techniques are modified, improved and organized into five components and three workflows, to satisfy the demands of the conceptual design of indoor arenas.","Computational design; indoor arenas; multi-functional spaces; long-span roof structure","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-5366-423-3","","","","","","","","","Design Informatics","","",""
"uuid:1491f45d-103e-49db-8688-5e9cd1273f0b","http://resolver.tudelft.nl/uuid:1491f45d-103e-49db-8688-5e9cd1273f0b","Perspectives of Cost-Efficient GNSS Equipment for Wide-Spread and High-Quality Meteorological and Positioning Applications","Krietemeyer, A. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); ten Veldhuis, Marie-claire (promotor); Delft University of Technology (degree granting institution)","2021","Whether in cars, smartphones, watches or fitness-trackers - the use of Global Navigation Satellite Systems (GNSS) has become a part of our daily life. Currently there are more than 100 GNSS satellites in orbit. They are routinely utilized for positioning and timing purposes, but their signals can also be used to monitor our environment. The basic principle GNSS measurements rely on is measuring the travel time of the signal between the transmitting satellite antenna and the receiving antenna (typically on the ground). While propagating through the atmosphere, the signal is delayed by the physical properties of the particles in its various layers. This delay is traditionally seen as undesired noise that should be eliminated from the data. This noise, however, also includes information about the state of the atmosphere, which can be described by various parameters. One such parameter is the delay caused by the 'wet' particles (predominantly water vapor) in the troposphere (lower 20 km of the atmosphere). Weather models can use this information to correct the amount and location of atmospheric humidity, which has proved beneficial for rainfall forecasts. To extract this information from the total signal delay, the delay caused by the ionosphere (upper part of the atmosphere, up to about 1000 km) must be eliminated. A standard method is to make use of the dispersive character of the ionized particles in this layer and to eliminate the majority of this error by forming a so-called ionosphere-free linear combination.
This requires signals on at least two different frequencies. Traditionally, only geodetic instruments, e.g. permanent ground receivers operated by (inter)national organizations, use hardware that tracks GNSS signals on at least two frequencies. Such receivers are expensive (in the order of several thousand euros), and as a result many GNSS networks outside developed areas lack the station density that is needed to capture the complex distribution of atmospheric water vapor. A densification for meteorological purposes with geodetic-grade GNSS receivers and antennas is economically not feasible. Similarly, local precision positioning equipment is not accessible for many regions, foremost situated in the Global South, due to the coarse distribution of static GNSS ground stations and the high cost of surveying equipment. Technological advances in recent years enabled the release of cost-efficient single- and dual-frequency GNSS receivers and antennas, which may offer an alternative to the high-grade technology. However, the use of consumer-grade hardware is associated with challenges that need to be overcome. In this thesis, the performance of low-cost GNSS receivers in combination with antennas of a range of different types and qualities for high-precision applications was analyzed. In particular, the efficiency of using this equipment for meteorological and positioning applications was experimentally quantified, and methods to enhance their performance were developed and implemented.","GNSS and GPS; water vapor; Zenith Tropospheric Delay (ZTD); GNSS antenna; antenna calibration; Phase Center Variation (PCV); goGPS; ZED-F9P","en","doctoral thesis","","978-94-6421-351-5","","","","","","","","","Water Resources","","",""
"uuid:d8bdaccc-e926-4a98-ad05-eea449a915aa","http://resolver.tudelft.nl/uuid:d8bdaccc-e926-4a98-ad05-eea449a915aa","Epidemics on Networks: Analysis, Network Reconstruction and Prediction","Prasse, B. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Smeitink, E. (copromotor); Delft University of Technology (degree granting institution)","2021","The field of epidemiology encompasses a broad class of spreading phenomena, ranging from the seasonal influenza and the dissemination of fake news on online social media to the spread of neural activity over a synaptic network. The propagation of viruses, fake news and neural activity relies on the contact between individuals, social media accounts and brain regions, respectively. The contact patterns of the whole population result in a network. Due to the complexity of such contact networks, the understanding of epidemics is still unsatisfactory. In this dissertation, we advance the theory of epidemics and its applications, with a particular emphasis on the impact of the contact network. Our first contribution focusses on the analysis of the N-Intertwined Mean-Field Approximation (NIMFA) of the Susceptible-Infected-Susceptible (SIS) epidemic process on networks. We propose a geometric approach to clustering for epidemics on networks, which reduces the number of NIMFA differential equations from the network size N to the number m << N of clusters (Chapter 2). Specifically, we show that exact clustering is possible if and only if the contact network has an equitable partition, and we propose an approximate clustering method for arbitrary networks. Furthermore, for arbitrary contact networks, we derive the closed-form solution of the nonlinear NIMFA differential equations around the epidemic threshold (Chapter 3). Our solution reveals that the topology of the contact network is practically irrelevant for the epidemic outbreak around the epidemic threshold. 
Lastly, we study a discrete-time version of the NIMFA epidemic model (Chapter 4). We derive that the viral state is (almost always) monotonically increasing, the steady state is exponentially stable, and the viral dynamics is bounded by linear time-invariant systems. In the second part, we consider the reconstruction of the contact network and the prediction of epidemic outbreaks. We show that, for the stochastic SIS epidemic process on an individual level, the exact reconstruction of the contact network is impractical. Specifically, the maximum-likelihood SIS network reconstruction is NP-hard, and an accurate reconstruction requires a tremendous number of observations of the epidemic outbreak (Chapter 5). For epidemic models between groups of individuals, we argue that, in the presence of model errors, accurate long-term predictions of epidemic outbreaks are not possible, due to a severely ill-conditioned problem (Chapter 6). Nonetheless, short-term forecasts of epidemics are valuable, and we propose a prediction method which is applicable to a plethora of epidemic models on networks (Chapter 7). As an intermediate step, our prediction method infers the contact network from observations of the epidemic outbreak. Our key result is paradoxical: even though an accurate network reconstruction is impossible, the epidemic outbreak can be predicted accurately. Lastly, we apply our network-inference-based prediction method to the outbreak of COVID-19 (Chapter 8). The third part focusses on spreading phenomena in the human brain. We study the relation between two prominent methods for relating structure and function in the brain: the eigenmode approach and the series expansion approach (Chapter 9). More specifically, we derive closed-form expressions for the optimal coefficients of both approaches, and we demonstrate that the eigenmode approach is preferable to the series expansion approach. 
Furthermore, we study cross-frequency coupling in magnetoencephalography (MEG) brain networks (Chapter 10). By employing a multilayer network reconstruction method, we show that there are strong one-to-one interactions between the alpha and beta band, and the theta and gamma band. Furthermore, our results show that there are many cross-frequency connections between distant brain regions for theta-gamma coupling.","Complex Networks; Epidemics on Networks; Network Reconstruction; Prediction of Epidemics; Structural and Functional Brain Networks","en","doctoral thesis","","978-94-6421-330-0","","","","","","","","","Network Architectures and Services","","",""
"uuid:0a863dda-9120-459b-ad54-fa15760dd39b","http://resolver.tudelft.nl/uuid:0a863dda-9120-459b-ad54-fa15760dd39b","Land in Limbo: Understanding path dependencies at the intersection of the port and city of Naples","De Martino, P. (TU Delft History, Form & Aesthetics)","Hein, C.M. (promotor); Russo, Michelangelo (promotor); Daamen, T.A. (copromotor); Delft University of Technology (degree granting institution); Università degli Studi di Napoli Federico II (degree granting institution)","2021","Numerous actors have been involved in the planning of the port and city of Naples. National and local authorities—namely central government, the Region, the Municipality of Naples, and the Port Authority—act upon the port at different scales, according to diverging interests and by using different planning tools. Each entity has different spatial claims and contrasting views on what port city integration can be. Their diverse goals have led port and city to develop into separate entities, from a spatial, cultural, economic and administrative perspective. The different scopes of their planning are particularly visible in the areas at the intersection of land and water, where the relationship is characterized by waiting conditions across different dimensions and scales. The separation between port and city in Naples originates from history. This PhD thesis looks at the past as a resource, sometimes as a problem in the way it produces inertia, but certainly as a heritage made of signs, traces, and cultures, written and rewritten on the urban palimpsest.
Using and challenging the concept of path dependence—defined here as a resistance by institutions and people to change patterns of behavior and a tendency to repeat previous decisions and experiences—this PhD thesis argues that in order to overcome inertia, it is important to recognize the interests and spatial claims of all the stakeholders involved in port city planning and to identify shared goals and values as a foundation for future design.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-413-4","","","","","","","","","History, Form & Aesthetics","","",""
"uuid:c579128f-9e96-4e9e-9997-6ce9486e1e25","http://resolver.tudelft.nl/uuid:c579128f-9e96-4e9e-9997-6ce9486e1e25","Soap bubbles for large-scale PIV: Generation, control and tracing accuracy","Engler Faleiros, D. (TU Delft Aerodynamics)","Scarano, F. (promotor); Sciacchitano, A. (copromotor); Delft University of Technology (degree granting institution)","2021","Particle Image Velocimetry (PIV) relies upon the introduction of particle tracers that scatter sufficient light and follow the flow accurately. The use of submillimetre helium-filled soap bubbles (HFSB) as flow tracers for PIV is investigated for the purpose of enabling velocity measurements in large-scale industrial wind tunnels. That soap bubbles reflect more light than is scattered by small liquid droplets or solid particles, allowing larger volumes to be illuminated, is a long-known fact that has caught the attention of aerodynamicists since the 1930s. The difficulty encountered in initial efforts to use soap bubbles for accurate measurements revolved around the lack of control during the generation of these tracers and the failure to present evidence that they could accurately follow the flow. Proof of concept that HFSB could be used for accurate flow measurements in wind tunnels was presented in the year that preceded the beginning of this work. In this thesis, the generation and control of HFSB and their tracing fidelity are studied through a series of experiments and simulations, bringing large-scale PIV using HFSB to the technology maturity level required for industrial measurements. High-speed shadowgraphy at the bubble generator exit revealed the main regimes of bubble generation. A regular, periodic and controlled generation of bubbles with a monodisperse size distribution, namely the bubbling regime, was obtained by proper tuning of the input flow rates. The relation of the latter with the bubble size and production rate was also obtained from these visualizations.
Measurements of the HFSB velocity relative to the flow velocity (slip velocity) in the stagnation region ahead of a cylinder, obtained with Particle Tracking Velocimetry (PTV), were used to retrieve the HFSB time response and the ratio of helium to soap flow rates that satisfies the neutral buoyancy condition, in which the soap bubble density equals that of the surrounding air flow. Simulations of the particle motion in a rectilinear oscillatory flow were used to quantify the importance of the unsteady forces acting on a particle and to derive empirical relations for estimating the HFSB slip velocity in flows where the unsteady forces are relevant. In this case, the particle slip velocity is shown to depend on three parameters: the particle Reynolds number, the ratio of particle-to-fluid density and the flow time-scale. These cannot be combined into a single non-dimensional Stokes number. The validity of the empirical relations was extended for the analysis of the slip velocity of a particle travelling around an object. Based on the latter, a method was described for deriving the density of a nearly-neutrally-buoyant particle that accounts for the effects of unsteady forces and allows a mismatch of acceleration between the particle and the flow. The tools developed for slip velocity analysis using the simulations were applied to assess experimental data from large-scale PIV measurements performed at the Low-Speed Tunnel (LST) of the German-Dutch Wind Tunnels (DNW). The experiments were realized in the flow around an airfoil of 70 cm chord at free stream velocities up to 70 m/s, reaching a chord-based Reynolds number of 3.2 million. PIV measurements using HFSB at this speed and Reynolds number were unprecedented. The results have indicated variations of the bubble density (20-30%) occurring post-generation.
The tracing fidelity of HFSB in wall-bounded turbulence is investigated by comparing measurements of the mean velocity and Reynolds stress profiles in a turbulent boundary layer with those obtained with micrometre oil droplets (reference) and submillimetre air-filled soap bubbles (AFSB). The results have shown that the statistics of the first and second moments of velocity are well captured by all three investigated tracers, even by the heavier-than-air AFSB, which were shown to be poor tracers in the stagnation region ahead of a cylinder. Mechanisms of preferential concentration in turbulence were identified as the cause of the better tracing fidelity observed. The thesis is concluded with a successful industrial application in the Large Low-Speed Facility (LLF) of DNW (9.5×9.5 m2 test section) around a tiltrotor aircraft in three flight modes (hover, transition and cruise) at tunnel speeds up to 60 m/s. The bubbles were introduced into the flow using a 3×3 m2 seeding rake containing 400 bubble generators. The PIV measurements were performed in stereoscopic configuration in a field-of-view of 1.1×1.1 m2.","neutrally buoyant tracers; helium-filled soap bubbles; large-scale PIV; aerodynamics","en","doctoral thesis","","978-94-6366-403-5","","","","","","","","","Aerodynamics","","",""
"uuid:013fe685-755f-4a76-8428-53be5c67fa51","http://resolver.tudelft.nl/uuid:013fe685-755f-4a76-8428-53be5c67fa51","Air Traffic Control Advisory System for the Prevention of Bird Strikes","Metz, I.C. (TU Delft Control & Simulation)","Hoekstra, J.M. (promotor); Ellerbroek, Joost (copromotor); Delft University of Technology (degree granting institution)","2021","Bird strike prevention in civil aviation has traditionally focused on the airport perimeter. Since the risk of especially damaging bird strikes outside the airport boundaries is rising, this PhD thesis researches the safety potential of operational bird strike prevention involving pilots and controllers. In such a concept, controllers would be equipped with a bird strike advisory system, allowing them to delay departures which are most vulnerable to the consequences of bird strikes. However, the introduction of take-off delays reduces the maximum capacity of a runway. This PhD thesis investigates the feasibility of a bird strike advisory system with regard to safety and capacity by performing fast-time simulations covering different air traffic intensities and bird abundances. In a first step, a system assuming perfect predictability of bird movement is developed, demonstrating a strong safety potential. However, when preventing all bird strikes, the induced delays can exceed tolerable limits for high air traffic intensities. In a second step, the system accounts for the limited predictability of bird movement. Bird tracks are predicted based on a simple linear regression model, considering variability of velocity and heading. To limit the negative effects on runway capacity, delays are only imposed on aircraft for which strikes are predicted with a high probability and a damaging potential. The number and duration of delays remain reasonable even for airports operating at their capacity limits. However, linear regression proves insufficient to suitably evaluate the risk of collisions.
To achieve reliable predictions, in-depth studies of multi-year bird movement data from various sensor types are recommended to develop site- and species-specific bird models. As such, the concept of a bird strike advisory system can be further developed to exploit the entire safety potential demonstrated by the initial study of the thesis.","airports; air traffic control; bird strikes; capacity; Collision detection and avoidance; risk; safety","en","doctoral thesis","","978-94-6366-381-6","","","","","","","","","Control & Simulation","","",""
"uuid:3dbc6a37-b5f5-48fc-9a20-9c929c387dd9","http://resolver.tudelft.nl/uuid:3dbc6a37-b5f5-48fc-9a20-9c929c387dd9","Cities in interaction: Analysing the Dutch system of cities with computational methods","Peris, A.F.T. (TU Delft Urban Studies)","Meijers, E.J. (promotor); van Ham, M. (promotor); Delft University of Technology (degree granting institution)","2021","Cities never function in isolation but as nodes in overarching systems characterised by flows of goods, people, and information. To fully understand the evolution of cities, a relational approach is needed, which investigates cities in relation to other cities and urban regions. While a significant part of urban system research has focused on aspects such as the concentration of populations and economic activities, the understanding of the actual networks connecting cities and their impact is still limited, as the required data is notoriously difficult to obtain. This dissertation contributes to knowledge on the relationships between cities in the Netherlands by exploiting – in novel ways – three data sources: web pages mentioning cities, local historical newspapers, and administrative registers. After providing an overview of the systems of cities literature, the toponym co-occurrences method is explored. This method aims at identifying patterns of relations between cities in a systematic way by looking at the co-mentions of cities in text documents (here in web pages). Using text as data proved a promising direction for studying urban systems, and elements from this first exploration are used in the next section of the thesis, where the past dynamics of the Dutch urban system are reconstructed using information flows retrieved from digitised historical newspapers. Finally, in a last empirical part, the potential of information from individual-level registers about professional and residential trajectories for measuring relations between places at multiple spatial scales is investigated.
This measure is then used to reveal the nested hierarchy of functional regions in the Netherlands.","System of cities; Netherlands; Flows and networks; Computational Social Science","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-400-4","","","","A+BE | Architecture and the Built Environment No. 07 (2021)","","","","","Urban Studies","","",""
"uuid:83069b1b-4680-443d-aea7-9fae746514db","http://resolver.tudelft.nl/uuid:83069b1b-4680-443d-aea7-9fae746514db","Quantitative analysis of Saccharomyces cerevisiae's growth and metabolism on sucrose","Soares Rodrigues, C.I. (TU Delft BT/Industriele Microbiologie)","van Loosdrecht, Mark C.M. (promotor); K Gombert, Andreas (promotor); Wahl, S.A. (copromotor); Delft University of Technology (degree granting institution)","2021","In recent decades, there has been an increased demand for renewable sources of energy and chemicals to replace their fossil-based counterparts and tackle the economic, social, and environmental issues associated with the processing and use of petrochemicals by humanity. Sucrose has proven to be a suitable alternative feedstock to substitute petroleum for the commercial manufacture of not only fuel ethanol but also higher value-added compounds, such as trans β-farnesene and polyethylene, and there is great potential to expand this portfolio. Besides its low market price, sucrose is also advantageous for industrial applications owing to its ready-to-use property, which results in reduced overall production costs. Industrial sucrose-based microbial fermentation is feasible to a great extent due to the yeast Saccharomyces cerevisiae's natural ability to metabolize this sugar at high rates. Also, yeast's robustness under harsh industrial conditions, its simple nutritional requirements and the availability of modern genetic tools for the engineering of tailor-made strains have made it an appropriate catalyst in a wide range of bioprocesses. In spite of all this, S. cerevisiae's physiology on sucrose, as well as the regulatory mechanisms triggered by this disaccharide in yeast, are still rather under-researched topics.","Sucrose; Saccharomyces cerevisiae; Yeast physiology; Proteomics","en","doctoral thesis","","978-94-6419-205-6","","","","Errata for Ph.D. dissertation “Quantitative analysis of Saccharomyces cerevisiae's growth and metabolism on sucrose” by Carla Inês Soares Rodrigues • Incorrect language used on page ii: The English equivalent for Fundação de Amparo à Pesquisa do Estado de São Paulo is São Paulo Research Foundation. • Missing information on page ii: Where it reads “The project was financed by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP, São Paulo, Brazil)” should read “The project was financed by São Paulo Research Foundation (FAPESP, São Paulo, Brazil; grant numbers 2016/07285-9, 2017/18206-5, and 2017/08464-7) and by the BE-BASIC/BIO-EN program (project F06.006).","","","","","BT/Industriele Microbiologie","","",""
"uuid:122bcbc0-72c6-4ab9-8aea-e39716961f46","http://resolver.tudelft.nl/uuid:122bcbc0-72c6-4ab9-8aea-e39716961f46","Unravelling Urban Wayfinding: Studies on the development of spatial knowledge, activity patterns, and route dynamics of cyclists","Zomer, L. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Cats, O. (promotor); Duives, D.C. (copromotor); Delft University of Technology (degree granting institution)","2021","Every day, residents and visitors find their way through the complex urban network to go to work, to get an education, or to go sightseeing. This thesis contains studies on the development of spatial knowledge, activity patterns, and route dynamics of cyclists. The contributions and findings narrowed the gap between travel behaviour research and research on urban spatial knowledge.","","en","doctoral thesis","","978-90-5584-291-9","","","","TRAIL Thesis Series no. T2021/16, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:8248a6de-1e98-4fcc-9acd-4bffd54232a4","http://resolver.tudelft.nl/uuid:8248a6de-1e98-4fcc-9acd-4bffd54232a4","Enhancing Engagement for All Pupils in Design & Technology Education: Structured Autonomy Activates Creativity","Roël-Looijenga, A. (TU Delft Science Education and Communication)","de Vries, M.J. (promotor); Klapwijk, R.M. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis searches for ways to engage all pupils in class in an ongoing way during primary Design & Technology lessons, so that all pupils are able to profit from them. The aim of Design & Technology education is that pupils acquire knowledge, skills and attitudes related to technology as they encounter it in daily life and later in professions. Some of those skills can be taught by instruction, but others, for instance designing, need to be taught until understanding emerges. Designing is a way of thinking with many aspects, creativity being one of them. Design can be seen as bringing imagined ideas into reality. Thinking happens in one's mind and is invisible; that is why designing requires making decisions, so that the design can be expressed. Not only designing but also other Design & Technology activities require making decisions. Deciding is an important subtask of designing, solving and making, which requires a lot of practice before it can be done in an informed way. Therefore, Design & Technology education must provide pupils with opportunities to practise decision making broadly. When pupils have learned how to make their own decisions and have the freedom to do so, they can make their own decisions anytime, anywhere. Design can have many functions. Design can be used to do research and construct knowledge, to think out solutions and make them, or to re-create reality to someone’s personal taste. In turn, technology is an important means to experiment with the design in reality, to fine-tune the knowledge or idea.
Children go to school to prepare for their future lives, so personal development should be an important goal of learning, and tasks are needed that focus on it. The exercise of deciding for themselves how to approach design and technology is useful for personal development, and Design & Technology education can offer such exercises. In this way, children can discover that it is enjoyable to be able to decide for themselves. By being allowed to decide for themselves how they learn, pupils can make use of their strengths and work on their weaknesses. They can also discover that it is useful to be able to decide for themselves. Through the discoveries made during these exercises, their personal development grows. The result, a well-matured personal development, will manifest itself in social behaviour, flexibility and creativity. Although Design & Technology activities have a huge potential, many teachers find that children are not always engaged in these activities. That is a problem because, without engagement, learning is impeded.","","en","doctoral thesis","","978-94-6419-202-5","","","","","","","","","Science Education and Communication","","",""
"uuid:79a1f6f1-52d1-48a9-a0df-03c1b1ce0ac6","http://resolver.tudelft.nl/uuid:79a1f6f1-52d1-48a9-a0df-03c1b1ce0ac6","Dissolution and Electrochemical Reduction of Rare Earth Oxides in Fluoride Electrolytes","Guo, X. (TU Delft Team Yongxiang Yang)","Yang, Y. (promotor); Sietsma, J. (promotor); Delft University of Technology (degree granting institution)","2021","Rare earth elements (REEs) are a group of 17 metallic elements, including 15 lanthanides, scandium and yttrium, which have remarkably similar chemical and physical properties. Nowadays, rare earth metals are widely used in such fields as electronics, petroleum, and metallurgy. Rare earth elements are considered the vitamins of modern industry and critical resources for many countries.
Neodymium is a light lanthanide, and its demand has been substantially boosted due to the broad application of NdFeB permanent magnets in electronics and new energy industries.
Oxide-fluoride electrolysis is the main commercial method to produce rare earth metals and their alloys, especially light lanthanides, in both primary and secondary production. The oxide-fluoride electrolysis process involves first the dissolution of rare earth oxide(s) (REO(s)) in a molten fluoride, which serves as both a solvent and an electrolyte. During an electrochemical process, rare earth cations are reduced at the cathode and the respective metal is formed. Even though this method was scaled up from the laboratory to industrial production about 50 years ago, the exact mechanism of the process has not been fully clarified. A deeper understanding of the process from both physicochemical and electrochemical points of view is crucial for process optimization, improving its current efficiency and reducing its power consumption. Maintaining enough REOs in the electrolyte and achieving fast dissolution are crucial factors for good industrial practice. Identifying the electrochemical reactions involved during the electrolysis is vitally important for promoting target reactions and restricting side reactions, which are linked directly to the economic indicators of the process.
Therefore, this thesis focuses on the solubility of REOs in molten fluorides, the development of a semi-empirical model for the estimation of REO solubility, the dissolution behavior of Nd2O3 in molten fluoride, and the electrochemical behavior of Nd(III) in the fluoride melt.","rare earth metals; oxide-fluoride electrolysis; solubility; dissolution kinetics; electrochemical reduction","en","doctoral thesis","","","","","","","","","","","Team Yongxiang Yang","","",""
"uuid:a1b062bb-876e-4271-9c00-db3c8aa866dd","http://resolver.tudelft.nl/uuid:a1b062bb-876e-4271-9c00-db3c8aa866dd","Responsible Innovation in Data-Driven Biotechnology","Bruynseels, K.R.C. (TU Delft Ethics & Philosophy of Technology)","van den Hoven, M.J. (promotor); Delft University of Technology (degree granting institution)","2021","Innovations in biotechnology increasingly shape our societies and our planet. The stream of innovations witnessed in the past few decades opens up new ways to do agriculture, to provide healthcare, and to produce compounds and materials, amongst many other things. Many of these innovations rely on biological data. The ability to extract a plethora of biodata has vastly increased over the past few decades. These biodata provide deeper insights into the workings of biological systems, thus constituting a fertile ground for bio-inspired innovations. For example, population genomics data provides the basis for personalized health care. Biodiversity data derived from ecosystems provides the basis for the identification of novel drugs, high-value chemicals and materials. Biodata is increasingly crucial when aiming for a flourishing bio-economy and biomedicine. At the same time, biodata-based innovations raise substantial ethical questions. Pronounced cases like human genome editing, or the engineering of entire species via gene drive technologies, make clear that innovations need to go hand in hand with societal deliberation and an ethical accompaniment of technology development. The question, though, is how such responsible innovation can be organized. Biodata is extracted at a speed that surpasses Moore’s law, and the resulting biodata-based innovations are fast-paced. It is therefore essential to consider how responsible guidance of innovation in biotechnology can be accomplished, in view of this biodata avalanche. This question provides the entry point for this dissertation.
Central to this analysis is the special ontological and epistemological position of biodata. Biodata resides at the interface between the biophysical world and the realm of human language and meaning. This makes biodata a central locus when pursuing a value-driven accompaniment of innovation in the field of biotechnology.","","en","doctoral thesis","","978-90-386-5265-8","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:631f552e-26b2-403b-b0fd-2756ba2e3a1d","http://resolver.tudelft.nl/uuid:631f552e-26b2-403b-b0fd-2756ba2e3a1d","Quality failures in Energy-saving renovation projects in Northern China","Qi, Y. (TU Delft Housing Quality and Process Innovation)","Visscher, H.J. (promotor); Qian, QK (copromotor); Meijer, F.M. (copromotor); Delft University of Technology (degree granting institution)","2021","The energy-saving renovation of existing buildings is a critical strategy for achieving long-term energy goals in the Chinese context. However, in China, building energy renovation projects are subject to quality failures that result in energy wastage, decreased energy efficiency, and increased project cost, thus negatively affecting the overall performance of the renovation projects. To avoid such failures in the future, it is essential to find and analyse the causes of quality failures in energy-saving renovation projects. Therefore, using a four-step process, this research aims to deepen the understanding of the causes of quality failures in energy-saving renovation projects of existing residential buildings. The first and second steps are to identify and analyse the quality failures and their causes. Deeper insights from a quality management perspective are explored in the third step. The fourth step is to investigate how the actors and their interactions affect and cause quality failures during the renovation policy implementation process. This research primarily identifies the causes of quality failures in building energy renovation projects. It is important to state that most of the quality failures can be avoided at the management level. Some external causes originate at the policy level, outside the project.
The findings of this research would be valuable for policy-makers and project coordinators both for predicting and avoiding quality failures and for developing proper action and policy interventions to ensure successful building energy renovations in the future.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-415-8","","","","","","","","","Housing Quality and Process Innovation","","",""
"uuid:aecd60c9-f7f9-416a-92ff-80261d7c954c","http://resolver.tudelft.nl/uuid:aecd60c9-f7f9-416a-92ff-80261d7c954c","Degradation of Biomass Pellets during Transport, Handling and Storage: An experimental and numerical study","Gilvari, H. (TU Delft Large Scale Energy Storage)","Schott, D.L. (promotor); de Jong, W. (promotor); Delft University of Technology (degree granting institution)","2021","Presently, biomass pellets play a significant role in energy transition scenarios worldwide. Due to the lack of local supplies, many countries import their pellets from countries with enormous resources. For instance, in Europe, a big share of pellets is imported from the USA, Canada, and Asian countries. Pellets are normally transferred in bulk using ocean vessels with a capacity of up to 40,000 metric tons. Due to mechanical forces and environmental changes throughout the transport and storage steps, pellets are prone to degradation. This may degrade pellets physically or chemically. As a result, fines and dust are generated. Moreover, as pellets absorb and adsorb moisture from the environment, the moisture content and the heating value of pellets may change, and this may also weaken the physical structure because of swelling. The presence of fines and dust may lead to self-ignition and dust explosion, material loss, equipment fouling, and environmental and health issues. The goal of this dissertation is to investigate to what extent biomass pellets degrade during transport and storage. To achieve this, first, we conducted an extensive literature review to reveal the factors that affect the extent of degradation of pellets. Moreover, we studied the commonly used methods to assess the quality parameters and the degradation behavior of pellets in detail. Then, we carried out a series of experiments on physical and chemical degradations of pellets from laboratory to large-scale and analyzed them in the operational and environmental context. 
By conducting these experiments, we unveiled the relationship between the laboratory test results and the pilot or large-scale transport impact on the proportion of generated fines. Furthermore, a model in the discrete element method (DEM) was developed and used to simulate the breakage pattern of individual pellets under the compression test. The model shows high fidelity in simulating the breakage behavior of pellets under compressive forces in two directions.","biomass pellets; transport and storage; mechanical strength; discrete element methods; breakage and degradation","en","doctoral thesis","","978-94-6421-329-4","","","","","","","","","Large Scale Energy Storage","","",""
"uuid:8810b5a9-91d3-4add-9b39-1d466e4e7dd1","http://resolver.tudelft.nl/uuid:8810b5a9-91d3-4add-9b39-1d466e4e7dd1","Opportunistic Adaptation: Using the Urban Renewal Cycle to Adapt to Climate Change","Nilubon, P. (TU Delft Hydraulic Structures and Flood Risk)","Zevenbergen, C. (promotor); Veerbeek, W. (copromotor); Delft University of Technology (degree granting institution)","2021","Urban climate adaptation currently focusses mainly on hazards but often ignores opportunities that arise in both space and time. Opportunistic Adaptation provides a rationalized approach to mainstream climate adaptation measures into urban renewal cycles. Adaptation opportunities are identified by projecting the lifespans of urban assets into the future to obtain an operational urban adaptation agenda. Upscaling of the adaptation process is done by synchronizing the end of the lifecycle of a group of assets to develop adaptation clusters that comprise multiple dwellings, infrastructure as well as public spaces. An extensive catalogue of adaptation measures for different scale-levels ensures flexibility in the type of measures that can be integrated. Sequencing the adaptation measures over long periods of time provides insight and flexibility in the long-term protection standards that can be achieved. By applying a design-centered approach, the potential for obtaining co-benefits in the urban landscape is maximized. The clustering of nature-based solutions is considered, which ensures that the delivery of ecosystem services is maximized. This research aims to assess the adaptation potential of Bangkok, based on a case study area (Lat Krabang), by mapping the adaptation opportunities and flood vulnerability.
The resulting outputs will contribute to the development of a flexible and inclusive FRM strategy.","Climate Adaptation; Flood risk management; Flexibility in adaptation; Lifecycle assessment; Opportunistic adaptation; Urban renewal cycle","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","9781032055091","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:a8c89eb8-b60f-4904-8425-f38fcb956a15","http://resolver.tudelft.nl/uuid:a8c89eb8-b60f-4904-8425-f38fcb956a15","Full-Waveform Inversion for Breast Ultrasound","Taskin, U. (TU Delft ImPhys/Medical Imaging)","de Jong, N. (promotor); Verschuur, D.J. (promotor); van Dongen, K.W.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Breast cancer is the most common type of cancer for women, and in developed countries it forms one of the largest threats to their health. Many studies have shown that early detection by screening is important for achieving successful treatment and reducing the mortality rate. Nowadays, mammography is the gold standard for breast cancer screening. However, mammography has several drawbacks, including the use of ionizing radiation, a painful procedure, and poor performance with dense breasts. Magnetic resonance imaging (MRI) could form an alternative, as it has some powerful features. However, the high examination and equipment costs as well as the use of contrast agents limit its applicability. Another potential alternative for breast cancer screening is ultrasound. Ultrasound has the advantage over mammography or MRI that it is safe, cheap and patient-friendly. With ultrasound, a tumor can be detected since healthy breast tissues and cancerous tissues have different acoustic properties. All these features make ultrasound a promising candidate as a screening modality for breast cancer. Hand-held ultrasound scans are frequently used for breast imaging in hospitals. With these scanners, reflectivity images are generated. These images typically show the boundaries between different tissues. Even when these exams are conducted by trained radiologists, operator-dependency occurs. To eliminate this, automated full-breast ultrasound scanners have been developed in which the transducer slides over the breast. However, as the imaging principle remains the same, only reflectivity images are generated.
To avoid significant breast deformation as well as to scan the breast from as many sides as possible, water-bath scanning systems have been developed. These systems have the additional advantage that both reflection and transmission measurements are obtained. This mixture of different measurement types makes it feasible to obtain better images by employing advanced processing techniques. One promising imaging method is full-waveform inversion (FWI). FWI aims to match a modeled wavefield to a measured wavefield by adjusting the acoustic medium parameters. A minimization problem is constructed and solved to this aim. As a result, images showing quantitative information about the different tissues are obtained. This quantitative information aids the characterization and identification of the different tissues. However, there are some challenges when applying FWI. One of the biggest challenges is its computational complexity. By the inclusion of wave phenomena such as diffraction, refraction, scattering and dispersion, needed to explain the measured data in great detail, the computational complexity of FWI has become significantly larger than that of conventional, mainly ray-based, imaging methods. In this work, we investigate the applicability of contrast source inversion (CSI) as an FWI method for breast ultrasound. To this end, we first introduce our full-waveform forward modeling method, which is based on solving an integral equation. With a synthetic example, we investigate how each medium parameter (compressibility, density, and attenuation) affects the scattered pressure field. The obtained results show that attenuation, in contrast to compressibility and density, has little effect on the wavefield for frequencies below 1 MHz. From this we conclude that for these frequencies only attenuation can be neglected in our inversion.
We also compare the results from our full-waveform modeling method with results obtained after commonly made approximations such as Born, ray-based and paraxial approximations. We observe from the presented numerical results that with each approximation important phenomena normally present in the full-wave data are absent. For this reason, we recommend to use a full-wave modeling method to compute synthetic measurement data.","","en","doctoral thesis","","978-94-6384-211-2","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:2a2a3b0d-7dee-4518-b96d-42dd58492ffd","http://resolver.tudelft.nl/uuid:2a2a3b0d-7dee-4518-b96d-42dd58492ffd","Soft robotic manipulators with proprioception","Scharff, R.B.N. (TU Delft Materials and Manufacturing)","Wang, C.C. (promotor); Geraedts, Jo M.P. (promotor); Wu, J. (copromotor); Delft University of Technology (degree granting institution)","2021","Agriculture and horticulture depend heavily on human labor to perform tasks that are often dirty, hazardous, and highly repetitive. One reason for the lack of automation of these tasks is the absence of suitable robotic handling equipment. Rigid robotic manipulators are typically incapable of performing dexterous manipulation tasks such as harvesting apples as they lack the ability to adapt to objects of various shapes and sizes. Such robotic manipulators need a large number of sensors and actuators to overcome these challenges, making them overly complex and not very robust. Therefore, the development of robotic manipulators for dexterous manipulation tasks has begun to focus on morphological computation, in which at least some aspects of the control are outsourced to the body of the robot. Taking inspiration from grasping mechanisms in natural systems, the field of soft robotics attempts to address this problem by constructing robots from soft materials. Although soft robotics may be the key to realizing automation of dexterous manipulation tasks, the current commercially available soft robotic grippers are only capable of performing simple pick-and-place tasks with open-loop control. This limited capability is in large part due to a lack of techniques to endow these manipulators with a sense of self-movement and body position, known as proprioception. Proprioception is a simple problem for conventional robots with rigid members and discrete joints, as the body position can be easily reconstructed using the information from encoders in the robots’ joints. 
However, it is a highly challenging problem for soft robots with virtually infinite degrees of freedom and above all, no suitable off-the-shelf sensors…","Soft Robotics; Proprioception","en","doctoral thesis","","978-94-6384-193-1","","","","","","","","","Materials and Manufacturing","","",""
"uuid:d07534b8-d3bf-404e-ae6e-ccf1acc7bc1d","http://resolver.tudelft.nl/uuid:d07534b8-d3bf-404e-ae6e-ccf1acc7bc1d","Aqueous two-phase systems applied to the enzymatic hydrolysis of sugarcane bagasse: Screening methodology, thermodynamic modelling and process design","Consorti Bussamra, B. (TU Delft BT/Bioprocess Engineering)","Ottens, M. (promotor); van der Wielen, L.A.M. (promotor); Carvalho da Costa, Aline (promotor); Mussatto, Solange (promotor); Delft University of Technology (degree granting institution); Unicamp, Campinas (degree granting institution)","2021","Aiming to improve the utilization of lignocellulosic residues in the ethanol processing industry, this work tested whether the product inhibition of the enzymatic hydrolysis could be relieved by extractive reaction using aqueous two-phase systems (ATPS). The performance of enzymatic hydrolysis in ATPS is not well defined in the literature. In this thesis, this extractive reaction was tested in terms of experimental conversion of sugarcane bagasse, simulations through conceptual process design, and economic feasibility. A thermodynamic framework was developed in order to predict ATPS formation. The screening of ATPS and the partition coefficients of the solutes were performed in a high-throughput station. The ATPS were composed of polymer and salt. The enzymes were represented by the enzymatic cocktail Cellic CTec (Novozymes). The development of this platform consisted of two main parts: determination of phase diagrams (binodal curves and tie lines) and quantification of the solutes (sugar and proteins) in both top and bottom phases. The most promising ATPS were experimentally explored for enzymatic hydrolysis of sugarcane bagasse. Process design simulated two scenarios: hydrolysis occurring in the bottom phase and in the top phase. Topics such as the adsorption of phase-forming components to the bagasse fibers and the influence of enzyme load on the hydrolysis were explored.
The sugarcane bagasse hydrolysis in ATPS was conceptually assessed through the implementation of a model composed of two parts: hydrolysis and ATPS multi-batch separation. The designed case, characterized by ATPS hydrolysis, was compared to the base case, defined as conventional hydrolysis. Regarding the thermodynamic modelling of ATPS, the application of the Flory-Huggins (FH) model to predict phase separation in polymer-salt systems was assessed. The implementation and analysis of FH theory involved the estimation of the interchange energy (w_ij) and the calculation of phase diagrams. There were no statistical differences in determining the phase diagram in HTP platforms and at bench scale, verifying the reliability of the methods and equipment suggested in this work. Moreover, tailored approaches to quantify the solutes were presented, taking into account the limitations of techniques that can be applied with ATPS due to the interference of phase-forming components with the analytics. This fast methodology proposes to screen up to six different polymer-salt systems in eight days and supplies the results to understand the influence of sugar and protein concentrations on their partition coefficients. Experimentally exploring the ATPS hydrolysis provided strategies on how to conduct extractive enzymatic hydrolysis in ATPS and how to use the experimental results to design a feasible process. In the conceptual design of extractive enzymatic hydrolysis, one of the major bottlenecks identified was the partitioning of glucose to both phases. The resultant conceptual process design operates as a tool to evaluate ATPS hydrolysis and compare it to the conventional one. On the other hand, the thermodynamic model could not quantitatively describe the data.
This occurs mainly because of the strong influence of random experimental errors on the estimation of the interchange energy, systematic errors when translating the observed data to calculated partition concentrations, and FH not being an exact description of phase separation in salt-based ATPS. The high-throughput screening methodology indicated ATPS able to partition sugar and enzymes. The selected ATPS presented no significant improvement in the enzymatic conversion of sugarcane bagasse compared to conventional hydrolysis. The main reasons were the influence of phase-forming components on the enzymatic activity and the low selectivity of sugars in the ATPS. For the application of ATPS in the ethanol processing industry, the recovery and reuse of the phase-forming components are imperative for economic feasibility. Moreover, the developed high-throughput platform could be further employed to exhaustively screen systems to design effective ATPS for the partition of sugars and proteins in polymer-salt systems.","aqueous two-phase systems (ATPS); Process design; sugarcane bagasse; enzymatic hydrolysis; Thermodynamic analysis","en","doctoral thesis","","978-94-6366-411-0","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:d332c7e3-87be-4ed6-aa71-e629ef77e07a","http://resolver.tudelft.nl/uuid:d332c7e3-87be-4ed6-aa71-e629ef77e07a","Investigation of turbulence-surface interaction noise mechanisms and their reduction using porous materials","Zamponi, R. (TU Delft Novel Aerospace Materials; TU Delft Wind Energy)","Scarano, F. (promotor); Schram, C (promotor); Ragni, D. (promotor); Delft University of Technology (degree granting institution)","2021","The interaction of an airfoil with incident turbulence is an important source of aerodynamic noise in numerous applications, such as turbofan engines, cooling systems for automotive and construction industries, high-lift devices on aircraft wings, and landing gear systems. In these instances, turbulence is generally produced by elements that are installed upstream of the wing profile and generate inflow distortions. A possible strategy for the reduction of turbulence-interaction noise, also referred to as leading-edge noise, is represented by the integration of porous media in the structure of the airfoil. However, the physical mechanisms involved in this noise mitigation technique remain unclear. The present thesis aims to elucidate these phenomena and, particularly, how porosity affects the incoming turbulence characteristics in the immediate vicinity of the surface. This problem has been addressed from different perspectives, namely from the technological, experimental, and analytical ones. An innovative design for a porous NACA-0024 profile fitted with melamine foam is proposed. The noise reduction performance achieved with such a porous treatment is evaluated through a novel version of the generalized inverse beamforming (GIBF) implemented with an improved regularization technique. The algorithm is first applied to different experimental benchmark datasets in order to evaluate its ability to reconstruct distributed aeroacoustic sources and to assess its accuracy and variability in different conditions. 
Results indicate that the implemented method provides an enhanced representation of the distributed noise-source regions and higher performance in terms of accuracy and variability compared with other common beamforming techniques. GIBF is then employed together with far-field microphone measurements to characterize the leading-edge noise radiated by solid and porous NACA-0024 profiles immersed in the wake of an upstream cylindrical rod at different free-stream velocities. A noise reduction of up to 2 dB is found for frequencies around the vortex-shedding peak, with a trend that is independent of the Reynolds number, whereas significant noise regeneration is observed at higher frequencies, most probably due to surface roughness. Subsequently, the flow-field alterations due to porosity in the stagnation region of the airfoils are investigated by means of mean-wall pressure, hot-wire anemometry, and particle image velocimetry measurements. The porous treatment mostly preserves the integrity of the NACA-0024 profile’s shape but yields a wider opening of the jet flow that increases the drag force. Moreover, porosity allows for damping of the velocity fluctuations near the surface and has limited influence on the upstream mean-flow field. In particular, the upwash component of the root-mean-square of the velocity fluctuations turns out to be significantly attenuated in a porous airfoil in contrast to a solid one, resulting in a strong decrease of the turbulent kinetic energy in the stagnation region. This effect is more pronounced for higher Reynolds numbers. The mean spanwise vorticity close to the body also appears to be mitigated by the porous treatment. Furthermore, the comparison between the power spectral densities of the incident turbulent velocities demonstrates that porosity has an effect mainly on the low-frequency range of the turbulent-velocity spectrum, with a spatial extent up to about two leading-edge radii from the stagnation point.
In addition, the vortex-shedding frequency peak in the power spectrum of the streamwise velocity fluctuations close to the airfoil surface is found to be suppressed by porosity. The present results show analogies with the outcomes of the aeroacoustic analysis, highlighting the important role played by the attenuated turbulence distortion due to the porous treatment of the airfoil in the corresponding noise reduction. An analytical model based on the rapid distortion theory (RDT) to predict the turbulent flow around a porous cylinder is formulated with the aim of improving the understanding of the effect of porosity on turbulence distortion and interpreting the experimental results. The porous treatment, characterized by a constant static permeability, is modeled as a varying impedance boundary condition applied to the potential component of the velocity that accounts for Darcy’s flow within the body. The RDT implementation is first validated through comparisons with published velocity measurements in the stagnation region of an impermeable cylinder placed downstream of a turbulence grid. Afterwards, the impact of porosity on the velocity field is investigated through the analysis of the one-dimensional velocity spectra at different locations near the body and the velocity variance along the stagnation streamline. The porous surface affects the incoming turbulence distortion near the cylinder by reducing the blocking effect of the body and by altering the vorticity deformation caused by the mean flow. The former leads to an attenuation of the one-dimensional velocity spectrum in the low-frequency range, whereas the latter results in an amplification of the high-frequency components. This trend is found to be strongly dependent on the turbulence scale and influences the evolution of the velocity fluctuations in the stagnation region. 
The porous RDT model is finally adapted to calculate the turbulence distortion in the vicinity of the porous NACA-0024 profile’s leading edge. The satisfactory agreement between predictions and experimental results suggests that the present methodology can improve the understanding of the physical mechanisms involved in the airfoil-turbulence interaction noise reduction through porosity and can be instrumental in designing such passive noise-mitigation treatments.","Aeroacoustics; Turbulence-interaction noise; Porous materials; Beamforming; Rod-airfoil configuration; Rapid distortion theory","en","doctoral thesis","","978-2-87516-163-5","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:3353f734-2d23-4dbd-b80d-ffac899c69e8","http://resolver.tudelft.nl/uuid:3353f734-2d23-4dbd-b80d-ffac899c69e8","Thermodynamic Effects in Enzyme Regulation, Stereochemistry and Process Control","Marsden, S.R. (TU Delft BT/Biocatalysis)","Hanefeld, U. (promotor); McMillan, D.G.G. (copromotor); Delft University of Technology (degree granting institution)","2021","Thiamine diphosphate dependent enzymes are excellent catalysts for the asymmetric synthesis of the α-hydroxyketone (acyloin) structural motif, which is found in many pharmaceuticals and fine chemicals. In chapter 2, variants of transketolase from Saccharomyces cerevisiae were screened for the conversion of aliphatic aldehydes with hydroxypyruvate as donor substrate. The formation of a new hydrogen bond network was observed in the most successful variant D477E, which allowed for the accommodation of hydrophobic aldehydes within the enzyme’s polar active site. Decarboxylation of hydroxypyruvate was shown to render the carboligation reaction kinetically controlled, correcting the preceding notion of an irreversible conversion of substrates in literature.","thermodynamics; kinetic control; thiamine diphosphate; aldolase","en","doctoral thesis","","","","","","","","","","","BT/Biocatalysis","","",""
"uuid:1d5f7ea6-8464-48dd-b593-f2cba9c1f493","http://resolver.tudelft.nl/uuid:1d5f7ea6-8464-48dd-b593-f2cba9c1f493","Design for Sanitation: How does design influence train toilet hygiene?","Loth, M. (TU Delft Applied Ergonomics and Design)","van Eijk, D.J. (promotor); Molenbroek, J.F.M. (copromotor); Delft University of Technology (degree granting institution)","2021","Humans are travelling, and they may need a toilet on the go.
However, they try to avoid this toilet because they perceive it as being dirty.
This research project improved the Dutch train toilet's hygiene by reducing physical, mental, and social distances between toilet, dirt, and train travellers.","train; toilet; hygiene; accessibility; observational research; urinal","en","doctoral thesis","","978-94-6421-320-1","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:aac5f17a-63d5-45c7-9570-3cea057cd016","http://resolver.tudelft.nl/uuid:aac5f17a-63d5-45c7-9570-3cea057cd016","Carving Information Sources to Drive Search-Based Crash Reproduction and Test Case Generation","Derakhshanfar, P. (TU Delft Software Engineering)","Zaidman, A.E. (promotor); van Deursen, A. (promotor); Panichella, A. (copromotor); Delft University of Technology (degree granting institution)","2021","Software testing is one of the essential and expensive tasks in software development. Hence, many approaches have been introduced to automate different software testing tasks. Among these techniques, search-based test generation techniques have been widely applied in real-world cases and have shown promising results. These strategies apply search-based methods to generate tests according to various test criteria such as line and branch coverage. In this thesis, we introduce new search objectives and techniques using various kinds of knowledge carved from resources like source code, hand-written test cases, and execution logs. These novel search objectives and approaches (i) improve the state-of-the-art in search-based crash reproduction, (ii) present a new search-based approach to generate class-integration tests covering interactions between two given classes, and (iii) introduce two new search objectives for covering common/uncommon execution patterns observed during software production.","Search-based Software Testing; Crash Reproduction; Class Integration Testing; Carving Information Sources","en","doctoral thesis","","978-94-6421-312-6","","","","","","","","","Software Engineering","","",""
"uuid:cbec4bb0-a54c-424a-b1ba-d32c5567b366","http://resolver.tudelft.nl/uuid:cbec4bb0-a54c-424a-b1ba-d32c5567b366","Exploring persuasive technology in the context of health and wellbeing at work","de Korte, E.M. (TU Delft Applied Ergonomics and Design)","Vink, P. (promotor); Kraaij, W (promotor); Wiezer, N.M. (copromotor); Delft University of Technology (degree granting institution)","2021","We are not able to imagine life without technology. We use technology for almost every task in our daily life and also in the work setting, technology is everywhere around us. Developments in ICT have brought about many changes in work, and these changes will continue as technology evolves.
In her thesis, Elsbeth de Korte explores the potential of persuasive technology to improve health and wellbeing at work. Persuasive technology is designed to change attitudes or behaviors of users through persuasion and social influence, without coercion. With apps, sensors and data, behavior, physical and mental activity, and bodily functions can be monitored. Smart algorithms are used to provide active feedback to the user, to help them achieve their goals. Persuasive technology shows real potential to drive improvements in working life, to reduce health risks or to better manage risk factors. However, can we trust persuasive technology? On which theories, models or standards do they base their feedback and recommendations? Are they effective? Who actually profits from persuasive technology? These questions need to be answered to explore how, where and for whom persuasive technology can be meaningfully implemented.
One of these new views comes from the field of graph signal processing which provides models and tools to understand and process data coming from such complex systems. With a principled view, coming from its signal processing background, graph signal processing establishes the basis for addressing problems involving data defined over interconnected systems by combining knowledge from graph and network theory with signal processing tools. In this thesis, our goal is to advance the current state-of-the-art by studying the processing of network data using graph filters, the workhorse of graph signal processing, and by proposing methods for identifying the topology (interactions) of a network from network measurements.
To extend the capabilities of current graph filters, the network-domain counterparts of time-domain filters, we introduce a generalization of graph filters. This new family of filters does not only provide more flexibility in terms of processing networked data distributively but also reduces the communications in typical network applications, such as distributed consensus or beamforming. Furthermore, we theoretically characterize these generalized graph filters and also propose a practical and numerically-amenable cascaded implementation.
As all methods in graph signal processing make use of the structure of the network, we need to know the topology. Therefore, identifying the network interconnections from networked data is essential for appropriately processing this data. In this thesis, we pose the network topology identification problem through the lens of system identification and study the effect of collecting information only from part of the elements of the network. We show that by using the state-space formalism, algebraic methods can be applied successfully to the network identification problem. Further, we demonstrate that for the partially-observable case, although ambiguities arise, we can still retrieve a coherent network topology by leveraging state-of-the-art optimization techniques.","distributed processing; graph filtering; graph theory; graph signal processing; topology identification","en","doctoral thesis","","978-94-6416-560-9","","","","","","","","","Signal Processing Systems","","",""
"uuid:b232e542-4881-4b02-8677-a7b1dd37b6b0","http://resolver.tudelft.nl/uuid:b232e542-4881-4b02-8677-a7b1dd37b6b0","Grabs and Cohesive Bulk Solids: Virtual prototyping using a validated co-simulation","Mohajeri, M. (TU Delft Transport Engineering and Logistics)","Schott, D.L. (promotor); van Rhee, C. (promotor); Delft University of Technology (degree granting institution)","2021","Due to the high demand of iron ore products in the steel industry, they have the largest share in dry bulk trading per year, above coal and grains. Approximately 9000 Cape-size bulk carriers with capacities up to 400 000 tonnes (DWT) transport the annual demand of iron ore to destination ports. Grabs are employed extensively to unload iron ore from ship holds. A fast and reliable unloading process is required to maintain a minimized cost for port operators and to deliver iron ore products to customers on time. In practice, many factors, such as moisture, varying material properties over the cargo depth and grab’s dynamics, contribute in creating challenges for achieving the desired performance during the unloading process. A solution for improving the unloading process is to enhance the design of grabs by using simulation-based methods. This enables a higher mass of iron ore to be collected per grab cycle, thus minimizing the total unloading time of a bulk carrier. Virtual prototyping of grabs is a novel simulation-based method that allows for evaluating the design performance in an affordable way. The virtual prototype of a grab as it interacts with bulk material are co-simulated at full-scale by coupling two different solvers: Discrete Element Method (DEM) and MultiBody Dynamics (MBD). The co-simulation requires virtual crane operator, CAD model of grab connected to a crane, and calibrated DEM material model as inputs. Over the past decade, reliable DEM calibration procedures have been developed to model free-flowing bulk solids, such as iron ore pellets, sand and gravel. 
However, due to their moisture content, the majority of iron ore products show cohesive and stress-history dependent behaviours, which should be considered in the calibration procedure. Additionally, considering the particle size and shape of such fine iron ore products, the extreme computation time of DEM simulations is a challenge to be solved. Furthermore, a grab is often used to handle a broad variety of iron ore cargoes that differ in their properties, such as moisture content, shear strength and bulk density. The variability of bulk solid properties influences the grabbing process considerably, and thus the grab’s efficiency. The primary objective of this dissertation is to develop an accurate co-simulation of a grab and cohesive iron ore, and to utilize it for optimizing virtual prototypes. Once the properties of an iron ore product in interaction with equipment are characterized, a reliable multi-variable calibration procedure needs to be employed to set the various input parameters of a DEM material model, including continuous and categorical variables. Furthermore, once proper scaling rules are applied to the DEM simulation, a full-scale grab-material co-simulation can be set up to be validated. Next, by determining the optimal settings of design variables, the effect of bulk cargo variation on the grab’s efficiency can be minimized. This is the fundamental strategy of robust grab design. Bulk terminal operators value grabs that are optimized for multiple objectives, including a maximized efficiency with a minimized deviation.","grab; discrete element method; cohesive bulk material; iron ore; virtual prototype optimization; Full-scale validation; Design of experiments (DoE)","en","doctoral thesis","","978-94-6421-324-9","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:0bfb7a4a-366a-4492-b897-741d3422f9ff","http://resolver.tudelft.nl/uuid:0bfb7a4a-366a-4492-b897-741d3422f9ff","Optimal Decision Making for Aircraft Maintenance Planning: From Maintenance Check Scheduling to Maintenance Task Allocation","Deng, Q. (TU Delft Air Transport & Operations)","Mulder, Max (promotor); Santos, Bruno F. (copromotor); Delft University of Technology (degree granting institution)","2021","Aircraft maintenance is the process of overhaul, repair, inspection, or modification of an aircraft or aircraft systems, components, and structures, to keep these in an airworthy condition. Airlines must perform regular maintenance on their fleet to keep their aircraft airworthy and, ultimately, prevent any systems or components failures during commercial operations. Coupled with the rapid growth of the global commercial aircraft fleet, aircraft maintenance demands have increased significantly in the past few decades. Since aviation is a very competitive industry, the growing aircraft maintenance demands and associated operation costs put a huge financial burden on airlines, forcing them to reduce costs while still respecting safety regulations. Therefore, airlines are laying increasing emphasis on planning aircraft maintenance efficiently. An efficient planning approach for aircraft maintenance is a dual-edged sword. It reduces not only the time and effort of organizing maintenance tasks and coordinating maintenance activities but also increases the time fleet availability for operations and associated revenues. Before introducing wide-body aircraft in the 1970s, airlines used a bottom-up, task-oriented approach to plan aircraft maintenance, as then the commercial fleet sizeswere small. Nowadays, most airlines adopt a top-down approach, and first groups the maintenance tasks with the same or similar inspection intervals into a large task block. 
These, in turn, are commonly divided into four types and labeled as: A-check (every 4–6 months), B-check (every 4–6 months), C-check (every 18–24 months), and D-check (every 6–10 years). After planning the letter checks, airlines further determine the maintenance tasks to be added or removed in each letter check. This dissertation innovates the aircraft maintenance planning (AMP) process by presenting a comprehensive digital solution. It replaces the current sequential computer-aided manual approach with an integrated scheduling methodology to automate the aircraft maintenance planning process. Given a specific time horizon, it considers all check types together when making the maintenance check decisions and generates the optimal schedules for all letter checks in one comprehensive solution. After that, it generates a long-term (3–5 years) task execution plan based on the optimal maintenance check schedule. These features are integrated into a decision su","","en","doctoral thesis","","978-94-6366-398-4","","","","","","","","","Air Transport & Operations","","",""
"uuid:4dd0034d-587e-4b9b-9b97-0a24210af123","http://resolver.tudelft.nl/uuid:4dd0034d-587e-4b9b-9b97-0a24210af123","Ultrasonic welding of epoxy- to thermoplastic-based composites","Tsiangou, E. (TU Delft Aerospace Structures & Computational Mechanics)","Villegas, I.F. (promotor); Benedictus, R. (promotor); Teixeira De Freitas, S. (copromotor); Delft University of Technology (degree granting institution)","2021","Welding is a promising alternative to mechanical fastening, as currently used, to join dissimilar (i.e., thermoset- to thermoplastic-based) composite parts in modern aircraft. Thermoset composites can be indirectly welded through a thermoplastic coupling layer co-cured on the surface of the laminate that needs to be welded. One of the main challenges when welding thermoset to thermoplastic composites, is the high welding temperatures that are needed to melt the thermoplastic matrix, especially when high-performance thermoplastic polymers are used such as in aerospace applications. The most efficient way to overcome this challenge is by ensuring very fast and localized heating in order to prevent thermal degradation mechanisms from occurring. Out of the currently most developed welding methods, ultrasonic welding can offer exceptionally short heating times of even less than 500 ms, which makes it an excellent candidate for joining thermoset and thermoplastic composites. However, further understanding of the process as applied to dissimilar composite joints is still lacking in order for it to be utilized in actual applications. The aim of this PhD thesis was to further the knowledge on ultrasonic welding of thermoset to thermoplastic composites by firstly identifying suitable practices for successfully welding the dissimilar composites and secondly assessing the robustness of the ultrasonic welding process with respect to changes in process parameters. 
The comparable strength of the welded, dissimilar composite joints to both co-cured, dissimilar composite joints and to welded, thermoplastic composite joints, demonstrated that ultrasonic welding is a very promising joining technique. Moreover, this process was proven to be robust (with respect to the variations in the heating time), since despite the sensitivity of the thermoset composite adherend to the high welding temperatures, a relatively wide processing interval, i.e., range of heating times that result in a certain mechanical performance, could be obtained. Additionally, the weld strength presented a certain degree of insensitivity to changes in the process parameters, i.e., welding force and amplitude of vibrations.","CFRP; thermoplastic composites; thermoset composites; ultrasonic welding; process parameters; energy director","en","doctoral thesis","","978-94-6421-307-2","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:8f654bb3-e951-4bc7-a303-50adf79f8155","http://resolver.tudelft.nl/uuid:8f654bb3-e951-4bc7-a303-50adf79f8155","Development and evaluation of a motorcycle riding simulator for low speed maneuvering","Grottoli, M. (TU Delft Intelligent Vehicles)","Happee, R. (promotor); Mulder, Max (promotor); Delft University of Technology (degree granting institution)","2021","Driving simulators have been extensively used over the last decades and technological advancements have propelled their development for cars, trucks and other vehicles with four (or more) wheels. This dissertation focuses on the use of driving simulators for two wheeled vehicles and in particular on the development and evaluation of a motorcycle riding simulator for low speed maneuvering. The reason to focus on low speed maneuvers is related to the unstable nature of motorcycles at low speeds. A dedicated riding simulator could be used to train riders to cope with vehicle instabilities and develop active safety systems that can help them to maintain the vehicle balanced and avoid falling. Existing riding simulators adopt simplified vehicle models to simulate motorcycle dynamics. In some cases, advanced non-linear models are adopted, but their validation is not always sufficiently described for the simulator application. Once the model has been integrated in the complete simulator, the results of its real-time simulation are used to provide feedback to the simulator rider through the cueing systems. Motion cueing is particularly interesting due to the peculiar vehicle dynamics of two wheelers. Different approaches are found in literature, however the applied motion cueing methods are not based on understanding of human motion perception. Finally, the riding simulator should also be validated for its usage in the specific application domain and its fidelity and behavioral validity are often neglected. 
In this thesis, specific aspects of development and validation of a riding simulator for low speed maneuvering are investigated.","Motorcycle Dynamics; Riding Simulator; Motion Cueing; Motion perception","en","doctoral thesis","","978-94-6421-323-2","","","","","","","","","Intelligent Vehicles","","",""
"uuid:b6ad7ddd-c660-4aab-8277-65f7a22a4a52","http://resolver.tudelft.nl/uuid:b6ad7ddd-c660-4aab-8277-65f7a22a4a52","Automatic Design of Verifiable Robot Swarms","Coppola, M. (TU Delft Control & Simulation; TU Delft Space Systems Egineering)","de Croon, G.C.H.E. (promotor); Gill, E.K.A. (promotor); Guo, J. (promotor); Delft University of Technology (degree granting institution)","2021","The paradigm of swarm robotics aims to enable several independent robots to collaborate together toward collective goals. The distributed nature of a swarm, whereby each robot acts independently in accordance with its perceived environment, is expected to provide the system with a high degree of flexibility, robustness, and scalability. However, this comes at the cost of increased system complexity. This thesis explores how to automatically design a collective behavior in a way that is transparent and verifiable. The thesis begins by taking a step back and analyzing the design choices that need to be made when designing a swarm of robots. Through an in-depth literature study, focusing on swarms of small drones as a case study, we found how sensor and actuator choices can create constraints for the swarm behavior that can be achieved, and how desired swarm behaviors can create requirements for the hardware design and local-level controllers. Coincidentally, we found a prominent example of this in our own research on relative localization sensors for swarms of tiny drones (performed in addition to the research in this thesis), whereby we developed a communication-based relative localization approach that enabled teams of tiny drones to fly together in tight areas, the advantages being: omni-directional sensing, independence from lighting conditions and/or visual clutter, low mass, and low computational costs. However, this solution also comes with the restriction of ensuring that robots never move parallel to each other, as this will present an unobservable situation. 
Based on such lessons, the remainder of the thesis aims for a framework that is agnostic with respect to the robot and the swarm's collective task. The framework proposed in this thesis is centered around the following notion: a collective goal can be broken down into a set of locally observable objectives which the robots can sense, referred to as ``desired'' objectives. The robots then take actions in order to reach these desired objectives. When all robots achieve the desired objectives, then the global goal and/or collective behavior emerges. This framework was first developed for the specific case study of pattern formation by cognitively limited robots, which could only sense the relative location of close-by neighbors. It was later generalized, and its use was demonstrated on other collective tasks, namely: aggregation, consensus, and foraging. Through a local model of agent transitions, it was possible to: 1) identify potential obstructions to achieving the collective goal, and 2) optimize the behavior of the robots so as to maximize the likelihood of achieving the desired objectives. The optimization is performed by an evolutionary algorithm that leverages the local model, whereby the fitness function maximizes the probability of being in a desired local state. Using this approach, the policy evaluation only scales with the size of the local state space, and demands much less computation than swarm simulations would. In the final stage of this research, a complete framework was further developed to alleviate the need to manually define the desired objectives as well as the local models required for potential verification and/or optimization. 
The framework uses a data-driven approach to automatically extract two models: 1) a deep neural network that estimates the global performance of the swarm from the distribution of local sensor data, and 2) a probabilistic state transition model that explicitly models the local state transitions (i.e., transitions in observations from the perspective of a single robot in a swarm) given a policy. The framework can efficiently lead to effective controllers, as demonstrated via multiple case studies. It can also be used in combination with an evolutionary optimization process, leading to higher efficiency, or for heterogeneous online learning. Overall, the methods and insights developed in this thesis propose a new way to approach the development of verifiable and understandable behaviors for swarms of robots, using models in order to perform analysis, verification, and optimization.","swarm intelligence; swarm robotics; distributed systems; micro air vehicles; robots; machine learning; verification","en","doctoral thesis","","978-94-6421-287-7","","","","","","","","","Control & Simulation","","",""
"uuid:2f633968-c018-44d4-a05c-0d513c86d8eb","http://resolver.tudelft.nl/uuid:2f633968-c018-44d4-a05c-0d513c86d8eb","In Pursuit of Success: Evaluating the Management of Engineering Projects, Cross-Sectoral Analysis of Project Management Efforts","Molaei, M.","Bakker, H.L.M. (promotor); Bosch-Rekveldt, M.G.C. (copromotor); Delft University of Technology (degree granting institution)","2021","Successful delivery of projects is the ultimate goal of many organisations. What is observed in practice, however, is that projects do not usually follow what is recommended in literature. Moreover, the dynamic nature of projects calls for continuous adjustments regarding the required project management practices contributing to performance. Therefore, this research aims at evaluating the current practice of managing engineering projects and investigating potential learning points across two main industry sectors: construction (including infrastructure) and process industry. The main output of this research is a model called “Nexcess model” that could help in improving project performance by providing practical recommendations. The model offers a space for interaction in which practitioners can understand the extent to which they can contribute positively to the performance by promoting an integrated approach.","Project management; project performance; engineering project; evaluation; cross-sector analysis","en","doctoral thesis","","978-94-6419-182-0","","","","","","2022-04-15","","","Integral Design & Management","","",""
"uuid:f74eb433-5a58-4055-a3ae-155c0b331495","http://resolver.tudelft.nl/uuid:f74eb433-5a58-4055-a3ae-155c0b331495","The balancing act: How public construction clients safeguard public values in a changing construction industry","Kuitert, L. (TU Delft Public Commissioning)","Hermans, M.H. (promotor); Volker, L. (promotor); Delft University of Technology (degree granting institution)","2021","Public bodies acting in the construction industry have to deal with major transitional issues, such as globalization and urbanization, population ageing, climate change and digitalization. Moreover, the public domain, private parties and society are becoming increasingly interdependent. As a result, safeguarding public values in the built environment has become ever more complex. Public bodies face the challenge to adhere to collective public values while confronted with private and societal values of external partners. This means that they have to deal with value pluralism and value-conflicts. In research, scarce attention has been paid to providing guidance to practitioners for dealing with multi-value trade-offs in operational processes. Hence, this research provides a construction-sector specific operationalization and a network perspective to the field of public value research. This research highlights the important role to be played by public commissioning in terms of safeguarding public values. It consists of three qualitative studies that utilize a range of different methods, including interviews, observations and document analysis. By this the research provides a contemporary perspective through which to study and execute the safeguarding of public values by public clients in the transition towards network governance in the construction industry. The dynamics of the sector-specific value interests of public construction clients, the occurrence of value conflicts in commissioning, and the safeguarding processes within both internal and external commissioning are studied. 
The practical implications derived from the research were translated into a value dialogue tool that can be used by public construction clients to professionalize safeguarding in their daily practice.","safeguarding public values; public commissioning; public construction client; value pluralism; value conflict; new public governance","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-401-1","","","","A+BE | Architecture and the Built Environment No. 6 (2021)","","","","","Public Commissioning","","",""
"uuid:10522213-d19e-4b33-98a1-e5f87bf64f79","http://resolver.tudelft.nl/uuid:10522213-d19e-4b33-98a1-e5f87bf64f79","Advances in semiconducting-superconducting nanowire devices","Borsoi, F. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Wimmer, M.T. (copromotor); Delft University of Technology (degree granting institution)","2021","After a century from the quantum description of nature, the scientific community has laid the basis for using nature's properties to our advantage. The quantum technology vision stems from the idea of capitalizing these principles in various sectors, such as computation and communication. However, in contrast to classical processors, encoding and processing quantum information suffer from the quantum states' fragility to environmental disturbances. To mitigate their susceptibility, disruptive proposals suggested encoding information in non-local degrees of freedom such as in pairs of delocalized Majorana modes in topological superconductors. Although these materials remain elusive in nature, it is possible to engineer solid-state devices with the same properties such as semiconducting-superconducting nanowires. Starting from this idea, experimental signatures of zero-energy Majorana modes have been accompanied in recent years by continuous theoretical validations and rejections. The refinement in the theoretical understanding aligns with the swift advances on the experimental side, and this thesis finds its place in this phase of advancement, focusing on the intricate physics of the building blocks of Majorana qubits and proposing solutions to various nanofabrication challenges. In particular, we consider with attention the challenge of reading out the Majoranas information by detecting changes in their transmission phase. To this purpose, the minimal circuit requires a phase-coherent interferometer embedding a semiconducting-superconducting segment. 
Despite the apparent simplicity of this experiment, the Majorana fingerprint in the transmission phase remains mostly unexplored due to the complexity of the circuit building blocks. Motivated by this challenge, our quest begins by considering each piece of the puzzle separately. We start by exploiting recent breakthroughs in the growth of nanowire-based interferometers to study the transmission phase of a large quantum dot, a setup similar to the one required for the Majorana read-out. The conductance of this Aharonov-Bohm loop manifests gate- and magnetic field-tunable Fano resonances that arise from the interference between electrons that travel through the reference arm and undergo resonant tunnelling in the dot. This experiment serves to point out the limitations of the currently available nanowire networks and to provide critical insights for the design of future topological interferometers. Thereafter, we explore the intricate physics of Coulomb semiconducting-superconducting wires, commonly known as hybrid island devices. Here, we demonstrate for the first time that InSb nanowires coupled to superconducting Al films manifest charging mediated by Cooper pairs of electrons. This observation implies that the low-energy spectrum of the semiconductor is fully proximitized by the superconductor, a fundamental requirement for achieving parity control in topological circuits. Starting from a Cooper pair condensate with an even electron parity, we can tune the nature of the island ground state with experimental knobs such as magnetic field and gate voltages. In particular, when a spin-resolved subgap state moves from the edge of the induced gap down to zero energy, single electrons can charge the island, leading to conductance oscillations with a gate-voltage periodicity that is half that of Cooper pairs.
By mapping out such a 2e-to-1e transition over large ranges of gate voltage and magnetic field, we identify potential topological regions where the 1e oscillations are caused by discrete subgap states oscillating around zero energy. Part of the challenges concerning the realization of scalable hybrid devices lies in the complexity of their nanofabrication and the open questions in the material science involved. Stimulated by these open questions, the second part of this thesis introduces significant advances in the arena of hybrid nanowire devices. Having so far dealt with InSb nanowires with a maximum length of 3 µm, we turn our attention to the synthesis and characterization of much longer InSb nanowires with a higher chemical purity than their predecessors and an electron mobility exceeding 40,000 cm²/Vs. Having quantified their pronounced spin-orbit interaction, bringing a superconductor into the game is the logical next step. At the time of these experiments, hybrid nanowire devices were obtained by interfacing the two materials in situ, directly after the growth of the semiconductor. Despite ensuring a barrier-free semiconducting-superconducting interface, this approach has significant drawbacks in creating gate-tunable junctions due to the challenges in controlling the selectivity and accuracy of the superconductor etching step. Considering that the semiconducting-superconducting interface is unstable even at room temperature, the devices' quality, turnaround, and reproducibility become severely affected by extensive and low-yield fabrication processes. To circumvent these roadblocks, we have established a new fabrication paradigm based on on-chip shadow walls and shadow evaporation that offers substantial advances in device quality and reproducibility. Our approach results in devices with a hard induced superconducting gap and ballistic hybrid junctions.
In Josephson junctions, we observe large gate-tunable supercurrents and high-order multiple Andreev reflections, indicating the resulting junctions' exceptional coherence. Crucially, our approach enables the realization of three-terminal devices, where zero-bias conductance peaks emerge in a magnetic field concurrently at both boundaries of the one-dimensional hybrids. In the near future, correlating such Majorana signatures with the measurement of the induced gap in the bulk will enable a better classification of the observed subgap states. In conclusion, once this technology is applied to nanowire networks, it will allow verifying topological parity read-out schemes, which is a milestone toward verifying the Majorana states' exotic exchange statistics.","Hybrid devices; Semiconducting nanowires; Superconductivity; Interfaces; Josephson junctions; Aharonov-Bohm interferometers","en","doctoral thesis","Delft University of Technology","978-90-8593-470-7","","","","","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:54fa083f-5ddf-4b6c-b663-a1b61c6681f5","http://resolver.tudelft.nl/uuid:54fa083f-5ddf-4b6c-b663-a1b61c6681f5","Rate-constrained multi-microphone noise reduction for hearing aid devices","Amini, J. (TU Delft Signal Processing Systems)","Heusdens, R. (promotor); Hendriks, R.C. (promotor); Delft University of Technology (degree granting institution)","2021","Many people around the world suffer from hearing problems (In the Netherlands, around 11%of the population is considered hearing-impaired). To overcome their hearing problems, advanced technologies like hearing aid devices can be used. Hearing aids are meant to assist the hearing-impaired to improve the speech intelligibility and the quality of sounds that they intend to hear. Usually these include processors which are mainly designed to enhance the sound signals originating from the source of interest by reducing the environmental noise. Binaural hearing aids, on the other hand, can also help to preserve some spatial information from the acoustic scene, which can help the hearing aid user to hear the sounds from the correct locations. To construct the binaural hearing aid system, two hearing aids are needed to be placed in the left and the right ears, which can potentially communicate through a wireless link. In addition, one can think of additional assisting devices with microphones placed in the environment. One common way to reduce the noise is to use advanced binaural multi-microphone noise reduction algorithms, which aim at estimating some desired sources while reducing the power of the undesired sources. One typical method is to use spatial filtering, which aims at estimating the target signal by shaping the beam towards the location of the desired source while canceling/suppressing the other sources. 
To perform binaural noise reduction, while assuming centralized processing, the signals recorded at remote microphones (for example, from additional assistive devices or, in the binaural hearing aid setup, the sound signals from the contralateral hearing aid) need to be transmitted to the central processor. Due to power and bandwidth limitations, the data needs to be compressed before transmission. Therefore, the main question is at which rate the data should be compressed to retain reasonably good noise reduction performance. This links the noise reduction problem to the data compression problem. Generally, the higher the data rate, the better the noise reduction performance. Therefore, there is a trade-off between the performance of the noise reduction algorithm and the data rate at which the information is compressed. This problem is closely connected to the rate-distortion problem from an information-theoretic viewpoint. Studying the effect of data compression on the performance of noise reduction problems is of great interest for reducing the power consumption of hearing assistive devices. One way to incorporate data compression into the noise reduction problem is to perform quantization, which leads to a rate-constrained noise reduction problem. In rate-constrained noise reduction, the goal is to estimate the desired sources based on the imperfect data. The observations from remote sensors are quantized and transmitted to the fusion center. The main challenge in binaural rate-constrained noise reduction is to find the best quantization rates for the different sensors at different frequencies, given physical constraints like bitrate and power constraints. Another aspect of rate-constrained noise reduction is to expand the network to receive more information on the acoustic scene using additional assistive devices.
Target source estimation using information from such assistive devices (rather than only binaural hearing aids) has been shown to result in better noise reduction performance. The question is then how to allocate the bitrates to the assistive devices as well. These assistive devices can be thought of as remote embedded microphones on cell phones or wearable microphones placed on the users’ bodies. The binaural hearing aid system can thus be generalized to allow other assistive devices to contribute to noise reduction. In this dissertation, we study and propose different rate-constrained multi-microphone noise reduction algorithms. We expand the notion of binaural rate-constrained noise reduction to multi-microphone rate-constrained noise reduction for general wireless acoustic sensor networks (WASNs). The WASN in this case can include the binaural setup along with other assistive devices. We propose different algorithms to cover the main objectives of rate-constrained noise reduction problems. These objectives mainly include good target estimation (less environmental noise power) given the compressed data, good rate allocation strategies in WASNs, and preferably preserved spatial information of the sources in the acoustic scene to get the correct impression of the acoustic scene.","","en","doctoral thesis","","","","","","","","","","","Signal Processing Systems","","",""
"uuid:8f42f588-17a1-4e1e-af12-dcc52e7a26b2","http://resolver.tudelft.nl/uuid:8f42f588-17a1-4e1e-af12-dcc52e7a26b2","Modeling of hydrodynamics and sediment transport in the Mekong Delta","Thanh, Vo Quoc (TU Delft Coastal Engineering)","Roelvink, D. (promotor); van der Wegen, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Deltas are low-lying plains which are formed when river sediments deposit in coastal environments. Deltas are nutrient-rich, and productive ecological and agricultural areas with high socio-economic importance. Globally, deltas are home to about 500 million people and are considerably modified by human activities. In addition, they are vulnerable to climate change and natural hazards like changing river flow and sediment supply, coastal flooding by storminess or sea level rise. To encourage better delta management and planning, it is of utmost importance to understand existing delta sediment dynamics. The objective of this study is to investigate the prevailing sediment dynamics and the sediment budget in the Mekong Delta by using a process-based model. Understanding sediment dynamics for the Mekong Delta requires high resolution analysis and detailed data, which is a challenge for managers and scientists. This study introduces such an approach and focuses on modeling the entire system with a process-based approach, Delft3D-4 and Delft3D Flexible Mesh (DFM). The first model is used to explore sediment dynamics at the coastal zone. The latter model allows straightforward coupling of 1D and 2D grids, making it suitable for analysing the complex river and canal network of the Mekong Delta. This study starts by generating trustworthy bathymetries based on limited data availability. It describes a new interpolation method for reproducing the main meandering channel topographies of the Mekong River. The reproduced topographies are validated against high resolution measured data. 
The proposed method is capable of reproducing the thalweg accurately. Next, this study describes the development of a Delft3D Mekong Delta model. The model is validated against hydrodynamic and sediment dynamics data for several years and focuses on describing nearshore sediment dynamics. The model shows that sediment transport changes in the Mekong Delta are strongly modulated by seasonally varying river discharges and monsoons. The nearshore suspended sediment concentration (SSC) decreases significantly due to a lack of wave-induced stirring when there is no monsoon. 3D gravitational circulation effects prevent the SSC field from expanding seaward in case of high river flow. In addition, the bed composition plays an important role in reproducing sediment fluxes, which decrease considerably when a sandy bed layer is included. This happens due to effects of the initially mostly sandy mixing layer, where resuspension of the mud is proportional to the fraction of mud present. It takes time for an equilibrium bed composition to develop. Seasonally, the sediment volumes deposited in the river mouths increase steadily during the high flow season. During October they remain more or less constant and then, as wave action increases and discharges decrease, the deposited material is resuspended and transported southward along the coast. The DFM model explores the hydrodynamics and sediment dynamics in the fluvial reach of the Mekong River, including the anthropogenic effect of dyke construction. After an extremely high flood in 2000, which caused huge damage, a dyke system has been built to protect agriculture in the Vietnamese Mekong Delta (VMD). These structures change hydrodynamic characteristics on floodplains by preventing floodwaters from entering the floodplains. The DFM model shows that the high dykes slightly change hydrodynamics in the downstream VMD. These structures increase daily mean water levels and tidal amplitudes along the mainstreams. 
Interestingly, the floodplains protected by high dykes in the Long Xuyen Quadrangle and the Plain of Reeds influence water regimes not only on the directly linked Mekong branch, but also on other branches. Building on the validated hydrodynamic model, the model is validated against sediment data and used to derive a sediment budget for the Mekong Delta. For the first time, this study has computed sediment dynamics over the entire Mekong Delta, considering riverbed sediment exchange. The model suggests that the Mekong Delta receives ~99 Mt/year of sediment from the Mekong River. This is much lower than the common estimate of 160 Mt/year. Only about 23% of the modelled total sediment load at Kratie is exported to the sea. The remaining portion is trapped in the rivers and floodplains of the Mekong Delta. Located between Kratie and the entrance of the Mekong Delta, the Tonle Sap Lake receives Mekong River flow at seasonally increasing flow rates and returns flow when Mekong River flow rates decay. As a result, the Tonle Sap Lake traps approximately 3.9 Mt/year of sediment, which explains the hysteresis relationship between water discharges and SSC at downstream stations. The VMD receives 79.1 Mt/year (~80% of the total sediment supply at Kratie) through the Song Tien, the Song Hau and overflows. The model results suggest that the Mekong mainstream riverbed erodes in Cambodia and accretes in Vietnam. The results of this study advance understanding of the sediment dynamics and sediment budget of the Mekong Delta. The model developed is an efficient tool to support delta management and planning. The validated model can be used in future studies to explore the impact of climate change and human interference in the Mekong Delta.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","9781032046143","","","","","","","","","Coastal Engineering","","",""
"uuid:28108302-2d9b-4560-a806-8ba6d381812e","http://resolver.tudelft.nl/uuid:28108302-2d9b-4560-a806-8ba6d381812e","Resistor-based Temperature Sensors in CMOS Technology","Pan, S. (TU Delft Electronic Instrumentation)","Makinwa, K.A.A. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis describes the design and implementation of integrated temperature sensors based on the temperature dependency of CMOS resistors.","temperature sensor; resistor-based sensor; smart sensor; delta-sigma modulator; accuracy; energy-efficiency","en","doctoral thesis","","978-94-6423-202-8","","","","","","","","","Electronic Instrumentation","","",""
"uuid:e0b20592-a0ce-4ec4-8df0-a5aa25084301","http://resolver.tudelft.nl/uuid:e0b20592-a0ce-4ec4-8df0-a5aa25084301","On the creation, coherence and entanglement of multi-defect quantum registers in diamond","Degen, M.J. (TU Delft QID/Hanson Lab)","Taminiau, T.H. (copromotor); Hanson, R. (promotor); Delft University of Technology (degree granting institution)","2021","Due to its long spin coherence and coherent spin-photon interface the nitrogen vacancy (NV) center in diamond has emerged as a promising platform for quantum science and technology, including quantum networks, quantum computing and quantum sensing. In recent years larger quantum systems have been demonstrated by using optical entanglement links between distant NV centers. These systems were based on high-quality NV centers that exhibit good optical coherence. State-of-the-art experiments with such systems have shown deterministic delivery of entanglement across a two-node quantum network as well as genuine multi-partite entanglement across a three-node quantum network. The additional capability to create larger quantum registers by direct magnetic coupling between high-quality NV centers and to other nearby defects would provide new opportunities for quantum memories in quantum networks but also for enhanced sensing protocols and spin chains for quantum computation architectures. In this thesis, we investigate methods to create larger quantum registers based on magnetic coupling and develop techniques to address and control individual defects in a system consisting of multiple defects. The results provide new insights for extended quantum registers based on magnetically coupled defects...","","en","doctoral thesis","","","","","","","","","","","QID/Hanson Lab","","",""
"uuid:543a8a46-0b49-487c-9600-678a416d67ff","http://resolver.tudelft.nl/uuid:543a8a46-0b49-487c-9600-678a416d67ff","Aircraft interiors, effects on the human body and experienced comfort","Anjani, S. (TU Delft Applied Ergonomics and Design)","Vink, P. (promotor); Song, Y. (promotor); Delft University of Technology (degree granting institution)","2021","Have you ever sat in a cramped airplane? Sitting shoulder-to-shoulder with limited legroom might not be a comfortable experience while flying in an airplane. Therefore, human anthropometrics or body dimensions are important to consider when designing for interiors used by a large population. To accommodate people of all sizes, a certain minimum pitch (distance of rows of seats) and seat-width are needed in an aircraft. However, increasing pitch and width is probably not the best for airline revenues, as an increasing pitch will reduce the number of passengers and thereby the income. Therefore, other solutions are needed as well. This Ph.D. research can be helpful for airlines to find the optimum as background information is gathered about the level of comfort experienced by passengers in different seat sizes. This research aims to understand how to predict comfort by looking at the physical entities, their interaction with the human, the human body effects, and perceived effects. The application area of the model is the aircraft interior. Experiments with a variety of participants, products, and tasks were conducted and measurements of the interaction, human body effects, and perceived effects were recorded. These studies prove that indeed comfort and discomfort are a result of the interaction, human body effects, and perceived effects, and these aspects could be used as a predictor of comfort. And comfort can be predicted, for instance, based on pitch and width related to anthropometry, but also based on heart rate variability (HRV) parameters. 
This research proves that physical entities can predict comfort, and that observing the interaction and recording human body effects like HRV can predict comfort as well. Additionally, good questionnaires are available for predicting and recording comfort in many situations. Designers can use these methods to create a more functional aircraft interior, which in turn increases passenger comfort.","Aircraft interior; Comfort; Discomfort; Passenger; Airplane seat","en","doctoral thesis","","978-94-6384-206-8","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:64711b53-10de-4a3f-92ae-98369e095333","http://resolver.tudelft.nl/uuid:64711b53-10de-4a3f-92ae-98369e095333","Adsorption and Separation of C8 Aromatic Hydrocarbons in Zeolites","Caro Ortiz, S.A. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Dubbeldam, D. (promotor); Delft University of Technology (degree granting institution)","2021","The separation of C8 aromatic hydrocarbons (e.g. xylenes) is one of the most important processes in the petrochemical industry. Current research efforts are focused on materials that can decrease the energy consumption and increase the efficiency of the separation process. Industrial processing of C8 aromatics typically considers adsorption in a zeolite from a vapor or liquid stream of mixed aromatics. Adsorption in porous materials can be used to separate the isomers or to promote catalytic reactions to transform aromatics into high value products. However, little is known about the chemical equilibrium of the adsorbed phase at reaction conditions. Most studies of adsorption of aromatics in zeolites, either experimental or computational, have focused on adsorption of pure components from the vapor phase. Experimentally, it is very difficult to determine adsorption equilibrium at saturation conditions. In molecular simulations, very difficult insertions and deletions of molecules make simulations very inefficient. Nowadays, advanced simulations techniques can be used to overcome this issue. Computer simulations of adsorption of aromatics in zeolites are typically performed using rigid zeolite frameworks. However, it is known that adsorption isotherms for aromatics are very sensitive to small differences in the atomic positions of the zeolite. In this thesis, the following types of questions are addressed: (1) how does framework flexibility influence adsorption and diffusion of C8 aromatics in zeolites?; (2) what is the role of the pore topology? 
For the separation and catalytic conversion of xylenes; (3) how does the type of framework influence the product distribution of xylene isomers?; (4) are there any possible zeolite structures that may have been overlooked for the processing of aromatics? For this, the different aspects that affect the interactions between aromatic molecules and the aromatics/zeolite systems in the simulations are discussed. The intermolecular interactions between aromatic molecules are studied by computing the vapor-liquid equilibria of pure xylenes and binary mixtures using four different force fields. The densities of pure p-xylene and m xylene can be well estimated using the TraPPE-UA and AUA force fields. The largest differences of computed VLEs with experiments are observed for o-xylene. Binary mixtures of p xylene and o-xylene are simulated, leading to an excellent agreement for the predictions of the composition of the liquid phase compared to experiments. For the vapor phase, the accuracy of the predictions of the composition are linked to the quality of the density predictions of the pure components of the mixture. The phase composition of the binary system of xylenes is very sensitive to small differences in vapor phase density of each xylene isomer, and how well the differences are captured by the force fields. Most of the models commonly used for framework flexibility in zeolites include a combination of Lennard-Jones and electrostatic intra framework interactions. The effect of these models for framework flexibility on the predictions of adsorption of aromatics in zeolites is studied. It is observed that the intra framework interactions in flexible framework models induce small but important changes in the atom positions of the zeolite, and hence in the adsorption isotherms. 
A flexible framework can still be effectively ’rigid’: flexible force fields produce a zeolite structure that vibrates around a new equilibrium configuration, with limited capacity to accommodate bulky guest molecules. The simulations show that models for framework flexibility should not be blindly applied to zeolites and that a general reconsideration of the parametrisation schemes for such models is needed. The effect of framework flexibility on the adsorption and diffusion of aromatics in MFI-type zeolite is systematically studied. It is found that framework flexibility has a significant effect on the adsorption of aromatics in zeolites, especially at high pressures. For very flexible zeolite frameworks, loadings up to two times larger than in a rigid zeolite framework are obtained at a given pressure. Framework flexibility increases the rate of diffusion of aromatics in the straight channel of MFI-type zeolites by many orders of magnitude compared to a rigid zeolite framework. The simulations show that framework flexibility should not be neglected, as it significantly affects the diffusion and adsorption properties of aromatics in an MFI-type zeolite. The interactions of aromatic molecules inside different zeolite types are studied by computing adsorption isotherms of pure xylenes and of a mixture of xylenes at chemical equilibrium. It is observed that for zeolites with one-dimensional channels, the selectivity for a xylene isomer is determined by a competition between entropic and enthalpic effects. Each of these effects is related to the diameter of the zeolite channel. For zeolites with two intersecting channels, the selectivity is determined by the orientation of the methyl groups of the xylenes. m-Xylene is preferentially adsorbed if xylenes fit tightly in the intersection of the channels. If the intersection is much larger than the adsorbed molecules, p-xylene is preferentially adsorbed. 
This thesis provides insight on how the zeolite framework can influence the competitive adsorption and selectivity of xylenes at reaction conditions. Different selectivities are observed when molecules are adsorbed from a vapor phase compared to the adsorption from a liquid phase. This suggests that screening studies that consider adsorption only from a vapor phase may have overlooked well-performing candidates for C8 aromatics processing. This insight has a direct impact on the design criteria for future applications of zeolites in industry. It is observed that MRE-type and AFI-type zeolites exclusively adsorb p-xylene and o-xylene from a mixture of xylenes in the liquid phase, respectively. These zeolite types show potential to be used as high-performing molecular sieves for xylene separation and catalysis.
A significant reduction in the cost of energy can be achieved by reducing maintenance expenses and improving the capacity factor. In other words, improving reliability can make tidal energy substantially cheaper. In this context, this thesis investigates a horizontal axis tidal turbine (HATT) power take-off system with a direct-drive generator.
The focus of this thesis is on improving the reliability of the electrical subsystems in the HATT power take-off system. From this perspective, the power converter and the generator are the two most important components in the drive train. For the converter, reliability improvement is analyzed with the objective of delaying thermal cycling failure in the power semiconductor modules beyond the turbine lifetime. On the generator side, a flooded generator is investigated as a potentially more reliable alternative to a conventional air-gap generator.","Tidal turbines; Power Electronic Converter; flooded generator; permanent magnet machine; Turbulence; waves; Thermal cycling; Eddy current losses","en","doctoral thesis","","978-94-6366-394-6","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:aa022541-52aa-4287-ace1-1773e10e3d79","http://resolver.tudelft.nl/uuid:aa022541-52aa-4287-ace1-1773e10e3d79","Moving in sync: Designing and implementing transport policy packages","Yang, W. (TU Delft Organisation & Governance)","Veeneman, Wijnand (promotor); de Jong, W.M. (promotor); Delft University of Technology (degree granting institution)","2021","Congestion in and pollution by traffic are amongst the most severe and urgent problems faced by both developed and developing countries these days. It is regarded as a ""wicked"" problem, which implies it is both hard to define the inherent problem and to find adequate measures to deal with. The complexity of transport systems makes it impossible for policy makers to fully grasp the effectiveness of each measure or intervention in detail. In policy maker‘s policy toolkits, there are traditionally two categories of transport measures that transport infrastructures supply (TIS) or transport demand management (TDM). However, these transport measures in reality are usually designed and implemented uncooperatively, some of which hardly receive political or public acceptance and others possibly cause unexpected negative side effects. Policy packaging is regarded as a prominent approach to solve these problems of single measures, because it can improve the acceptance of single policy measures, eliminate their negative effects after implementation, and produce larger synergy effects. However, in spite of these advantages, policy packaging complicates the whole policy making and implementation process, involving complex values, actors, and measures, and challenges policy maker‘s consciousness and capacities. This is why there is rare successful policy packaging in reality...","policy packaging; China; transport policy; Integration","en","doctoral thesis","","978-94-6384-208-2","","","","","","","","","Organisation & Governance","","",""
"uuid:c9dc7f63-012d-4f80-8d71-e6b0c737244f","http://resolver.tudelft.nl/uuid:c9dc7f63-012d-4f80-8d71-e6b0c737244f","From Best Practices to Next Practices: Project-based learning in the development of large infrastructure","Liu, Y. (TU Delft Integral Design & Management)","Hertogh, M.J.C.M. (promotor); Bakker, H.L.M. (promotor); Houwing, E.J. (copromotor); Delft University of Technology (degree granting institution)","2021","Over the last decades of development of knowledge management and organizational learning, there has been an increase in learning research within and across projects. Learning from past lessons in projects and preparing for the next project management practices is very important in large infrastructure projects. The autonomy of projects brings opportunities for generating new knowledge to solve problems but makes diffusing the knowledge between projects and even within stages of the project difficult. This poses a significant gap that may be negatively affecting practices. A clear and in-depth understanding of project-based learning is needed. The research aims to stimulate discussions and further debate about learning at the project level to identify and implement capabilities and structures that enable more efficient learning within and between projects in terms of value creation...","Project-based learning; knowledge management; organizational learning; project management; large infrastructure project; co-creation; exploitative learning; explorative learning; collaboration","en","doctoral thesis","","978-94-6384-209-9","","","","","","2024-04-06","","","Integral Design & Management","","",""
"uuid:b06f8d39-0c8b-405f-ad60-4df46f291ab8","http://resolver.tudelft.nl/uuid:b06f8d39-0c8b-405f-ad60-4df46f291ab8","Topological properties of superconducting nanostructures","Repin, E. (TU Delft QN/Nazarov Group)","Nazarov, Y.V. (promotor); Delft University of Technology (degree granting institution)","2021","One of the pillars of the scientific method is the fact that... Oh wait, it’s a different one. One of the pillars of the technological development is the fact that if the existing design does not achieve the goal or cannot be applied in new conditions, one could propose a totally different design that may achieve the goal. The only constraints in this way being the laws of physics. This is the main message of the lecture by Richard Feynman on tiny machines. The role of different designs can also be noted on a purely theoretical level. There, changing the well-known model can have far reaching consequences on its properties and possible applications.
One of the main goals in the focus of modern quantum technology is the realization of a quantum computer. The appeal of this device, in contrast to its classical counterpart, lies in the existence of reasonable proposals for error correction. Another aspect is that one may use topological quantum states that are by themselves robust against certain kinds of noise. There is a lot of effort in trying different approaches and designs to experimentally realize and detect these states. The two main approaches are to either realize topological compounds or to combine topologically trivial compounds to effectively realize non-trivial topological properties. There have been advances in both topological and non-topological quantum computation, one of the most famous examples being the achieved quantum supremacy (or, after censorship, quantum advantage). Despite that, the technology is still far away from being used at home. Also, during the development of a technology, other things may come about on the way. Anyhow, regardless of the outcome, the way itself is always more important than the end point. In this thesis we discuss certain theoretical findings discovered on the way.","","en","doctoral thesis","","978-90-8593-471-4","","","","","","","","","QN/Nazarov Group","","",""
"uuid:40af58ca-9f3b-491f-8f21-998b45bfecb8","http://resolver.tudelft.nl/uuid:40af58ca-9f3b-491f-8f21-998b45bfecb8","Numerical Modelling for Underwater Excavation Process: A Method Based on DEM and FVM","Chen, X. (TU Delft Offshore and Dredging Engineering)","van Rhee, C. (promotor); Miedema, S.A. (promotor); Delft University of Technology (degree granting institution)","2021","A 3D dynamic numerical model is established for modelling the excavation process for dredging purposes. The interaction between the solid and fluid phases is realized by a specially designed DEM-FVM coupling mechanism, where the fluid-particle interaction forces, the volume fraction information and the particle information are constantly updated and exchanged. Dry and underwater sand cutting simulations are conducted and validated against experimental results. Simulation results of cutting of cohesive soil in atmospheric condition match with the experimental data within acceptable error margin, while the underwater cutting simulations of cohesive soil have not been validated due to the lack of experimental data. Besides, the general applicability of using Discrete Element Modelling (DEM) to create rock samples, and the calibration of DEM rock samples have been investigated, which are essential for conducting atmospheric and underwater rock cutting simulations in the future.","Discrete element modelling; Excavation Process; DEM-FVM Coupling","en","doctoral thesis","","978-94-6384-204-4","","","","","","2022-04-01","","","Offshore and Dredging Engineering","","",""
"uuid:4f688b2f-1f2c-4e14-a1be-cc867bb0cd25","http://resolver.tudelft.nl/uuid:4f688b2f-1f2c-4e14-a1be-cc867bb0cd25","Sustainability of engineered fractured systems: An experimental study on hydro-mechanical properties","Kluge, C. (TU Delft Reservoir Engineering)","Bruhn, D.F. (promotor); Barnhoorn, A. (promotor); Blöcher, Guido (copromotor); Delft University of Technology (degree granting institution)","2021","The Earth’s subsurface exhibits a high potential for generating and storing energy. Engineered fractured systems, for example geothermal or carbon storage reservoirs, highly depend on the capacity of rock to conduct and store fluids. Faults and fractures create the largest contrasts in flow in these reservoirs and can enhance the reservoir potential when being generated or engineered. While the scientific focus is mainly on the effectiveness of enhancements and the risks associated with them, the sustainability of these enhancements must be better understood. In this thesis, the dependence of fracture permeability on a variety of parameters is studied. The aim is to develop a better systematic understanding of the hydro-mechanical processes controlling the potential and sustainability of fractures to conduct fluids at a variety of conditions. Several parameters that are assumed to control fracture permeability are considered in laboratory experiments. These include the rock type (clastic vs. crystalline), the fracture type (shear vs. tensile), the fracture geometry (aperture and roughness) and effective stress changes (pore and external stress). Potential geothermal rocks are considered in order to directly relate the findings to potential geothermal exploration projects. The results demonstrate the complex dependency on a variety of parameters and highlights the different physical processes depending on mainly rock and fracture type. 
An attempt was made to assess the potential of fractures to act as fluid conduits in reservoirs, as well as their hydraulic sustainability during effective pressure changes. From these results, general implications are drawn concerning the ability and sustainability of fractures to conduct fluids depending on rock and fracture type. The main controlling parameters are assessed and possible mitigation strategies are developed to reduce the risk of permeability losses. Generally, only reservoir enhancement strategies resulting in a sustainable productivity increase can guarantee the scientific and political breakthrough of geothermal energy supply.","Fractures; sustainability; permeability; laboratory; geothermal","en","doctoral thesis","","978-94-6366-392-2","","","","","","","","","Reservoir Engineering","","",""
"uuid:6e162595-1a0e-4f26-8355-910645acc739","http://resolver.tudelft.nl/uuid:6e162595-1a0e-4f26-8355-910645acc739","Temporal dynamics in C. Elegans Development and stress response","Filina, O. (TU Delft BN/Gijsje Koenderink Lab)","Koenderink, G.H. (promotor); van Zon, J.S. (copromotor); Delft University of Technology (degree granting institution)","2021","Multicellular development is a sequence of genetically programmed, intricately linked micro- and macroscopic events that start from a single cell - zygote - and lead to the formation of a functional individual, primed for survival and reproduction. All developmental events, fromgene expression to cell division, differentiation andmigration, as well as tissue and organ formation, have to follow a precise temporal and spatial order, as the smallest mistake can be detrimental. Development is remarkably reproducible, despite occurring under a wide variety of environmental conditions, such as nutrient availability and temperature, and being subject to a substantial amount of molecular noise. In this thesis, we use the nematode worm Caenorabditis elegans as a model system to investigate i) how the timing of development adjusts to changes in the outside world and ii) how organisms respond to external stresses that do not support normal developmental progression. C. elegans is an ideal model organism to address these questions, due to its simplicity, stereotypical developmental pattern and the vast extent to which its biology and genetics are known.","","en","doctoral thesis","","978-94-6332-746-6","","","","","","","","","BN/Gijsje Koenderink Lab","","",""
"uuid:983b06e7-c30b-465c-a032-2439a7e9863f","http://resolver.tudelft.nl/uuid:983b06e7-c30b-465c-a032-2439a7e9863f","A multi-scale approach towards reusable steel-concrete composite floor systems","Nijgh, M.P. (TU Delft Steel & Composite Structures)","Veljkovic, M. (promotor); Sluys, Lambertus J. (promotor); Pavlovic, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Traditionally welded headed studs have been used to generate composite interaction between a steel beam and a (cast in-situ) concrete floor. This permanent connection impairs the demountability of the structural components and therefore demolition of the composite floor system is inevitable at the end of the functional service life. The demolition of functionally obsolete but technically sound building components is in contradiction with the globally prevailing ambition of more sustainable development of the built environment through reduced demand for primary resources and reduced emissions of harmful substances. This dissertation aims to overcome the need for demolition of composite floor systems by developing methods, tools and recommendations to enable easy demountability of the structural components. The recommendations are both based on practical experience obtained by full-scale laboratory experiments on a demountable composite floor system consisting of large prefabricated concrete floor elements (2.6 × 7.2 m), and on the (analytical) methods and tools developed to predict the response of the floor system during execution (e.g. instability) and service life (e.g. deflection and stresses).","Demountable shear connector; design for reuse; steel-concrete composite floor system; injected bolted connection; (steel-reinforced) epoxy resin system; multi-scale approach","en","doctoral thesis","","978-94-6384-207-5","","","","","","","","","Steel & Composite Structures","","",""
"uuid:780a2f7f-b96a-4a55-928c-4ea8cc8fdfe5","http://resolver.tudelft.nl/uuid:780a2f7f-b96a-4a55-928c-4ea8cc8fdfe5","Developing Solid Oxide F uel Cell Based Power Plant For Water Treatment Plants: Experimental and System Modelling Studies","Saadabadi, S.A. (TU Delft Energy Technology)","Aravind, P. V. (promotor); Boersma, B.J. (promotor); Delft University of Technology (degree granting institution)","2021","Fossil fuels are currently the primary source for electrical power generation, which subsequently increases the rate of greenhouse gas (CO2, CH4) emission. It has been agreed at the Climate Change Conference 2015 in Paris (COP21) to reduce greenhouse gas emissions in order to limit the global temperature increase to less than 2°C compared to pre-industrial era temperature. The GHG (Greenhouse Gas) effect is mostly attributed to methane and carbon dioxide emissions into the atmosphere. In order to reduce the use of fossil fuels and their negative impact on the environment, renewable energy resources have been receiving much attention in recent years. Sanitation systems, centralized Wastewater Treatment Plants (WWTPs) and organic waste digesters give an ample opportunity for resource recovery to produce biogas that contains mainly methane and carbon dioxide. The low conversion efficiency of conventional energy conversion devices like internal combustion engines and turbines prevents biogas from reaching its full potential as over 50% of chemical energy is dissipated. Wastewater treatment is a developed technology from human health and environmental-friendliness points of view. However, from energy aspects, it is still an energy-intensive process step. Wastewaters might contain significant amounts of organic matter and nutrient (nitrogen and phosphorus) compounds. The chemical energy in domestic wastewater is approximately 3.8 kWh.m-3 based on theoretical Chemical Oxygen Demand (COD) of 1 kg m-3. 
At wastewater treatment plants (WWTPs), collecting and treating wastewater streams needs a considerable amount of electricity (0.5 kWh m-3) to meet discharge quality requirements. In a conventional WWTP, nitrogen is removed through nitrification, and biodegradable organic matter is converted to methane in anaerobic digestion. The energy demand at WWTPs could be partially offset by an efficient recovery of nutrients and organic matter from the wastewater stream. Biogas production is an important technology widely applied in Europe. Biogas can be converted to energy through thermal conversion with combined heat and power (CHP) plants. However, the electrical efficiency of such systems is limited to 25-30%. In parallel, nitrogen can be removed from wastewater and converted and stored in the form of an ammonia-water mixture from ammonium-rich streams after anaerobic digestion. A solid oxide fuel cell (SOFC) is an energy conversion device that directly converts chemical energy into electrical energy based on electrochemical reactions. SOFCs can operate with different types of fuels, especially unconventional or renewable fuels. The efficiency of SOFCs is higher compared to conventional combustion-based processes. Therefore, the sustainability of WWTPs can be improved first by a recovery of nutrients and organic material from the wastewater stream and then by replacing the inefficient combustion process with an efficient high-temperature electrochemical reaction in an SOFC. Due to the modularity of SOFCs, they can be used for a wide range of biogas production capacities at WWTPs. However, the development of SOFC systems still faces many challenges, and a better understanding of the constraints is needed. This dissertation aims to provide design concepts and a thermodynamic system analysis for a biogas-ammonia fuelled SOFC system at wastewater treatment plants, with a focus on achieving safe operating conditions and high electrical efficiencies.
Thereupon, extended experimental studies have been conducted in this work on biogas dry and combined reforming. Moreover, the influence of mixing ammonia-water into biogas in SOFCs has been experimentally investigated. After establishing the safe operating conditions of a biogas-ammonia fuelled SOFC, system modelling studies have been carried out in order to design an efficient conceptual biogas-ammonia fuelled SOFC system at wastewater treatment plants. Additionally, a complete biogas SOFC pilot system, consisting of a gas cleaning unit and an external gas processing system, has been designed.","","en","doctoral thesis","","978-94-6421-290-7","","","","","","","","","Energy Technology","","",""
"uuid:e3615629-cfe2-4fc7-920d-5bc2e776e7c5","http://resolver.tudelft.nl/uuid:e3615629-cfe2-4fc7-920d-5bc2e776e7c5","Shear failure of prestressed girders in regions without flexural cracks","Roosen, M.A. (TU Delft Concrete Structures)","Hendriks, M.A.N. (promotor); Yang, Y. (copromotor); Delft University of Technology (degree granting institution)","2021","In the design process of prestressed bridges and viaducts, the required amount of shear reinforcement is determined with a model that assumes the presence of flexural cracks. In order to keep the design process simple, this model is also prescribed to determine the amount of shear reinforcement for the regions of the structure in which, at the ultimate load, no flexural cracks are present. This is a conservative approach, as the conditions for shear transfer are more favourable in the regions without flexural cracks. From structural assessments of existing prestressed bridges and viaducts, it is found that the amount of shear reinforcement is frequently too low in the regions that remain free of flexural cracks. Accordingly, these structures are considered as unqualified, although the actual shear resistance could possibly be sufficient. This is the prime motivation for this research, in which the shear behaviour of prestressed girders in regions without flexural cracks is investigated.Two models are proposed in this dissertation for the determination of the shear resistance in the regions without flexural cracks: –a model for diagonal tension cracking and –a model that considers the contributions of stirrups, aggregate interlock and uncracked flanges after diagonal tension cracking. Depending on the amount of shear reinforcement and the level of prestressing, the governing resistance will be present in either one of these stages. With the proposed models it has become possible to determine the shear that can be resisted in regions without flexural cracks more accurately. 
The use of the proposed models will therefore prevent that numerous bridges and viaducts are strengthened or replaced while the actual shear resistance is sufficient.
First, we create and make publicly available Pydriller, a Python framework to analyze software repositories that will help us gather important information for the following studies. Then, we study test design issues and their impact on overall software code quality, demonstrating how important it is to have a good and effective test suite. Afterward, together with SIG, the company at which part of this dissertation was conducted, we study how developers in industry react to these test design issues. Our results show that the current detection rules for test issues are not precise enough and, more importantly, do not support prioritization. We present new rules that can be used to prioritize these issues and show that the results achieved with the new rules better align with developers' perception of importance. In the second part of the dissertation we focus on how to help developers better review test code. First, we investigate developers' needs when it comes to code reviewing, identifying seven high-level information needs that could be addressed through automated tools, saving time for reviewers. Then, we focus on code review of test code specifically: we first study when and how developers review test code, identifying current practices, revealing the challenges faced during test code reviews, and uncovering needs for tools that can support the review of test code. Later, we investigate the impact of Test-Driven Code Review (TDR) on code review effectiveness, showing that it can increase the number of test code issues found. We discuss when TDR can and cannot be applied and why not all developers see TDR as a worthwhile practice.","Software Testing; Modern Code Review","en","doctoral thesis","","","","","","","","","","","Software Engineering","","",""
"uuid:347cafb2-73e2-4edd-adfd-8733ef7ec258","http://resolver.tudelft.nl/uuid:347cafb2-73e2-4edd-adfd-8733ef7ec258","Creating and Capturing Value: A Consumer Perspective on Frugal Innovations in Water and Energy in East Africa","Howell, R.J. (TU Delft Economics of Technology and Innovation)","van Beers, Cees (promotor); Knorringa, P. (promotor); Doorn, N. (promotor); Delft University of Technology (degree granting institution)","2021","Frugal innovation emphasizes the reduced use of resources and cutting of costs through the process of innovating around constraints. However, how innovating around constraints leads to profitable (value creation) businesses and local economic development impact (value capture) is still unclear. Early frugal innovation literature assumed that a reduction in cost would be a means to reach low income consumers. Yet, many companies in emerging markets are not reaching their intended low income customer group. Most early frugal innovation literature was conceptual and case study based with most case studies being from India and Asia. Additionally, frugal innovation literature focused more on the design process and less on the consumer and what drives decision making of frugal innovations.","","en","doctoral thesis","","978-94-6419-165-3","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:6ad1f5f3-c8b0-4d01-b875-5586a25744bd","http://resolver.tudelft.nl/uuid:6ad1f5f3-c8b0-4d01-b875-5586a25744bd","Structure and Dynamics of Self-Healing Polyurethanes","Montano, V. (TU Delft Novel Aerospace Materials)","Garcia, Santiago J. (promotor); van der Zwaag, S. (promotor); Delft University of Technology (degree granting institution)","2021","The work exposed in this thesis addresses two main scientific challenges in the field of intrinsic self-healing polymers that limit the translation of the academic research efforts into commercial products, namely: i) the implementation of the self-healing functionality in polymer commodities by minimal chemical modifications; and ii) the establishment of characterization protocols that allow a deeper understanding of the relation between polymer structure and efficient healing capability.","Self-healing; Dynamics; Polyurethanes; Entropy","en","doctoral thesis","","978-94-6384-195-5","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:7292e35d-d45a-4ad1-9663-ae2b5c5a9f16","http://resolver.tudelft.nl/uuid:7292e35d-d45a-4ad1-9663-ae2b5c5a9f16","Modelling Individual Driver Trajectories to Personalise Haptic Shared Steering Control in Curves","Barendswaard, S. (TU Delft Human-Robot Interaction; TU Delft OLD Intelligent Vehicles & Cognitive Robotics; TU Delft Control & Simulation; TU Delft Cognitive Robotics)","Abbink, D.A. (promotor); Pool, D.M. (promotor); Boer, E.R. (promotor); Delft University of Technology (degree granting institution)","2021","Road safety is still a challenging issue. In 2020, 1.35 million people have died as a result of traffic accidents, where the number one cause of death for young adults between the age of 5 and 29 is car accidents. In an attempt to improve road safety, the automotive industry has developed numerous types of Advanced Driver Assistance Systems (ADAS). These systems are in general effective in improving safety. However, these systems will only be used if and only if drivers perceive the assistance as intuitive and cooperative. It is recently found that 61% of drivers sometimes switch off the assistance, 23% feel that current assistance are annoying and bothersome, whereas only 21% find them helpful. A safe system that is not used has no safety benefits. A promising way to improve driver acceptance and to increase safety is to employ haptic shared control (HSC), which is an effective way of keeping drivers in the active control loop. Support in the form of HSC benefits situation awareness and ensures effective monitoring of the environment and automation. However, torque conflict resulting from opposing intentions of driver and automation is reported to be a bottleneck for drivers' acceptance of HSC. Particularly, such conflicts are found to be most debilitating in curves. 
With each driver having an individual driving style, with different preferences and skill levels, the current standard 'one-size-fits-all' assistance approach to HSC, and driver support in general, is not satisfactory for every individual. An effective approach to increase acceptance of ADAS, and a reliable way to align the automation to the driver's preferences, is through personalisation. Here, personalisation is generally defined as 'making something suitable for the needs and preferences of a particular person'. For HSC, personalisation can be effectively realised by adapting the system's adopted trajectory to that of the driver. Therefore, the personalisation of HSC requires a driver modelling approach that predicts an individual driver's behaviour. Before this thesis, the personalisation of HSC was attempted by adjusting the gains of a corrective feedback HSC, as though it were a driver steering model itself. What was missing was 1) an HSC that allows for personalisation, i.e., a framework where a personalisable reference trajectory is independent of the haptic controller, and 2) a computational driver model or a data-driven driver classification approach that is able to describe individual drivers. When this thesis was started, a theoretical HSC concept, the 'Four-Design-Choice-Architecture' (FDCA), had been introduced within our group. This promising concept had, however, not yet been realised or implemented. As for modelling individual drivers, it was not known what type of driver steering and trajectory model(s) are suitable to generate personalised trajectories, if any, due to the lack of a standardised way to compare and evaluate the output performance of driver behaviour models with different structures and complexities. It was not known exactly how to achieve successful personalisation in curves, nor was the needed level of personalisation understood, i.e., adapting to the intricacies of each individual or adapting to a more general style.
Moreover, whether personalisation in itself improves the acceptance of HSC systems was still to be verified. These challenges are addressed in the four parts of this thesis: 1) Driver model assessment: the development of an assessment method and its application to prominent control-theoretic driver models in the literature, done to gain an in-depth understanding of what is needed to model and describe individual drivers. 2) Driver trajectory classification: understanding and categorising the types of individual driver trajectories present in the driving population. 3) Driver prepositioning: understanding and modelling driver prepositioning behaviour, an essential yet mostly overlooked aspect of curve-driving behaviour. 4) Application to haptic shared control: applying and evaluating personalised haptic shared control. This thesis has achieved its highest-level goal, which is to improve the acceptance of haptic shared control driver support. This thesis provides an improved understanding and new insights into 1) how the novel FDCA HSC has solved much of the acceptance issue put forward, and 2) how to personalise with the FDCA HSC. In terms of modelling tools and methods, this thesis has contributed: 1) a model assessment procedure that can highlight the strengths and weaknesses of any control-theoretic model, 2) a trajectory classifier, which can categorise different types of drivers, 3) a prepositioning path model which, when combined with the Van Paassen control-theoretic driver model, results in the first individual control-theoretic driver model, i.e., a model that can capture all main styles of individual driver behaviour, and 4) the first personalisable HSC, where the developed modelling methods are applied to evaluate personalised haptic shared control. The findings and insights from this thesis have contributed to design guidelines and can accelerate future research.
Some examples include 1) using the individualised driver steering model, personalisation of ADAS can now be done in real-time, 2) using the developed trajectory classifier, explicit personalisation can be achieved, i.e., the driver can select the type of trajectory guidance they want, and 3) the driver trajectory modelling methods developed in this thesis can be used for the personalisation of path-planning in fully autonomous vehicles.","Driver Modelling; Personalisation; Haptic shared control; Advanced driver assistance systems (ADAS); Driver Identification; Driver classification","en","doctoral thesis","","978-94-6421-255-6","","","","","","","","Cognitive Robotics","Human-Robot Interaction","","",""
"uuid:cfb1c40b-464d-4cae-b0af-72ce29a53f96","http://resolver.tudelft.nl/uuid:cfb1c40b-464d-4cae-b0af-72ce29a53f96","Autogenous shrinkage of alkali-activated slag and fly ash materials: From mechanism to mitigating strategies","Li, Z. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Ye, G. (promotor); Delft University of Technology (degree granting institution)","2021","Alkali-activated materials (AAMs), as eco-friendly alternatives to Ordinary Portland cement (OPC), have attracted increasing attention of researchers in the past decades. Unlike cement, which requires calcination of limestone, AAMs can be made from industrial by-products, or even wastes, with the use of alkali-activator. The production of AAMs consumes 40% less energy and emits 25-50% less CO2 compared to the production of OPC.
Despite the eco-friendly nature of AAMs, doubts about these materials as an essential ingredient of concrete exist, regarding, for example, their volume stability. One possible volume change concerns autogenous shrinkage. Autogenous shrinkage is the reduction in volume caused by the material itself without substance or heat exchange with the environment. If the autogenous shrinkage of a binder material is too large, cracking might happen, which will seriously impair the durability of concrete. According to the literature, AAMs can show higher autogenous shrinkage than OPC-based materials. However, the mechanism behind the high autogenous shrinkage of AAMs is still unclear. Existing shrinkage-mitigating strategies for OPC are not necessarily applicable for AAMs. There is also a lack of new strategies particularly designed for AAMs. Moreover, the cracking sensitivity of AAMs-based concrete induced by restrained autogenous shrinkage has not been investigated yet.
The aim of this study is, therefore, set to understand and mitigate the autogenous shrinkage and the cracking tendency of AAMs.","Alkali-activated materials; autogenous shrinkage; slag; fly ash; metakaolin; internal curing; mechanism; cracking; mitigating strategies; microstructure; modeling","en","doctoral thesis","","978-94-6421-279-2","","","","","","","","","Materials and Environment","","",""
"uuid:285e9079-e12c-4cd4-9a34-555bf66237c7","http://resolver.tudelft.nl/uuid:285e9079-e12c-4cd4-9a34-555bf66237c7","Electron wave front modulation with patterned mirrors","Krielaart, M.A.R. (TU Delft ImPhys/Microscopy Instrumentation & Techniques)","Kruit, P. (promotor); Delft University of Technology (degree granting institution)","2021","We propose a microscopy scheme for the controlled modulation of the electron wave front that utilizes patterned electron mirrors. The ability to control the wave front of the electron finds many applications in electron microscopy, for instance in contrast enhancement techniques, beam mode conversion, low-dose imaging techniques such as quantum electron microscopy (QEM) and multi-pass transmission electron microscopy (MP-TEM), or structural hypothesis testing.","Electron microscopy; Wave front modulation; Electron mirror; Aberration correction; Electron beam separator","en","doctoral thesis","","978-94-6384-202-0","","","","","","","","","ImPhys/Microscopy Instrumentation & Techniques","","",""
"uuid:282dc887-2be8-41a3-a8f1-fb7e5f6b7093","http://resolver.tudelft.nl/uuid:282dc887-2be8-41a3-a8f1-fb7e5f6b7093","Spatial Planning and Design for Resilience: The Case of Pearl River Delta","Dai, W. (TU Delft Urban Design)","Meyer, Han (promotor); Sun, Y. (promotor); Kuzniecow Bacchin, T. (copromotor); Delft University of Technology (degree granting institution)","2021","Faced with the highly overlapping factors of the external disturbances -- natural disasters caused by extreme climate change, and internal interactions -- the contradiction between natural conditions and rapid urbanization, traditional spatial planning and design used to pursue economic development could not be flexible enough to respond to the dynamic and uncertain future of the Pearl River Delta (PRD). Therefore, spatial planning and design should pay great attention to the fragile natural base layer and unexpected external disturbances that will negatively impact the PRD caused by increasing natural disasters, such as flooding and land subsidence situation. Based on the idea of spatial resilience, this doctoral dissertation aims to give an answer to the research question: What are the theories and methods of spatial planning and design for resilience? How is it possible to apply the theory and method of spatial planning and design for resilience to the PRD? Five major research contents are conducted. First of all, literatures on exploring the physical context, the crucial stages of spatial transformation, as well as spatial planning and design practices of the PRD are reviewed. Secondly, the theory of spatial planning and design for resilience is systematically researched. Thirdly, implementation method for spatial planning and design for resilience is provided. Fourthly, the empirical research of the theory and method of spatial planning and design for resilient PRD is conducted and possible new schemes are produced. 
Fifthly, the corresponding principles and strategies of resilient flood control and drainage on Hengli Island are proposed. The research outcomes obtained from this doctoral dissertation can be applied to further spatial planning and design practice for establishing a resilient PRD.","","en","doctoral thesis","","978-94-6366-391-5","","","","A+BE | Architecture and the Built Environment No 4 (2021)","","","","","Urban Design","","",""
"uuid:fc85fc09-65a1-4d0f-ac1c-2fe3a7b48ee7","http://resolver.tudelft.nl/uuid:fc85fc09-65a1-4d0f-ac1c-2fe3a7b48ee7","Catalysis in fuel-driven chemical reaction networks","van der Helm, M. (TU Delft ChemE/Advanced Soft Matter)","van Esch, J.H. (promotor); Eelkema, R. (promotor); Delft University of Technology (degree granting institution)","2021","The entire research described in this thesis is part of the larger field of Systems Chemistry. This field of chemistry deals with the understanding of the complexity of biology by mimicking biochemical reaction networks with emergent properties attributed to the entire system. In particular, this research focuses on the design of artificial non-equilibrium chemical reaction networks (CRNs) inspired by signal transduction pathways in living organisms. Specific attention is given to catalysis in such networks. For the design of the catalytic CRNs, organocatalysts are considered as the ideal candidates as they are simple, cheap, recyclable and robust compared to enzymes and frequently less toxic catalysts than metals. On top of that, since organocatalysts can operate under mild conditions, it can pave the way for future applications in biological environments.","","en","doctoral thesis","","978-94-6421-229-7","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:0feb1d21-cb08-40a3-99a2-37331248daab","http://resolver.tudelft.nl/uuid:0feb1d21-cb08-40a3-99a2-37331248daab","STACKED: The building design, systems engineering and performance analysis of plant factories for urban food production","Graamans, L.J.A. (TU Delft Climate Design and Sustainability)","van den Dobbelsteen, A.A.J.F. (promotor); Tenpierik, M.J. (promotor); Stanghellini, Cecilia (copromotor); Delft University of Technology (degree granting institution)","2021","Expanding cities across the world rely increasingly on the global food network, but should they? Population growth, urbanisation and climate change place pressure on this network, bringing its resilience into question. For decades urban agriculture has been discussed in popular media and academia as a potential solution to improve food security, quality and sustainability. The new idol in this discussion is the plant factory: A fully closed system for crop production. Arrays of LEDs provide light and hydroponics provide water and nutrients to vertically stacked layers of crops, hence the term vertical farming. The plant factory features more extensive climate control than high-tech greenhouses. The question remains whether this level of climate control is necessary, effective and/or efficient. The scope of this research is therefore to investigate the potential and limitations of plant factories for urban food production. The STACKED method was developed to address the performance of plant factories across multiple scales, from leaf to facility to city. The role of plant processes in the total energy balance was outlined first. Performance was assessed by analysing the resource requirements, including energy, electricity, water, CO2 and land area use, for the production of fresh vegetables. The impact of façade and cooling system design was analysed in detail. Lastly, the effects of local food production on the urban energy balance were assessed for various scenarios. 
The results of this dissertation can serve as a foundation for future studies on the application of plant factories in both theoretical and real world applications.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-408-0","","","","A+BE | Architecture and the Built Environment No 5 (2021)","","","","","Climate Design and Sustainability","","",""
"uuid:095689cb-03d4-458a-93a9-4a82c0f83cdc","http://resolver.tudelft.nl/uuid:095689cb-03d4-458a-93a9-4a82c0f83cdc","State-independent apparent aero-elastic properties of wind turbine rotors: A method for the preliminary design of offshore wind support structures","van der Male, P. (TU Delft Offshore Engineering)","Metrikine, A. (promotor); van Dalen, K.N. (copromotor); Delft University of Technology (degree granting institution)","2021","In the previous decade, offshore wind undeniably took off as an important player in the European energymarket, which resulted in continuously enhancing turbine sizes and foundation structures pushing the boundaries of engineering, with the purpose of minimizing the levelized cost of energy to the range of optimal competitiveness. Regarding the foundation structure – or support structure – an important trade-off exists with respect to optimization and differentiation within an offshore wind farmon the one hand, and the required computational effort at an early stage of the design on the other. This effort comprises the extensive set of environmental conditions that require evaluation and the level of complexity of the modelling of the different environmental interactions, be it with wind, waves or soil.
Concerning the modelling of the environmental interactions with the structure, a decoupling of the turbine and the support structure is commonly applied, allowing the turbine manufacturer and the offshore contractor to develop their designs separately. The analysis of the support structure, however, has to account for the effect of the aerodynamic force, particularly for the aerodynamic damping, as this is known to affect the structural response to wave actions substantially. In this respect, the shared information usually concerns the damping ratio of the first fore-aft mode of vibration. This damping ratio does not explicitly express its dependency on the operational conditions of the turbine, e.g., the mean wind velocity and the rotor speed. Moreover, the provided ratio is only valid for the fore-aft motion in the first mode of vibration, and can therefore not be applied to higher fore-aft modes, or to modes describing different motions, such as side-to-side or torsional ones.","wind turbines; support structures; aerodynamic force; aerodynamic damping; hydrodynamic force","en","doctoral thesis","","978-94-6419-155-4","","","","","","","","","Offshore Engineering","","",""
"uuid:5437884e-0078-4b36-b2c7-c6edfea3b418","http://resolver.tudelft.nl/uuid:5437884e-0078-4b36-b2c7-c6edfea3b418","The Intersection of Planning and Learning","Moerland, T.M. (TU Delft Interactive Intelligence)","Jonker, C.M. (promotor); Plaat, Aske (promotor); Broekens, D.J. (copromotor); Delft University of Technology (degree granting institution)","2021","Intelligent sequential decision making is a key challenge in artificial intelligence. The problem, commonly formalized as a Markov Decision Process, is studied in two different research communities: planning and reinforcement learning. Departing from a fundamentally different assumption about the type of access to the environment, both research fields have developed their own solution approaches and conventions. The combination of both fields, known as model-based reinforcement learning, has recently shown state-of-the-art results, for example defeating human experts in classic board games like Chess and Go. Nevertheless, literature lacks an integrated view on 1) the similarities between planning and learning, and 2) the possible combinations of both. This dissertation aims to fill this gap. The first half of the book presents a conceptual answer to both questions. We first present a framework that disentangles the common algorithmic space of both fields, showing that they essentially face the same algorithmic design decisions. Moreover, we also present an overview of the different ways in which planning and learning can be combined in one algorithm. The second half of the dissertation provides experimental illustration of these ideas. We present several new combinations of planning and learning, such as a flexible method to learn stochastic dynamics models with neural networks, an extension of a successful planning-learning algorithm (AlphaZero) to deal with continuous action spaces, and a study of the empirical trade-off between planning and learning. 
Finally, we also illustrate the commonalities between both fields, by designing a new algorithm in one field based on inspiration from the other field. We conclude the thesis with an outlook for the planning-learning field as a whole. Altogether, the dissertation provides a broad theoretical and empirical view on the combination of planning and learning, which promises to be an important frontier in artificial intelligence research in the coming years.
This thesis presents a series of strategies to guarantee service quality throughout operational scenarios arising in the timeline of AV technology deployment. First, a precondition to providing service quality in autonomous transportation is safety. During a transition phase to full automation, AV operation will likely be restricted to areas where safe operations are guaranteed, leading to the formation of hybrid street networks comprised of autonomous and non-autonomous vehicle zones. In this setting, meeting user service quality expectations is primarily a matter of coverage, since mobility services will have to access both AV-ready and non-AV-ready areas. Accordingly, this thesis proposes solutions to overcome the challenges entailed by such a transition scenario, where infrastructures, regulatory measures, and AV technology are gradually evolving.
Then, assuming that widespread automated driving is the new status quo, we set out to model rich autonomous transportation scenarios comprised of heterogeneous users and vehicles. Central to our analysis is finding an adequate tradeoff between fleet size and service quality. In traditional AMoD systems, providers can do only so much to prevent user dissatisfaction since, to some extent, this is a matter of having enough vehicles. When the demand outstrips the supply, users inevitably experience longer delays or even rejections, ultimately undermining trust in the service. However, these shortcomings may plague future transportation systems only if setting the fleet size and mix remains a strategic decision. In contrast to most related literature, this thesis investigates a disseminated AV ownership scenario, where ridesharing platforms can occasionally hire available privately-owned AVs on-demand. In this scenario, customers can simultaneously own and share AVs, a setup that better resembles the operation of today's transportation network companies (TNCs), which rely entirely on micro-operators. As a result, AMoD systems can increase and decrease vehicle supply in the short term, thus shifting fleet sizing to the operational planning level.
Moreover, analogously to other transportation modes, we consider that the system must deal with a diversified user base with different service quality expectations. This setup allows providers greater leeway to explore requests' delay tolerances to design efficient routes. To balance user expectations and avoid an oversupply of vehicles, we propose a multi-objective matheuristic that dynamically hires third-party AVs to meet the demand. Our approach adds to recent literature by allowing providers to prioritize different customer segments, besides choosing the exact tradeoff between meeting each segment's needs and hiring extra vehicles. This way, when vehicles are lacking, the optimization process can steer the ride-matching solution towards addressing user requests in order of importance (e.g., most lucrative first). To make the most of currently working vehicles, we also design a repositioning algorithm that fixes supply and demand imbalances using users' service level violations as stimuli.
Further, to enable anticipatory decision making, this thesis incorporates the stochastic information surrounding both privately-owned AV supply and heterogeneous passenger demand in the fleet management process. We propose a learning-based optimization approach that uses the underlying assignment problem's dual variables to iteratively approximate the marginal value of vehicles at each time and location under different availability settings. In turn, these approximations are used in the optimization problem's objective function to weigh the downstream impact of dispatching, rebalancing, and occasionally hiring vehicles. By harnessing the historical knowledge regarding both demand and supply patterns, we show that AMoD providers are substantially better equipped to meet user needs without necessarily having to own large AV fleets.
Typically, learning-based fleet management strategies end up reinforcing biases present in the demand data, thereby frequently steering vehicles towards cities' most affluent and densely populated areas, where alternative mobility choices already abound. Although lucrative for providers, this fleet management strategy runs counter to a broader city goal of equitably distributing accessibility across all regions and population demographics. To counterbalance the demand biases, we investigate the extent to which fare subsidization policies can drive the learning process towards sending vehicles to targeted regions where accessibility is lacking. Our results suggest that by using an adequate scheme of incentives, policymakers can orchestrate transportation providers to diminish the insidious effects of 'cream-skimming' practices, thus using AVs in favor of mobility equity.
Lastly, once we have designed strategies that balance the goals of cities, independent owners, fleet owners, and users, we focus on a different approach to maximizing fleet productivity in urban environments. No matter how efficient a fleet optimization method can be, by limiting AVs to service a single commodity type (i.e., people), fleet utilization and consequently profits are bounded by passenger demand patterns. As autonomous technology evolves, however, new opportunities to improve asset utilization arise. We end this thesis with a model for a versatile transportation system where mixed-purpose compartmentalized AVs can address both passengers and goods simultaneously. With the growth of e-commerce and same-day deliveries, our approach provides a starting point to study more flexible short-haul integration systems to consolidate passenger and freight flows.","dynamic fleet management; autonomous vehicles; service quality; on-demand hiring; autonomous vehicle zone; people and freight integration; mobility poverty; mobility-on-demand; approximate dynamic programming; matheuristic; mixed integer programming","en","doctoral thesis","Delft University of Technology","978-90-5584-286-5","","","","TRAIL Thesis Series no. T2021/12, the Netherlands TRAIL Research School","","","","","Transport Engineering and Logistics","","",""
"uuid:bf835c87-da7b-4dd7-bfad-41fd1bb537c0","http://resolver.tudelft.nl/uuid:bf835c87-da7b-4dd7-bfad-41fd1bb537c0","Countering Rumours in Online Social Media","Ebrahimi Fard, A. (TU Delft Policy Analysis)","van de Walle, B.A. (promotor); Helbing, D. (promotor); Verma, T. (copromotor); Delft University of Technology (degree granting institution)","2021","The phenomenon of rumour spreading refers to a collective process where people participate in the transmission of unverified and relevant information to make sense of an ambiguous, dangerous, or threatening situation. The dissemination of rumours on a large scale, regardless of its purpose, could precipitate catastrophic repercussions. This research aims at addressing this challenge systematically. More specifically, the primary research objective of this dissertation is
To systematically study the rumour confrontation within online social media.
To accomplish this objective, six steps are taken. At first, the conceptualisation of the main construct in this research is investigated. There is a myriad of concepts in the English language implying false or unverified information. However, despite years of academic research, there is no consensus regarding their conceptualisation, and they are often used interchangeably or conflated into one idea. This problem could become an obstacle to countering the surge of false information by creating confusion, distracting the community’s attention, and draining their efforts. In the first step, this dissertation addresses this challenge by providing a process-based reading of false and unverified information. This view argues that although the genesis of such information might be deliberate or inadvertent and serve different purposes, it primarily disseminates on the basis of similar motives and follows the same process.
After settling the conceptualisation problem, the next step investigates the role of communication mediums, and especially online social media, in the spread of rumours. Although rumour dissemination has drawn much attention over the past few years, it is an ancient phenomenon. Rumours used to circulate through primitive forms of communication such as word of mouth or letters; however, technological developments, particularly social media, have escalated the scale, speed, and scope of this phenomenon. This step aims to pinpoint the features specific to social media that facilitate the emergence and the spread of rumours. In particular, the automation mechanism of recommendation systems in social media is closely examined through a set of experiments based on YouTube data.
The third step in this study investigates the constellation of past counter-rumour strategies. Although rumour spreading and its potentially destructive effects have been taken into account since ancient times, it was less than a century ago that the first systematic efforts against the mass spread of rumours began. Since then, a series of strategies have been practised by various entities; nevertheless, massive waves of rumours are still sweeping over individuals, organisations, and societal institutions. In order to develop an effective and comprehensive plan to quell rumours, it is crucial to be aware of past counter-strategies and their potential capabilities, shortcomings, and flaws. In this step, we collect the counter-strategies of the past century and set them in the epidemic control framework. This framework helps to analyse the purpose of the strategies, which could be (i) exposure minimisation, (ii) immunisation or vaccination, and (iii) reducing the transmission rate. The result of the analysis allows us to understand what aspects of rumour confrontation have been targeted extensively and what aspects have been neglected.
Following the discussion on the epidemic framework, one of the most effective approaches to rumour confrontation is immunisation, which is primarily driven by academia. The fourth step investigates the readiness of academia in this subject domain. When we do not know the readiness level in a particular subject, we either overestimate or underestimate our ability in that subject. Both of these misjudgements lead to decisions irrelevant to the existing circumstances. To tackle this challenge, the technology emergence framework is deployed to measure academia's readiness level on the topic of rumour circulation. In this framework, we study four dimensions of emergence (novelty, growth, coherence and impact) over more than 21,000 scientific articles, to see the level of readiness in each dimension. The results show organic growth, which is not sufficiently promising given the surge of rumours in social media. This challenge could be tackled by creating exclusive venues that lead to the formation of a stable community and the realisation of an active field for rumour studies.
The other aspect of the epidemic framework involves exposure minimisation and transmission rate reduction, which are addressed in the fifth step by an artificial-intelligence-based solution. The drastic increase in the volume, velocity, and variety of rumours calls for automated solutions for the inspection of circulating content in social media. In this vein, binary classification is the dominant computational approach; however, it suffers from the non-rumour pitfall, which makes the classifier unreliable and inconsistent. To address this issue, a novel classification approach is utilised, which uses only one rather than multiple classes for the training phase. Experimentation with this approach on two major datasets shows a promising classifier that can recognise rumours with a high F1-score.
The last step of this manuscript approaches the topic of rumour confrontation from a proactive perspective. The epidemic framework helps to develop solutions to control rumour dissemination; however, these mostly adopt a passive approach which is reactive and after-the-fact. This step introduces an ontology model that can capture the underlying mechanisms of social manipulation operations. This model takes a proactive stance against social manipulation and provides us with an opportunity to develop preemptive measures. The model is evaluated by experts and through exemplification on three notorious social manipulation campaigns.","Rumours; social media; recommender systems; counter-strategies; one-class classification; social manipulation","en","doctoral thesis","","978-94-6419-147-9","","","","","","","","","Policy Analysis","","",""
"uuid:944a1cc2-9a30-4b84-91b9-5022d689d7f3","http://resolver.tudelft.nl/uuid:944a1cc2-9a30-4b84-91b9-5022d689d7f3","Life in changing environments: The intriguing cycles of Polyphosphate Accumulating Organisms","Gabriel Guedes da Silva, L. (TU Delft OLD BT/Cell Systems Engineering)","van Loosdrecht, Mark C.M. (promotor); Wahl, S.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Cells are complex systems continuously exposed to changing, dynamic environments. Understanding how cells respond and adapt is of great interest not only from a fundamental viewpoint but also for the development of solutions for current challenges in medical, industrial and environmental research fields.
In this thesis, the bacterial community member Candidatus Accumulibacter phosphatis (hereafter referred to as Accumulibacter) from the functional group of Polyphosphate Accumulating Organisms (PAOs) was selected as the study object due to its key role in phosphorus removal in wastewater treatment processes and its adaptive metabolic strategies to thrive in fluctuating environments. These microorganisms are enriched in Enhanced Biological Phosphorus Removal (EBPR) systems, where they experience cyclic absence and presence of external electron acceptors (here, anaerobic and aerobic conditions, respectively). These fluctuations, together with the non-continuous availability of nutrients, lead to intricate metabolic strategies. While the overall metabolic traits of these bacteria are well described, the non-availability of isolates has led to controversial hypotheses on which metabolic pathways are used (structure), when these pathways are active (function), and what mechanisms control the operation of these pathways (regulation). These hypotheses were further analysed and discussed in this dissertation.
While the bacterial community member Accumulibacter was the cellular system studied here, this doctoral dissertation explored and combined systems biology methods to improve the mechanistic understanding of cells as structured metabolic systems with functional and regulatory frameworks, which respond and adapt to changing external conditions (environments). The developed and applied approaches are not specific to Accumulibacter and can be applied to other cell systems and communities exposed to changing environments.","Polyphosphate Accumulating Organisms; Dynamics; Metabolism; Environment; Wastewater treatment; Systems microbiology","en","doctoral thesis","","978-94-6384-205-1","","","","","","","","","OLD BT/Cell Systems Engineering","","",""
"uuid:94a015c7-d6f1-46f4-a511-1a97006554e5","http://resolver.tudelft.nl/uuid:94a015c7-d6f1-46f4-a511-1a97006554e5","Inclusion in sugarcane ethanol expansion: Perceptions of local stakeholders in the Brazilian context","Marques Postal, Andreia (TU Delft BT/Biotechnology and Society)","Osseweijer, P. (promotor); Da Silveira, J.M.F.J. (promotor); Asveld, L. (copromotor); Delft University of Technology (degree granting institution)","2021","The global search for alternative energies has put Brazil's sugarcane at the centre of the debate about the pros and cons of first-generation bioenergy as a supplier of global needs for cleaner energy. In fact, the already mature and structured sugar cane sector attracted important investments for its expansion. However, this led to global concerns about its social and environmental impact that soon became important planning criteria in the transition strategy to the bioeconomy. After all, the bioeconomy is intended to contribute to social development that is responsible for current and future generations.
However, the debate about the impact of Brazil’s sugarcane was based, on the one hand, on highly aggregated data and generalizations about the impacts of different raw materials and, on the other hand, on case studies with a limited number of respondents, whose conclusions are unable to reflect the whole sector. According to some authors, the low representation of local communities in the process of expansion impaired the otherwise positive impacts, especially for poverty reduction and social development. In order to identify whether the desired inclusion for sustainable development actually took place, we need an in-depth, broad and inclusive analysis of the most impacted actors, namely the communities surrounding the new plants being built. To fill this gap, this research was set up to understand, value, systematize and incorporate local perceptions regarding the impact of sugarcane expansion areas. For this, a literature review and analysis of secondary data are used as methodologies to support the content analysis of interviews conducted in expansion regions of 5 states in the Centre-South of Brazil, the main sugarcane expansion region in the country.","","en","doctoral thesis","","978-94-6419-170-7","","","","","","","","","BT/Biotechnology and Society","","",""
"uuid:11f2bd1a-c4f7-4f32-af75-b441735fa2fa","http://resolver.tudelft.nl/uuid:11f2bd1a-c4f7-4f32-af75-b441735fa2fa","Spatial Planning for Urban Resilience in the Face of the Flood Risk: Institutional Actions, Opportunities and Challenges","Meng, M. (TU Delft Spatial Planning and Strategy)","Nadin, V. (promotor); Stead, D. (promotor); Dabrowski, M.M. (copromotor); Delft University of Technology (degree granting institution)","2021","The research was inspired by the increasing impact of extreme weather events and changing climate patterns on flood-prone regions and cities, and the consequent human and economic costs. Despite global efforts for flood resilience and climate adaptation involving climate analysts, economists, social scientists, politicians, hydrological engineers, spatial planners, and policymakers, it is only partially clear how best to construct resilience measures and implement concrete initiatives. The complexity of institutions is a key factor that is often neglected, and which needs further investigation. The thesis examines the institutional arrangements that determine the role of spatial planning in managing flood risk, through an in-depth case study of Guangzhou, one of the most vulnerable cities in China and globally. The thesis employs theories of historical institutionalism, planning procedure and planning tools, policy framing and collaborative governance, to explore the mechanisms and factors that influence the creative planning and design process. Content analysis, GIS-based mapping, stakeholder analysis and TOWS analysis are used to investigate data from official policy documents, grey literature, geo-information data and interview scripts. The findings indicate that institutional arrangements, such as long-established planning traditions, formal planning procedures and tools, policy framing patterns and contextual organisational factors, determine spatial planning’s role in managing flood risk. 
They do this through (1) the extent of the changeability of an established planning system towards expanded flood resilience measures; (2) the performance of cross-level communication and boundary-spanning work between planning and water management; (3) the legal framework that planners and hydrological engineers follow; and (4) the capacities of planning and water management institutions to work on flood issues. This research shows how to apply knowledge from policy science, political science, institutional science and administration, to analyse the nature of the planning process in tackling the urgent challenge of flood risk and climate change.","","en","doctoral thesis","","978-94-6366-386-1","","","","A+BE | Architecture and the Built Environment No 3 (2021)","","","","","Spatial Planning and Strategy","","",""
"uuid:a2740d8c-f08c-4fdd-96b8-54dd5d6fee01","http://resolver.tudelft.nl/uuid:a2740d8c-f08c-4fdd-96b8-54dd5d6fee01","Alternative aviation fuels in Brazil: Environmental performance and economic feasibility","Silva Capaz, R. (TU Delft BT/Biotechnology and Society)","Osseweijer, P. (promotor); Posada Duque, J.A. (copromotor); Seabra, Joaquim E.A. (promotor); Delft University of Technology (degree granting institution)","2021","The aviation sector is responsible for only 3% of the anthropogenic carbon emissions in the world. However, this transport mode – which demands 3-fold more energy per capita than other collective modes, such as railway and bus transportation – is exclusively supplied by fossil fuels, and it has grown at an impressive rate of 7.5% per year in the last decade in the world. In line with the global aims to reduce Greenhouse Gases (GHG) emissions and the dependency on fossil fuels, the decarbonization of the aviation sector – which is typically based on cost-intensive projects with rigorous quality control – is a challenge...","","en","doctoral thesis","","978-94-6419-172-1","","","","","","","","","BT/Biotechnology and Society","","",""
"uuid:72bdc3d3-e28a-49b5-9409-9332401aba97","http://resolver.tudelft.nl/uuid:72bdc3d3-e28a-49b5-9409-9332401aba97","Aggregators’ business models for flexibility from electricity consumers","Okur, Ö. (TU Delft Energie and Industrie; TU Delft System Engineering)","Lukszo, Z. (promotor); Heijnen, P.W. (copromotor); Delft University of Technology (degree granting institution)","2021","In order to limit the amount of greenhouse gas emissions, transitioning to renewable energy sources (RES) is critical. However, integrating RES in the existing power system is not straightforward, since RES possess variable and uncertain characteristics. Due to these characteristics, as the penetration of RES increases, maintaining the balance between electricity demand and generation becomes more challenging. Therefore, to deal with the variability and uncertainty of RES, the power system needs to become more flexible. Flexibility from the demand side is acquired by modifying the electricity demand of the consumers’ assets, such as appliances and battery energy storage systems. Nevertheless, the electricity demand and generation of individual consumers in the residential and service sectors are too small to participate in electricity markets and to contribute substantially to flexibility. To overcome this, these consumers can be represented by aggregators. To trade flexibility and make a profit from it, the aggregator can choose between different business models and strategies to implement these. To make a business model viable in the long run, it should be feasible in a multi-actor context, i.e., for the aggregator, the consumers and the power system. It should contribute to the aggregator’s profit and reduce consumers’ costs (the economic feasibility). Moreover, it should provide flexibility to the power system to maintain the system balance, and should operate the consumers’ assets in a suitable way (the operational feasibility). 
This thesis aims to analyze the operational and economic feasibility of aggregators’ different business models in a multi-actor context. For this purpose, first an overview of the possible business models is provided, as well as the extent to which they differ in terms of the operational and economic aspects. After that, various optimization models are employed to study the economic and operational feasibility of these business models, the economic relation between the aggregator and the consumers, and the combination of different business models.","Aggregator; demand response; flexibility; business model; energy storage","en","doctoral thesis","","978-94-6384-198-6","","","","","","","","","Energie and Industrie","","",""
"uuid:92565bdb-cf5a-4110-abf3-1b4298720466","http://resolver.tudelft.nl/uuid:92565bdb-cf5a-4110-abf3-1b4298720466","Measuring horizontal groundwater flow with distributed temperature sensing along cables installed with direct-push equipment","des Tombe, B.F. (TU Delft Water Resources)","Bakker, M. (promotor); Delft University of Technology (degree granting institution)","2021","The pressure on groundwater systems, especially in coastal regions, increases as the population rapidly grows. In these regions, management of water tables and fluxes is important to minimize droughts, salt-water intrusion, and flooding. Proper management of such groundwater systems requires knowledge of how groundwater responds to water entering and leaving the system. Groundwater models can translate changes in inflow and outflow into changes in the groundwater table and flow. Proper calibration of these models depends on measurements of the flow and the groundwater table. While the groundwater table can be measured relatively easily, flow can only be measured either when it enters or exits groundwater systems (e.g., wells, infiltration, seepage), indirectly with tracers (solutes or heat), or with a variety of geophysical techniques. In this dissertation, a new approach is presented to measure horizontal groundwater flow in the aquifer with distributed temperature sensing (DTS) along cables that are inserted with direct-push equipment...","Hydrology; Groundwater; flow; measurement; DTS; fiber optic; heat; tracer; Residence time","en","doctoral thesis","","978-94-93184-81-7","","","","","","","","","Water Resources","","",""
"uuid:019a376a-f9f7-4e0c-ba56-91524c0b90bd","http://resolver.tudelft.nl/uuid:019a376a-f9f7-4e0c-ba56-91524c0b90bd","Linear simulation of large scale regional electricity distribution networks and its applications: Towards a controllable electricity network","van Westering, W.H.P. (TU Delft Cognitive Robotics; Alliander N.V.)","Hellendoorn, J. (promotor); Slootweg, JG (promotor); Delft University of Technology (degree granting institution)","2021","The volatility of renewable energy sources poses a significant challenge for Distribution Network Operators (DNOs), as it makes planning and maintaining a reliable electricity grid more complex. An essential tool in dealing with the uncertain behavior of renewable energy resources is the load flow simulation, i.e., the standard electricity network simulation in network design and operation. There is, however, still much untapped potential in applying this kind of simulation. The thesis presents improvements to the theory on linear load flow approximations. The resulting algorithms are then applied to various real-world problems: control of a community battery, handling very large simulations, coping with low sensor coverage and evaluating strategic scenarios with high uncertainty. Firstly, theory is presented for the control of a community battery. It is shown how such a battery can be used for grid congestion reduction, backed up by a live experiment. A charge path optimization problem is posed as a linear problem and subsequently solved by a Linear Programming (LP) algorithm. It was found that the voltages and currents can be controlled to a great degree, increasing the grid capacity significantly. Network design formulas are described with which a DNO can quickly estimate the potential (de)stabilizing effect caused by a community battery on the steady-state voltages and currents in the grid. 
Next, load flow simulations are improved by applying numerical analysis techniques, and the accuracy and efficiency of a linear load flow approach is investigated. The resulting fast load flow algorithm is then applied to a very large problem: integrally simulating the low and medium voltage network of Alliander DNO, a grid with over 22 million cable segments with a total combined length of over 88,000 km, built according to international standards. It is shown that this integral simulation can identify voltage problems much more accurately. Next, Bayesian state estimation is considered. A mathematical model is proposed to complement a limited set of real-time measurements with voltage predictions from forecast models. This method relies on Bayesian estimation formulated as a linear least squares estimation problem. The model is then applied to an IEEE benchmark and to a real network test bed. An observability analysis suggests strategies for optimal sensor placement. Next, theory is presented on coping with uncertain long-term scenarios for strategic simulations. A stochastic profile model is proposed based on copulas, which can be calibrated with technology adoption data. Using a Monte Carlo approach, the stochastic profiles of all DNO assets are then simulated, identifying parts of the network with heavy loads. Finally, the thesis demonstrates additional applications of the presented methods, such as fast network capacity checks and reducing losses via network reconfiguration, and concludes with suggestions for future research.","Electricity distribution network; energy transition; linear load flow; community battery; numerical analysis; Bayesian state estimation; Gaussian mixture models","en","doctoral thesis","","","","","","","","","","Cognitive Robotics","","","",""
"uuid:f26ac568-7d3c-4dbe-a215-1e79bc00069f","http://resolver.tudelft.nl/uuid:f26ac568-7d3c-4dbe-a215-1e79bc00069f","Virtual Exposure Control for Creative Image and Video Editing","Ziliotto Salamon, N. (TU Delft Computer Graphics and Visualisation)","Eisemann, E. (promotor); Delft University of Technology (degree granting institution)","2021","Postprocessing has become a major component in movie and image production. This step is no longer a simple cleanup and cutting step, but it involves important manipulations that contribute to the atmosphere of a movie and the perception of a still image. Several movie studios spend a great part of their budget on it, as managing the postprocessing parameters is a cumbersome task, requiring costly and specialized tools and skills. For photography, while several software packages provide automatic adjustments and filters, fine-grained editing is difficult to achieve for novice users. The reason for postprocessing is that many parameters are difficult to set correctly during the actual capture of the scene. An example is exposure time. Imagine you are at a car race and want to capture that moment. To convey a sense of motion in your photograph, you adjust the camera exposure time: not so short that all cars are frozen, nor so long that the image blurs completely. To find this balance, other camera parameters such as aperture and sensor sensitivity must be taken into account. Even the speed of the cars needs to be considered. A much more suitable solution would be to adjust the motion blur after the acquisition. Nevertheless, this is not a simple task. Typically, it requires skill and involves manipulating the image by hand, which is time consuming and highly prone to artifacts. For videos, such edits are even more complex, as spatiotemporal coherence must be observed, especially when temporal warping occurs. 
In this dissertation, we present efficient solutions for exposure control in postproduction to enable high-quality visual content generation. Next to image-manipulation algorithms, we explore acquisition-based solutions and intuitive interaction metaphors to support expressive content production. Our outcome is not only intended for professionals to reach their design visions regarding atmosphere and storytelling, but also includes semiautomatic approaches to enable novice users to achieve impactful and realistic images. Consequently, the presented results have the potential to inspire new artists, while the methods described can also be employed to simplify complex visual content creation tasks.","multimedia; image processing; interactive editing; computer graphics","en","doctoral thesis","","9789464191363","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:09e98a0c-65f8-46d3-b356-459987c0228a","http://resolver.tudelft.nl/uuid:09e98a0c-65f8-46d3-b356-459987c0228a","Pitch control for wind turbine load mitigation and enhanced wake mixing: A simulation and experimental validation study","Frederik, J.A. (TU Delft Team Jan-Willem van Wingerden)","van Wingerden, J.W. (promotor); Verhaegen, M.H.G. (promotor); Ferrari, Riccardo M.G. (promotor); Delft University of Technology (degree granting institution)","2021","The dissertation discusses different wind turbine control strategies that use pitch actuation to decrease the Levelized Cost of Energy (LCoE) of wind farms. This is achieved by either mitigating the loads experienced by individual turbines, or by enhancing wake mixing in turbines located within a wind farm. The latter strategy exploits the fact that wake mixing results in a lower wake deficit, such that turbines located in a wake can generate more power. Wind tunnel experiments are conducted to provide the next step in the development of two control strategies: Subspace Predictive Repetitive Control (SPRC) for load mitigation, and Dynamic Induction Control (DIC) for enhanced wake mixing. These strategies are compared with state-of-the-art technologies to assess their viability as wind turbine control approach. Both technologies prove to be effective in this environment, and achieve results that are similar to or better than state-of-the-art technologies. Finally, this dissertation introduces a novel pitch control technology as an alternative to the DIC approach. This technology, called the Helix approach, uses Individual Pitch Control (IPC) to enhance wake mixing and increase wind farm power generation. 
A proof of concept of the Helix approach is given through high-fidelity flow simulations.","wind turbine control; wind farm control; data-driven control; individual pitch control; wind tunnel experiments; enhanced wake mixing; helix approach","en","doctoral thesis","","9789463663731","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:ab2adf33-ef5d-413c-b403-2cfb4f9b6bae","http://resolver.tudelft.nl/uuid:ab2adf33-ef5d-413c-b403-2cfb4f9b6bae","Robust Automatic Pumping Cycle Operation of Airborne Wind Energy Systems","Rapp, S. (TU Delft Wind Energy)","van Bussel, G.J.W. (promotor); Schmehl, R. (copromotor); Delft University of Technology (degree granting institution)","2021","Airborne wind energy (AWE) is a novel technology that aims at accessing wind resources at higher altitudes which cannot be reached with conventional wind turbines. This technological challenge is accomplished using tethered aircraft or kites in combination with either onboard or ground-based generators. In the former case, the kinetic energy of the air flow is transformed into electricity and transmitted via a conductive cable to the ground. In the latter case, the aerodynamic force of the aircraft or kite is translated into tether tension. The pulling force uncoils the tether from a drum which turns a generator and hence transforms the mechanical torque into electrical power on the ground. In this case two operational modes are required: In the first mode, the tether is reeled out until the maximum length is reached. It follows a reeling in phase where the aircraft or kite glides back towards the ground station and a fraction of the generated power is used to wind the tether again onto the drum of the winch. The cycle restarts as soon as the minimum tether length is reached. These two modes combined constitute a so-called pumping cycle. Reliability is a key system property that will decide over the success of AWE as a commercially feasible technology. To reach this goal, a well designed control system is required that can achieve the nominal control objectives as well as handle disturbances such as atmospheric turbulence and mismatches between the model used for the controller derivation and the real plant. 
In light of these challenges, the present work aims to contribute to bringing AWE closer to commercial success. More specifically, a workflow to design a modular control architecture for a rigid wing AWE system operated in pumping cycle mode is presented. The thesis introduces models of different fidelity that are either directly used for the controller synthesis or to verify whether the designed controller is able to meet its objectives. A quasi-stationary analysis is performed to describe the operational flight envelope and to derive linear state space models for the longitudinal and lateral flight controller synthesis. A generic outer loop controller, independent of the specific aircraft actuation, is designed which guides the system along the traction and retraction phase reference flight paths. A ground-based winch controller is used to track the tether tension and hence the radial motion of the aircraft. To track the outer loop guidance commands, several linear and nonlinear inner loop flight controllers are proposed. All controller designs are verified in detail using Monte Carlo simulations. The resulting distributions of critical metrics are used to quantify the performance as well as the robustness of the controllers in the presence of stochastic variations in the wind field and model uncertainties. In the last part of this thesis a methodology is proposed that can be used to systematically generate conditions in which the AWE control system fails. The generated knowledge can be leveraged to create an analytic model that is able to predict a critical flight state during operation. Ultimately, this allows a mitigation maneuver to be triggered to avoid the failure.
Different prediction strategies are presented and eventually the methodology is specifically applied to the case of tether rupture condition generation, prediction and avoidance.","Airborne Wind Energy; Flight Control; Robustness; Monte Carlo Simulations","en","doctoral thesis","","978-94-6423-148-9","","","","","","","","","Wind Energy","","",""
"uuid:e930dace-0655-4b4c-9b4b-f5d846414fe7","http://resolver.tudelft.nl/uuid:e930dace-0655-4b4c-9b4b-f5d846414fe7","High-precision Versatile Ultrasonic Flow Meters Based on Matrix Transducer Arrays","Massaad Mouawad, J.M. (TU Delft ImPhys/Medical Imaging)","de Jong, N. (promotor); Verweij, M.D. (promotor); Delft University of Technology (degree granting institution)","2021","Ultrasonic flow meters are widely applied to measure flow in a variety of applications. The vast majority of ultrasonic flow meters are based on the measurement of the transit time of an acoustic pulse through the fluid. This can either be done in-line, by inserting a spool piece with ultrasonic transducers into the pipe carrying the fluid, or by clamping the transducers on an existing pipe. Clamp-on meters are attractive as they can be installed without cutting the pipe or shutting down the flow, but their stability is limited, and they are unable to measure flow profiles (in contrast with expensive multi-path in-line meters), which limits their linearity at low flow speeds. Moreover, their installation requires complex manual alignment of the transducers and input of a variety of setup parameters (e.g. pipe dimensions and material properties, speed of sound in the fluid) by the user. In this thesis, clamp-on meters based on matrix ultrasonic transducers are developed to address these drawbacks. These matrix transducers consist of a two-dimensional array of 100+ elements that enables beam steering in two directions by programming the timing of the electrical pulses applied to the elements. 
This allows three innovative measurement techniques to be developed: (1) automatic beam alignment by adjusting the steering angles so as to optimize the signal-to-noise ratio and the path of the received pulse, thus simplifying installation and improving stability; (2) multi-path measurement by steering the beam at different angles, realizing the measurement of multiple paths through the fluid with a single pair of matrix transducers, and thus providing information about the flow profile; (3) self-calibration by using pulse-echo measurements between the elements of the matrix transducer to characterize the pipe wall and fluid, thus reducing the dependence on a-priori knowledge of their properties. The most significant steps to realize this kind of sensor were taken in this thesis. The mentioned measurement techniques were elaborated, the relevant wave-propagation phenomena were analyzed, and the beamforming schemes and transducer design were worked out. Based on this, a prototype sensor was fabricated and successfully tested. Moreover, application-specific integrated circuits (ASICs) were developed with dedicated transmit and receive electronics to realize a cost-effective and accurate implementation of the beamforming and transit-time measurement.","Auto-calibration; Clamp-on flow meter; Finite elements; Lamb waves; matrix transducer arrays; Nonlinear acoustics; Signal Processing; Ultrasound","en","doctoral thesis","","978-94-6384-190-0","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:b442ec3c-75f3-491f-99af-3843b19fcb92","http://resolver.tudelft.nl/uuid:b442ec3c-75f3-491f-99af-3843b19fcb92","Interactions and Evolution of the Greenland Ice Sheet Surface Mass Balance with the Global Climate","Sellevold, R. (TU Delft Physical and Space Geodesy)","Klees, R. (promotor); Vizcaino, M. (copromotor); Delft University of Technology (degree granting institution)","2021","One of the major consequences of ongoing global warming is the melting of the Greenland ice sheet (GrIS). The GrIS, as the world’s second largest freshwater reservoir, has the potential to raise sea levels by 7.4 m (Bamber et al., 2018a,b). Such a sea level rise would have a devastating effect on coastal societies, where a large fraction of the world’s population lives. Therefore, constraining the GrIS’ contribution to sea level rise is an important and vital task to plan for the future efficiently.
Since the 1990s, the GrIS has been losing mass at an accelerated rate (Enderlin et al., 2014; Bamber et al., 2018a; Shepherd et al., 2019; Oppenheimer et al., 2019). We can separate GrIS mass loss into the contributions from the surface mass balance (SMB) and ice discharge. The SMB is the primary contributor to recent GrIS mass loss (van den Broeke et al., 2016); thus, there is a need for accurate projections of GrIS SMB, and a thorough understanding of the physical processes governing the surface mass loss under global warming. Further, the GrIS also interacts with the climate system (Fyke et al., 2018), highlighting the need for coupled global climate projections.
This thesis’ primary targets are to
1. Investigate the coevolution of the GrIS SMB and the global climate under increased greenhouse gases.
2. Examine the impact of reduced Arctic sea ice on GrIS SMB.
3. Make projections of future GrIS surface melt.
This is achieved by using the Community Earth System Model (CESM) version 2.1 (Danabasoglu et al., 2020). CESM2 is a newly developed coupled earth system model that features an online downscaling of the SMB through elevation classes (ECs), advanced snow physics (van Kampenhout et al., 2017), and a prognostic calculation of snow albedo (Flanner and Zender, 2006). Also, the EC-simulated SMB is interactive; that is, modifications of surface fluxes of mass and energy are communicated to the earth system’s other components.
This thesis presents analysis of some of the first simulations of Greenland ice sheet climate and SMB with the newly developed CESM2 and CESM2-CISM2. While many questions regarding the future of the GrIS remain, the results presented here contribute towards a better understanding of the coupled global climate and GrIS SMB evolution, and of the processes leading to GrIS surface mass loss. The first steps towards making computationally efficient and robust projections of GrIS surface melt through machine learning are also taken.","Greenland ice sheet; Surface mass balance; Global climate modeling; Climate change","en","doctoral thesis","","9789463842013","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:70c78e3e-618c-4ede-a0ee-e1379a6e598a","http://resolver.tudelft.nl/uuid:70c78e3e-618c-4ede-a0ee-e1379a6e598a","Dissemination Of Antibiotic Resistance Via Wastewater And Surface Water","Paulus, G.K. (TU Delft Sanitary Engineering)","Medema, G.J. (promotor); Berendonk, T. (copromotor); Delft University of Technology (degree granting institution)","2021","Antibiotic resistance is one of the biggest threats society is facing around the globe and has been on the rise worldwide. While antibiotic resistances play crucial roles in shaping and coordinating microbial communities in natural environments, they can lead to disastrous results when acquired by pathogens in clinical environments. Effective antibiotics not only enable the functioning and interactions necessary for our highly globalized world, but also drive advances in healthcare and are the deciding factor facilitating life-saving medical intervention such as open-heart surgery, organ transplants and chemotherapy. Increasing resistance antibiotics is threatening the medical status quo, as well as social and economic stability (Chapter 1). Water environments, especially anthropogenically impacted environments such as wastewater treatment plants, are suspected to be - not only - reservoirs for antibiotic resistance genes but also hotspots for horizontal gene transfer. Knowledge about the impact of anthropogenically impacted aqueous environments is needed in order to be able to uncover the processes, parameters and mechanisms underlying and facilitating the transfer of antibiotic resistance genes in order to be able to implement practical, useful and efficient measures in order to reduce the spread of antibiotic resistance and to reduce anthropogenic impact of antibiotic and antibiotic resistance gene pollution in the environmen.","","en","doctoral thesis","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:3647c9bb-97d1-442a-9d3e-49cd1c9e3baf","http://resolver.tudelft.nl/uuid:3647c9bb-97d1-442a-9d3e-49cd1c9e3baf","Methods for controlling deformable mirrors with hysteresis","Kazasidis, O. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); Wittrock, Ulrich (promotor); Delft University of Technology (degree granting institution)","2021","Fast adaptive optics and comparatively slower active optics are cornerstones of modern-day astronomy. Such systems are installed on most current large ground-based observatories in the visible or infrared and are included in the design of all future observatories. Their role is twofold; first, to compensate for astronomical seeing, and second, to correct for design and manufacturing errors, as well as thermal and mechanical distortions. What's more, the science goals of future large space observatories in the visible or infrared rely on active and adaptive optics systems for reaching the required wavefront accuracy and stability, with imminent examples the folded segmented primary mirror of the James Webb Space Telescope (JWST) and the deformable mirrors of the Roman Space Telescope, previously called the Wide Field Infrared Survey Telescope (WFIRST). Besides astronomy, adaptive optics find laser applications, for aberration correction and for beam shaping.
This thesis set out to explore methods for controlling deformable mirrors with hysteresis, specifically for controlling unimorph deformable mirrors developed and manufactured at the Photonics Laboratory of the FH Münster University of Applied Sciences in Germany. The technology for manufacturing unimorph deformable mirrors has been developed in the past at the Photonics Laboratory and has been expanded in a series of industrial and research projects, both for astronomical and for laser applications. Unimorph deformable mirrors are a promising technology for adaptive and active optics systems, thanks to their outstanding mechanical properties and their versatility. However, their piezoelectric actuators exhibit higher hysteresis than most other actuators. The focus of this thesis lies in accurate and precise wavefront control with unimorph deformable mirrors despite their intrinsic hysteresis.
Hysteresis can be compensated with two different approaches. In the feedforward scheme, a mathematical model of the hysteresis is constructed and its inverse model is used in open-loop to drive the deformable mirror. In the feedback scheme, the wavefront deviation — including the hysteresis influence — is measured by a wavefront sensor and the deformable mirror is controlled in closed-loop. These two approaches can be combined for optimal performance. The open-loop compensation using the Prandtl-Ishlinskii formalism has previously been implemented at the Photonics Laboratory and was found to reduce the hysteresis from 15% to about 2%. Nevertheless, the residual uncompensated hysteresis still limits the performance of optical systems that have to be almost diffraction-limited. This thesis consists of two parts that reflect the two activities carried out during this PhD project. The first is image-based aberration correction using extended scenes. This aspires to complement existing technologies for the wavefront control in future space telescopes using active optics. The second is fast defocus sensing for the implementation of a closed-loop focus-shifter, with potential application in laser micromachining.
In the first part, the feedback for controlling the unimorph deformable mirror is generated from the imaging detector. This control should correct for constant or slow-changing effects and is classified as ""active optics."" We designed and built a testbed to evaluate control strategies for compensating for aberrations generated in a conjugate plane. The image-based wavefront correction is designed as a blind optimization with the following configuration parameters: the merit function, the control domain, and the algorithm. We use a common image-sharpness metric as merit function and study it extensively in the domain of the Zernike modes. We show that for a severely aberrated system, the Zernike modes are not orthogonal to each other with respect to the merit function. This effect, which we call ""aberration balancing,"" means that the performance of wavefront-free adaptive and active optics systems can be improved by adding specific low-order aberrations in the case of uncorrectable high-order aberrations, where the amount of the additional aberration depends on the power spectral density of the spatial frequencies of the object. We use this technique in simulation to show how a moon that was hidden in the halo of its planet comes into sight, by balancing secondary astigmatism 0° with astigmatism 0°; and in experiment to increase the limiting resolution of our testbed, by balancing spherical aberration with defocus.
With the knowledge of the merit function landscape, we design the control algorithm to account for valleys, plateaus and the aberration balancing. The algorithm is based on the heuristic hill climbing technique, which minimizes the influence of hysteresis. We compare image-based aberration correction in three different control domains, namely the voltage domain, the domain of the Zernike modes, and the domain of the singular modes of the deformable mirror. We demonstrate a combined control scheme that deals with the residual hysteresis left over by the open-loop compensation and with the high dimensionality of the control domains. Moreover, we experimentally show that the control in the domain of the singular modes of the deformable mirror is advantageous for the correction of random aberrations in comparison to the domain of the Zernike modes.
In the second part of this thesis, the feedback for controlling the unimorph deformable mirror is generated from an additional sensor. Here, the goal is to perform fast focus control, the simplest kind of beam shaping, that falls into the category of ""adaptive optics."" Recently, the Photonics Laboratory presented a novel unimorph deformable mirror that allows for dynamic focus shift with an actuation rate of 2 kHz. Because of hysteresis and creep, this mirror has to be operated in closed-loop. In the past, a chromatic confocal sensor measured the displacement of the back side of the mirror and the signal was fed back to a PID controller. In the course of this PhD project, a novel defocus sensor based on an astigmatic detection system has been developed. It has a bandwidth higher than 18 kHz and meets the requirements, with high-frequency performance and noise level comparable to those of the commercial chromatic confocal sensor. This sensor can open the way towards a commercial fast focus-shifter based on this mirror, circumventing the limited bandwidth and the complexity of wavefront sensors.","Active optics; Adaptive optics; Wavefront sensing; Hysteresis; Mathematical optimization","en","doctoral thesis","","978-94-6384-169-6","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:6744b23a-aec6-4852-837a-6ffd466ca24d","http://resolver.tudelft.nl/uuid:6744b23a-aec6-4852-837a-6ffd466ca24d","What does that gene do?: Gene function prediction by machine learning with applications to plants","Makrodimitris, S. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); van Ham, R.C.H.J. (promotor); Delft University of Technology (degree granting institution)","2021","Billions of people world-wide rely on plant-based food for their daily energy intake. As global warming and the spread of diseases (such as the banana Panama disease) is substantially hindering the cultivation of plants, the need to develop temperature- and/or disease-resistant varieties is getting more and more pressing. The field of plant breeding has been revolutionized by the use of molecular biology methods, such as DNA and RNA sequencing, which substantially accelerated the finding of genes that are likely to influence a trait of interest. The outcome of such experiments is typically a long list of candidate genes whose involvement in the trait needs to be experimentally validated. Prioritizing these experiments, i.e. testing the most promising genes first, can save a lot of time, effort and money, but is often hindered by the fact that the cellular roles (functions) of plant genes and the corresponding proteins is often unknown. Experimentally discovering the functions of genes is equally time-consuming and costly, so it is crucial to have computer algorithms that can automatically predict gene or protein functionswith high accuracy. After decades of research on this field, considerable progress has been made, but we are still far from a widely-acceptable and accurate solution to the problem.
This thesis explores different research directions to improve protein function prediction, by developing new machine learning algorithms. These directions include new ways to represent proteins, exploiting semantic relationships among functions, and function-specific feature selection. The thesis also deals with the problem of missing protein interaction data for non-model species and quantifies its effect on protein function prediction. All in all, it provides novel insights to the problem that future work can build upon to lead to new breakthroughs.","bioinformatics; gene function prediction; gene ontology; machine learning; plant biology","en","doctoral thesis","","978-94-6423-151-9","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:bb898ff7-2982-4a8a-bee5-4a1f02d526e6","http://resolver.tudelft.nl/uuid:bb898ff7-2982-4a8a-bee5-4a1f02d526e6","Satellite data in rainfall-runoff models: Exploring new opportunities for semi-arid, data-scarce river basins","Hulsman, P. (TU Delft Water Resources)","Hrachowitz, M. (promotor); Savenije, Hubert (promotor); Delft University of Technology (degree granting institution)","2021","Throughout the world, many people have been affected by water related issues in the past, some more extreme than others. In this context, hydrological models have often been used to gain more insight into the situation and to limit negative impacts as much as possible. There are many different types of hydrological models with each their strengths and weaknesses, but all models need a certain amount of reliable data. However, many river basins throughout the globe are poorly gauged which means there are only limited reliable ground observations available. That is why satellite observations provide many interesting opportunities to fill this gap of which many are not yet explored. Therefore the goal of this research was to answer the following main research question: What is the added value of satellite-based observations for hydrological modelling in a semi-arid, data-scarce river basin? This research focused on the Luangwa River in Zambia which is a large tributary of the Zambezi River. This semi-arid river basin is poorly gauged, mostly unregulated and sparsely populated. A process-based distributed hydrological model with sub-grid heterogeneity was developed in this research and modified step-wise when exploring the added value of different satellite observations for different aspects within hydrological modelling. This included analysing the information content of satellite-based river water level, i.e. altimetry, for model calibration, as well as evaporation and total water storage for step-wise model structure improvement and spatial-temporal model calibration. 
Overall, satellite-based observations have been used successfully to improve our understanding of the hydrological processes in the data-scarce Luangwa river basin, to improve the hydrological model structure and to allow for more reliable parameter identifications in the absence of reliable discharge data. This research focused on a selection of satellite-based observations and hydrological model applications. In other words, there remain many opportunities yet to be explored!","hydrological modelling; semi-arid regions; poorly gauged; satellite data","en","doctoral thesis","","978-94-6421-226-6","","","","","","","","","Water Resources","","",""
"uuid:95a72d63-03c2-4500-9cf2-a37e3cc7ad44","http://resolver.tudelft.nl/uuid:95a72d63-03c2-4500-9cf2-a37e3cc7ad44","The influence of essential growth nutrients on PHA production from waste","Mulders, M. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Kleerebezem, R. (promotor); Delft University of Technology (degree granting institution)","2021","Volatile fatty acids (VFA) may serve as building blocks for Polyhydroxyalkanoates (PHA) production and can be derived from waste streams. Ideal streams for PHA production contain a high Chemical Oxygen Demand (COD) to nutrient ratio, such as (waste)water from a paper-mill factory or candy-bar factory. The (waste)water generated by these companies are usually treated anaerobically with the final product being methane containing biogas. Usually, the methane is burned to produce either heat or electricity. Potentially, more value can be added to these streams by producing VFA and/or PHA. PHA can be produced using microbial enrichment cultures that can be established by cultivation in a selective environment that favours the growth of PHA producing microorganisms. Some advantages of using open cultures are that no sterilization and expensive equipment is required compared to pure culture biotechnology. Open culture biotechnology can be effectively applied when the right selection pressure for a specific microbial trait is identified. The microorganism that is most effective in the given conditions will win the competition, i.e. the strongest will survive. A selection criteria for PHA productis is consuming substrate very fast by first making a storage polymer (in this case PHA) from the supplied substrate. The PHA producers prefer VFA as substrate, hence it is important to maximize the VFA content in the substrate stream. For the production of VFA in the product chain towards PHA it is important to minimize the solid content in the feedstock for PHA production. 
The objective of the research described in this thesis was to gain more insight into the two-step upstream process for PHA production from agricultural waste streams. The first step concerns the maximization of the VFA concentration in the feedstock. Optimization of VFA production was investigated using the granular sludge process in order to maximize the volumetric VFA production capacity and to minimize the solids concentration in the effluent. Two process variables were investigated regarding the PHA production process. Firstly, the influence of the presence of nutrients on PHA production was investigated using PHA-producing enrichment cultures. Secondly, the production of PHA was investigated using the leachate of the organic fraction of municipal solid waste (OFMSW) at pilot scale.","","en","doctoral thesis","","978-94-6423-164-9","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:e94efb97-b56d-415f-9044-57da2cb57fb9","http://resolver.tudelft.nl/uuid:e94efb97-b56d-415f-9044-57da2cb57fb9","Failure Analysis and Diagnosis Scheme in Distribution Systems","Bhandia, R. (TU Delft Intelligent Electrical Power Grids)","Palensky, P. (promotor); Cvetkovic, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Continuous and rapid technological advancements have transformed the modern day power system. Increased global inter-connectivity has made reliable power supply a critical requirement. A small outage can cascade into a blackout causing great inconvenience and significant monetary damage. These concerns highlight the need of an additional layer of proactive approach in conventional protection schemes. The focus of such an approach would be to shift from reacting to a failure to anticipating a failure. Anticipating a failure gives time to better prepare and mitigate the failure by efficient allocation of resources in order to limit the negative consequences. Starting from the inception of the event causing the failure to the final occurrence of the failure, the time-period in between is termed as the pre-failure period where the signatures of the incipient failure can be observed. The availability of high-resolution devices has improved monitoring of grid operations during this pre-failure period. Improved monitoring enhances situational awareness leading to easier detection of incipient failure signatures. Research conducted in this field has led to development of few failure anticipation techniques but the application potential of some are restricted to specific equipment or phenomena while that of others are restricted by resource requirements. 
There is a need to address the research gap of a comprehensive failure anticipation technique that fulfills three major criteria: low computational burden, wide applicability in different scenarios, and installation compatibility with existing grid monitoring devices for economical implementation. The research conducted in this thesis aims to address this research gap by developing a comprehensive failure anticipation technology titled Failure Anticipation and Diagnosis Scheme (FADS) for AC distribution systems. FADS implementation broadly comprises three functionalities. The first functionality is concerned with quick and accurate identification of incipient failure signatures. Almost all failure anticipation techniques rely on cross-referencing historical databases or identifying specific patterns in order to detect incipient failure signatures. However, incipient failure signatures seldom manifest in the same patterns. Hence, FADS relies on the fundamental aspect that pure AC sinusoids are complex exponentials. Incipient failure signatures would invariably violate certain properties of complex exponentials and manifest as waveform distortions, which are leveraged by FADS to detect the signatures. The second functionality involves the data processing of the distortion data. Several novel parameters are introduced in the second functionality that help in processing the data obtained from the first functionality. The use of novel parameters helps in accurate assessment of the stress experienced by the grid operations due to the event causing distortions. Such an assessment helps FADS to be robust to false positives and false negatives. Finally, the third functionality involves interpretation of the information obtained through data processing. The interpretation provides metrics to rank the severity of the damage the event can inflict on grid operations, along with specific inputs on the event location.
This interpreted and refined data provides the Distribution System Operator (DSO) with the means for informed decision-making and time-efficient resource allocation for failure mitigation purposes. The different FADS functionalities work in unison to detect incipient failure signatures and extract valuable information, which can then be used to plan mitigation strategies. The different functionalities of FADS are designed to be installed in such a manner that the incremental costs of widespread FADS implementation are minimal. The evaluation of FADS in this thesis is conducted through a series of stringent and realistic test cases. The test cases are simulated on the standard IEEE-13 and IEEE-34 node test feeders. The first set of simulation studies focuses on detecting High Impedance Faults (HIFs), as conventional protection schemes mostly fail to detect them. The test cases comprise several novel, stringent scenarios to evaluate the capability of FADS to accurately distinguish and detect HIF events among multiple switching events and normal grid actions at different sections of the grid. The second set of simulation studies involves recreating, in laboratory-based simulations, the transient behavior generated by real-life incipient equipment failure conditions. The simulations are used to evaluate the ability of FADS to detect and assess the incipient failure before the equipment breakdown occurs. The next set of studies focuses on analyzing how the FADS performance in the previous simulation studies translates into improvement of major reliability indices, mainly the System Average Interruption Duration Index (SAIDI). Improvement in reliability indices is a major area of concern for utilities, and the results obtained from FADS implementation are further quantified to provide a range of possible improvement in the SAIDI value in percentage terms.
Finally, the proposed benefit of FADS is illustrated through implementation on real field data provided by the Dutch DSO, Stedin B.V. In the course of FADS implementation, a few shortcomings were noticed and possibilities for further improvement were identified. The final chapters of this thesis discuss these shortcomings and recommend improvements for future research. The functionalities of FADS are flexible and largely user-dependent, and can be systematically improved over time to make FADS a global standard for industrial and research applications for failure anticipation in AC distribution systems.","Incipient failure detection; waveform analytics; intelligent grid analytics system; power system transients; situational awareness","en","doctoral thesis","","978-94-6419-151-6","","","","","","2021-08-01","","","Intelligent Electrical Power Grids","","",""
"uuid:1dd97312-bb7b-43c5-a931-303bd8649883","http://resolver.tudelft.nl/uuid:1dd97312-bb7b-43c5-a931-303bd8649883","Human Use of Flexible Tools for Dynamic Teleoperation","Aiple, M. (TU Delft Biomechatronics & Human-Machine Control)","van der Helm, F.C.T. (promotor); Schiele, A. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis explores possibilities and constraints in performing dynamic tasks through teleoperated robots. Teleoperation is commonly used to execute tasks by a human operator guiding a robot remotely through means of a teleoperation system. The teleoperation system bidirectionally mirrors the motions and forces between a handle device held by the operator and a robotic tool device interacting with the environment. While the use of teleoperation to execute slow motions precisely and with appropriate forces has been researched intensively, teleoperation for dynamic motions occurring in explosive movement tasks like throwing, hammering, shaking, and jolting is not yet sufficiently understood. Nevertheless, these motions also belong to the portfolio of motions that non-disabled humans routinely carry out. A deeper understanding of teleoperation for explosive movement tasks could be especially helpful for applications of field robotics, like disaster recovery scenarios, future planetary exploration missions, and other teleoperation applications in unknown environments where some degree of improvisation is useful.","Dynamic Teleoperation; Variable Impedance Actuator; Elastic Tool Use","en","doctoral thesis","","978-94-6419-130-1","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:59b7dcd5-bef9-41e9-80b3-e90412f1d5f8","http://resolver.tudelft.nl/uuid:59b7dcd5-bef9-41e9-80b3-e90412f1d5f8","Unraveling redox metabolism in Escherichia coli","Velasco Alvarez, M.I. (TU Delft OLD BT/Cell Systems Engineering)","van Loosdrecht, Mark C.M. (promotor); Wahl, S.A. (copromotor); Delft University of Technology (degree granting institution)","2021","A wide variety of microorganisms are increasingly being employed for the production of a broad diversity of compounds, instead of using fossil fuel. The production of such compounds faces different challenges for an optimized production. Identifying the bottlenecks in the synthesis of such products offers the possibility to reduce these bottlenecks and increase the production efficiency. This could lead to economically feasible production of these compounds without utilizing fossil fuels. This thesis focuses specifically on the production of PHB using E. coli, by studying the redox modified metabolism.
Different metabolic pathways that might favour the flux towards PHB production were evaluated. More specifically, the bottlenecks in the synthesis and the conditions that might favour or limit its production were carefully analyzed in this thesis. The main bottlenecks in PHB production that have been identified and discussed in the literature are the precursor acetyl-CoA and the co-factor NADPH. However, their role has not been entirely clarified in the field of metabolic engineering. Most studies use flux balance analysis to investigate the roles of acetyl-CoA and NADPH.
However, these analyses only provide information about the stoichiometry of a pathway and its flux distribution, whereas a thermodynamic analysis identifies the specific reaction that is furthest from equilibrium and is therefore a bottleneck for the synthesis of a product. In this thesis we combine flux balance analysis and thermodynamics to understand the pathways EMP (Embden-Meyerhof-Parnas pathway), Entner–Doudoroff pathway (EDP), and modified Embden-Meyerhof-Parnas pathway (mEMP).","","en","doctoral thesis","","978-94-6384-203-7","","","","","","","","","OLD BT/Cell Systems Engineering","","",""
"uuid:7caf6926-a673-4e01-9f2b-673f58828b9b","http://resolver.tudelft.nl/uuid:7caf6926-a673-4e01-9f2b-673f58828b9b","Accessible Hand Prostheses: 3D Printing meet Smartphones","Cuellar Lopez, J.S. (TU Delft Medical Instruments & Bio-Inspired Technology)","Breedveld, P. (promotor); Zadpoor, A.A. (promotor); Smit, G. (copromotor); Delft University of Technology (degree granting institution)","2021","The World Health Organization (WHO) estimates that there are ≈40 million amputees in developing countries and that only ≈5% of them have access to prosthetic devices. In low income countries, there are only a few big cities capable of providing reasonable healthcare services and transportation from rural areas is usually complicated, expensive, and may take several days. In most of the cases, there is a general lack of trained personnel and materials making, prosthetic workshops limited, difficult to reach, or even non-existent. 3D printing is a manufacturing method that enables fabrication of structures with unusual geometries without the need for any particular manual skill, elaborate tooling, or labour-intensive procedures. Many 3D printing techniques have become easily accessible and have opened a window for creating low-cost functional parts in a simpler way than conventional procedures. The main purpose of the research described in this thesis is to increase the accessibility of prosthetic hands among people living in low-income settings. To achieve this, the goal of the research is twofold: one, to design a transradial hand prosthesis that can be 3D printed with very few and simple post assembly steps and suffice basic user requirements; and two, to develop a 3D modelling process based on 2D photographs for the design of transradial (below the elbow) sockets that can be 3D printed.
This thesis began by exploring the possibilities of non-assembly fabrication using 3D printing techniques. Chapter 2 contains a literature review describing a number of mechanisms fabricated in a non-assembly manner by 3D printing. Chapter 3 reviews the results of fatigue testing of 3D printed polymers in order to determine the 3D printing material and settings that ensure the best fatigue performance. Chapter 4 continues with a number of design considerations that were formulated for the fabrication of non-assembly mechanisms with 3D printing. We followed these guidelines to design a functional multi-articulated hand prosthesis that was then manufactured by material extrusion 3D printing. This design procedure resulted in a hand prosthesis concept that reduces the manufacturing requirements to a single 3D printer and its building material. Chapter 5 contains a functional evaluation of the 3D printed prosthetic hand, including mechanical and user testing. To further explore the capabilities of non-assembly 3D printing, in Chapter 6 we initiated a new design process aimed at producing articulated fingers (two degrees of freedom per finger) under this manufacturing framework. For this process, we adopted a bio-inspired design approach by studying the anatomical structures of the human hand that can be translated into components of prosthetic hands and have the potential to offer improved functionality. This bio-inspired prosthetic hand achieved superior pinch force compared to our previous non-assembly BP prosthetic hand. Chapter 7 describes the method employed to obtain and process 3D models of a stump. The method is based on photos from a smartphone and a Statistical Shape Model (SSM). The algorithm translates the photos into a 3D digital shape and then introduces the digital outcome into the process of automatic anthropometry.
The outcome was later used to determine the parameters of a parametric design of a transradial socket that can be 3D printed and fitted onto the user’s residual limb. The error resulting from the automatic measurement was still too large for an acceptable socket design. The thesis ends in Chapter 8 with a pilot study of our new bio-inspired 3D printed hand design in Colombia. We employed a manual measuring method using visual cues of the stump and a measuring tape to obtain the dimensions required for the design of the socket. Through the manual measuring method and parametric socket and shaft designs, the components of the prosthetic device were produced easily and locally on a material extrusion 3D printer. The field testing in Colombia concluded that our design and manufacturing processes based on 3D printing are fast and easy to implement and open a gateway for the production of prosthetic devices in developing countries.","","en","doctoral thesis","","","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:746f5f73-1876-4371-b142-f0f3117ded6a","http://resolver.tudelft.nl/uuid:746f5f73-1876-4371-b142-f0f3117ded6a","On-road Assessment of Driver Workload and Awareness in Automated Vehicles","Stapel, J.C.J. (TU Delft Intelligent Vehicles)","Happee, R. (promotor); Gavrila, D. (promotor); Delft University of Technology (degree granting institution)","2021","Problem Definition According to the World Health Organization, traffic injuries have become the eighth cause of death and the leading cause among children and young adults. Human error, and in particular perceptual error, is among the most frequently reported causes of road fatalities. The desire to reduce traffic fatalities has led to the development of automated driving, which promises revolutionary advances in driver safety, traffic capacity and driver convenience. Since true autonomy in mixed traffic has not yet been achieved, today's automated vehicles require the driver to continuously supervise the automation and to capably intervene when necessary. However, simulator studies and experiences from disciplines such as aviation and factories have demonstrated that humans are generally ill-equipped to monitor automation for longer periods. This raises the concern that partial automation may harm rather than help traffic safety if not designed to adequately support the drivers in their supervisory tasks. Research objectives To address this concern, further insights are needed in how drivers monitor automation in complex real-world traffic, and how their behaviour and performance change with long-term automated driving experience. This dissertation sets out to investigate how real-world automation changes the availability of attentional resources, to establish where and how drivers use automation in naturalistic conditions, and evaluate how these change with experience. 
While these objectives address periods of automated driving, vehicles with automated driving functionalities will often be driven manually, when outside the operational design domain or at the driver’s preference. In these conditions, the available automation may still outperform the driver on particular tasks, such as detecting and tracking surrounding road users without bias or distraction. This dissertation therefore also contributes to the search for ways in which automation can provide meaningful support to the traffic monitoring task in manual and supervised driving. To evaluate if and when supervised automated driving negatively affects the driver’s ability to monitor, mental workload is evaluated in a Tesla Model S on public roads (Chapter 2). Voluntary automation use and attention are examined in a naturalistic driving study on public roads (Chapter 3). To evaluate the effect of experience with automated driving, Chapter 2 compares drivers with and without prior automation use, whereas Chapter 3 examines how behaviour changes over a two-month period, compared to one month of manual driving. Two studies are performed to examine how driving automation can support the driver with the monitoring task, for which an instrumented vehicle was extended with cameras that track the driver’s gaze and associate it with surrounding road users as detected by the vehicle perception system. The first study (Chapter 4) investigates how well gaze behaviour can indicate driver awareness of individual road users, and proposes a recognition task to obtain a ground truth for awareness of multiple other road users. The second study (Chapter 5) evaluates whether driver gaze and head pose can provide earlier predictions for emergency alerting and intervention systems. A crossing-pedestrian collision risk prediction system is used as a case study, in which gaze and contextual cues are evaluated for their contribution to path and risk prediction using a dynamic Bayesian network.
Findings & recommendations: Chapter 2 found that workload differed between roads with high and low traffic complexity, both for manual and automated driving, which indicates that drivers remain sensitive to changes in task demand while supervising automated driving. Drivers with prior experience in automated driving perceived a lower workload while supervising automation compared to manual driving. No workload difference was perceived by first-time users. In contrast, attentional demand as measured by a detection-response task was higher during automation use compared to manual driving, regardless of experience. This indicates that monitoring automation (SAE2) requires more mental capacity than manual driving, which suggests that, in contrast to a wide range of studies, SAE2 can increase workload. Supervising automation may therefore be beneficial for driver attention, but the perceived workload during supervision may be too low for this to occur naturally. Future work should consider calibrating drivers' perception of workload and understanding of system limitations, rather than actual task demand, to encourage attentive supervision. Chapter 3 shows that automation is mostly used on road types generally considered suitable for automated driving, with only incidental use on urban roads. This suggests that users adhere to the operational design domain of these vehicles. On highways, automation is used at all speeds, but less during short periods of slow driving. No time-in-drive, time-of-day or experience effects were found for automation use. On the highway, head pose deviation was smaller during automation use compared to manual driving but tended to increase over the first six weeks of use, which may indicate a change in monitoring strategy. Further research is needed to assess whether this difference indicates better or worse monitoring behaviour.
Chapter 4 found that drivers performed better on the recognition task when road users were relevant for the driven manoeuvre and when drivers had directed their gaze within 10 degrees of these road users. However, at least 18% of road users were recognised while only observed peripherally, suggesting that peripheral vision should not be neglected in attention monitoring. Recognition performance was not predicted by gaze metrics, and the recognition task requires further development to reduce forget rates. Further analysis is needed to compare the recognition task to established situation awareness measures once these improvements are obtained. Chapter 5 demonstrates that driver and pedestrian attention monitoring can benefit pedestrian crossing collision risk prediction when predicting further than 0.75 seconds ahead. The higher workload during supervised automation and the general adherence to the operational design domain in naturalistic driving indicate that supervising driving automation can be beneficial to driver attention and traffic safety, but literature and recent accidents demonstrate that challenges remain in encouraging such attentive behaviour. Strategies to encourage attentive supervision should therefore be further developed, as well as ways to maintain these strategies while automation technology improves in pursuit of the opposite objective of reducing engagement in the driving task. The joint analysis of driver gaze and road scene may improve driver support during manual driving and supervised automation, and benefit the development of automated driving. However, care should be taken that systems which use driver attention or rely on other contextual cues do not become susceptible to the same mistakes that drivers tend to make. While careful design approaches can reduce the risk of mimicking human error, validation will ultimately require a reliable way to distinguish between awareness and inattentional blindness.
The instrumentation and studies conducted with on-road automation demonstrate that on-road research is becoming more practical and accessible than ever before, thanks to recent developments in automation. The observation that inexperienced drivers perceive a higher workload during on-road automation than in simulators testifies to the importance of on-road driving research. Challenges encountered during the naturalistic study and the attention study demonstrate that the instrumentation and processing have to be designed and tested carefully for on-road research to be effective.","situation awareness; naturalistic driving; driver support","en","doctoral thesis","","978-94-6419-134-9","","","","","","2021-02-03","","","Intelligent Vehicles","","",""
"uuid:c898c8bf-7210-4a15-a358-a949ef2d71d2","http://resolver.tudelft.nl/uuid:c898c8bf-7210-4a15-a358-a949ef2d71d2","Computational fluid dynamics for non-conventional power cycles: Turbulence modelling of supercritical fluids and simulations of high-expansion turbines","Otero Rodriguez, G.J. (TU Delft Energy Technology)","Pecnik, Rene (promotor); Klein, S.A. (promotor); Delft University of Technology (degree granting institution)","2021","The global temperature rise, which directly results from greenhouse gases emitted by burning fossil fuels, requires humanity to harness renewable energy sources at an increased rate. However, renewable energy sources are either highly intermittent, such as wind and solar radiation, or available at low temperatures leading to low efficiencies of current thermal conversion systems.
Two exciting technologies that can alleviate the low thermal conversion efficiencies of power plants with low-temperature heat sources are supercritical carbon dioxide (s-CO2) and organic Rankine cycles (ORCs). Compared to conventional power cycles, ORCs and s-CO2 power cycles use different working media (e.g., CO2 and hydrocarbons), such that the working fluid provides an additional degree of freedom to better adapt to low-grade heat sources. As a consequence, the power cycles have higher thermal efficiency and a more compact design.
However, the main difficulty in designing highly efficient components of these non-conventional power plants lies in the fact that the heat exchangers and the turbines operate either with fluids of high molecular complexity or with fluids in highly non-ideal thermodynamic conditions. These complexities make it challenging to accurately design efficient components with computational fluid dynamics (CFD) software that can reliably predict heat transfer and pressure losses in heat exchangers, and aerodynamic performance parameters in turbomachinery equipment.","non-conventional power cycles; turbulence modeling of supercritical fluids; fluid dynamic simulations of high-expansion turbines","en","doctoral thesis","","978-94-6419-149-3","","","","","","","","","Energy Technology","","",""
"uuid:8ed1e48a-5c00-47d9-b1bf-ef7d06fab048","http://resolver.tudelft.nl/uuid:8ed1e48a-5c00-47d9-b1bf-ef7d06fab048","Behavior of reinforcing steel and reinforced concrete undergoing stray current","Chen, Zhipei (TU Delft Materials and Environment)","van Breugel, K. (promotor); Koleva, D.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Currents flowing along paths not being elements of a purpose-built electric circuit, are called stray currents. Various types of reinforced concrete structures (such as viaducts, bridges and tunnels) in the neighborhoods of railways may be subjected to stray current leaking from the rails. In these cases the concrete pore solution acts as an electrolyte, and the reinforcing rebars (or pre-stressed steel wires) embedded in concrete act as conductors, which can “pick up” the stray current and can corrode.
The understanding of stray current-induced corrosion of steel rebar in concrete remains incomplete, as it is challenging to inspect in detail the full length of steel rebar embedded in concrete. Most previous understanding and preventive measures for stray current corrosion stem from investigations or field tests on pipelines. In addition, it is difficult to rebuild or repair structures under or near rail transit lines. All these reasons indicate that stray current corrosion of reinforced concrete structures is in need of more in-depth investigation and understanding.
As an expansion of the current body of knowledge on stray current corrosion of steel rebar in cement-based materials, this research aims to be a step towards a better understanding of stray current corrosion mechanisms and a basis for feasible preventive measures against stray current-induced corrosion of reinforced concrete structures.","Stray current; anodic polarization; steel rebar; corrosion; mortar; interface; bond; electrochemical response; rebar orientation","en","doctoral thesis","","978-94-6419-150-9","","","","","","","","","Materials and Environment","","",""
"uuid:9ac826a1-2203-4f23-b074-3d765abd73e3","http://resolver.tudelft.nl/uuid:9ac826a1-2203-4f23-b074-3d765abd73e3","Circular Strategies Enabled by the Internet of Things: Opportunities, Implementation Challenges, and Environmental Impact","Ingemarsdotter, E.K. (TU Delft Circular Product Design)","Balkenende, A.R. (promotor); Jamsin, E. (copromotor); Delft University of Technology (degree granting institution)","2021","The concept of a ‘Circular Economy’ (CE) has been gaining traction in business, policy, and academia. It envisions an economy powered by renewable energy in which the value of products and materials are preserved for as long as possible. ‘Design for Circular Economy’ is emerging as a research field as well as a branch of sustainable design practice. Design strategies for the CE include energy and material efficiency, increased utilization, maintenance, repair, reuse, remanufacturing, and recycling.
As more and more products are equipped with digital functionalities and connected to the ‘Internet of Things’ (IoT), new opportunities arise for circular and sustainable design. However, research at the intersection of IoT and CE is still in an early phase, and companies are only starting to explore what is possible. There is still a lack of research-based design guidance for companies aiming to use IoT to support ‘circular strategies’. In particular, little is known about the actual environmental impact of IoT-enabled circular strategies.
This thesis sets out to study how companies can use IoT to support circular strategies. By doing so, the aim is to provide guidance to companies who want to design and implement circular products and services. Focus is placed on understanding the opportunities for companies, as well as the implementation challenges and environmental impact of IoT-enabled circular strategies.","Circular Economy; Digitalization; Circular Business Models; Sustainable ICT; Condition-Based Maintenance; Life Cycle Assessment","en","doctoral thesis","","978-94-6366-369-4","","","","","","","","","Circular Product Design","","",""
"uuid:a5bdb2ef-c02c-45e9-bb92-07961d3ef28b","http://resolver.tudelft.nl/uuid:a5bdb2ef-c02c-45e9-bb92-07961d3ef28b","The impact of heterogeneity on geothermal production: simulation benchmarks and applications","Wang, Y. (TU Delft Reservoir Engineering)","Voskov, D.V. (promotor); Bruhn, D.F. (promotor); Delft University of Technology (degree granting institution)","2021","Numerical simulation plays an important role for the efficient development of geothermal resources, considering all the uncertain and sensitive parameters that exist within the subsurface and during the operations. This thesis describes the numerical modeling of geothermal developments of various types and in various situations using the newly developed open-source numerical simulator, called Delft Advanced Research Terra Simulator, shortly DARTS. The main objective of this thesis is to investigate the influence of heterogeneity to geothermal developments...","heterogeneous reservoir; geothermal benchmark; sensitivity analysis; uncertainty quantification; fractured reservoir","en","doctoral thesis","","978-94-6384-197-9","","","","","","","","","Reservoir Engineering","","",""
"uuid:6b9441d5-6e48-4046-b88d-d84178e16bcb","http://resolver.tudelft.nl/uuid:6b9441d5-6e48-4046-b88d-d84178e16bcb","Radical shifts and slow adaptions: The transformation of patterns of dwelling and urban planning since the discovery of oil in Dammam Metropolitan Area, Saudi Arabia","Al Kurdi, F.F.A. (TU Delft OLD Woningbouw)","van Gameren, D.E. (promotor); Delft University of Technology (degree granting institution)","2021","The main objective of this study is: to investigate the impact of the rapid evolution that occurred by several oil booms on the transformation of dwellings ’patterns on all cities components and urban design; and to document its’ effect on the home environment in Dammam Metropolitan Area, Saudi Arabia. It also enables the researcher to understand and describe the aspects that motivate people to adopt and refine the internal and external elements of their home environment and urban design. Finally, the study proposes guidelines and suggestion for urban space design and house form in Saudi neighbourhoods of the future.
The study plays a significant role in describing and understanding the development of house form and urban design. The research studies the house from the onset of residential settlements in the triplet cities of Dammam, Dhahran and Al-Khobar in the Eastern Province of Saudi Arabia until the contemporary period. The research also determines the main factors behind the changes within each period. The analysis of changes and their associated factors provides a framework for formulating building regulations and housing policy that match the inhabitants’ needs, for the sake of appropriate dwellings and planned residential neighbourhoods that meet the needs of the population.
It is essential to provide designers and urban planners with the ability to understand the effect of urban design on continued socio-cultural values, so that they can respect these values and create a high-quality, sustainable environment in their future designs. In summary, the research findings reveal that the oil discovery period has had a dramatic effect on urban design, which in turn has affected house forms. The division of land, the street layout and even public utilities, as well as the house form and layout (in terms of overall space organisation, the distribution and utilisation of rooms, and facades), all changed radically and then adapted slowly along with the development of the region.","","en","doctoral thesis","","","","","","","","","","","OLD Woningbouw","","",""
"uuid:c768cd19-f45e-4b1a-94e1-2a828d6cf175","http://resolver.tudelft.nl/uuid:c768cd19-f45e-4b1a-94e1-2a828d6cf175","From city branding to urban transformation: How do Chinese cities implement city branding strategies?","Ma, W. (TU Delft Organisation & Governance)","Veeneman, Wijnand (promotor); de Jong, W.M. (promotor); de Bruijne, M.L.C. (copromotor); Delft University of Technology (degree granting institution)","2021","Chinese cities have experienced unprecedented economic growth and urban population expansion in the last four decades. However, a variety of social and environmental problems is associated with urbanization. Cities try to grapple with these challenges but simultaneously find themselves locked in an intense competition with other cities. City branding is viewed as an essential strategy to remain competitive, improve their environmental performance, experience a sustainable urban transformation. However, very little is known about how cities actually implement city branding strategies. This study distinguishes the concepts in use and explores the evolution of research in place branding literature. A progressive relationship between city promotion, city marketing and city branding is proposed and empirically examined. Subsequently, a specific city brand is explored to study how different policy instruments are adopted and configured to realize urban transformation goals. Finally, a detailed investigation in a medium-sized Chinese city shows how stakeholders interact with city policymakers to create and implement city brands. The findings show that cities can apply promotion, marketing or branding strategies to achieve different urban development goals. 
The study concludes that to successfully implement city branding, extensive stakeholder participation, continuous political commitment and reasonable application of policy instruments are necessary.","city branding; Urban transformation; sustainability; Chinese cities; Policy implementation; Policy Instruments; Low carbon city","en","doctoral thesis","Delft University of Technology, TPM","978-94-6366-372-4","","","","","","2021-06-01","","","Organisation & Governance","","",""
"uuid:6be49718-24c9-4bd4-94cc-f3968662385e","http://resolver.tudelft.nl/uuid:6be49718-24c9-4bd4-94cc-f3968662385e","Towards a systematic design approach of D-regions in reinforced concrete: Optimization-based generation of Strut-and-Tie models","Xia, Y. (TU Delft Applied Mechanics)","Hendriks, M.A.N. (promotor); Langelaar, Matthijs (promotor); Delft University of Technology (degree granting institution)","2021","Reinforced Concrete (RC) structures are widely used in our society for more than a century. In order to design safe and economical RC structures, various methods have been proposed by engineers and researchers. Remarkably, it still is a challenging task for engineers to design D-regions of RC structures, regions with nonlinear strain distributions. Strut-and-Tie Modelling (STM) is a well-known method for designing such regions. The STM method uses a truss-analogy model to represent the force flow within the D-region, thereby providing insight to engineers for reinforcement design. The relative simplicity of the method and the fact that STM leads to a safe design are beneficial to engineering practice. However, in investigations of the STM method, the creation of suitable truss-analogy models has been identified as the key problem for a systematic application of STM. During the past three decades researchers have conducted intensive efforts to find systematic approaches for obtaining truss-analogy models for the STM method. Adopting topology optimization (TO) methods to assist the making of Strut-and-Tie (ST) models appears the most promising direction. For this reason, various TO methods have been proposed, however which method leads to the most suitable ST models is still unknown. Very few investigations have been carried out regarding the systematic evaluation of TO results from the perspective of the STM method. In this thesis, a procedure to evaluate the TO result for STM is presented. 
Using this procedure, an evaluation of TO methods revealed an urgent and challenging problem of generating a suitable ST model in the TO process. Currently, TO methods only provide optimized material layouts as inspiration for creating ST models. Manual and subjective adjustments are required to convert TO results into adequate ST models. These additional processes not only affect the performance of the desired design, but also hinder the application of TO methods for STM. In this thesis, first a 2D generation method that integrates TO, topology extraction and shape optimization is proposed to solve this problem. The proposed method successfully generates valid ST models for D-regions automatically and without manual adjustments. In addition, an evaluation procedure adopting nonlinear finite element analysis (NLFEA) is proposed to evaluate the performance of the generated ST models. Based on the evaluation results, the generated ST models show a high stiffness and sufficient, yet not overly conservative load capacity. By comparing the generated ST models with various previously manually-created ST models, the generated ST models lead to the most economical steel usage relative to load capacity. Through two case studies and three parameter investigations, the effectiveness of the proposed generation method is validated. For 3D D-regions, generating suitable ST models is an even more challenging task. Therefore, subsequently, the proposed generation method was extended to 3D conditions. In the 3D generation method, three additional measures are adopted to improve the computational efficiency of the TO process, and a new robust procedure is proposed to extract 3D truss-like structures from the TO results. Three 3D D-regions are investigated, and the corresponding ST models are generated based on the proposed method. Again, the generated ST models lead to economically superior designs compared to the manually created ST models. 
In addition, the proposed generation method is used to investigate three other aspects of the STM method: 1) a parametric study of four-pile caps; 2) STM generation considering complex load conditions; 3) the influence of load discretization. The robustness and effectiveness of the 3D generation method are validated through these investigations. In spite of these improvements, challenges remain for engineers in the application of the STM method. The standard STM method involves human choices, which depend on the engineer’s experience and preferences. These subjective factors hinder the application of the STM method and bring uncertainties and variations to the STM design. Developing a systematic STM method that reduces the subjective choices and uncertainties is identified as an important future research direction. In order to explore this problem, in this thesis, the main choices and uncertainties are identified and discussed. In addition, the proposed generation method can be used to investigate these subjective aspects. Therefore, besides already being of value to engineers in the design of D-regions, the proposed generation method is also expected to form a fruitful basis for future refinements in this research direction.","reinforced concrete; Strut-and-Tie; topology optimization; Integrated optimization; Nonlinear Finite Element Analysis","en","doctoral thesis","","","","","","","2021-02-22","","","Applied Mechanics","","",""
"uuid:088a3991-4ea9-48a0-9b92-cc763748868c","http://resolver.tudelft.nl/uuid:088a3991-4ea9-48a0-9b92-cc763748868c","Testing STT-MRAM: Manufacturing Defects, Fault Models, and Test Solutions","Wu, L. (TU Delft Computer Engineering)","Hamdioui, S. (promotor); Taouil, M. (copromotor); Delft University of Technology (degree granting institution)","2021","As STT-MRAM mass production and deployment in industry are around the corner, high-quality yet cost-efficient manufacturing test solutions are crucial to ensure the required quality of products being shipped to end customers. This dissertation focuses on STT-MRAM testing, covering three abstraction levels: manufacturing defects, fault models, and test solutions. We apply the advanced device-aware test (DAT) approach to STT-MRAM defects, including resistive defects on interconnects and STT-MRAM device-internal defects such as pinhole defects, synthetic anti-ferromagnet flip defects, and intermediate state defects. With the derived defect models, calibrated with silicon data, a comprehensive fault analysis based on SPICE circuit simulations is performed. STT-MRAM-unique faults are identified, including both permanent faults and intermittent faults. Based on the obtained fault models, high-quality test solutions are proposed. Additionally, this dissertation explores the impact of magnetic coupling and density on STT-MRAM performance for robust designs.","memory test; device-aware test; manufacturing test; STT-MRAM; MTJ; manufacturing defect; fault model; robust design; magnetic coupling","en","doctoral thesis","","978-94-6384-199-3","","","","","","","","","Computer Engineering","","",""
"uuid:6f6e7a1b-65ac-4876-9531-24988a563e36","http://resolver.tudelft.nl/uuid:6f6e7a1b-65ac-4876-9531-24988a563e36","Factors influencing the household water treatment adoption in rural areas in developing countries","Daniel, D. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); Pande, S. (copromotor); Delft University of Technology (degree granting institution)","2021","Household water treatment (HWT), such as boiling, chlorination, and ceramic filtration, is an interim solution to the problem of unsafe drinking water at home, especially for households that do not have access to safe drinking water services. However, previous reports indicate that many people in low- and middle-income countries (LMICs) do not use HWT regularly, i.e. they still drink unsafe and untreated water. A behavioural study is needed to find the reasons for this, which can help related stakeholders design appropriate interventions to increase the regular use of HWT.","","en","doctoral thesis","","978-94-6384-200-6","","","","","","","","","Sanitary Engineering","","",""
"uuid:2d34ed0c-1729-4e4e-a01d-d3cbb6544eb0","http://resolver.tudelft.nl/uuid:2d34ed0c-1729-4e4e-a01d-d3cbb6544eb0","Housing Refurbishment for Energy Efficiency and Comfort: Toward sustainable housing in Vietnam","Nguyen, P.A. (TU Delft Climate Design and Sustainability)","van den Dobbelsteen, A.A.J.F. (promotor); Bokel, R.M.J. (promotor); Delft University of Technology (degree granting institution)","2021","Vietnam has made significant progress in both economic and social fields since the transition from a centrally planned to a market-oriented economy in 1986. Along with the growth of the country, the energy sector, which accounts for one-fourth of the national foreign earnings, plays an important role. In order to continue contributing to the sustainable development of Vietnam, the energy sector has to tackle the problems of ensuring an adequate energy supply and minimising energy-related environmental impacts.
In the building sector, a newly constructed building has more potential to achieve good energy performance than a refurbishment project, which is limited by unchangeable factors of the existing site. However, existing buildings should not be ignored, because their number far outweighs the number of new buildings added to the market annually. Although refurbishment activities are carried out regularly in Vietnam, little effort has been made to improve the energy performance of buildings. One of the reasons is that contemporary construction methods in Vietnam are still quite simple and do not integrate energy efficiency measures.
Sustainable and energy-efficient housing is not a new concern in Vietnam. However, there is still a lack of research in this specific field. This research aims to develop a design strategy for housing refurbishment projects in Vietnam, in order to achieve better energy performance. The approach should be systematic and holistic, addressing all relevant issues in the current housing stock of Vietnam. It is expected to help architects in decision making in the early design stage and to help state agencies set guidelines and regulations for future housing in Vietnam. This led to the main research question to be answered in this thesis:
What are the design strategies for energy efficiency in Vietnamese housing, and how should they be integrated in an existing house as well as in a newly built house?","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-376-2","","","","A+BE I Architecture and the Built Environment No 2 (2021)","","","","","Climate Design and Sustainability","","",""
"uuid:90594179-e599-4371-ac63-3fa800c53cc9","http://resolver.tudelft.nl/uuid:90594179-e599-4371-ac63-3fa800c53cc9","Comparative genomics in the era of long-reads: An application on industrial yeasts","Salazar, A.N. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Abeel, T.E.P.M.F. (copromotor); Delft University of Technology (degree granting institution)","2021","We, humans, have an ancient microscopic companion: yeasts. These microbial organisms have helped shape our evolution, our civilizations, and our sciences. The evolutionary event that enabled yeasts to produce alcohol more than 100 million years ago was followed by adaptations throughout the animal kingdom to tolerate it. Our realisation that yeast could be used to produce bread, beer, and wine quickly enabled us to fuel the high caloric needs of many civilizations. An international dispute nearly two centuries ago about the biological nature of yeast in alcohol production ultimately led to the founding of microbiology and the various medicinal benefits from its practice. And today, yeasts are the ‘Swiss Army knives of biotechnology’, as they are often engineered to produce cheaper therapeutics and alternative energy sources.
Although an ancient companion, we have only begun to truly understand yeasts and their biotechnological capabilities, largely due to a new scientific instrument: genome sequencing technology. Analogous to an ‘algorithmic microscope’, genome sequencing technology is enabling us to generate large amounts of data about the genetic composition and diversity of yeasts. But it comes with a challenge: these (ever-growing) datasets are complex. So how do we properly analyse them? How do we consider the complex evolutionary histories encoded in the genomes of yeasts and other microbes alike? What new biology could we learn?","","en","doctoral thesis","","978-90-9034-313-6","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:f961da81-9a49-4a24-8f9e-1a9978ff287d","http://resolver.tudelft.nl/uuid:f961da81-9a49-4a24-8f9e-1a9978ff287d","Public Rental Housing Governance in Urban China: Essence, Mechanisms and Measurement","Yan, J. (TU Delft Housing Institutions & Governance)","Elsinga, M.G. (promotor); Haffner, M.E.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Recently, Chinese Public Rental Housing (PRH) provision has witnessed a shift from ‘government’ to ‘governance’: policy making shifted from government steering to mixed forms involving government, market and civic actors to pursue effective and fair policies. In the meantime, this new-era PRH governance is credited with mixed results. However, existing studies fail to describe the mechanisms underlying this new-era governance of PRH, with its rising involvement of market actors and those in civil society, or to assess whether the new-era governance is considered effective in achieving the objective of stability. Therefore, this PhD research aims to fill these two research gaps by building a better understanding of PRH governance in the current Chinese context and evaluating PRH governance. To fulfil this aim, this dissertation is underpinned by a theoretical foundation from the governance perspective and adopts a mixed-method approach with quantitative and qualitative data in the study of Chinese PRH provision. The dissertation reveals the essence of the current Chinese PRH governance by bringing forth a governance model and shows the structures and mechanisms for non-governmental actors to play a role in the governance of PRH. The dissertation also shows the perceived governance outcomes from tenants’ perspective and demonstrates two main governance challenges of Inclusionary Housing, a newly introduced instrument adopted in the Chinese PRH governance.
Based on the results, this PhD research theoretically and empirically contributes to the literature.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-374-8","","","","A+BE I Architecture and the Built Environment No 1 (2021)","","","","","Housing Institutions & Governance","","",""
"uuid:0e66fcb3-691e-4737-be5f-2a57dbce6f6b","http://resolver.tudelft.nl/uuid:0e66fcb3-691e-4737-be5f-2a57dbce6f6b","Fluctuations for Interacting Particle Systems with Duality","Ayala Valenzuela, M.A. (TU Delft Applied Probability)","Redig, F.H.J. (promotor); Carinci, G. (copromotor); Delft University of Technology (degree granting institution)","2021","This thesis is concerned with fluctuations of interacting particle systems that enjoy the property of duality. The main contributions of this work are divided into two parts. In the first part we study some of the advantages of looking at the density fluctuation field through the lens of orthogonal self-dualities. In the second part we make use of self-duality and Mosco convergence of Dirichlet forms to understand the coarsening behaviour of the symmetric inclusion process when the process undergoes a phase transition known as condensation.","Interacting particle systems; Fluctuation fields; Duality","en","doctoral thesis","","","","","","","2021-02-19","","","Applied Probability","","",""
"uuid:7086f01f-28e7-4e1b-bf97-bb3e38dd22b9","http://resolver.tudelft.nl/uuid:7086f01f-28e7-4e1b-bf97-bb3e38dd22b9","Aerodynamic advances in vertical-axis wind turbines","De Tavernier, D. (TU Delft Wind Energy)","Ferreira, Carlos (promotor); van Bussel, G.J.W. (promotor); Delft University of Technology (degree granting institution)","2021","As wind farms tend to move towards deeper waters with better wind resources, the classical top-heavy horizontal-axis wind turbines (HAWTs) become particularly challenging. This raises the question of whether other concepts could be more suitable and compatible with the deep-sea floating conditions. Hence, the interest in vertical-axis wind turbines (VAWTs) is growing in the search for affordable renewable energy. Vertical-axis wind turbines exchange momentum and energy with the fluid by applying a force field on the flow. The actuation surface, over which the forces are distributed, is cylindrical. The diameter can vary with height as in the Φ-rotor and the surface can be a combination of actuation surfaces nested inside and/or crossing each other. As such, the cylindrical surface provides more freedom than the conventional actuator disk to generate a complex 3D force field varying in space and time. A significant advantage of the cylindrical actuation surface is the ability to distribute forces between the upwind and downwind half and to generate forces perpendicular to the incoming flow. Additionally, the 3D actuation cylindrical surface can be made insensitive to wind direction, eliminating the need for a yaw system. The vertical rotating axis also provides the ability to mount the power generation components near the ground, lowering the centre of gravity and facilitating maintenance.","Vertical-axis wind turbines; airfoil aerodynamics; rotor/wake aerodynamics; actuator cylinder","en","doctoral thesis","","978-94-6419-131-8","","","","","","","","","Wind Energy","","",""
"uuid:09ad251c-fc03-4252-9c25-e205f0b5a5a1","http://resolver.tudelft.nl/uuid:09ad251c-fc03-4252-9c25-e205f0b5a5a1","Role of Surface Carboxylates Deposition on the Deactivation of Fischer-Tropsch Synthesis Catalysts","Gonugunta, P. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Brück, E.H. (promotor); Dugulan, A.I. (copromotor); Delft University of Technology (degree granting institution)","2021","Fischer-Tropsch synthesis (FTS) is a catalytic reaction, which involves the production of liquid hydrocarbon fuel from synthesis gas obtained from natural gas, biomass or coal via gasification and steam reforming. From an industrial perspective, both Co and Fe based catalysts have been applied. However, Co-based catalysts are preferred in FTS, particularly for gas-to-liquid (GTL) processes, as they have high activity, high selectivity to linear hydrocarbons and low activity for the unwanted water-gas shift reaction. However, Co-based catalysts are relatively expensive and deactivate over time. To make the FTS process economically more effective, a stable performance of the catalyst is required. Therefore, studying catalyst deactivation is an important topic in the development of better industrial catalysts. Oxidation, sintering of the active phase and deposition of oxygenated compounds are potential causes of deactivation. The possible role of oxygenates and their effect on catalyst deactivation, however, is less understood. With the aim of investigating the deposition of oxygenated compounds, particularly carboxylates, as a hypothetical deactivation mechanism, operando characterisation techniques were adapted to monitor the chemical and physical properties and the structure-activity relationship of the catalyst during the reaction.
Operando Diffuse Reflectance Infrared Fourier Transform (DRIFT) and Mössbauer emission spectroscopy setups that can be operated at industrially relevant FTS conditions were employed.","","en","doctoral thesis","","","","","","","2021-08-12","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:ee9e2137-630b-454b-8f37-228f068bcc89","http://resolver.tudelft.nl/uuid:ee9e2137-630b-454b-8f37-228f068bcc89","Circuit Quantum Electrodynamics with Single Electron Spins in Silicon","Zheng, G. (TU Delft QCD/Vandersypen Lab)","Vandersypen, L.M.K. (promotor); Scappucci, G. (copromotor); Delft University of Technology (degree granting institution)","2021","This dissertation describes a set of experiments with the goal of creating a superconductor-semiconductor hybrid circuit quantum electrodynamics architecture with single electron spins. Single spins in silicon quantum dots have emerged as attractive qubits for quantum computation. However, how to scale up spin qubit systems remains an open question. The hybrid architecture considered here could provide a route to realizing large networks of quantum dot–based spin qubit registers. The first experiment in this thesis is aimed at achieving strong coupling between a single electron spin and a single microwave photon. The electron is trapped in a gate-defined double quantum dot in a Si/SiGe heterostructure and the photon is stored in an on-chip superconducting high-impedance NbTiN cavity. The photon is coupled directly to the electron charge, and indirectly to the electron spin, mediated through a synthetic spin-orbit field. We observe a vacuum Rabi splitting that depends on the spin-charge hybridization. The ratio of spin-photon coupling strength to the combined decoherence rates of the spin and cavity is larger than unity, confirming the strong coupling regime has been reached. In addition, we find an optimal degree of spin-charge hybridization for which this ratio is maximized. The demonstration of strong spin-photon coupling not only opens a new range of physics experiments, but also fulfills a crucial requirement for coupling spin qubits at a distance via a cavity. The second experiment is focused on spin readout with the on-chip cavity.
Instead of the direct dispersive readout of a single spin, we use the cavity to detect whether the electron is allowed to tunnel between the two dots or not. We benchmark the charge sensitivity and bandwidth of the detector and find that rapid detection of the electron charge with high SNR is possible. In the two-electron regime, electron tunneling is contingent on the total spin state (Pauli spin blockade). This spin-to-charge conversion scheme enables single-shot detection of singlet states with high fidelity. The demonstration of single-shot spin readout with a cavity is an essential step towards readout in dense spin qubit arrays, such as the crossbar network, where it is not possible to integrate electrometers and accompanying reservoirs adjacent to the qubit dots. In the third experiment, we develop on-chip microwave filters to suppress microwave photon leakage from the cavity through the gate electrodes that are necessary to form quantum dots. We introduce a new cavity design that is compatible with long-distance connectivity between spins, but is also more susceptible to microwave leakage. We test and compare two low-pass filter variations in terms of performance, footprint and integrability. They use the same nanowire inductor, but different implementations of the capacitor: one with a planar interdigitated capacitor and one novel design with an overlapping thin-film capacitor. We find that both approaches are effective against microwave leakage. However, the large footprint of the interdigitated capacitor makes this solution inconvenient as the number of gate lines increases. The thin-film capacitor, with its much smaller footprint, is better suited for our devices.
The final part of this dissertation presents concluding remarks and proposes possible future directions.","quantum dots; electrons; spins; superconducting resonators; microwave photons; quantum computation; silicon","en","doctoral thesis","","978-90-8593-465-3","","","","","","","","","QCD/Vandersypen Lab","","",""
"uuid:face1276-5025-4853-996a-30d6771cf24d","http://resolver.tudelft.nl/uuid:face1276-5025-4853-996a-30d6771cf24d","Self-Organizing Multi-Agent Systems","van Leeuwen, C.J. (TU Delft Embedded Systems; TNO)","Langendoen, K.G. (promotor); Pawełczak, Przemysław (copromotor); Delft University of Technology (degree granting institution)","2021","In this thesis I research the ability of groups of agents to organize their collective behavior without any human intervention. Using a framework for gathering information about the behavior, analyzing the performance, and updating the behavior, the agents can adapt to changing environments or user requirements. In my thesis I use different mechanisms to drive the self-organization, but mostly focus on Distributed Constraint Optimization Problems (DCOPs). A new algorithm called CoCoA (Cooperative Constraint Approximation) is used to quickly find near-optimal solutions. Throughout my thesis the approach is put to use in different applications such as sensor networks, wireless power transfer networks and smart grids.","Self-organisation; Multi-agent systems; Artificial Intelligence (AI); distributed optimization; Smart Grid; Wireless Power Transfer; Autonomous systems","en","doctoral thesis","","978-94-6366-362-5","","","","data: https://doi.org/10.4121/13066052 code: https://doi.org/10.4121/13066028","","","","","Embedded Systems","","",""
"uuid:b528d7be-e82d-4205-abdf-3fb3fa7f1011","http://resolver.tudelft.nl/uuid:b528d7be-e82d-4205-abdf-3fb3fa7f1011","Agent-based architectures supporting fault-tolerance in small satellites","Carvajal Godínez, J. (TU Delft Space Engineering)","Gill, E.K.A. (promotor); Guo, J. (copromotor); Delft University of Technology (degree granting institution)","2021","Since the launch of the first artificial satellite in October 1957, both satellite computers and their onboard software have changed significantly to integrate more functionalities and to make mission operations more reliable. As a result, satellites have become more sophisticated, and mission designers have moved functionality from the ground segment to the satellite onboard computers to make them more autonomous. In fact, a growing number of satellite components adopt embedded computers to improve their performance and increase subsystem miniaturization. This is reflected in the growth in the number of small satellite missions, with masses ranging from 1 kg to 100 kg, launched during the past 10 years.","Small Satellites; Onboard Software; Multi-Agent Systems; Satellite Software Architecture; Onboard Satellite Communication; Fault Tolerance","en","doctoral thesis","","","","","","","","","","","Space Engineering","","",""
"uuid:9f2a640e-0f19-4d4d-9feb-e27e3e963fcb","http://resolver.tudelft.nl/uuid:9f2a640e-0f19-4d4d-9feb-e27e3e963fcb","Computation-in-Memory: From Circuits to Compilers","Yu, J. (TU Delft Computer Engineering)","Hamdioui, S. (promotor); Taouil, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Memristive devices are promising candidates as a complement to CMOS devices. These devices come with several advantages such as non-volatility, high density, good scalability, and CMOS compatibility. They enable in-memory computing paradigms since they can be used for both storing and computing. However, building in-memory computing systems using memristive devices is still in an early research stage. Therefore, challenges still exist with respect to the development of devices, circuits, architectures, design automation, and applications. This thesis focuses on developing memristive device-based circuits, their usage in in-memory computing architectures, and design automation methodologies to create or use such circuits. Circuit Level – We propose two logical operation schemes based on memristive devices. The first one uses resistive sensing to perform logical operations. It modifies the sense amplifier in such a way that it can compare the overall current with references and output the logical operation result. During sensing, the resistance of memristive devices remains unchanged. Therefore, endurance and lifetime are not reduced. This scheme provides a solution for maintaining a relatively long lifetime in logic operations for memristive devices that have low endurance. The second scheme is the enhanced version of the first one. It uses two different sensing paths for AND and OR operations. In this way, the correctness of logic operations can be guaranteed even if large resistance variation exists in memristive devices. Architecture Level – We present three in-memory computing architectures based on memristive devices. 
The first one is a heterogeneous architecture containing an accelerator for vector bit-wise logical operations and a CPU. The accelerator communicates with the CPU or accesses the external memory directly. The second one is to accelerate automata processing. In this architecture, memristive memory arrays store configuration information and conduct computation as well. This architecture outperforms similar ones that are built with conventional memory technologies. The third one is an improved version of the second one. It breaks the routing network into multiple pipeline stages, each processing a different input sequence. In this way, the architecture achieves a higher throughput with a negligible area overhead. Design Automation – A synthesis flow for computation-in-memory architectures and a compiler for automata processors are presented. The synthesis flow is proposed based on the concept of skeletons, which relates an algorithmic structure to a pre-defined solution template. This solution template contains scheduling, placement, and routing information needed for the hardware generation. After the user rewrites the algorithm using skeletons, the tool generates the desired circuit by instantiating the solution template. The automata processor compiler generates configuration bits according to the input automata. It uses multiple strategies to transform given automata, so that constraint conflicts can be resolved automatically. It also optimizes the mapping for storage utilization.","In-memory computing; automata processing; memristive devices","en","doctoral thesis","","978-94-6384-196-2","","","","","","","","","Computer Engineering","","",""
"uuid:e5b8e199-a34c-4cb1-b8bb-836218b12e77","http://resolver.tudelft.nl/uuid:e5b8e199-a34c-4cb1-b8bb-836218b12e77","Empowering stakeholders to organise their agricultural production and supply chains for a sustainable and inclusive future in Indonesia","Kusnandar, K. (TU Delft System Engineering)","Brazier, F.M. (promotor); van Kooten, Olaf (promotor); Delft University of Technology (degree granting institution)","2021","Participation of actors is essential for achievement of the United Nations’ (UN) Sustainable Development Goals (SDGs). With respect to sustainable agriculture, the UN has introduced a collaborative framework for food systems transformation encompassing: 1) food system champions identification; 2) food systems assessment; 3) multi-stakeholder dialogue and action facilitation; and 4) strengthening institutional capacity for food systems governance. The last two actions are the focus of this thesis. Sustainable agriculture involves multiple actors connected horizontally and vertically through agricultural production and supply chain (APSC) networks, in which every actor's decisions and actions are affected by, and affect, most if not all other actors. Involvement of every actor in the APSCs is essential to enable coordination for sustainable agriculture. Most previous programmes to pursue sustainable agriculture, however, still follow a top-down approach in which local actors are considered passive entities encouraged to adopt initiatives designed by external actors (e.g. governments, universities, NGOs). Most often, this results in unsustainable programmes due to the incompatibility of the initiatives with the local context. In addition, most programmes focus on horizontal relationships between farmers to deal with the challenges encountered, e.g. in production, markets, and finance.
This thesis proposes a different approach that focuses on the participation of actors connected horizontally and vertically in APSCs to (by the actors themselves) analyse situations, design initiatives, and take actions (through working together) to pursue sustainable and workable APSCs. Research through Design (RtD) combined with Action Research, more specifically Participatory Action Research (PAR), is performed with cases of APSCs in Indonesia's horticultural sector. As most farmers in Indonesia are smallholder farmers (about 93%) who lack knowledge, information, and capital, Indonesia can be considered exemplary for APSCs in developing countries. As most smallholder farmers (including in Indonesia) do not recognise opportunities for sustainable APSCs, empowerment is of importance. This thesis addresses the question: “Can agricultural chain actors (connected vertically and horizontally) in Indonesia be empowered to pursue sustainable APSCs?”. Three concepts that form the foundation of this research are identified: agricultural production and supply chains; empowerment; and co-creation (an approach for empowerment). The first step to answer this question is to take lessons learnt from previous programmes of sustainable agricultural development (SAD) in developing countries. For this, a framework of sustainable APSCs is introduced in this thesis: Participatory Sustainable Agricultural Development (PSAD). The framework focuses on the principles of participation, which most previous frameworks do not. This framework was used to analyse previous SAD programmes in developing countries. The results show that, in addition to environmental and economic factors, the social factors of empowerment and engagement have a positive effect in pursuing sustainable APSCs. In addition, continued facilitation in a follow-up programme is also essential to pursue sustainable APSCs.
Based on these results, an approach to empower APSC actors has been designed in this thesis: COCREATE. COCREATE empowers local actors to engage in designing initiatives to be implemented by local actors themselves (through working together) to deal with their situations. For this, pursuing a common understanding among the involved actors of their common situations is essential. COCREATE consists of design and implementation activities, and the process of these activities is cyclic with continuous feedback. Reflection, which most previous approaches do not include, is one of the essential elements with which this approach pursues common understanding. Meanwhile, with respect to actors, COCREATE involves actors that are connected both horizontally and vertically in the APSCs, in which power imbalances exist. COCREATE, in this research, was implemented in multiple cases of APSCs in Indonesia, more specifically with local trader-farmer groups and with a farmer organisation (FO), namely a group of farmer groups, in which mostly smallholder farmers were involved. Both cases are located in a horticultural production centre in Indonesia, in Bandung District, West Java. Local trader-farmer groups, representing the primary vertical relation in the chain, consist of a local trader and farmers connected through traditional chain governance (i.e. the local trader provides finance to farmers, who consequently must sell all their produce to the local trader). Even though they depend on each other in their APSC systems, there is a lack of incentive alignment in their relationships. In fact, many problems arise due to, e.g., lack of information transparency, unfair chain governance, and lack of commitment. Meanwhile, an FO, representing horizontal relations in the chain, consists of farmers connected through organisational governance to enable them to take collective action, including getting access to markets.
The FO in this case is a group of farmer groups (GFG) that faced challenges regarding the commitment of farmer members, internal information flow, bottlenecks in the production and supply chain, and financial arrangements as a consequence of the growth in market and membership (in other words, of being successful). In the implementation of COCREATE with cases of both local trader-farmer groups and a GFG, participants were able to apply the sequence of procedures in the design activities, including reflection. This resulted in an improved common understanding among the involved actors of their situations, as can be seen from the solutions co-created and agreed upon by them. In the implementation activities, participants (in the cases of both the local trader-farmer groups and the GFG) were able to organise themselves to implement the agreed solutions. Meanwhile, in the follow-up design, they were able to evaluate and adapt the solutions to address changes in their situations. Facilitators, in both cases, played important roles in ensuring the procedure of COCREATE (including reflection) was implemented appropriately, supporting participants by providing the information needed and facilitating access to external parties when asked to by the participating actors. In the case of the local trader-farmer groups, COCREATE implementation resulted in a change in the relations and task division between farmers and local traders (in each group), improving the market position and institutional arrangements between them. Meanwhile, in the case of the GFG, COCREATE implementation resulted in the ability of the GFG to self-organise their governance to deal with the challenges identified.
Based on the results, this thesis concludes that: 1) social factors of empowerment and engagement are essential to pursue sustainable APSCs, in addition to environmental, economic and governance factors; 2) COCREATE is an approach to empower APSC actors (connected horizontally and vertically) in developing countries to engage in pursuing their sustainable and workable APSCs; 3) COCREATE supports farmers and local traders (in the vertical relationships) to improve their own and each other’s positions in the APSCs; 4) COCREATE supports farmer organisations (in the horizontal relationships) to self-organise their governance to maintain sustainable inclusion; 5) empowering agricultural chain actors in developing countries is a long-term process and requires new approaches within, e.g., extension programmes, local university programmes as well as private business interventions.","","en","doctoral thesis","","978-94-6366-363-2","","","","","","","","","System Engineering","","",""
"uuid:404cd375-f152-4a28-b324-8727497a517d","http://resolver.tudelft.nl/uuid:404cd375-f152-4a28-b324-8727497a517d","Development of an Integrated Analytical Model to Predict the Wet Collapse Pressure of Flexible Risers","Li, X. (TU Delft Marine and Transport Technology)","Hopman, J.J. (promotor); Jiang, X. (copromotor); Delft University of Technology (degree granting institution)","2021","A flexible riser is a multi-layered pipe device which enables deep-water production by connecting seabed facilities to floating vessels. To withstand huge hydro-static pressure, it is required to have a strong collapse capacity. At present, the collapse capacity of a flexible riser is designed based on a ""wet collapse"" concept, in which the outer sheath is damaged and seawater has flooded the annulus. For a given water depth, the hydro-static collapse design of a flexible riser needs to be confirmed by a wet collapse calculation. Calculating the wet collapse pressure of flexible risers is always challenging since the layers within risers differ in geometry and material. Such a complex cross-sectional configuration makes numerical simulation the main approach for collapse analysis, which is quite time-consuming at the design stage. As production is moving towards ultra-deep water fields, the design of new riser products is required to achieve a balance between collapse resistance and self-weight. Therefore, there is a demand to develop an efficient tool to facilitate the collapse design. In view of this, this thesis presents an integrated analytical model, which addresses three challenges in the wet collapse analysis of flexible risers: the interlocked layer profiles, the geometric imperfections and the pipe curvature. The integrated analytical model contributes to the hydro-static collapse design of flexible risers, as it can provide designers rapid feedback on their designed cross-section configurations.
In our research work, the whole collapse analysis conducted by the proposed analytical model takes less than one hour to finish the prediction. Most of the time is spent on modeling the metallic layers for obtaining their equivalent properties. The actual wet collapse calculation given by this analytical model takes only a few seconds. By contrast, numerical simulation requires 8-12 hours for modeling and consumes 2-3 days on average to complete one job. For companies that are developing new riser products to enable ultra-deep water production, this proposed integrated analytical model can effectively facilitate the collapse design of new riser products.","Flexible pipes; Wet collapse; Equivalent layer method; Initial ovality; Inter-layer gap; curvature effect; Integrated analytical model; Numerical simulations","en","doctoral thesis","TRAIL Research School","978-90-5584-285-8","","","","","","","","Marine and Transport Technology","","","",""
"uuid:24c5867a-6fbb-4e6a-9785-2c9362fada91","http://resolver.tudelft.nl/uuid:24c5867a-6fbb-4e6a-9785-2c9362fada91","Hardware in the Loop Emulation of Ship Propulsion Systems at Model Scale","Huijgens, L.J.G. (TU Delft Ship Design, Production and Operations)","Hopman, J.J. (promotor); Vrijdag, A. (copromotor); Delft University of Technology (degree granting institution)","2021","Requirements on ships are rapidly increasing. In particular, safety and environmental impact are under increasing scrutiny. At the same time, cost and profitability remain as important as they have ever been. These increasingly stringent constraints are beginning to pose problems during the design process. For example, the energy efficiency design index (EEDI) aims to reduce emissions of carbon dioxide by progressively limiting engine power installed on board. However, these reductions in propulsive power raise concerns about the ship's manoeuvrability in rough seas. Moreover, the expected introduction of novel power and propulsion systems based on, for example, fuel cell technology, further raises uncertainty regarding the performance of future ships and propulsion systems in dynamic environments. Considering these developments, detailed predictions of manoeuvrability and propulsion plant behaviour are becoming increasingly important in the ship design process. Yet, present prediction methods are insufficient to evaluate manoeuvrability and behaviour of ship propulsion systems in complex, dynamic environments such as heavy seas. Fully numerical methods based on computational fluid dynamics (CFD) and first principles are inherently uncertain and compute-intensive. As such, these methods are presently unsuitable to assess the dynamic interaction between machinery and hydrodynamics over prolonged periods of time. As an alternative to numerical methods, experiments with scale model ships can be conducted.
However, such experiments are subject to hydrodynamic scale effects: viscous friction, spray formation and propeller cavitation are not the same as at full scale. Moreover, these model ships are powered by considerably simplified propulsion systems, causing entirely different propulsion plant dynamics than at full scale. Ideally, scale model experiments would be conducted with, for example, a perfectly downscaled diesel engine, gearbox and propeller; in practice, however, this is generally not feasible. As such, existing prediction methods leave great uncertainty as to how future ship designs can simultaneously meet all requirements regarding operational performance, safety and compliance with environmental regulations. A possible way to bridge this knowledge gap is by conducting hardware in the loop (HIL) experiments in the ship model basin. Such experiments combine numerical simulations with a physical test setup. During HIL experiments with free sailing ship models, the propulsion engine and other machinery are simulated by a computer. These simulations are then used to control an electric motor, powering the propeller of a physical scale model ship. As such, the complex interaction between engine, propeller, hull and environment can be physically reproduced, allowing assessment of design choices early on in the ship design process.","marine propulsion; Hardware In the Loop; open water test; hybrid testing; towing tank","en","doctoral thesis","TRAIL Research School","978-94-6366-364-9","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:b556b042-495f-4a75-a176-966c008074ee","http://resolver.tudelft.nl/uuid:b556b042-495f-4a75-a176-966c008074ee","Cyclists in Motion: from data collection to behavioural models","Gavriilidou, A. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Daamen, W. (promotor); Delft University of Technology (degree granting institution)","2021","As the title suggests, cyclists are the main topic of this dissertation and more specifically, their behaviour while they are ‘in motion’. The term ‘in motion’ is used in the title to represent microscopic operational cycling behaviour, which is the behaviour of cyclists, treated as individuals (microscopic level), while they are riding their bicycle and making decisions on how to interact with other traffic participants and with the infrastructure (operational level). Within this dissertation, models are developed to capture this behaviour using data collected for this purpose. Further empirical data analyses led to more behavioural insights, and design recommendations were provided based on the findings. In this summary, each of these elements is briefly discussed, along with the need for this research.","Operational cycling behaviour; Discrete choice models; Data Collection; Queuing behaviour; Yielding behaviour","en","doctoral thesis","TRAIL Research School","978-90-5584-282-7","","","","TRAIL Thesis Series no. T2021/07, the Netherlands Research School","","","","","Transport and Planning","","",""
"uuid:18d0a6d1-dbf6-4baa-8197-855ea42a85fe","http://resolver.tudelft.nl/uuid:18d0a6d1-dbf6-4baa-8197-855ea42a85fe","Exploring the Pedestrians Realm: An overview of insights needed for developing a generative system approach to walkability","Methorst, R. (TU Delft Transport and Logistics)","van Wee, G.P. (promotor); Delft University of Technology (degree granting institution)","2021","Walking is an essential form of human mobility. In policy making, however, pedestrians are largely neglected. This dissertation explores how the system for pedestrians works and what steps authorities can take to improve conditions for pedestrians, walking and sojourning in public space. It outlines an effective and fair approach by redefining the domain. Methorst combines, triangulates and advances available information, data and statistics.","","en","doctoral thesis","TRAIL Research School","978-90-5584-277-3","","","","TRAIL Thesis Series no. T2021/6, the Netherlands Research School TRAIL","","","","","Transport and Logistics","","",""
"uuid:9efdc813-e4a7-4a29-9023-1b95b498ca2a","http://resolver.tudelft.nl/uuid:9efdc813-e4a7-4a29-9023-1b95b498ca2a","Computational optical imaging based on helical point spread functions","Berlich, R. (TU Delft ImPhys/Computational Imaging)","Stallinga, S. (promotor); Pereira, S.F. (copromotor); Delft University of Technology (degree granting institution)","2021","Helical point spread functions (PSFs) provide a powerful computational imaging tool for modern optical imaging and sensing applications. However, their utilization is, so far, limited to a single field of application, i.e. super-resolution microscopy, which is due to multiple shortcomings in their current system implementation. A new computational imaging approach is developed in this thesis, which enables the utilization of helical PSFs and their unique advantages for applications in the area of machine vision. In particular, the approach can be used to acquire the three-dimensional distribution of a passively illuminated, extended scene in a single shot based on a compact, monocular camera setup. A novel image processing routine is established to overcome a major challenge of computational imaging using helical PSFs, i.e. the retrieval of the PSF rotation angle in the case of an extended object distribution. The hardware implementation of computational imaging setups that rely on helical PSFs is based on a combination of a conventional optical element, such as a microscope objective or a camera lens, and an additional, dedicated pupil mask. This mask is commonly realized using either a spatial light modulator or a lithographic element that features a structured surface profile. Two new fabrication schemes with different advantages are explored in this thesis. The first scheme utilizes wafer-scale optical lithography in combination with UV-replication in order to fabricate highly cost efficient phase elements. The second method is based on femto-second laser direct writing. 
It enables the inscription of the phase element directly inside a transparent optical element using a single fabrication step. Therefore, it facilitates a flexible realization of highly integrated PSF engineered optical systems. Current design concepts for pupil masks that generate helical PSFs only focus on double-helix distributions that feature two laterally separated irradiance peaks. Furthermore, a diffraction limited performance of the computational imaging system is assumed. A new design method that enables the generation of multi-order-helix PSFs with an arbitrary number of rotating peaks is developed in this thesis. A study of the influence of first order aberrations on the rotation angle of multi-order-helix PSFs is performed in order to assess their effect on the accuracy limits with respect to three-dimensional imaging. In this context, the superior aberration robustness of high-order-helix PSFs featuring three or more rotating spots is demonstrated. While, on the one hand, the effect of aberrations on helical PSFs degrades the depth retrieval accuracy of three-dimensional imaging systems, on the other hand their influence can be exploited to obtain information on the system’s wavefront aberrations. To this end, the computational imaging approach developed for three-dimensional imaging is extended and combined with a conventional phase diversity method. The novel approach enables a numerically efficient estimation of general wavefront aberrations based on the acquisition of an extended, unknown object scene. In summary, the research performed in this thesis provides the foundation to exploit the unique advantages of computational imaging systems based on helical PSFs for applications in the area of three-dimensional imaging and wavefront sensing.","Computational imaging; PSF engineering","en","doctoral thesis","","978-94-6416-444-2","","","","","","","","","ImPhys/Computational Imaging","","",""
"uuid:510ac2d4-ddae-4648-a893-681017530ce7","http://resolver.tudelft.nl/uuid:510ac2d4-ddae-4648-a893-681017530ce7","Single-molecule sensing with nanopores and nanoslits","Yang, W.W.W. (TU Delft BN/Cees Dekker Lab)","Dekker, C. (promotor); Delft University of Technology (degree granting institution)","2021","We start this thesis by exploring the question whether there is more to be done with solid-state nanopores, given the success of nanopores for DNA sequencing applications.","nanopores; graphene; 2D materials; 2D nanoslit; optical nanotweezing; plasmonics; single-molecule sensing","en","doctoral thesis","","978-90-8593-464-6","","","","","","","","","BN/Cees Dekker Lab","","",""
"uuid:5966c116-1108-4ecf-8f86-3d8348a3504a","http://resolver.tudelft.nl/uuid:5966c116-1108-4ecf-8f86-3d8348a3504a","Supervised deep learning in computational finance","Liu, S. (TU Delft Numerical Analysis)","Oosterlee, C.W. (promotor); Cirillo, P. (copromotor); Delft University of Technology (degree granting institution)","2021","Mathematical modeling and numerical methods play a key role in the field of quantitative finance, for example, for financial derivative pricing and for risk management purposes. Asset models of increasing complexity, like stochastic volatility models (local stochastic volatility, rough volatility based on fractional Brownian motion) require advanced, efficient numerical techniques to bring them successfully into practice. When computations take too long, an involved asset model is not a feasible option as practical considerations demand a balance between the model’s accuracy and the time it takes to compute prices and risk management measures. In the big data era, typical basic computational tasks in the financial industry are often involved and computationally intensive due to the large volumes of financial data that are generated nowadays. Besides the traditional numerical methods in financial derivatives pricing in quantitative finance (like partial differential equation (PDE) discretization and solution methods, Fourier methods, Monte Carlo simulation), recently deep machine learning techniques have emerged as powerful numerical approximation techniques within scientific computing. Following the so-called Universal Approximation Theorem, we will employ deep neural networks for financial computations, either to speed up the solution processes or to solve highly complicated, high-dimensional problems in finance.
Particularly, we will employ supervised machine learning techniques, based on intensive learning of so-called labeled information (input-output relations, where sets of parameters form the input to a neural network, and the output to be learned is a solution to a financial problem).","","en","doctoral thesis","","978-94-6384-191-7","","","","","","","","","Numerical Analysis","","",""
"uuid:d72c3cb8-ed98-452b-b8b0-1270556e6367","http://resolver.tudelft.nl/uuid:d72c3cb8-ed98-452b-b8b0-1270556e6367","Spatial Activity-Travel Patterns of Cyclists","Schneider, F. (TU Delft Transport and Planning)","Daamen, W. (promotor); Hoogendoorn, S.P. (promotor); Delft University of Technology (degree granting institution)","2021","Knowledge about the way the bicycle is used for activity participation is still scarce. This thesis provides empirical insights into typical activity-travel behaviour of cyclists. A special focus is put on the spatial dimension of activity-travelling by bicycle and its determinants. The findings can be used to design more bicycle-friendly urban environments.","Activity-travel behaviour; Cycling; Trip chaining behaviour; Activity hierarchies; Cycling distances; Bicycle-friendly city","en","doctoral thesis","TRAIL Research School","978-90-5584-279-7","","","","TRAIL Thesis Series no. T2021/4, the Netherlands Research School","","","","","Transport and Planning","","",""
"uuid:cff86e71-055e-47d3-bb5d-c0b73c0b3b5c","http://resolver.tudelft.nl/uuid:cff86e71-055e-47d3-bb5d-c0b73c0b3b5c","Crystallographic texture control in a non-oriented electrical steel by plastic deformation and recrystallization","Nguyen-Minh, T. (TU Delft (OLD) MSE-3)","Kestens, L.A.I. (promotor); Petrov, R.H. (promotor); Delft University of Technology (degree granting institution)","2021","Texture and anisotropy are persistent characteristics of polycrystals. Because of the ordered and periodic arrangement of atoms in crystal lattices, the responses of differently oriented crystals in a polycrystal to the same external forces are not identical. Crystal grains that are better accommodated to the boundary conditions, because their orientations are energetically stable, should have higher volume fractions. The dependence of crystal behavior on the relative orientation to the applied field vector(s) results in anisotropy. To enhance or reduce anisotropy, the crystallographic texture of materials needs to be controlled and improved.","crystallographic texture; crystal plasticity; recrystallization; shear banding; electrical steels","en","doctoral thesis","","978-94-6423-120-5","","","","","","","","","(OLD) MSE-3","","",""
"uuid:7264ff4b-23d9-4e87-908f-2e5a432aa8f8","http://resolver.tudelft.nl/uuid:7264ff4b-23d9-4e87-908f-2e5a432aa8f8","Off-site enhanced biogas production with concomitant pathogen removal from faecal matter","Riungu, J. (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); Ronteltap, Mariska (copromotor); Delft University of Technology (degree granting institution)","2021","Globally, 2.7 billion people are using onsite sanitation systems, particularly in low income, high density settlements (LIHDS) in urban areas of developing countries. However, treatment technologies to manage the faecal sludge (FS) generated from these systems are often not in place, leading to high risks for environmental and public health. The development of replicable and effective technologies for FS treatment is key in addressing this challenge. This research focused on development of an innovative FS stabilisation technology and addressed key constraints in anaerobic FS treatment: inadequate pathogen inactivation and limitations in biochemical energy recovery. The developed two-stage reactor system consists of an acidogenic reactor fed with mixtures of FS and market waste to facilitate pathogen inactivation, and a subsequent methanogenic plug-flow reactor for enhanced methane production. Due to its potential for application as an off-site FS treatment technology at any scale, receiving any type of faecal matter, collected from different types of sanitary systems, the system provides an option for FS stabilisation for LIHDS. Additionally, the research evaluated the limitations of sanitation provision in LIHDS, and proposes methods for creating an enabling environment for full-scale implementation of onsite systems. The presented results contribute to designing appropriate sanitation interventions in LIHDS.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-032-00443-3","","","","","","","","","Sanitary Engineering","","",""
"uuid:d70c85aa-7aa6-4ae0-970b-c944cec74dec","http://resolver.tudelft.nl/uuid:d70c85aa-7aa6-4ae0-970b-c944cec74dec","Towards electrochemical-performance evaluation of fiber-based batteries: Fiber-arrangement-based method and FE2 multiscale framework","Zhuo, M. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Simone, A. (promotor); Delft University of Technology (degree granting institution)","2021","Conventional battery models (e.g., the Pseudo-2D model) were developed especially for particle-based battery electrodes and have limitations in addressing the newly emerging fiber-based ones. This thesis proposes numerical tools for efficient property evaluation of fiber-based electrodes and for multiscale simulation of battery electrochemical behavior.
An efficient computational model is first developed to evaluate the percolation threshold, effective electronic conductivity, and capacity of fiber-based electrodes. The electrode is composed of conductive and active fibers mixed in an electrolyte matrix. This model relies on the generation of randomly distributed fibers by the Monte Carlo method. The connection between conductive fibers is used to determine the percolation threshold and electronic conductivity, while the connection between conductive and active fibers defines the active material utilization and capacity. An optimal active-conductive material ratio is identified to maximize the electrode capacity, and the study of the fiber orientation effect reveals that an isotropic distribution leads to the highest utilization of active fibers.
For more accurate estimation, an FE2 multiscale framework is further proposed to solve physics-based governing equations. The first part extends the conventional FE2 method, suited to a one-equation model, to transient diffusion in a two-phase medium described by a two-equation model. The new features include the macroscale equations derived by the volume-averaging method and separate treatment of the two phases in terms of information exchange between macro- and micro-scales and boundary conditions of the microscale problem. The differentiation of the two phases results in additional macroscale source terms upscaled from the microscale interfacial flux. Unlike effective material properties, the tangents of the interfacial flux depend on the microscopic length scale.
The second part of the FE2 framework addresses the ionic transport in the pore-filling electrolyte of separators, ignoring the interfacial flux between the electrolyte and the active material. The FE2 method features a macroscale constitutive relation that is numerically obtained from microscale simulation results, rather than assumed as in the Pseudo-2D model and many of the existing models. This unique feature enables the FE2 method to allow for nonlinear (concentration-dependent) transport properties at the microscale and reflect them at the macroscale without postulation. The well-defined microscale problem setting results in effective transport properties expressed in a tensor format that is indispensable for an anisotropic microstructure.","Li-ion batteries; fiber-based battery electrodes; multi-scale; multi-physics; FE2 method; computational homogenization","en","doctoral thesis","","","","","","","","","","","Applied Mechanics","","",""
"uuid:9db1a0c4-89ba-4f9b-b32a-47b7bca5b55e","http://resolver.tudelft.nl/uuid:9db1a0c4-89ba-4f9b-b32a-47b7bca5b55e","Location-Based Games For Social Interaction In Public Space","Fonseca, Xavier (TU Delft System Engineering)","Brazier, F.M. (promotor); Lukosch, S.G. (promotor); Helbing, D. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis broadens current understanding of how location-based games can promote meaningful social interaction in citizens' own neighbourhoods. It investigates social cohesion and the role of social interaction in its promotion, delves into which requirements users have for playing in their neighbourhood and with its citizens, and takes a technical perspective on how this type of game should be designed to be successful at triggering interaction in public space. From this understanding, which stems from adolescents and adults from Rotterdam and The Hague, NL, a specific design and prototype of a location-based game is proposed and tested. This thesis addresses several gaps found in the current body of knowledge. On the one hand, meaningful interactions are person-dependent, can occur in various forms, and their impact on societies is not well understood. On the other hand, it is not well understood how to build location-based games for such an aim: it is not known which requirements should be considered, attempts to build location-based games are often a product of in-house development not centred early on around users, no known guidelines exist for meaningful social interaction, and no consensus exists on what to consider when building location-based games from a technical perspective.
This thesis offers learnings on how to best design location-based games to promote interaction that matters to local communities. It firstly offers an overview of social cohesion and how multiple factors and actors have the power to influence local communities. It then argues that meaningful social interaction bears the power to break down stereotypes and prejudice, empowers people's agencies to act, has a positive impact on cohesion, emerges at people's own pace, and addresses conflict. From this, it dives into the preferences, needs and desires of adolescents and adults to better understand what sorts of interactions are meaningful to them. This thesis explores throughout several case studies the requirements that these target groups have, and advances gameplay dynamics and game activity types that location-based games should implement to be successful at inviting meaningful social interaction in public space. These case studies also research different sorts of interaction that each game activity type invites players to have, and elicit specific game ideas that are particularly tailored around perceived-to-be socially challenging neighbourhoods in The Netherlands. These case studies culminate in the recommendation of several guidelines to be used at different stages of the game design: gameplay requirements, guidelines for meaningful social interaction to occur in the studied groups, and the sorts of game activities that designers should include to invite specific forms of social interaction. This thesis also proposes a systems architecture with key architectural components, to drive consensus and inform on what to consider when building location-based games for this purpose from a technical perspective.
The lessons learned that are advanced in this thesis help practitioners design location-based games that are more tailored to what future players want to play, and help researchers understand what it means to design for meaningful social interaction in any public space around the world. Players have distinct preferences with regard to the ways they are exposed to their own neighbourhood, and the forms of interaction they would rather experience. Understanding this, and incorporating such preferences in game design, lead to gameplay experiences that can have a positive effect on societies, as they have the power to promote interaction and positive relationships in local communities. These gameplay experiences invite individuals to come together and have meaningful interactions in a playful way, (re)engage with their own neighbourhood, and be part of their local community.","Location-based Games; Serious Games; Software architecture; Requirements Engineering; Social Interaction; social cohesion; Meaningfulness; Public Space","en","doctoral thesis","","978-94-6421-198-6","","","","","","","","","System Engineering","","",""
"uuid:8c5055e3-477e-438c-b722-f55d2c3e41fc","http://resolver.tudelft.nl/uuid:8c5055e3-477e-438c-b722-f55d2c3e41fc","Breach detection using diffuse reflectance spectroscopy during spinal screw placement","Swamy, A. (TU Delft Medical Instruments & Bio-Inspired Technology)","Hendriks, B.H.W. (promotor); Dankelman, J. (promotor); Delft University of Technology (degree granting institution)","2021","The intraoperative guidance and placement of spinal screws is a complex procedure. High technical expertise is required from surgeons in order to achieve adequate fixation and ensure patient safety by preventing vascular and neurological injuries. The conventional screw placement techniques face several challenges. Surgeons heavily rely on experience-based judgement, tactile feedback and X-ray guidance, the consequences of which are reflected in clinical literature as high complication risks, variability in screw placement accuracy, and radiation exposure. Moreover, cost savings in terms of improved patient outcomes, such as shorter patient recovery times and fewer revision surgeries, are major incentives towards the development and clinical adoption of better intraoperative guidance technologies. The aim of this PhD work was to investigate the applicability of a spectral sensing technique, namely Diffuse Reflectance Spectroscopy (DRS), for intraoperative instrument guidance and breach detection during pedicle screw placement procedures.","diffuse reflectance spectroscopy; spinal screw placement; breach detection","en","doctoral thesis","","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:0efec7ac-3198-4934-8d15-e94986ae104a","http://resolver.tudelft.nl/uuid:0efec7ac-3198-4934-8d15-e94986ae104a","Surrogate-assisted reservoir history matching","Xiao, C. (TU Delft Mathematical Physics)","Heemink, A.W. (promotor); Lin, H.X. (promotor); Delft University of Technology (degree granting institution)","2021","In the community of petroleum engineering, the use of surrogate modelling techniques has recently gained more and more popularity as a means to improve the efficiency of history matching. However, it is still not possible to fully utilize their potential in realistic applications. One of the challenges is to retain high accuracy while increasing the computational efficiency using a surrogate model. This dissertation proposes a projection-based reduced-order model and a data-driven deep convolutional neural network. In the first part of the thesis, a non-intrusive subdomain POD-TPWL method for solving gradient-based reservoir history matching problems is presented. It is a projection-based reduced-order modelling approach wherein the adjoint model of the original high-dimensional non-linear model is approximated by a subdomain reduced-order linear model. Furthermore, by introducing domain decomposition for the reduced-order model and by restricting the number of uncertain parameter patterns to the subdomains, the number of full-order simulations required for the derivation of this surrogate model is reduced drastically. In the second part of the thesis, we propose two kinds of deep-learning inversion frameworks for efficiently solving large-scale history matching problems. The first, a deep-learning deterministic inversion framework, primarily explores the possibility of applying a DNN surrogate to approximate the gradient of the objective function by making use of auto-differentiation (AD). 
In combination with the DNN surrogate, AD enables us to evaluate the gradients efficiently in a parallel manner and without the need to explicitly code the adjoint model. The second framework is a deep-learning stochastic inversion which constructs a deep-learning surrogate based on an image-oriented distance parameterization for ensemble-based seismic history matching. Instead of directly assimilating spatially dense seismic data, the image-oriented distance parameterization is employed to extract valuable information from the water fronts. Inspired by the methodologies developed for image segmentation in the fields of computer vision and image processing, we propose an advanced image segmentation network for accurately predicting water fronts with highly complex spatial discontinuities. In comparison with conventional workflows based entirely on high-fidelity simulation models, experimental results show that the proposed surrogate-supported workflow achieves an accuracy equal to or better than the conventional workflow at significantly lower cost.","reservoir simulation; reduced-order modeling; deep neural network; data assimilation; model-reduced adjoint; smooth local parameterization","en","doctoral thesis","","978-94-6366-365-6","","","","","","","","","Mathematical Physics","","",""
"uuid:ad561ffb-3b28-47b3-b645-448771eddaff","http://resolver.tudelft.nl/uuid:ad561ffb-3b28-47b3-b645-448771eddaff","Machine Learning and Counter-Terrorism: Ethics, Efficacy, and Meaningful Human Control","Robbins, S.A. (TU Delft Ethics & Philosophy of Technology)","Miller, S.R.M. (promotor); van de Poel, I.R. (copromotor); Delft University of Technology (degree granting institution)","2021","Machine Learning (ML) is reaching the peak of a hype cycle. If you can think of a personal or grand societal challenge, then ML is being proposed to solve it. For example, ML is purported to be able to assist in the current global pandemic by predicting COVID-19 outbreaks and identifying carriers (see, e.g., Ardabili et al. 2020). ML can make our buildings and energy grids more efficient, helping to tackle climate change (see, e.g., Rolnick et al. 2019). ML is even used to tackle the very problem of ethics itself, creating an algorithm to solve ethical dilemmas. Humans, it is argued, are simply not smart enough to solve ethical dilemmas; however, ML can use its mass processing power to tell us the answers regarding how to be ‘good’, in the same way it is better at Chess or Go (Metz 2016). States have taken notice of this new power and are attempting to use ML to solve their problems, including their security problems and, of particular importance in this thesis, the problem of countering terrorism. Counterterrorism procedures include border checks, intelligence collection, and waging war against terrorist armed forces. These practices are all being ‘enhanced’ with ML-powered tools (Saunders et al. 2016; Kendrick 2019; Ganor 2019), including bulk data collection and analysis, mass surveillance, and autonomous weapons, among others. This is concerning. Not because the state should not be able to use such power to enhance the services it provides. Not because AI is in principle unethical to use, like land mines or chemical weapons. 
This is concerning because little has been worked out regarding how to use this tool in a way that is compatible with liberal democratic values. States are in the dark about what these tools can and should do.","Machine Learning; Ethics; Meaningful Human Control; Artificial Intelligence","en","doctoral thesis","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:805eaec5-3b87-4bd6-bbe3-fa896da3b1c5","http://resolver.tudelft.nl/uuid:805eaec5-3b87-4bd6-bbe3-fa896da3b1c5","Bank erosion in regulated navigable rivers","Duro, G. (TU Delft Rivers, Ports, Waterways and Dredging Engineering)","Delft University of Technology (degree granting institution)","2021","Banks constitute important areas for river ecology since they provide a multitude of favourable conditions for flora and fauna. The hydromorphological diversity typical of these transitional zones between water and land, and the associated processes of erosion and accretion, make riverbanks vital for many aquatic and riparian plants and animals. In recent decades, the increasing awareness of the ecological significance of rivers and water bodies has resulted in the gradual implementation of extensive stream, river and floodplain restoration. In the EU, these practices are regulated by the Water Framework Directive. An important and widely applied re-naturalization measure in highly trained watercourses is the removal of bank protections to reactivate erosion processes and promote habitat diversity. In rivers used as waterways, ship waves can be an important cause of bank erosion and ecological disturbance. The sediment yield from bank erosion may alter navigable depths, water quality, and flood conveyance, which makes enhancing the hydromorphology a challenge in multifunctional rivers. Due to pressing needs to improve riverine habitats, large-scale restoration works have been implemented based on conceptual schemes, without a comprehensive knowledge of wave erosion processes or a precise estimate of long-term bank retreat. The Meuse River in the Netherlands constitutes a remarkable example of systematic rehabilitation, where bank protections have been removed along 100 km between 2008 and 2020.
Given that ship-induced erosion is still poorly understood, the management of navigable rivers and the planning of restoration measures would benefit from a solid and deeper understanding of natural bank dynamics induced by ship waves, for both economic and ecological reasons. Moreover, more precise estimates of long-term bank retreat would help to optimize different functions and reduce conflicts of interest within the river system. Therefore, the main objective of this investigation is to understand and predict erosion processes and the morphological evolution of natural banks in regulated navigable rivers. The research goal is pursued through the thorough investigation of a river reach that presents a wide range of erosion rates after the removal of bank protections. This main case study consists of a 1.2-km straight reach in the Meuse River, near Oeffelt in the Netherlands, the left bank of which was re-naturalized in 2010 by extracting the riprap. The Meuse is a midsize river with a pluvial regime, which has been canalized and is regulated with a series of weirs to enable navigation. Here, field techniques and complementary laboratory tests are utilized including topographic surveys with UAV, wave measurements with ADV, soil coring, geotechnical tests, and RTK GPS profiling. Processing and analysis of data are carried out with MATLAB. Four research steps are conducted. First, a methodology to quickly survey the 3D bank topography along a midsize river reach is determined to measure bank erosion processes. Second, distinct patterns of bank erosion that appeared along the Meuse River after protection removal are investigated. The aim is to disentangle the causes of the size, location and asymmetry of large embayments before analysing erosion processes at single river sections. Third, bank erosion processes in regulated navigable rivers are characterized and conceptualized. 
Fourth, a tool to estimate long-term or final retreat of re-naturalized banks in regulated navigable rivers is developed. The results of the first research component show that structure from motion photogrammetry applied to photos taken from a UAV is a practical and accurate method to measure riverbank erosion. By distributing ground-control points sufficiently spaced from the bank into the floodplain, digital surface models are georeferenced with sufficient accuracy to compare bank profiles between successive surveys. The identification of ground-control points in photographs is facilitated by placing oblique plaques on the floodplain, reducing the need for another perspective along banks. A single UAV flight with an oblique perspective of the bank then becomes sufficient to capture its three-dimensional complexity. Eight overlaps among consecutive images are the minimum needed to preserve the precision potential of a single UAV flight. The proposed methodology is fast to deploy in the field and surveys reach-scale riverbanks at sufficient resolution and accuracy to quantify bank retreat and identify morphological features of the complete erosion cycle, which enables the characterization of bank erosion at the process scale. Second, the oblique orientation of heterogeneous sedimentary strata with respect to the canalized Meuse River alignment explains the formation and asymmetry of large embayments. Depositional layers of varying compositions, structured by scroll-bar formation during former river meandering, led to wide-ranging erosion rates within a relatively short reach, which formed distinct bankline patterns across diverse lithologies and above the controlled water level of the river. The frequent occurrence of this water level and the persistent ship wave attack shaped bank profiles of varying strengths with a mildly sloping terrace. The presence of isolated trees on the floodplain only locally delays erosion rates. 
Bank retreat rates at single cross sections primarily depend on the lithology near the minimum regulated water stage. Third, the evolution of bank profiles revealed the active role of ship waves in erosion progression, even at well-developed terraces. Currents initially contribute to all phases of the erosion cycle, but they gradually exert lower shear stresses on the upper bank as the terrace elongates. Their later role at intermediate stages of development is reduced to the destabilization of steep high banks through water level fluctuations, without the capacity to transport slump blocks. The resistance to erosion of the bank lithology defines the terrace geometrical proportions and the pace of morphological evolution of bank profiles. For instance, at a given time after protection removal, less cohesive banks can be present at intermediate stages of development while more cohesive banks remain at early stages. The latter present shorter and shallower terraces, whereas the opposite holds for the former. Vegetation temporarily protects the upper bank from failure and toe erosion, but its permanence is subject to terrace stability and its effectiveness in dissipating waves. Biofilms are able to partially cover well-developed terraces, changing entrainment thresholds. Fourth, based on the above conceptual framework of bank profile evolution, a model was developed which captures the observed non-linear morphodynamics driven by ship waves in regulated settings. This new tool estimates long-term retreat by accounting for the main erosion drivers and essential mechanisms. Equilibrium bank profiles are reached once wave-induced shear stresses fall below the threshold for entrainment of cohesive soils. Unlike previous models of ship-induced erosion, the process-based approach makes it possible to distinguish the contribution of each factor to erosion. Primary waves are found to exert the highest loads on the terrace, shaping long-term profiles and defining ultimate retreat. 
To apply the model, it is necessary to measure or estimate the largest primary wave and the soil cohesion at the controlled level, preferably in the range -1.00 m to +0.50 m with respect to it. The above findings are based on cohesive banks in a straight reach of a regulated river. The presence of gravel layers in the bank changes the morphological response to ship waves due to the armouring of lower strata. In such cases, the bank terrace can reach a transverse slope in dynamic equilibrium defined by grain size, as long as longitudinal currents do not transport the gravel to the lower bank. The lower non-cohesive layer of composite banks responds in a similar way, eventually reaching a dynamic equilibrium, after which a final retreat of the upper cohesive layer is possible. The position of banks in the river planform affects the magnitude and duration of the contribution of currents to upper bank erosion. Their direct impact, especially during high floods, can dominate bank retreat during long periods if the flow is persistently steered against the upper bank, as at outer bends. Unregulated rivers present higher shear stresses than those with controlled stages. Their sandy strata of composite banks are normally exposed to currents and waves, creating larger morphodynamics and more challenging conditions for vegetation growth. The new model to estimate final retreat of cohesive banks may be used to prepare a reach-scale strategy that defines the most convenient approach for stretches with similar morphological behaviour and available space to develop. In this way, the eventual need to reduce or stop erosion at sections with future excess retreat is determined in advance. In order to make the most of re-naturalized banks in terms of their benefits for ecological processes and habitat diversity in navigable rivers, the advantages of shallow areas with less perturbed zones should be sought where possible. 
Two phases of interventions are recommended, a first phase where ship waves freely reach the bank for terrace creation, responding to local lithologies, and a second phase with lowered erosive loads, facilitated by slightly submerged pre-banks. The latter phase increases the possibilities for vegetation, and likely other living organisms, to develop. The knowledge and tools now available create new possibilities for improved management of re-naturalized banks in navigable rivers. The progress made helps to better understand the contribution of different drivers to bank erosion and to identify which factors control retreat at different bank types, stages of development, and settings. The new insights explain how to apply SfM-UAV to monitor bank erosion processes along river reaches, interpret bankline patterns, assess the role of isolated trees in bank retreat, and manage expectations regarding bank retreat and the role of vegetation to control erosion. The understanding of erosion processes in regulated navigable rivers and the possibility to estimate final erosion magnitudes open future opportunities to analyse the river system from a holistic perspective and to find creative ways to balance diverse river functions.","river morphodynamics; bank erosion; navigation; retreat modelling; river restoration","en","doctoral thesis","","978-94-6366-352-6","","","","","","","","","Rivers, Ports, Waterways and Dredging Engineering","","",""
"uuid:69d868ef-33d0-4abf-a33c-dac9a0577e78","http://resolver.tudelft.nl/uuid:69d868ef-33d0-4abf-a33c-dac9a0577e78","From problem to solution: A few stories about design and business for sustainable development","Baldassarre, B.R. (TU Delft Marketing and Consumer Research)","Hultink, H.J. (promotor); Bocken, N.M.P. (promotor); Calabretta, G. (copromotor); Delft University of Technology (degree granting institution)","2021","","","en","doctoral thesis","","","","","","","","2024-02-01","","","Marketing and Consumer Research","","",""
"uuid:42c5bef9-8195-42a5-a103-49b8bbbc2d96","http://resolver.tudelft.nl/uuid:42c5bef9-8195-42a5-a103-49b8bbbc2d96","Monitoring Aerosol Cloud Interactions in Liquid Water Clouds","Sarna, K. (TU Delft Atmospheric Remote Sensing)","Russchenberg, H.W.J. (promotor); Delft University of Technology (degree granting institution)","2021","This thesis presents a new method for the continuous observation of aerosol-cloud interactions with ground-based remote sensing instruments. The described method is based on measurements from UV lidar, radar and radiometer. All of those instruments are capable of obtaining continuous, high-resolution measurements. In order to facilitate its easy implementation at measuring sites, the method is based on a standardized Cloudnet data format. The main goal is to monitor the change in the cloud droplet concentration, as obtained from the measurements by cloud radar and radiometer, to then compare it to the aerosol background below the cloud, represented by the attenuated backscatter measured by UV lidar. The response of the cloud to the aerosol background can best be measured when the amount of available water is kept constant. Hence the measurements from the radiometer, specifically the derived liquid water path (LWP), are used to constrain the cloud response. Based on the value of the LWP, the analyzed data are divided into bins, and for each of these the relation between the cloud droplet effective radius and the integrated value of the attenuated backscatter is calculated. This metric is called ACIr and is used to describe the strength of the relation between the cloud's microphysical properties and the aerosol background below the cloud.
The method was first tested and applied to pristine marine clouds as measured at Graciosa Island in the Azores. The application was then extended to the Cabauw site located in the Netherlands. On both sites a decrease in the cloud size was observed in combination with a simultaneous increase of the aerosol loading below the cloud. This relation was particularly strong for a mid-range of the LWP, between 40 and 60 gm-2 LWP for the cases from the Azores and between 60 and 105 gm-2 for the cases from the Netherlands. These results indicate that the process of aerosol-cloud interactions is predominant only under those conditions where a moderate amount of water is available. When the amount of available water is less than 40 gm-2 this process is harder to observe, due to the initial stage of cloud formation. In the case of LWP above 105 gm-2 other cloud processes, such as collision and coalescence, seem to be predominant. The results from the analysis of the Cabauw dataset, which was the more extensive dataset, also made clear that updraft within the cloud plays a significant role in activating aerosol particles into cloud droplets. A possible extension of the presented method includes obtaining optical cloud extinction from the UV lidar measurements. The presented retrieval method can obtain very reliable results when compared to the simulated results. Hence the cloud optical extinction can be used as a proxy of the cloud properties, and the described method of monitoring aerosol-cloud interactions can be applied to measurement sites where only UV lidar and radiometer are present. This thesis shows that ground-based remote sensing instruments used in synergy can efficiently and continuously monitor aerosol–cloud interactions.","aerosol; clouds; aerosol-cloud interactions; remote sensing","en","doctoral thesis","","","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:cce8dbcb-cfc2-4fa2-b78b-99c803dee02d","http://resolver.tudelft.nl/uuid:cce8dbcb-cfc2-4fa2-b78b-99c803dee02d","From atomic-scale imaging to quantum fault-tolerance with spins in diamond","Abobeih, M.H.M.A. (TU Delft QID/Taminiau Lab)","Hanson, R. (promotor); Taminiau, T.H. (copromotor); Delft University of Technology (degree granting institution)","2021","Owing to its exceptional spin properties and bright spin-photon interface, the nitrogen-vacancy (NV) center in diamond has emerged as a promising platform for quantum science and technology, including quantum communication, quantum computation and quantum sensing. In this thesis we develop novel methods for atomic-scale imaging and high-fidelity control of complex nuclear-spin systems coupled to the electron spin of an NV center in diamond. This well-controlled quantum system provides new opportunities in quantum sensing, quantum information processing, and may also form the building block of a large-scale quantum network, one of the key goals in quantum technology.","","en","doctoral thesis","","978-90-8593-461-5","","","","","","2021-03-01","","","QID/Taminiau Lab","","",""
"uuid:e45cea45-8915-4a11-b8fd-389cb3e19d22","http://resolver.tudelft.nl/uuid:e45cea45-8915-4a11-b8fd-389cb3e19d22","Modeling the atmospheric diurnal cycle","van Hooft, J.A. (TU Delft Atmospheric Remote Sensing)","van de Wiel, B.J.H. (promotor); Russchenberg, H.W.J. (promotor); Popinet, Stéphane (copromotor); Delft University of Technology (degree granting institution)","2021","Weather and climate influence life in many ways; varying climatic conditions can be associated with varying human cultures and on a day-to-day basis, the weather influences our plans and mood. As such, a proper prediction of the weather is of great importance. Ultimately, the weather is fueled by solar irradiation, which changes sharply over the course of the day. The relative position of the sun is directly influencing the meteorological properties in the atmosphere closest to the surface. The atmospheric state in the lowest few kilometers, known as the troposphere, is therefore characterized by a daily cycle: during daytime, the sun heats the air and at night, the atmosphere typically cools down and the wind settles a bit.
In spite of its omnipresence and importance, this diurnal cycle in weather patterns is still not fully understood by the meteorological community. Consequently, it is also hard to describe and predict the weather properly. The challenges emerge from the fact that the weather is continuously evolving, and is therefore never ‘in balance’, which would simplify analysis of the processes.","Atmospheric boundary layer; diurnal cycle; modeling; adaptive methods; weather","en","doctoral thesis","","","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:efb18518-074f-4159-880b-0eb6cbc88ff3","http://resolver.tudelft.nl/uuid:efb18518-074f-4159-880b-0eb6cbc88ff3","Surface Crack Growth in Metallic Pipes Reinforced with Composite Repair System","Li, Z. (TU Delft Support Marine and Transport Technology)","Hopman, J.J. (promotor); Jiang, X. (copromotor); Delft University of Technology (degree granting institution)","2021","Surface cracks are serious threats to the structural integrity of offshore metallic pipes. This dissertation proposes a protocol of composite reinforcement on surface cracked metallic pipes subjected to cyclic loads, aiming to decrease the crack growth rate and prolong the residual fatigue life. The main objective of this dissertation is to reveal the mechanism of the composite reinforcement on surface crack growth in metallic pipes, in order to develop/improve the associated Composite Repair System (CRS) standards. For this purpose, a series of investigations to determine the crack growth behaviour and possible failure modes have been conducted through numerical and experimental approaches. Finally, an analytical method is proposed to evaluate the Stress Intensity Factor (SIF) of the surface crack in metallic pipes reinforced with CRS.","Surface crack; Fatigue crack growth; Composite reinforcement; Fracture mechanics; Offshore metallic pipes; Structural integrity; Finite Element Method (FEM); Debonding; Cohesive Zone Model","en","doctoral thesis","TRAIL Research School","978-90-5584-283-4","","","","TRAIL Thesis Series no. T2021/8, the Netherlands Research School TRAIL","","","","","Support Marine and Transport Technology","","",""
"uuid:c21d4943-b848-4e77-b5b8-a5423f751dbd","http://resolver.tudelft.nl/uuid:c21d4943-b848-4e77-b5b8-a5423f751dbd","Design and Optimization of Road Networks for Automated Vehicles","Madadi, B. (TU Delft Transport and Planning)","van Arem, B. (promotor); van Nes, R. (promotor); Snelder, M. (copromotor); Delft University of Technology (degree granting institution)","2021","Automated vehicles (AVs) are on the horizon, and they are expected to deliver traffic safety and efficiency benefits to transportation systems. There are different automation levels for AVs based on the functionalities of the automation systems and their operating design domain (i.e., under which conditions these functionalities can be realized). AVs with limited automation functions are already available on the market; however, fully automated vehicles with an unlimited operational design domain (ODD) are not expected in the near future. Reaching a high market penetration rate of fully automated vehicles is a gradual process that can take several decades. Thus, for a long time, a heterogeneous mix of traffic with AVs of different automation levels and regular vehicles on the roads will be inevitable. During this transition period with mixed traffic, relying on driving automation technology alone without infrastructure support might compromise the potential safety and efficiency gains of AVs. A proper infrastructure can support AVs’ functionalities, extend their ODD, and improve safety for all road users, while a lack of proper infrastructure can negatively influence these factors. Besides, road infrastructure elements usually have a long lifetime and adjusting them can be costly. Hence, there is a strong need for research and planning to ensure that large infrastructure investments provide the highest societal benefits...","Network design problem; Automated vehicles; Optimization","en","doctoral thesis","TRAIL Research School","978-90-5584-272-8","","","","TRAIL Thesis Series no. T2021/3, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:467f9f64-ae55-4e66-be24-0d0cb5f46fc4","http://resolver.tudelft.nl/uuid:467f9f64-ae55-4e66-be24-0d0cb5f46fc4","Novel applications of ground-penetrating radar in oil fields","Zhou, F. (TU Delft Applied Geophysics and Petrophysics)","Slob, E.C. (promotor); Delft University of Technology (degree granting institution)","2021","Ground-penetrating radar (GPR), usually working in the frequency range from tens of megahertz to several gigahertz, is widely applied in near-surface mapping. In recent decades, GPR has frequently been utilized for fluid-related applications, such as groundwater assessment, contaminant monitoring, and water-filled fracture detection, based on the principle that at these radar frequencies, electromagnetic (EM) waves are sensitive to water content. When operated from the surface, ground-penetrating radars are limited to a survey depth of up to tens of meters in most soils. To further extend the detection range, borehole radar was developed by placing the GPR antennas in boreholes close to the underground targets. Different downhole survey modes, e.g. single-hole, cross-hole, and vertical radar profiling measurements, have demonstrated applicability for fracture detection, metal ore exploration, and water content prediction, up to a depth of a few hundred meters from the ground. Deeper GPR measurements in hydrocarbon reservoirs have been proposed. Some theoretical studies have shown that a borehole radar is expected to have the capability of mapping structures in the range of a few decimeters to ten meters away from the borehole in most reservoir environments, filling the gap left by the conventional electrical, sonic and nuclear logging methods. Moreover, GPR has a relatively high radial resolution and is best suited for downhole structure and fluid imaging. This thesis aims to explore the potential applications of GPR and assess their value in these oil industry applications. 
Applicability studies are carried out in the fields of well logging and monitoring of oil production. Numerical simulations are carried out, where joint multiphase flow and borehole radar modelling is established.","Ground-penetrating radar; Borehole geophysics; Enhanced oil recovery; Reservoir estimation","en","doctoral thesis","","978-94-6384-183-2","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:000612a5-fbda-4394-b36f-45bf44ef9e21","http://resolver.tudelft.nl/uuid:000612a5-fbda-4394-b36f-45bf44ef9e21","Modeling the carbon dioxide electrocatalysis system","Bohra, D.","Smith, W.A. (promotor); Pidko, E.A. (promotor); Delft University of Technology (degree granting institution)","2021","The increasing level of atmospheric carbon dioxide (CO2) caused by human activities is a serious threat to ecosystems on Earth due to global warming. A transition to net-zero CO2 emissions before 2050 is necessary to limit the global mean surface temperature rise to 1.5ºC-2ºC above preindustrial levels. Carbon capture and utilization technologies are an important piece of the decarbonisation puzzle. Electrochemical conversion of CO2 (eCO2R) using renewable electricity can contribute to an integrated low-carbon energy and materials system by providing grid flexibility and a means to produce essential hydrocarbon molecules. Considerable progress has been made in the past decade in developing eCO2R systems towards practical feasibility. However, major challenges remain to realize efficient and cost-competitive eCO2R devices at industrial scale.","","en","doctoral thesis","","978-94-6421-162-7","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:7d2ced9a-1b58-40d1-9e2c-0e4fb6a1c882","http://resolver.tudelft.nl/uuid:7d2ced9a-1b58-40d1-9e2c-0e4fb6a1c882","Transient mechanics of foams and emulsions","Boschan, J. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Tighe, B.P. (copromotor); Delft University of Technology (degree granting institution)","2021","Systems far from equilibrium have numerous practical uses, but challenge our understanding of their underlying physics. Materials like foams, emulsions, suspensions and granular matter can show liquidlike properties or get trapped in a solidlike jammed state. The phase transition between the flowing and static state is often referred to as the ‘jamming transition‘. This work focuses on the mechanical behavior of amorphous viscoelastic materials, close to the jamming point. In many traditional solids, the relation between stress and strain is well described by a linear proportionality, known as Hooke’s law. In jammed solids, by contrast, the stressstrain relation quickly becomes nonlinear, making them much harder to model. Here we ask how and why the linear response breaks. To answer the questions, we investigate the breakdown of linear response as a function of deformation rate and amplitude.","Jamming; Rheology; Viscoelastict; Plasticty; Marginal Solids; Soft Spheres; Shear Modulus","en","doctoral thesis","","978-94-6366-349-6","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:ff2ffac0-c76a-4a3d-af22-88f2151f6133","http://resolver.tudelft.nl/uuid:ff2ffac0-c76a-4a3d-af22-88f2151f6133","Spins in Josephson Junctions","Bouman, D. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Geresdi, A. (copromotor); Delft University of Technology (degree granting institution)","2021","Quantum technology is an exciting research area that has gained a lot of interest in the past few decades with the advances made in quantum computing. The quantum computer promises speedups that are impossible to achieve with classical computers. It does so by exploiting quantum mechanical properties such as entanglement and superposition with the quantum bit, or qubit, as its main building block.
Today, quantum computers are in their infancy, and realizing a computer powerful enough to perform useful calculations poses major challenges, the fragility of qubits being the main difficulty. Approaches to mitigate this include implementing error correction schemes or alternative qubit designs. Topological qubits are part of the latter category and exploit the robustness of topologically invariant states to small perturbations to create more stable qubits.
In this thesis we explore semiconductor-superconductor hybrid nanowire structures and in particular the interaction of electron spins in quantum dots with superconductivity. When connected to superconductors, arrays of superconductor-quantum dot hybrids can host Majorana states, a promising approach to realizing topological qubits. Creating Majoranas in quantum dots, as opposed to traditional methods, offers greater control over their properties. Additionally, understanding the interaction between spins in these quantum dot-superconductor hybrids could enable new readout methods or coupling mechanisms between superconducting and spin qubits.
We start by investigating a nanowire SNS Josephson junction with signatures of Majorana states. A nanowire junction is capacitively coupled to an on-chip microwave detector made from a Josephson tunnel junction. We monitor the Josephson radiation frequency as a function of magnetic field and find a transition from a $2\pi$ to a $4\pi$-periodic Josephson current-phase relation, consistent with a topological transition.
In a different device, we investigate a multi-orbital double quantum dot Josephson junction. We measure the excitations between doublet and singlet states that arise in a quantum dot weakly coupled to a superconducting lead, also known as Yu-Shiba-Rusinov (YSR) states. With increased dot-lead coupling we observe a supercurrent and reveal its current-phase relation, both in the single- and multi-orbital regimes. We show that in the single-orbital regime the supercurrent sign follows an even-odd charge occupation effect. In the even charge parity sector, we observe a supercurrent blockade when the spin ground state transitions to a triplet -- demonstrating a direct spin to supercurrent conversion. For yet stronger dot-lead coupling we find a rectified current-phase relation at the transition between even and odd charge states. We investigate this apparent non-equilibrium effect and discuss possible explanations.
To conclude, we discuss possible applications in spin qubit state readout and extensions of the device geometry towards realizing a Kitaev chain able to host Majorana states.","","en","doctoral thesis","","978-90-8593-458-5","","","","","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:363f4643-0a68-4726-9945-e8daf6e0350c","http://resolver.tudelft.nl/uuid:363f4643-0a68-4726-9945-e8daf6e0350c","Genetically encoded phospholipid production for autonomous synthetic cell proliferation","Blanken, D.M. (TU Delft BN/Christophe Danelon Lab)","Dogterom, A.M. (promotor); Danelon, C.J.A. (promotor); Delft University of Technology (degree granting institution)","2021","Cells, the building blocks of life, are vastly complex. This complexity confers to every living organism the ability to maintain oneself, reproduce oneself, and evolve. Creating a minimal system from nonliving components that is capable of self-maintenance, self-reproduction, and evolvability, will greatly increase our understanding of life. Essential features of every cell, synthetic or otherwise, are the compartment, a form of information transfer, and the ability to proliferate. In chapter 1, I argue that autonomy, that is self-governance, is another key characteristic of life. Therefore, to create a synthetic cell, aforementioned features should be recapitulated in an autonomous manner. This informs the synthetic cell design that is adhered to in this thesis. Phospholipid vesicles, so called liposomes, serve as a compartment. Reconstitution of the central dogma of molecular biology by cell-free gene expression with the PURE system enables the use of a genetic program encoded on DNA. By encoding its working instructions on its own DNA, the cell will be autonomous. Proliferation is carried out by a set of modules encoded on the DNA. These modules will serve to replicate the DNA itself, to replenish the gene expression machinery, and to stimulate growth and trigger division of the compartment. 
The goal of this thesis is to reconstitute compartment growth by cell-free gene expression of DNA encoding for phospholipid synthesis machinery.","Synthetic biology; synthetic cell; minimal cell; liposomes; cell-free gene expression; phospholipid synthesis; cell growth; cell proliferation","en","doctoral thesis","","978-90-8593-457-8","","","","","","","","","BN/Christophe Danelon Lab","","",""
"uuid:279260a6-b79e-4334-9040-e130e54b9360","http://resolver.tudelft.nl/uuid:279260a6-b79e-4334-9040-e130e54b9360","On the dynamics of tidal plume fronts in the Rhine Region of Freshwater Influence","Rijnsburger, S. (TU Delft Environmental Fluid Mechanics)","Pietrzak, J.D. (promotor); Horner-Devine, A.R. (promotor); Souza, A.J. (promotor); Delft University of Technology (degree granting institution)","2021","River plumes are the link between the river and the ocean, and therefore play an important role for the health of coastal and marine ecosystems. As a result of human activity in coastal areas, the freshwater discharge transports anthropogenic inputs into the ocean. It is therefore important to understand the processes controlling transport, dilution and dispersion in river plumes from the river mouth up to tens of kilometers and beyond. River plumes are buoyant bodies of brackish water overlaying saltier water created by freshwater outflow. This thesis focuses on an improved understanding of the dynamics in the Rhine River Plume, which is influenced by strong tidal currents and bottom friction due to a shallow shelf. In particular, we study the plume in two different regimes: 70 - 80 km north of the river mouth and close to the river mouth (within a radius of 20 km).","","en","doctoral thesis","","978-94-6366-355-7","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:c5f9cdd2-fc2b-433b-bffe-2e68cb3799c7","http://resolver.tudelft.nl/uuid:c5f9cdd2-fc2b-433b-bffe-2e68cb3799c7","High productivity hollow fiber membranes for CO2 capture","Etxeberria Benavides, M. (TU Delft ChemE/Catalysis Engineering)","Kapteijn, F. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2021","","","en","doctoral thesis","","978-94-6421-145-0","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:3cbbf7ad-6ac7-4f38-9fa6-85c7a4f11034","http://resolver.tudelft.nl/uuid:3cbbf7ad-6ac7-4f38-9fa6-85c7a4f11034","Development of DNA diagnostics of neglected tropical diseases in resource-limited settings","Bengtson, M.L. (TU Delft BN/Cees Dekker Lab)","Dekker, C. (promotor); Delft University of Technology (degree granting institution)","2021","The aim of this thesis was to develop a DNA-detection scheme for a point-of-care diagnostic test for Neglected Tropical Diseases (NTDs) for use within resource-limited settings. The scientific innovation is to develop an adaptable DNA-detection scheme, using CRISPR-dCas9 (catalytically inactive Cas9), that can detect the DNA of any pathogen in bodily fluids i.e. in a blood or urine sample. This detection of DNA of the pathogen will be much more reliable than antibody-based tests as it will work independently of the persons immune response. Unlike current antibody-based diagnostic tests, it will be able to distinguish between current and previous infections. Specifically for visceral leishmaniasis (VL), the current rk39 antigen-based rapid diagnostic test lacks specificity and sensitivity in sub-Saharan Africa, where VL remains prevalent. We aim for a DNA-detection scheme that does not require infrastructure, electricity, or skilled laboratory personnel to operate. Furthermore, the DNA-detection scheme will need to be functional at a broad temperature range, yet remain highly sensitive and specific. Such a DNA-detection scheme can be a promising tool for effective diagnoses of NTDs within resource-limited settings, though it needs to be further tested, incorporated into a packaged test format, and validated in the field. 
Integrating this DNA-detection scheme into a potentially low-cost diagnostic test is a very promising alternative to current diagnostic tests in both high-resource and resource-limited settings.","point-of-care diagnostic test; Neglected tropical diseases; resource-limited settings; visceral leishmaniasis; context-driven design; CRISPR/cas9","en","doctoral thesis","","978-90-8593-463-9","","","","","","","","","BN/Cees Dekker Lab","","",""
"uuid:192f633d-1bdb-440c-b435-7c0cd0d1f648","http://resolver.tudelft.nl/uuid:192f633d-1bdb-440c-b435-7c0cd0d1f648","Sharp Estimates and Extrapolation for Multilinear Weight Classes","Nieraeth, Z. (TU Delft Analysis)","Frey, D. (promotor); Veraar, M.C. (promotor); Delft University of Technology (degree granting institution)","2021","The subject of this thesis is the study of the multilinear Muckenhoupt weight classes and the quantitative boundedness of operators with respect to these weights in both the scalar-valued and the vector-valued setting. This includes the study of multisublinear Hardy-Littlewood maximal operators, sparse forms, and multilinear Rubio de Francia extrapolation methods.","Banach function space; Bilinear Hilbert transform; Calderón-Zygmund operator; Hardy-Littlewood maximal operator; Limited range; Muckenhoupt weights; Multilinear; Rubio de Francia extrapolation; UMD; Sparse domination","en","doctoral thesis","","978-94-6421-169-6","","","","","","","","","Analysis","","",""
"uuid:22d925ab-e6a8-4360-8f92-f808c39f89a2","http://resolver.tudelft.nl/uuid:22d925ab-e6a8-4360-8f92-f808c39f89a2","Tradable Credits for Congestion Management: support/reject?","Krabbenborg, L.D.M. (TU Delft Transport and Logistics)","van Wee, G.P. (promotor); Molin, E.J.E. (promotor); Annema, J.A. (copromotor); Delft University of Technology (degree granting institution)","2021","Dozens of variants for congestion charging have been studied and discussed in scientific and political circles in the search for an efficient policy to abate the negative effects of car use, these being congestion and emissions, in particular. Although congestion charging provides clear economic advantages and is technically possible, the actual implementation of these schemes is rare. Proposals for charging schemes typically stir up public opposition, which negatively influences political support and overall feasibility. Reoccurring arguments in the public debate include people’s disbelief in the scheme’s effectiveness, conviction that it is ‘yet another revenue stream for the government’, fear that it will treat them or others unfairly, and car users, especially, expect that it will (financially) disadvantage them. The concept of tradable peak credits (TPC) is a drastically different alternative that can potentially address these concerns and hence become a more feasible policy instrument. This concept is based on the cap-and-trade principle and is, in theory, very effective since it puts a firm ‘limit’ on road access during peak hours. Access rights - the credits - are distributed among people who can use them to access the road or trade them via an online market where the credit price is set by supply and demand. Thus, money flow stays within the group of users and does not flow towards the government. Since the credit distribution does not affect the scheme’s efficiency, the operator (government) can distribute the credits in any way to meet equity concerns. 
The main reason for the recent upsurge in literature on tradable credits in transportation research lies in the notion that support from the public in general, and of car users in particular, might be higher than it is for a congestion charge. Studies on theoretical explorations, scheme design, effects on traffic flow and behavioural effects have expanded in the last decade, but empirical studies on public support have remained remarkably scarce. A few empirical studies on related concepts in mobility management have been conducted, but these typically study support for a fixed scheme design, whereas support may heavily depend on the scheme design, for example on the credit distribution. Furthermore, public support for road pricing is often studied in a quantitative way and analysed on an aggregated level. However, the public debate about road pricing is full of varying arguments, which indicates that the public is very heterogeneous in its opinions and preferences. To better understand how (novel) road pricing can be designed and implemented, this thesis therefore also focuses on the underlying arguments and the differences between (groups of) people. Lastly, a broader view on the feasibility of TPC with insights from fields other than transportation economics also seems to be missing. Hence, the main aim of this study is to increase the understanding of the feasibility - and in particular public support - of Tradable Peak Credits (TPC) as a policy instrument for congestion management...","","en","doctoral thesis","TRAIL Research School","978-90-5584-275-9","","","","TRAIL Thesis Series no. T2021/2, the Netherlands Research School TRAIL","","","","","Transport and Logistics","","",""
"uuid:d97f766d-bbad-4d08-9f3a-911ac9210416","http://resolver.tudelft.nl/uuid:d97f766d-bbad-4d08-9f3a-911ac9210416","NH3 condensation within plate heat exchangers: Flow patterns, heat transfer and frictional pressure drop","Toa, X. (TU Delft Engineering Thermodynamics)","Infante Ferreira, C.A. (promotor); Vlugt, T.J.H. (promotor); Delft University of Technology (degree granting institution)","2021","Energy shortage and energy related environmental problems are urgent issues to be addressed in the coming years. Low-grade heat is utilized to drive energy conversion cycle and to produce electricity, which is a renewable and sustainable approach to energy supply. These thermodynamic cycles for energy conversion require eco-friendly working fluids and highly efficient heat transfer processes. NH3 is a natural refrigerant with superior thermal properties such as large latent heat and high thermal conductivity. However, the application of NH3 is restrained due to safety issues. Plate heat exchangers have the potential to be used in the thermal facility of NH3 for the recovery of low-grade heat. These compact structures are able to transfer large heat loads with reduced charge of working fluid, thereby mitigating the safety risk. For instance, the Organic Rankine Cycles of NH3 equipped with plate heat exchangers have smaller sizes compared with the plants filledwith other refrigerants. Furthermore, plate heat exchangers have the advantage of design flexibility and easy maintenance for highly efficient heat transfer, bringing aboutwide utilization in refrigeration, pharmacy and chemical engineering. In this thesis, NH3 condensation is experimentally and theoretically investigated in plate heat exchangers. The main aim is to provide design methods of compact plate condensers used in the thermal facility of NH3, which are not available in open literature. The experiments ofNH3 condensation have been reported, but no design method is provided. 
The heat transfer and frictional pressure drop correlations of hydrofluorocarbons (HFCs), hydrocarbons (HCs) and hydrofluoroolefins (HFOs) are assessed making use of an experimental database. Most suitable correlations are recommended.","","en","doctoral thesis","","978-94-6384-173-3","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:f4ba6396-f4c0-4f8c-95be-2620d62e4387","http://resolver.tudelft.nl/uuid:f4ba6396-f4c0-4f8c-95be-2620d62e4387","Entanglement Generation in Quantum Networks: Towards a universal and scalable quantum internet","Dahlberg, E.A. (TU Delft QID/Wehner Group)","Wehner, S.D.C. (promotor); Delft University of Technology (degree granting institution)","2021","Quantum mechanics shows that if one is able to generate and manipulate entanglement over a distance, one is able to perform certain tasks which are impossible using only classical communication. Classical communication refers to what is used in the Internet of today. A quantum internet would therefore bring new capabilities to our highly connected world. These capabilities both involve (1) the ability to perform tasks with are provably impossible in the current Internet, such as unconditionally secure communication, and (2) the ability to perform certain tasks much more efficient, such as distributed (quantum) computing or extending the baseline of telescopes. To be able to build a quantum internet, two main components are needed: (i) hardware that can store, manipulate and entangle qubits and (ii) a software stack to control the hardware. The core task of both of these is to generate entanglement to be used by applications. In this thesis we focus on the latter, i.e. the development of software and protocols that enable entanglement generation using capable hardware. To enable a certain application, one can certainly, in theory, manually specify each operation the hardware should perform, involving micro-wave pulses, lasers etc. However, in practice this is not feasible, if not to say impossible, due to the complexity of the operations needed, especially in a distributed system such as a quantum network. What is needed is a software stack, which can help with abstracting complexity away in multiple layers. This allows for someone to program a protocol in one layer without knowing all the details of the lower layers. 
In particular, one can abstract away the hardware details in order to make higher-layer protocols and applications hardware-agnostic. Therefore, to be able to build a universal, efficient and scalable quantum internet, a software stack is crucial. In chapter 2 we start discussing the networking part of a software stack. Namely, we introduce a network stack for a quantum internet, drawing parallels to the TCP/IP suite of the classical Internet. We continue by proposing a service and interface for the lowest layer of the network stack: the link layer. The link layer is here responsible for generating entanglement between nodes in a quantum network which are directly connected by a quantum link, i.e. a fiber cable. When developing a protocol or application it is very useful to be able to run it, both to see if the intended ideas make sense and to check that the implementation is actually correct. Currently we do not have quantum hardware that exposes a full-fledged API that can be used to execute applications. For this reason, it is very useful to instead simulate the hardware in a way that exposes the same API as the hardware being developed. In chapter 3 we introduce SimulaQron for this exact purpose. Any application of a quantum internet will need entanglement in one way or another. However, entanglement is generally hard to generate and is usually the bottleneck when executing an application. We would therefore like to make use of the generated entanglement in the most optimal way. To be able to do this we need to understand how entanglement can be transformed and distributed in a quantum network. We study the entanglement of a particular class of states called graph states in chapters 4 to 9 and how these states can be transformed in a quantum network.","quantum networks; entanglement; quantum internet","en","doctoral thesis","","978-94-6384-187-0","","","","","","","","","QID/Wehner Group","","",""
"uuid:a9bb41e0-3d2a-4028-a218-bd85f2053545","http://resolver.tudelft.nl/uuid:a9bb41e0-3d2a-4028-a218-bd85f2053545","On the Design of Fly's Eye Lenses at Sub-THz Frequencies for Wideband Communications","Arias Campo, M. (TU Delft Tera-Hertz Sensing)","Llombart, Nuria (promotor); Neto, A. (promotor); Delft University of Technology (degree granting institution)","2021","The expanding demand for high-speed wireless communications is pushing current networks and systems close to their limits. The use of sub-THz bands, where large bandwidth is available, is pointed out as one of the key strategies to cope with the huge amount of data transfer in the next Beyond 5G and 6G communications generations. Despite the promising properties of these high frequency bands, many challenges arise with their exploitation. The increasing loss due to propagation spreading, as well as the higher atmospheric and rain loss, should be compensated with highly directive antenna concepts. Besides, the lower output power provided by the transceivers in this spectrum and the increasing noise figure of the receiving devices magnifies the importance of achieving highly efficient antennas and transitions to the active front-end. The implementation of multi-beam architectures becomes specially challenging when moving to higher frequencies. The chip area does not decrease with frequency as passive RF structures do, due to the decreasing electronics efficiency, hindering the integration of the active circuitry together with the antennas. The harnessing of the large bandwidths available should be supported by all system and network layers, which will require breakthroughs in the related fields. This dissertation focuses in the development of wideband, efficient antenna concepts with multi-beam capability for the next communication generations. 
The use of elliptical lens antennas with resonant leaky-wave feeders is proposed, reaching aperture efficiencies higher than 70% over more than 35% bandwidth, for the first time with leaky-waves. Fly's eye lens architectures are introduced to cover small cell or point-to-multipoint use cases, where multiple, static beams are required. In order to evaluate the feeder and lens performance, an analysis in reception combined with spectral Green's functions is applied, enabling the optimization of lenses with diameters of 20. Making use of this methodology, four lens designs have been developed, fabricated, and characterized at G-band (140 - 220 GHz) and H-band (220 - 320 GHz). The lens concepts presented concentrate on some of the main requirements to be integrated in the envisioned Fly's eye arrays: wide bandwidth, high aperture efficiency, low loss (antenna and transition to frontend), circular polarization, large scan range. New measurement strategies are presented, applicable to the characterization of lens antennas in the sub-THz bands…","Lens antennas; leaky-wave antennas; multi-beam antennas; dielectric gratings; polarizer; Green's function method; wideband communications","en","doctoral thesis","","978-94-6421-183-2","","","","","","","","","Tera-Hertz Sensing","","",""
"uuid:4384094a-a8ab-4ddc-87e1-078118280711","http://resolver.tudelft.nl/uuid:4384094a-a8ab-4ddc-87e1-078118280711","Crystal Engineering of Metal-Organic Frameworks for Molecular Recognition","Pustovarenko, Alexey (TU Delft ChemE/Catalysis Engineering)","Kapteijn, F. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2021","A world in which there is a demand for materials with various properties for different needs, requires robust tools to assist in this matter. In this regard, crystal engineering is among the most important tools in materials design and keeps developing in a sustained manner. Obviously, the application of crystal engineering principles to multifunctional microporous materials cannot cause any surprise, and Metal-Organic Frameworks (MOFs), as an example of the latter, stay in focus of scientific and industrial interests.","","en","doctoral thesis","","978-94-6421-153-5","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:4d48181b-5429-4bef-976d-95b6c9535825","http://resolver.tudelft.nl/uuid:4d48181b-5429-4bef-976d-95b6c9535825","Structural and Excited State Dynamics in Hybrid Halide Perovskites","Fridriksson, M.B. (TU Delft ChemE/Opto-electronic Materials)","Grozema, F.C. (promotor); Houtepen, A.J. (promotor); Delft University of Technology (degree granting institution)","2021","During the last decade perovskite materials have rapidly emerged, and are currently among the most promising candidates as materials for solar cells and other opto-electronic applications. Although our knowledge related to these materials has advanced rapidly in last few years there are still many unknown aspects and many challenges remain. These challenges include a solid understanding of the relation between the composition of the materials and their structural and the photophysical processes that occur on charge excitation. In addition, there are many challenges related to the synthesis of pure-phase, defect free materials, the stability in presence of oxygen and water, and the replacement of toxic elements such as lead. In his thesis we try to shine a light on some of these unknown aspects using a combination of computational techniques such as molecular dynamics simulations and experimental techniques measuring photoluminescence. Using molecular dynamics simulations, we study the relation between the composition of the material and the static and dynamic structural properties of the individual parts of the structures. This gives insight into the effect of reduced dimensionality or introduction of aromatic molecules has on the structure. In addition, this also gives new insight in the origin of the low temperature phase transition that occurs in some perovskite materials. Experimentally we look at non-radiative pathways in perovskite nanoplatelets and how we can overcome them. 
Ultimately, the results presented in this thesis give some new design guidelines for perovskite materials to optimize their properties for specific applications.","","en","doctoral thesis","","978-94-6332-718-3","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:f41b33db-30ce-42d5-9cad-06d12c50d90f","http://resolver.tudelft.nl/uuid:f41b33db-30ce-42d5-9cad-06d12c50d90f","Breaching Flow Slides and the Associated Turbidity Currents: Large-Scale Experiments and 3D Numerical Modelling","Alhaddad, S.M.S. (TU Delft Environmental Fluid Mechanics)","Uijttewaal, W.S.J. (promotor); Labeur, R.J. (copromotor); Delft University of Technology (degree granting institution)","2021","Underwater slope failure is a common problem in the fields of geotechnical, dredging and hydraulic engineering, posing a major risk to submerged infrastructure and flood defences along coasts, rivers, and lakes. The term ‘flow slide’ refers to a specific, complex failure mechanism of underwater slopes, which occurs when a substantial amount of sediment moves downslope and eventually redeposits, forming a milder slope. A distinctive feature of flow slides is that the sediment running downslope is transported as a sediment-water mixture rather than as a sediment mass, and thus it behaves as a viscous fluid. Breaching is a particular type of flow slide, described as a slow (mm/s), gradual, retrogressive erosion of submerged slopes that are steeper than the soil internal friction angle. Breaching has remained unexplored until it was identified in the 1970s by the Dutch dredging industry as an important production mechanism for stationary suction dredgers. In that period, breaching was not known as a failure mechanism of underwater slopes outside of the field of dredging. In the Netherlands, breaching is now an important consideration in the safety assessments of dikes. Breaching flow slides are accompanied by the generation of turbidity currents, which can be described as buoyancy-driven underflows generated by the action of gravity on the density difference between the water-sediment mixture and the ambient water. These currents pose a serious threat to submarine structures placed at the seafloor, such as oil pipelines and communication cables. 
Breaching-generated turbidity currents run over and directly interact with the eroding, submarine slope surface (breach face), thereby enhancing further sediment erosion. Investigating and understanding this interaction is critical for predicting the failure evolution during breaching. This is an important consideration for avoiding the risks of breaching during dredging and for the design of effective mitigation measures to protect hydraulic structures. In this dissertation, the evolution of the breaching failure and the associated turbidity currents are investigated through large-scale laboratory experiments and numerical modelling. This study begins by surveying the state-of-the-art knowledge of breaching flow slides, with an emphasis on the relevant fluid mechanics, providing a better insight into the physics and identifying the relevant knowledge gaps. Then, existing breaching erosion closure models were employed in combination with the three-equation model of Parker et al. (1986) and applied to a typical case of a breaching submarine slope. The sand erosion rate and hydrodynamic properties of the turbidity current were found to vary substantially between the erosion closure models, motivating further experimental studies on breaching flow slides, including detailed flow measurements, for validation purposes and improving the current understanding of the breaching phenomenon. At the Laboratory of Fluid Mechanics of Delft University of Technology, a set of unique large-scale experiments was conducted in which various non-vertical initial breach faces were tested, providing the first quantitative data for such initial conditions. Direct measurements of breaching-generated turbidity currents are thus provided, illustrating their spatial development and visualizing the structure of their velocity and sediment concentration.
The analysis of the experimental results indicated that breaching-generated turbidity currents are self-accelerating; sediment entrainment and flow velocity enhance each other in a positive feedback loop. The turbidity currents accelerate downslope, and consequently the sand erosion rate increases downslope until a certain threshold, likely imposed by turbulence damping. This leads to the steepening of the breach face, which induces the collapse of coherent sand wedges (surficial slides). These slides considerably enhance local sediment erosion and affect the hydrodynamics and thus increase the erosive capacity of the turbidity current. Even though breaching is a gravity-induced failure in the first place, the generated turbidity current seems to start dominating the failure just after its onset until the final deposition of the sediments. Owing to several difficulties encountered during the lab experiments, obtaining measurements of turbulence quantities of the flow was not possible. The lack of such measurements hampers the estimation of the flow-induced bed shear stress and hence the prediction of erosion during breaching. This motivated the use of an advanced 3D numerical model as a complementary tool to the experimental work, to gain additional insights into the behavior and structure of breaching-generated turbidity currents. Large eddy simulations of breaching-generated turbidity currents were conducted, providing deeper insights into their hydrodynamics and physical structure. Through these turbulence-resolving simulations, it was shown that the proposed numerical tool can reasonably reproduce several distinctive aspects of the flow, such as the vertical density distribution, and the spatial development down the breach face. A limitation of the model is that it underestimates the thickness of the current. The numerical results confirm the self-accelerating behavior of breaching-generated turbidity currents as indicated by the experimental results.
Considering the challenging conditions of breaching, a new breaching erosion closure model was proposed and validated using the laboratory experimental data obtained within this study. Good agreement is observed between experimental and numerically predicted erosion rates. Breaching-generated turbidity currents are found to exhibit a self-similar behavior; velocity, concentration, Reynolds stress, and turbulent kinetic energy profiles take a self-similar shape. Based on a sensitivity analysis, sand erosion during breaching is found to be sensitive to the in situ porosity; the lower the in situ porosity, the higher the sand resistance to erosion. The experimental measurements acquired within this study may be utilized for the validation of existing and new numerical models used to simulate breaching flow slides. These models, based on the findings of this research, must be capable of reasonably reproducing the hydrodynamics and sediment transport of turbidity currents. The self-accelerating behavior of this current implies that it is quite dangerous, and that breaching could be a triggering mechanism for sustained turbidity currents in deep water. The knowledge gained from this dissertation may help towards the design of robust mitigation measures against breaching flow slides and towards the optimization of the sand production process during dredging while minimizing the associated risk for the surrounding environments. In addition, it may lead to a more accurate interpretation of the process responsible for the encountered submarine slope failures.","Flow slide; breaching; turbidity current; sediment entrainment; pick-up function; erosion model; erosion rate; Self-accelerating current; surficial slide; large eddy simulation","en","doctoral thesis","","978-94-6366-345-8","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:dd646e08-9839-4c17-b5cd-867c6f1e913d","http://resolver.tudelft.nl/uuid:dd646e08-9839-4c17-b5cd-867c6f1e913d","Universal quantum logic in hot silicon qubits","Petit, L. (TU Delft QCD/Veldhorst Lab)","Vandersypen, L.M.K. (promotor); Veldhorst, M. (copromotor); Delft University of Technology (degree granting institution)","2021","In the last decade silicon has emerged as a potential material platform for quantum information. The main attraction comes from the fact that silicon technologies have been developed extensively in the last semiconductor revolution, and this gives hope that quantum dots can one day be fabricated with the same ease transistors are made today. However, building a large-scale quantum computer also presents complications that go beyond fabrication. The heat-dissipation challenge is one of these. Like many other qubit platforms, quantum dot qubits are cooled down to temperatures close to absolute zero in order to overcome the problem of decoherence. While this can be advantageous in few-qubit experiments, it soon becomes impractical as the qubit number increases. The first part of the thesis describes a series of experiments that demonstrate how Si-MOS quantum dot qubits can be successfully operated beyond one Kelvin, where the increase in cooling power is substantial. The first step is to demonstrate that electrons have sufficiently large energy scales to be properly isolated and controlled at these high temperatures. In the first experimental chapter of the thesis we demonstrate a highly uniform double quantum dot system at the temperature of 0.5 K. The on-chip single-electron transistor (SET) shows very regular oscillations and an exceptional sensitivity to dot-reservoir and interdot transitions. The electrons in the quantum dot can also be completely decoupled from the reservoir, resulting in a fully isolated system. In order to perform quantum operations it is not only crucial to isolate electrons, but also to couple them.
While this is routinely achieved in Si-SiGe heterostructures, it is usually more challenging in Si-MOS due to the larger disorder at the Si-SiO2 interface. However, we find that in the same device we can control the tunnel coupling between the electrons, in a range from below 1 Hz up to 13 GHz. This would allow us to isolate the electrons for single-qubit operations and to couple them for two-qubit gates or readout using Pauli spin blockade. Part of the challenges concerning operation of ‘hot’ spin qubits lies in the temperature dependence of two parameters: the spin lifetime and the charge noise, which are thoroughly studied in chapter 4. The spin lifetime is usually very long in silicon, due to a weak spin-orbit coupling, and it can approach seconds at low magnetic fields. However, the temperature increases the excitations in the phonon bath and activates two-phonon transitions, which have a steep temperature dependence. These processes, which we experimentally find to start around 500 mK, can ultimately limit qubit performance. However, the spin lifetime can be significantly improved by working in a low magnetic field and high valley splitting regime. Si-MOS quantum dot qubits have a large valley splitting, usually of several hundreds of µeV, and a low magnetic field can be set by reading out the qubits with Pauli spin blockade. This guarantees that useful spin lifetimes can still be found at temperatures close to one Kelvin. In particular, in chapter 4 we measure values exceeding 1 ms at 1.1 K, and discuss how they can be further improved in the case of a larger valley splitting.","Quantum bit (qubit); silicon-based; Quantum dots; High temperature","en","doctoral thesis","","9789085934592","","","","","","","","","QCD/Veldhorst Lab","","",""
"uuid:14fd5c7b-0312-4139-a6a2-1ac3d8eb4058","http://resolver.tudelft.nl/uuid:14fd5c7b-0312-4139-a6a2-1ac3d8eb4058","Business innovation towards a circular economy: An ecosystem perspective","Konietzko, J.C. (TU Delft Marketing and Consumer Research)","Hultink, H.J. (promotor); Bocken, N.M.P. (copromotor); Delft University of Technology (degree granting institution)","2021","We currently live in a carbon-intensive linear economy. On the basis of burning fossil fuels, we take, make and waste an increasing amount of materials. This has pushed us against serious planetary boundaries. Radical reductions in environmental impact are needed over the coming decades. Entire economies and societies will have to reorganize. A promising candidate to support this reorganization is a circular economy. It cuts waste, emissions and pollution, and it keeps the value of products, components and materials high over time. Companies can innovate towards a circular economy by following five key resource strategies: narrow, slow, close, regenerate, and inform. This thesis explores these strategies through case research and a design science approach. It shows that an ecosystem perspective is necessary to implement these strategies, and provides tools and methods that can help to put an ecosystem perspective into action. This can help companies to develop circular ecosystem value propositions that propose a positive collective outcome, fulfill user needs in exciting ways, and minimize environmental impact.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-351-9","","","","A+BE | Architecture and the Built Environment No 1 (2021)","","","","","Marketing and Consumer Research","","",""
"uuid:bf8250f8-6377-4dd2-bb6a-dfd9e41451fc","http://resolver.tudelft.nl/uuid:bf8250f8-6377-4dd2-bb6a-dfd9e41451fc","Zero-energy states in Majorana nanowire devices","Bommer, J.D.S. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Wimmer, M.T. (promotor); Delft University of Technology (degree granting institution)","2021","In the voyage towards solving increasingly challenging computations of physical systems, quantum computation has arisen as a contender for conventional computational approaches. To address the issue of keeping the required quantum mechanical states sufficiently stable against environmental disturbances, novel proposals suggested employing topological quantum states, where information can be stored nonlocally, essentially by sharing the information over physically different locations. Because suitable topological states are elusive in existing materials, an approach of great interest is to engineer the required topological Majorana modes by combining a spin-orbit coupled semiconductor nanowire exposed to a magnetic field with a superconducting material: a Majorana nanowire. After the first experimental signs of Majorana modes were observed in 2012, it also became clear that the experiments showed deviations from the theoretical expectations, and alternative interpretations were suggested. This dissertation explores the intricate physics that emerges in Majorana nanowires, with the aim to find improved Majorana signatures in transport experiments. By addressing disorder at the interface between the nanowire and the superconductor, we find Majorana signatures in the electrical transport through a ballistic tunnel junction, which allows us to exclude certain alternative explanations based on disorder. We also look into two key elements required to obtain Majorana modes: spin-orbit interaction and induced superconductivity.
First, through measurements of the effect of a magnetic field and its direction on the size of the induced superconducting gap, we show that spin-orbit interaction counteracts the closing of the superconducting gap. This protection of the superconducting gap is ultimately responsible for the possibility of a topologically nontrivial phase in nanowires. Second, we investigate the influence of an electric field in the nanowire on the coupling between electronic states in the nanowire and the superconductor and find that the electric field modifies the strength of the effective nanowire parameters essential to Majorana physics. Returning to the study of transport signatures of Majorana modes, we explore plateaus in the zero-bias conductance near the quantization value predicted for topological Majorana modes. The instability of the observed quantized plateaus with respect to tunnel-barrier details indicates instead the presence of topologically trivial zero-energy states, which can be described as local Majorana modes and may offer an alternative route towards the demonstration of non-Abelian exchange statistics. Finally, we address the nonlocal distribution of Majorana nanowire zero-energy states through the modulation of the energy splitting due to a remote electrostatic gate decoupled from the tunneling barrier region. We identify states consistent with overlapping Majorana modes in a short nanowire. The dissertation is concluded by discussing interesting future avenues to solidify the understanding of Majorana nanowires and by indicating a possible alternative approach to demonstrate non-Abelian properties by deliberately stabilizing local Majorana modes.","","en","doctoral thesis","","978-90-8593-460-8","","","","","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:bee74041-d6fc-4c7a-9036-a374c8bef3d5","http://resolver.tudelft.nl/uuid:bee74041-d6fc-4c7a-9036-a374c8bef3d5","The Squatted New Town: Modern Movement meets Self-organisation in Venezuela","Rots, S.J. (TU Delft Urban Design; TU Delft Public Commissioning)","Meyer, Han (promotor); Rooij, R.M. (copromotor); Delft University of Technology (degree granting institution)","2021","A critical discourse on the influence of the Modern Movement in urban design and planning has been taking place since the mid-20th century, accompanied by an ongoing search for opportunities to bring the human dimension, scale and self-organisation into this process. However, a large number of new towns have still been built across Asia and Africa, generally following modernistic urban concepts. This research contributes to the aforementioned discussion, exploring the context of urbanisation in Latin America in the 1950s and 1960s, in many ways similar to the context of current new town developments in Asia and Africa. To address the meeting point of the ideas of the Modern Movement and self-organisation, the study examines two case studies in Venezuela, a country which urbanised rapidly after the discovery of oil in the 1920s. The oil revenues made it possible to build new towns, on a large scale, with modernist ideals.
The analysis of the planning and implementation of the two new towns built in Venezuela in the 1950s and 1960s provides important lessons that can be shared to improve the current practice of new town planning. The main lessons emphasize the importance of integrating the needs and wishes of the residents and of securing the commitment of the authorities. More importantly, the results show the opportunities of the aided self-help housing policy, an effective alternative beyond the habitual use of modernistic ideas and concepts.
To test whether induced seismicity in the real subsurface can be monitored using the single-sided representation, synthetic data are first considered, which include a synthetic reflection response and macro velocity model. The Marchenko method is used in combination with these data to obtain the focusing functions and Green's functions that are required for the homogeneous Green's function representations. The classical representation and the single-sided representation of the homogeneous Green's function employ the Green's functions and focusing functions to obtain the homogeneous Green's function of the medium. The homogeneous Green's function is visualized by creating snapshots of the homogeneous Green's function and these snapshots are compared to a directly modeled reference wavefield. This demonstrates that the classical representation, when applied to data at an open acquisition boundary, yields significant artifacts in the results, while the single-sided representation obtains accurate results. It is also shown that the radiation pattern of a double-couple source can be included in the retrieval of the homogeneous Green's function. The synthetic reflection data are truncated by limiting the offsets and sampling distance and applying attenuation to simulate field conditions. These truncations show that the single-sided homogeneous Green's function contains artifacts and lacks physical events if the reflection data are not ideal. 2D field reflection data and a macro velocity model from the Vøring basin are considered and pre-processed to account for these truncations. The classical and the single-sided homogeneous Green's function representation are both applied to the field data and the results show that the retrieval of the homogeneous Green's function is possible for 2D field data using point sources while employing the single-sided representation. The results of the classical representation contain a large number of errors.
It is also shown that a homogeneous Green's function can be retrieved that has a virtual source with a double-couple radiation pattern.
Next, the application of the single-sided representation is considered in greater detail. The representation is used to forecast a wavefield in the subsurface as well as to monitor a wavefield in the subsurface. For the monitoring of the wavefield, it is assumed that a physical source in the subsurface causes a wavefield which is measured at the surface of the Earth. The Marchenko method is used to create virtual receivers inside the subsurface, which are used in combination with the physical measurement in the single-sided representation. This is a one-step process, because the Marchenko method is only used to create the virtual receivers. The single-sided representation of the homogeneous Green's function requires the source wavelet to be symmetric in time, which is unlikely for physical sources. Hence, a different single-sided representation can be used, which retrieves the causal Green's function and does not require a symmetric source wavelet. The single-sided representation of the causal Green's function can retrieve a majority of the correct events; however, the results contain anti-symmetric artifacts when the physical source is located above the virtual receiver. To forecast a wavefield in the subsurface, given a specific source configuration, the single-sided representation of the homogeneous Green's function can be used. In this case, a two-step process is applied, where both the source and the receiver in the subsurface are created by the Marchenko method and are therefore both virtual. After the homogeneous Green's function is obtained, it can be convolved with a non-symmetric wavelet. To demonstrate the difference between the one-step monitoring process and the two-step forecasting process, 2D synthetic reflection data are utilized.
For the source configuration, a rupture plane is considered, which is modeled by superposing and time-shifting point sources, which contain a double-couple radiation pattern and are all scaled differently to simulate the heterogeneity of the rupture plane. The total wavefield created by this rupture plane is monitored using the single-sided representation of the causal Green's function. There are anti-symmetric artifacts present in the result, related to each point source; however, the correct wavefield is retrieved above the shallowest source location and below this source location after the first arrivals of all sources. The single-sided representation of the homogeneous Green's function is applied to forecast a virtual rupture plane, by retrieving the homogeneous Green's function for each source separately. The retrieved homogeneous Green's functions are transformed to causal Green's functions, shifted in time and superposed to forecast the total wavefield, which is free of the anti-symmetric artifacts at any depth. Both the monitoring approach and the forecasting approach are tested on 2D field data and the retrieved wavefields show similar results as were seen when the synthetic data were used. When the total wavefield is forecasted, there are no anti-symmetric artifacts present; when the wavefield is monitored, there are artifacts, but they are only present in part of the result, below the sources before and during the first arrival of each source.
To test the application of the single-sided representation in 3D, a 3D implementation of the Marchenko method is required. The implementation is straightforward from a theoretical standpoint, as the surface integrals are performed over two dimensions instead of just one. The practical implementation is more difficult, however. The Marchenko method requires that the reflection data are well sampled in both space and time for sources and receivers; hence, the 3D reflection data are of a large size. As a result, not only is a large amount of storage space required, but the loading time of the reflection data is high, both of which are impractical for efficient computation. We limit these problems by pre-transforming the reflection data to the frequency domain and compressing the data using floating point arrays, which reduces the storage space and loading time. Two datasets are considered, one modeled in a simple four-layer model and the other in a subsection of the complex 3D Overthrust model. For both models, a Green's function inside the medium is retrieved, using a first arrival in the Marchenko method that was modeled in the exact medium, and compared to a reference Green's function that was directly modeled. The results for both models are accurate for the single Green's function. Next, imaging is performed for the models; however, instead of modeling the first arrivals, they are estimated using an Eikonal solver, because the modeling time of all the first arrivals is too high. The results of the imaging using the Marchenko method are compared to the results of conventional imaging, which demonstrates that artifacts, related to the internal multiples, are attenuated.
The 3D implementation of the Marchenko method is used to retrieve the Green's functions and focusing functions in 3D using 3D synthetic reflection data modeled in the Overthrust model. The classical homogeneous Green's function representation and the single-sided representations of the causal Green's function and the homogeneous Green's function are all applied using these data, for three different combinations of a virtual source and a virtual receiver. The results are compared to a directly modeled wavefield, which shows that the result obtained by using the classical representation is contaminated by artifacts and lacks physical events. The result of the single-sided representation of the causal Green's function contains anti-symmetric artifacts related to the focusing function when the virtual receiver is located below the virtual source. The result of the single-sided representation of the homogeneous Green's function shows a good match to the reference result. The single-sided representation of the homogeneous Green's function is also applied using an Eikonal solver to obtain the first arrival that is required for the Marchenko method. The homogeneous Green's function that is obtained in this way shows a small decrease in quality; however, this approach is more computationally feasible. The single-sided representation is used in combination with the Eikonal solver to retrieve a large number of virtual receivers, so that the propagation of the wavefield in the subsurface can be visualized in time through the use of snapshots. This reveals that the part of the wavefield that is traveling at angles close to the normal of the surface is retrieved properly, while the part of the wavefield that is traveling at greater angles to the normal is reconstructed with less accuracy. This lack of proper retrieval is caused by the limited aperture of the reflection data.
A rupture plane in 3D is considered and constructed in a similar way as is done for the 2D synthetic data. Point sources are used to model wavefields, which are time-shifted and superposed; however, to further represent the heterogeneity of the rupture plane, each wavefield is modeled using a unique causal wavelet. Both monitoring, using the single-sided causal Green's function representation, and forecasting, using the single-sided homogeneous Green's function representation, are performed on the rupture plane configuration. The two-step forecasting approach yields accurate results, for a given distribution of sources. The one-step monitoring approach retrieves accurate results above the shallowest source location; however, the result contains artifacts at the locations below the shallowest source, before and during the first arrival of each source.","Marchenko; Induced; Seismicity; Virtual; Source; Receiver; Monitoring; Forecasting","en","doctoral thesis","","978-94-6419-105-9","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:f88ae605-0f72-4638-b7c2-fc5a98996fc2","http://resolver.tudelft.nl/uuid:f88ae605-0f72-4638-b7c2-fc5a98996fc2","Deep learning for perception tasks","Gaisser, F. (TU Delft Intelligent Vehicles)","Jonker, P.P. (promotor); Dankelman, J. (promotor); Happee, R. (promotor); Delft University of Technology (degree granting institution)","2021","In recent years large advances have been made in the field of machine learning, driven by novel deep learning methods. Deep learning is a research field that focuses on creating neural networks. This field has seen a rapid advance due to an increase in computational power, the availability of large amounts of data and a wide variety of novel methods that allow for more efficient training of neural networks. Deep learning has been applied in various fields to solve many different tasks. Effective training of these neural networks requires selecting the right data, network architecture and learning method. However, a thorough understanding of the task for which the neural network is trained is needed to adhere to these requirements. This thesis will illustrate that deep learning methods can effectively be applied to perception tasks through a thorough understanding of the task.","","en","doctoral thesis","","978-94-6361-503-7","","","","","","","","","Intelligent Vehicles","","",""
"uuid:9cb2d621-5a55-404a-a278-8d562ad8f57b","http://resolver.tudelft.nl/uuid:9cb2d621-5a55-404a-a278-8d562ad8f57b","Out-of-band Interference Immunity of Negative-Feedback Amplifiers","Totev, E.D. (TU Delft Bio-Electronics)","Serdijn, W.A. (promotor); Long, J.R. (promotor); Verhoeven, C.J.M. (copromotor); Delft University of Technology (degree granting institution)","2021","A study of the out-of-band interference of negative-feedback amplifiers is carried out in this thesis. Several design methods to reduce the susceptibility to interference are identified and developed. The proposed techniques are based on robust circuits and topologies that are suitable for monolithic integration.","","en","doctoral thesis","","978-90-6824-066-5","","","","","","","","","Bio-Electronics","","",""
"uuid:14746f2d-786f-4176-8418-25b75e2c19b6","http://resolver.tudelft.nl/uuid:14746f2d-786f-4176-8418-25b75e2c19b6","Measuring adhesion and friction in mems","Kokorian, J. (TU Delft Micro and Nano Engineering)","Staufer, U. (promotor); van Spengen, W.M. (copromotor); Delft University of Technology (degree granting institution)","2020","The strange and unpredictable behavior of meso-scale adhesion and friction forces is a practical problem for the development of microelectromechanical systems (MEMS) with contacting surfaces. To overcome the associated limitations when designing MEMS devices, the first obstacle to remove is the fact that it is hard to measure displacements and forces in MEMS with sufficient resolution to discern atomic-scale details from these meso-scale measurements. In this PhD thesis we show how a non-invasive, optical method can be used to measure forces and displacements in MEMS with sub-nanometer resolution. It is fundamentally impossible to optically measure topological details below 500 nm in size, due to the wavelike nature of light. However, the location of a moving feature can be tracked with a much higher resolution, by curve-fitting a mathematical function to its shape.","","en","doctoral thesis","","978-94-6366-348-9","","","","","","","","","Micro and Nano Engineering","","",""
"uuid:42380f43-3979-4bb1-8f01-dac157ade47d","http://resolver.tudelft.nl/uuid:42380f43-3979-4bb1-8f01-dac157ade47d","Exploring the role of sketching on shared understanding in design","bin Nik Ahmad Ariff, N.S. (TU Delft OLD Design Theory and Methodology)","Badke-Schaub, P.G. (promotor); Thoring, K.C. (promotor); Delft University of Technology (degree granting institution)","2020","","","en","doctoral thesis","","978-94-6421-187-0","","","","","","","","","OLD Design Theory and Methodology","","",""
"uuid:78d96af2-fb96-4a6e-a51e-ea4236fdf2d7","http://resolver.tudelft.nl/uuid:78d96af2-fb96-4a6e-a51e-ea4236fdf2d7","Electromagnetic Fields in MRI: Analytical Methods and Applications","Fuchs, P.S. (TU Delft Signal Processing Systems)","Remis, R.F. (promotor); Leus, G.J.T. (promotor); Hari, K.V.S. (promotor); Delft University of Technology (degree granting institution)","2020","Electrical properties, the conductivity and permittivity of tissue, are quantities that describe the interaction of an object and electromagnetic fields. These properties influence electromagnetic fields and are influenced themselves by physiological phenomena such as lesions or a stroke. Therefore, they are important in identifying or diagnosing the severity of pathologies, and they are essential in magnetic resonance imaging (MRI) safety and efficiency by determining tissue heating or sensitivity to excitation pulses and antenna designs. In two-dimensional electromagnetic fields, which occur in specific measurement geometries, it is possible to simplify the relationship between electromagnetic fields and electrical properties, and reconstruct these properties using essentially a forward operation, foregoing a full inversion scheme. These insights also help to find, and explain, the cause of specific artefacts, such as those caused by mismatches in the incident field used in the computation of the full electromagnetic fields. The two-dimensional field assumption necessary for the simplified relationship described above is subsequently tested, and it is shown that this assumption does not hold when the object is sufficiently translation variant in the longitudinal direction. That is, even if the fields for a translation invariant object would be two-dimensional, they become three-dimensional through the interaction of the tissue parameters with the fields, which causes out-of-plane current and field contributions.
Another interesting application of closed-form expressions between currents and fields is the target field method, which solves the inverse source problem between electric currents and static magnetic fields in a regularised manner by constraining their relationship to a cylindrical geometry. This method is adapted for transverse oriented magnetic fields to be used with Halbach-type magnet arrays, and an open source tool is developed to make the method easy to apply for various design considerations. Moving away from constraints on the field or current structure, we show the intricate relationship between electrical properties and the measured signal in an MRI scanner. This is done by deriving the electro- (and magneto-) motive force for a typical MRI scenario without any assumptions on the object or electromagnetic fields. This model can then even be used to reconstruct electrical properties from the simplest MRI signal, namely the free induction decay (FID) signal. To round off our investigation of tissue properties we take a small detour to the magnetic tissue property, the permeability or magnetic susceptibility. For reconstructing this tissue property a dipole deconvolution is required, where the dipole convolution loses information of the original object through the zeros of the dipole kernel. A new machine learning based approach to reconstruct the lost information is investigated in the final chapter of this thesis.","MRI; Electromagnetic Fields; Inverse Problems; Coil Design; Electrical Properties","en","doctoral thesis","","978-94-6416-342-1","","","","","","","","","Signal Processing Systems","","",""
"uuid:5cb45c66-f404-49b8-b895-501da2b827a6","http://resolver.tudelft.nl/uuid:5cb45c66-f404-49b8-b895-501da2b827a6","Water Loss Assessment in Distribution Networks: Methods, Applications and Implications in Intermittent Supply","Al-Washali, T.M.Y. (TU Delft Sanitary Engineering)","Kennedy, M.D. (promotor); Sharma, S.K. (copromotor); Delft University of Technology (degree granting institution)","2020","Water utilities worldwide lose 128 billion cubic meters annually, causing monetary losses estimated at USD 40 billion per year. Most of these losses occur in developing countries (74%). This calls for a rethinking of the challenges facing water utilities in developing countries, foremost of which is the assessment of water losses in intermittent supply networks. Water loss assessment methods were originally developed for continuous supply systems, and their application in intermittently operated networks (in developing countries) is hindered by the widespread use of household water tanks and unauthorised consumption. This study provides an extensive review of existing and new methods and (software) tools for water loss assessment. As the volume of water loss varies monthly and annually according to the amount of water supplied, this study proposes procedures to normalise the volume of water loss in order to enable water utilities to monitor and benchmark their performance in water loss management. In addition, a practical method is proposed for estimating apparent losses using WWTP inflow data, enabling future real-time monitoring of losses in networks. The study then examines the applicability of minimum night flow analysis in the case of intermittent supply, and models the accuracy of the customer water meter under intermittent supply conditions. Finally, the study provides guidance to improve the accuracy of water loss assessment in intermittent supply networks.
Accurate assessment of water loss is a prerequisite for reliable leakage modelling and minimisation, as well as planning for, and monitoring of water loss management in distribution networks.","Water loss; Intermittent supply; Water balance; Leakage; Non-revenue water; Software; Water supply; Real losses; Apparent losses; Active leakage control; Unauthorised consumption; Pressure management; Customer water meter; water meter performance","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","9780367766559","","","","","","","","","Sanitary Engineering","","",""
"uuid:7ad707a5-2db6-4185-bd3a-7a97fcf74b23","http://resolver.tudelft.nl/uuid:7ad707a5-2db6-4185-bd3a-7a97fcf74b23","Energy Efficient and Intrinsically Linear Digital Polar Transmitters","Hashemi, M.","de Vreede, L.C.N. (promotor); Delft University of Technology (degree granting institution)","2020","One of the biggest challenges in modern transmitter (TX) design, when going from the fourth generation (4G) to the fifth generation (5G) communications network, is to handle the increased linearity requirements without compromising the energy efficiency of the TX line-up. In analog systems, high TX signal quality can only be achieved using very linear operation of the (analog) power amplifier (PA). This severely limits the achievable efficiency in practical TX line-ups. Alternatively, a nonlinear PA can be used, which is linearized by digital pre-distortion (DPD) circuitry. This latter approach is commonly used in (4G) macro-cell base stations, but it comes at the cost of increased system complexity and high supply power for the advanced DPD unit. When going towards 5G handsets, or massive multiple-input multiple-output (mMIMO) 5G base station units that facilitate beamforming and higher data rates for their end users, the required RF output power per individual transmitter is rather low (at most only a few watts). However, since many more transmitters are used in 5G applications (e.g. a factor of 64 to 256 more than in 4G base stations), the use of an advanced DPD unit in each individual TX line-up, with its related high power consumption, becomes simply impractical. Consequently, to address these changing needs, it is highly desirable to find new circuit-level TX solutions that overcome the traditional linearity-efficiency trade-off.
To achieve this goal, this PhD work is focused on the utilization and tailoring of digital device operation, as facilitated by advanced CMOS technologies, towards the needs of modern wireless applications with their wideband complex modulated TX signals. The circuit techniques developed within this thesis target an inherently linear amplitude-code-word (ACW) to TX output signal transfer, as such completely omitting the need for a power-hungry advanced DPD unit, or alternatively relying on a much simpler and consequently less power-hungry DPD unit for the most demanding applications (e.g. when handling large modulation bandwidths). The circuit techniques developed in this thesis allow excellent drain and TX line-up efficiency, while being compatible with wideband efficiency enhancement techniques like Doherty. The proposed circuit techniques are also able to correct for process, voltage, load and temperature variations of the application.","Polar TX; Digital TX; Digital power amplifier; Doherty power amplifier; Digital predistortion; Efficient; Linear; Wideband","en","doctoral thesis","","978-94-6421-184-9","","","","","","","","","Electronics","","",""
"uuid:54597354-3ebd-4bee-b5de-4a87e22bceae","http://resolver.tudelft.nl/uuid:54597354-3ebd-4bee-b5de-4a87e22bceae","Integrated Circuits for Miniature 3–D Ultrasound Probes: Solutions for the Interconnection Bottleneck","Chen, Z. (TU Delft Electronic Instrumentation)","Pertijs, M.A.P. (promotor); de Jong, N. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis describes low-power application-specific integrated circuit (ASIC) designs to mitigate the constraint of cable count in miniature 3-D TEE probes. Receive cable count reduction techniques including subarray beamforming and digital time-division multiplexing (TDM) have been explored and the effectiveness of these techniques has been demonstrated by experimental prototypes. Digital TDM is a reliable technique to reduce cable count but it requires an in-probe datalink for high-speed data communication. A quantitative study on the impact of the datalink performance on B-mode ultrasound image quality has been introduced in this thesis for data communications in future digitized ultrasound probes. Finally, a high-voltage transmitter prototype has been presented for effective cable count reduction in transmission while achieving good power efficiency.","","en","doctoral thesis","","","","","","","","","","","Electronic Instrumentation","","",""
"uuid:9fe0a3cd-a7f7-4d29-bb44-6dc78575a2e8","http://resolver.tudelft.nl/uuid:9fe0a3cd-a7f7-4d29-bb44-6dc78575a2e8","Design for Product Care","Ackermann, L. (TU Delft Marketing and Consumer Research)","Mugge, R. (promotor); Schoormans, J.P.L. (promotor); Delft University of Technology (degree granting institution)","2020","Product care is defined as all activities initiated by the consumer that lead to the extension of a product’s lifetime. It includes repair and maintenance, as well as preventive measures or general careful handling of a product. Product care is one way to extend a product’s lifetime, as it keeps the product in a usable and maintained state for a longer period of time, thereby postponing its replacement. An issue with product care is that it heavily relies on consumers’ behaviour once the product is in use. Therefore, the main research question of this thesis is:
How can design foster product care among consumers?
We present the current state of product care among consumers, a scale to measure product care, and design strategies to foster product care. In addition, we explore product care in access-based product-service systems. Using the insights identified in this PhD project, designers can create and redesign products in such a way that care activities will be more likely to be performed.","","en","doctoral thesis","","978-94-6384-180-1","","","","","","","","","Marketing and Consumer Research","","",""
"uuid:edef269a-610d-4304-b07d-58d3438b995f","http://resolver.tudelft.nl/uuid:edef269a-610d-4304-b07d-58d3438b995f","A quantitative analysis of growth regulation by ppGpp in E. coli","Imholz, N.C.E. (TU Delft BN/Greg Bokinsky Lab)","Dogterom, A.M. (promotor); Bokinsky, G.E. (copromotor); Delft University of Technology (degree granting institution)","2020","This thesis is about a little molecule called guanosine tetraphosphate, or ppGpp. Consider it the bacterial brain, at the core of the coordination and regulation of bacterial growth. For over half a century, it has haunted microbiologists, as it appears involved in every aspect of microbial physiology, yet is incredibly difficult to study due to its fast dynamics, chemical instability and pleiotropic effects. Like the human brain, it cannot simply be removed to show its true nature. In contrast to the pronunciation of its name, ppGpp is a rather simple molecule, built from two of the most abundant substrates in the bacterial cell (ATP and GTP). The enzymes that make or break ppGpp are highly efficient, such that at any moment the bacterium can decide to instantly increase ppGpp concentrations 100-fold, or virtually remove all of it. Thanks to this intelligent system, E. coli can decide to arrest growth, protecting itself against any threats, or to rapidly feast upon the sparse nutrients it may be tossed, within the order of minutes.","ppGpp; LC-MS; growth rate; translation; SpoT; ACP","en","doctoral thesis","","978-94-6421-154-2","","","","","","","","","BN/Greg Bokinsky Lab","","",""
"uuid:09223911-3c3e-42f4-a94a-e40640d5acfc","http://resolver.tudelft.nl/uuid:09223911-3c3e-42f4-a94a-e40640d5acfc","Building Blocks for Wavelength Converters: A Study of Monolithic Devices in Piezoelectric Materials","Forsch, M. (TU Delft QN/Groeblacher Lab)","Groeblacher, S. (promotor); Kuipers, L. (promotor); Delft University of Technology (degree granting institution)","2020","In cavity optomechanics, optical fields are coupled to the displacement of mechanical resonators. While it is interesting to study fundamental aspects of this interaction, it is the ability to link this mechanical displacement to various other degrees of freedom that inspires many applications in the field. In these applications, the mechanical resonator can be used as a handle to an external influence for the purpose of sensing, but also as a transducer between two otherwise detached degrees of freedom. The latter approach is the focus of this work. A particularly interesting regime for such a transduction process is between a few-gigahertz microwave tone and optical photons at telecom wavelengths around 1550 nm, connecting the operating regimes of long-range telecommunication with that of superconducting quantum nodes. Bridging the gap between these domains is an essential step towards any size of quantum network based on superconducting nodes, as the losses encountered in microwave transmission lines prohibit the connection of such nodes over length scales extending beyond a few meters. As such, a transducer between the few-gigahertz and optical telecom domains would enable the use of low-loss optical channels to connect remote superconducting nodes, given that the quantum information is preserved throughout the conversion process.
One approach to realizing such a converter makes use of a gigahertz-frequency mechanical mode as the transducing element, which is coupled to the optical telecom domain using the optomechanical interaction and to the microwave domain using the electromechanical interaction. In this work, we aim to unify both of these interactions in a single device by designing and fabricating optomechanical devices from III/V semiconductors, which, alongside their good optical properties, are also piezoelectric. In Chapter 2, we motivate our material choice and introduce the relevant properties of the materials, as well as their impact on the fabrication process. Following the material discussion, we then set out in Chapter 3 to realize a microwave-to-optics converter made of an optomechanical crystal in gallium arsenide, which we resonantly couple to an interdigital transducer using surface acoustic waves. With this device, we demonstrate the first microwave-to-optics conversion using a mechanical mode with an average number of thermal excitations below one. As well as verifying the coherence of the conversion process, our experiments also highlight the limitations arising from the material choice due to absorption-induced incoherent heating of the mechanical mode. The material choice itself then becomes the central topic of Chapter 4, where we opt for gallium phosphide, a relative of gallium arsenide, which prominently features a larger bandgap. With this material we show non-classical correlations between photons and phonons. We enter a regime that was previously inaccessible to devices made from piezoelectric materials, a promising step towards realizing the noise requirements for microwave-to-optics converters. In Chapter 5, we use the insights from the two previous chapters to design a new type of electro-opto-mechanical resonator, specifically aimed at microwave-to-optics conversion.
We miniaturize the electromechanical interface and use strongly coupled mechanical resonators for the transfer of excitations between the microwave and mechanical modes. We fabricate and characterise initial devices and demonstrate the validity of the operating principle. Finally, in Chapter 6, we reflect on the results of the previous chapters and highlight some potential advantages of other approaches.","Optomechanics; Piezoelectrics; Semiconductors; Wavelength Conversion; Nanofabrication","en","doctoral thesis","Casimir PhD Series","978-90-8593-453-0","","","","","","2021-05-27","","","QN/Groeblacher Lab","","",""
"uuid:92db8fa0-bba1-431a-866e-f05151753632","http://resolver.tudelft.nl/uuid:92db8fa0-bba1-431a-866e-f05151753632","Multiscale modeling of strain rate effects in FRP laminated composites","Liu, Y. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); van der Meer, F.P. (copromotor); Delft University of Technology (degree granting institution)","2020","Fiber reinforced polymer composites are increasingly used in impact-resistant devices, automotive, and aircraft structures due to their high strength-to-weight ratios and their potential for impact energy absorption. Dynamic impact loading causes complex deformation and failure phenomena in composite laminates. Moreover, the high loading rates in impact scenarios give rise to a significant change in mechanical properties (e.g. elastic modulus, strength, fracture energy) and failure characteristics (e.g. failure mechanisms, energy dissipation) of polymer composites. In other words, both mechanical deformation and failure are strain-rate dependent. The contributing mechanisms can be roughly classified as viscous material behavior, changes in failure mechanism, inertia effects and thermomechanical effects. These effects involve multiple length and time scales. In experiments it is difficult to isolate single mechanisms contributing to the overall rate-dependency. Therefore, it is difficult to quantify the contribution of each mechanism at different scales. The aim of this thesis is to establish a multiscale numerical framework in which three of the contributing mechanisms, i.e. the viscous material behavior, changes in fracture mechanisms and inertia effects, can be investigated at different scales.
The research in this thesis is divided into four parts, one related to the macroscale, where the composite material is treated as homogeneous, and three on exploring possibilities to include microscale information, taking into account the microstructure of fibers and matrix.","","en","doctoral thesis","","","","","","","","","","","Applied Mechanics","","",""
"uuid:7f98f3d8-bebc-4927-8b5d-d70c84bfa04c","http://resolver.tudelft.nl/uuid:7f98f3d8-bebc-4927-8b5d-d70c84bfa04c","A virtual coach for low-literates to practice societal participation","Schouten, D.G.M. (TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); Cremers, A.H.M. (copromotor); Delft University of Technology (degree granting institution)","2020","This thesis presents the research, design, and evaluation of the learning support system VESSEL: Virtual Environment to Support the Societal participation Education of Low-literates. The project was started from the premise that people of low literacy in the Netherlands participate in society less often and less effectively than literate people do: Their lower ability to read, write, speak, and understand the Dutch language hampers their ability to independently be part of society. Our goal was to create learning support prototypes with a re-usable design rationale, aimed at helping these people of low literacy learn to improve their societal participation. To achieve this, low-literate learners participated throughout the entire design process, ensuring that we addressed their wants and needs with regard to learning and the perceived shortcomings of existing learning materials and kept in mind their skills and capabilities in order to ensure effective learning. Particularly, we investigated the possible ways that digital learning, Virtual Learning Environments (VLE), and Embodied Conversational Agents (ECA) could help fulfill the societal participation needs of this target group. We used the Socio-Cognitive Engineering (SCE) methodology to organize and structure this research, distinguishing the foundation, specification and evaluation of the VESSEL design. 
Two studies provided a grounded foundation for VESSEL, which was refined and worked out into three subsequent studies that provided the consequential design specifications and prototype evaluations (all prototypes have been tested with a human ’Wizard of Oz’ simulating VESSEL functionality).","Societal participation; Low-literacy; Virtual learning environment; Socio-Cognitive Engineering; Requirements engineering; Qualitative methods","en","doctoral thesis","","978-94-6423-079-6","","","","","","","","","Interactive Intelligence","","",""
"uuid:12393b11-a4c3-4697-8757-2b2dbc1291ec","http://resolver.tudelft.nl/uuid:12393b11-a4c3-4697-8757-2b2dbc1291ec","Combined gas engine-solid oxide fuel cell systems for marine power generation","Sapra, H.D. (TU Delft Ship Design, Production and Operations)","Hopman, J.J. (promotor); de Vos, P. (copromotor); Delft University of Technology (degree granting institution)","2020","Modern marine diesel engines operating on conventional marine fuels are unable to further reduce the adverse impact of ship emissions on the environment. Integration of a solid oxide fuel cell (SOFC) and an internal combustion engine (ICE) equipped with an underwater exhaust (UWE) can provide the opportunity for mitigating ship emissions and improving energy efficiency. However, numerous integrated system variables, such as fuel utilization, engine fuel composition, load-sharing, etc., can significantly impact SOFC and ICE operation and, therefore, affect the feasibility and performance of the SOFC-ICE power plant. Moreover, the presence of high and fluctuating back pressure due to a UWE can negatively impact engine operation and its performance limits. By investigating these challenges and more, the research presented in this dissertation aims to pave the way for the next generation of extremely efficient prime movers onboard ships, operating on alternative marine fuels, with ultra-low emissions.","Solid oxide fuel cells; Internal combustion engines; Underwater exhaust systems; Marine power generation; Alternative fuels; System integration; Combustion; Experiments and Modelling and simulations","en","doctoral thesis","","978-94-6421-149-8","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:aecd0ff6-e2d8-48f2-9320-c2145f81697c","http://resolver.tudelft.nl/uuid:aecd0ff6-e2d8-48f2-9320-c2145f81697c","Computational analysis of fracture and healing in thermal barrier coatings","Krishnasamy, J. (TU Delft Aerospace Structures & Computational Mechanics)","van der Zwaag, S. (promotor); Turteltaub, S.R. (promotor); Delft University of Technology (degree granting institution)","2020","Thermal Barrier Coating (TBC) systems are protective layers applied to critical structural components of gas turbines operating at high temperatures. A typical TBC system consists of three different layers, namely a ceramic Top Coat (TC), an active Thermally Grown Oxide (TGO) layer and a metallic Bond Coat (BC). The outer ceramic TC protects the substrate from high temperature gases, while the intermediate BC acts as a bonding layer and also provides oxidation resistance to the underlying components by acting as a sacrificial layer. As a result of the oxidation process, the TGO layer is formed at the interface between the TC and BC layers. The lifetime of a typical system lies around several hundred cycles, after which a cost- and time-intensive maintenance operation is necessary to replace the coating in order to continue safe operation of the engine. Earlier research on TBC micromechanics has focussed on evaluating the influence of microstructure on thermomechanical properties or stress distribution in the TBC system. Numerical efforts on TBC failure, such as modelling different coating compositions, the TGO growth process and interface irregularities, have been made in the past to predict their influence on the lifetime of the TBC system. In this research, microstructural features and novel self-healing TBCs are explored through numerical simulations to predict their lifetime enhancement. The overall objective of this research is to develop a modelling and analysis tool capable of simulating fracture and healing processes in the TBC system.
The resulting numerical tool aids in setting up design guidelines for the successful development of the proposed self-healing TBC system.","Self healing thermal barrier coatings; Fracture; Cohesive elements; Porosity; Splats; Healing model; Lifetime prediction","en","doctoral thesis","","978-94-6421-152-8","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:a6494d92-1373-402a-b6e2-6ca55cc93a8b","http://resolver.tudelft.nl/uuid:a6494d92-1373-402a-b6e2-6ca55cc93a8b","Pearl River Delta: Scales, Times, Domains: A Mapping Method for the Exploration of Rapidly Urbanizing Deltas","Xiong, L. (TU Delft Urban Design)","Meyer, Han (promotor); Nijhuis, S. (promotor); Klaasen, I.T. (copromotor); Delft University of Technology (degree granting institution)","2020","The research aims to provide an understanding of an urbanizing delta in which different scales, times, and domains are related to each other, and to examine how this understanding can be used in a planning and design process in a rapidly urbanizing delta. A mapping method is developed according to the key notions in the understanding of urban deltas, namely its systems, scales, and temporality. The systematic mapping approach was used to organize and analyze both short-term and long-term spatial data during the rapid delta urbanization processes by transforming spatial data across scales, times, and domains. The mapping approach works with insufficient data, which is often the case in a rapidly changing environment, to identify spatial challenges from a long-term perspective. Applied in the Pearl River Delta, the knowledge of the development of the urban landscape was inventoried, synthesized, and presented in its own spatial-temporal model using maps. Three types of processes (landscape formation, infrastructure extension, and urbanization) were identified according to their speeds. Spatial interactions were illustratively explained on both the delta scale and local scale from 4000 BC to the present with a time extent ranging from 2000 years to 50 years.
This mapping framework was applied and evaluated in terms of design, decision-making, and education, and the insights gained were used to discover new possibilities and strategies for the delta.","Urban Design; Landscape Architecture; Mapping; Atlas; Pearl River Delta; scales; Timescale; interdisciplinary","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-341-0","","","","A+BE | Architecture and the Built Environment No 21 (2020)","","","","","Urban Design","","",""
"uuid:0bdce5d2-339b-4b42-9c2c-5cc4f49a7817","http://resolver.tudelft.nl/uuid:0bdce5d2-339b-4b42-9c2c-5cc4f49a7817","Mapping landscape spaces: Understanding, interpretation, and the use of spatial-visual landscape characteristics in landscape design","Liu, M. (TU Delft Landscape Architecture)","Nijhuis, S. (promotor); Sijmons, D.F. (promotor); Delft University of Technology (degree granting institution)","2020","Landscape design focuses on the construction and articulation of outdoor space and results in landscape architectonic compositions. In order to communicate about three-dimensional forms and functions, vocabulary, representations, and tools (in terms of spatial-visual characteristics) are of fundamental importance for landscape architects to describe, interpret, and manipulate landscape spaces. By combining design vocabulary and landscape indicators, qualitative and quantitative mapping approaches, and visual representation and interpretation methods, this research aims to provide a framework for describing, understanding, and communicating about spatial-visual characteristics in landscape design. A pilot study is used to explore the potential of specific mapping approaches, such as compartment analysis, 3D landscapes, grid-cell analysis, landscape metrics, visibility analysis, and eye-tracking analysis, which are employed to address spatial-visual phenomena like sequence, orientation, continuity, and complexity. Hypothetical design experiments are conducted to evaluate the feasibility and effectiveness of spatial-visual mapping in the design process. Interviews with designers are carried out to reflect on techniques for mapping spatial-visual characteristics in the daily practice of landscape architecture. This research opens a way to apply visual landscape research in the process of landscape design and supports the development of multidisciplinary approaches.
By expanding the spatial-visual mapping toolbox, designers can engage in issues of landscape development, transformation, and preservation while providing realistic and instrumental clues for interventions in urban landscapes.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-335-9","","","","A+BE | Architecture and the Built Environment No 20 (2020)","","","","","Landscape Architecture","","",""
"uuid:17da1df4-3295-45d3-9119-9f92a547e7c6","http://resolver.tudelft.nl/uuid:17da1df4-3295-45d3-9119-9f92a547e7c6","Distinguishing Attacks and Failures in Industrial Control Systems: Knowledge-based Design of Bayesian Networks for Water Management Infrastructures","Chockalingam, S. (TU Delft Safety and Security Science)","van Gelder, P.H.A.J.M. (promotor); Pieters, W. (promotor); Herdeiro Teixeira, A.M. (copromotor); Delft University of Technology (degree granting institution)","2020","Water management infrastructures such as floodgates are critical and increasingly operated by Industrial Control Systems (ICS). These systems are becoming more connected to the internet, either directly or through corporate networks, which makes them vulnerable to cyber-attacks. Abnormal behaviour in floodgates operated by ICS can be caused by both (intentional) attacks and (accidental) technical failures. When operators notice abnormal behaviour, they should be able to distinguish between these two causes in order to take appropriate measures, because, for example, replacing a sensor in the case of intentionally incorrect sensor measurements would be ineffective and would not block the corresponding attack vector.
In this thesis, we developed the attack-failure distinguisher framework for constructing Bayesian Network (BN) models that enable operators to distinguish between these two causes, including a knowledge elicitation method to construct the directed acyclic graph and conditional probability tables of the BN models.
As a full case study of the attack-failure distinguisher framework, we constructed a BN model to distinguish between attacks and technical failures for the problem of incorrect sensor measurements in floodgates, addressing the problem faced by floodgate operators. We utilised experts who associate themselves with the safety and/or security community to construct the BN model and validate the qualitative part of the constructed BN model. The constructed BN model is usable in water management infrastructures to distinguish between attacks and technical failures in case of incorrect sensor measurements. This could help to decide on appropriate response strategies and avoid further complications in case of incorrect sensor measurements.","Bayesian network; Cyber security; Intentional attack; Knowledge elicitation; Risk assessment; Safety; Technical failure; Water management","en","doctoral thesis","","978-94-6384-178-8","","","","","","2022-09-30","","","Safety and Security Science","","",""
"uuid:af423f76-ed01-4dce-a24b-dc0257f9c4a2","http://resolver.tudelft.nl/uuid:af423f76-ed01-4dce-a24b-dc0257f9c4a2","Integrated High-Side Current Sensors","Xu, L. (TU Delft Electronic Instrumentation)","Makinwa, K.A.A. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis describes the design and implementation of integrated high-side current sensors for IoT applications. As explained in Chapter 1, the main challenges are the need to achieve low power, low cost and low area while maintaining a reasonably low gain error. To meet them, the focus of this thesis is on (1) the design of precision HV interface circuitry that does not need an HV supply, and (2) the design of energy-efficient temperature compensation schemes that enable the integration of shunt resistors with CMOS circuitry. Several new techniques at both system level and circuit level have been proposed, and their effectiveness is verified in two prototypes. An integrated shunt-based current sensor consists of an interface circuit, a shunt resistor and a temperature compensation scheme. Chapter 2 gives an overview of these three elements. It first describes two sensing configurations, low-side sensing and high-side sensing, followed by a discussion of their pros and cons. High-side sensing is favored because of its ability to avoid ground disturbance and to detect the high load currents caused by accidental shorts. However, it makes the design of the HV interface circuitry more challenging, as it must accurately and safely extract weak differential signals in the presence of large CM voltages. Several existing solutions are reviewed; however, these either consume too much power or occupy a large silicon area. This observation leads to the first challenge addressed by this thesis: the design of power-efficient and compact HV circuitry for high-side current sensing.
In the two prototypes described in this thesis, low-cost shunt resistors based on the metal layers of a CMOS process or the lead-frame of a standard plastic package were used. However, both of these suffer from a large temperature coefficient of resistance (TCR) (>0.3%/°C), and so a temperature compensation scheme is necessary to achieve reasonably low inaccuracy. Two types of temperature compensation schemes (analog and digital) are reviewed. Analog ones achieve poor (>1%) inaccuracy, while digital ones need a dedicated temperature sensor and a rather complex calibration process. This leads to the second challenge that this thesis addresses: the design of temperature compensation schemes that are low power and easy to use, while still achieving reasonably low inaccuracy.","","en","doctoral thesis","","","","","","","","","","","Electronic Instrumentation","","",""
"uuid:fa818144-a64b-4b2d-a364-49fd0d69d516","http://resolver.tudelft.nl/uuid:fa818144-a64b-4b2d-a364-49fd0d69d516","Sustainability optimization of the thermo-biochemical pathway for the production of second-generation ethanol","Magalhaes de Medeiros, E. (TU Delft BT/Bioprocess Engineering)","Noorman, H.J. (promotor); Filho, Rubens Maciel (promotor); Posada Duque, J.A. (copromotor); Delft University of Technology (degree granting institution); University of Campinas (degree granting institution)","2020","Renewable energy plays a key role in the fight to reduce greenhouse gas emissions while providing for human well-being and economic development. However, despite environmental benefits in terms of carbon sequestration, largely promoted biorenewable resources such as sugarcane and corn starch, so-called 1st generation (1G) feedstocks, are associated with other types of social and environmental issues that highly contradict the notion of sustainability, such as the food versus fuel conflict and the contribution to impacts such as deforestation, soil degradation, loss of biodiversity and contamination of water resources. As reaction to these issues, a lot of effort has been put into the development of technologies to extract and convert useful energy from non-food crops and agro-industrial residues, such as sugarcane bagasse, corn stover, and wheat straw. These now called 2nd generation (2G) feedstocks offer an extra challenge since fermentable sugars are not readily available; nonetheless, myriad technologies have been (and are being) developed to convert 2G materials into fuels and chemicals, with perhaps the most representative product being ethanol, a widely employed engine fuel and gasoline additive. 2G or cellulosic ethanol can be produced via biochemical pathways, thermochemical pathways, or a third option that combines aspects of the other two, commonly called the thermo-biochemical, or hybrid, pathway. 
The latter is the focus of this thesis, which explores this pathway via process modeling, simulations, (multi-objective) optimization, and other strategies applied in order to determine which process choices and conditions lead to the best performance in terms of the main sustainability aspects. While the thermochemical process of gasification enables the nearly full conversion of biomass without the need for complex and expensive stages of pretreatment and hydrolysis, the subsequent biological conversion (fermentation) of syngas might offer several advantages when compared to the traditional catalytic conversion, e.g. higher flexibility of H2:CO ratios and tolerance to gas contaminants. Although certain challenges may hinder the commercial competitiveness of syngas fermentation, such as the low productivity when compared to heterotrophic fermentation, intelligent choices of process integration and design parameters could substantially enhance the performance of the process.","","en","doctoral thesis","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:f0bdac3d-376d-4b24-9241-3a1e35731373","http://resolver.tudelft.nl/uuid:f0bdac3d-376d-4b24-9241-3a1e35731373","Quadrotor Fault Tolerant Flight Control and Aerodynamic Model Identification","Sun, S. (TU Delft Control & Simulation)","de Croon, G.C.H.E. (promotor); de Visser, C.C. (copromotor); Delft University of Technology (degree granting institution)","2020","As Multi-rotor Unmanned Aerial Vehicles, or drones, are gradually becoming more popular in civilian applications, the safety of these flying machines becomes a significant concern. Such drones are powered by multiple rotors to generate lift and control torques. Hence, the failure of rotors can severely threaten their flying safety. Direct consequences of rotor failures are loss-of-control and a subsequent crash if no ad-hoc flight control method can take over. Such a method, built on the principles of Fault Tolerant Control (FTC), is thus essential to improving the safety of multi-rotor drones. Fixed-pitch quadrotors are the simplest type of multi-rotor drones and have been extensively used in various applications thanks to their simplicity and higher energy efficiency. However, they suffer most from rotor failures since it requires a minimum of four fixed-pitch rotors to achieve full attitude control. Therefore, devising FTC algorithms for quadrotors presents a significant challenge. As there have been many efforts to develop FTC for quadrotors flying in nearhover conditions, a primary objective of this thesis is further expanding the capability of FTC methods to high-speed conditions where significant aerodynamic effects arise that brings large model uncertainties to the control algorithm. The high-speed flight conditions can be, for instance, the cruising phase of a quadrotor (e.g., delivery drone). 
Once rotor failure occurs, these aerodynamic effects can adversely impact the performance of FTC methods, and even drive the damaged quadrotor into upset conditions with abnormal attitude and angular rates. On the one hand, it is essential to improve state-of-the-art FTC methods to withstand significant aerodynamic effects as well as possibly large initial disturbances. On the other hand, these aerodynamic effects need to be further investigated and modeled to facilitate the development of FTC in high-speed conditions. These two aspects constitute the two major parts of this thesis...","Quadrotor; Safety; Control; Modeling","en","doctoral thesis","","978-94-6384-181-8","","","","","","","","","Control & Simulation","","",""
"uuid:01c293b2-ed2c-480e-a997-aec9d4dc04a1","http://resolver.tudelft.nl/uuid:01c293b2-ed2c-480e-a997-aec9d4dc04a1","Luminous Glass: A Study on the Optics Governing Luminescent Solar Concentrators and Optimization of Luminescent Materials through Combinatorial Gradient Sputter Deposition","Merkx, E.P.J. (TU Delft RST/Luminescence Materials)","Dorenbos, P. (promotor); van der Kolk, E. (promotor); Delft University of Technology (degree granting institution)","2020","A luminescent solar concentrator (LSC) is a concept from the 1970s that can find novel application as an electricity-generating window. An LSC converts sunlight to light of a different color by a process called luminescence. This light is transported to the edges of the LSC, where photovoltaic cells convert this incoming light to electricity. Since only a small part of the incoming sunlight is absorbed, most sunlight will still illuminate the rooms behind the LSC-window. Turning buildings and offices into nearly zero-energy buildings (nZEBs) is unlikely to happen by using electricity from rooftop photovoltaics (PVs) alone. Turning the envelope of a building, especially the large amount of glass used as windowpanes or facades, into a source of electricity by using LSCs can go a long way towards making these nZEBs a reality. Why then is not every window already an LSC? As will be explained in Chapter 2 and Chapter 3, current LSCs can be efficient at converting sunlight, but suffer from strong coloring, or are not compatible with large-scale industrial processes.
To solve the issue of coloring, one solution is to dope halides, such as table salt, with rare-earth elements, specifically divalent Thulium (Tm2+). This combination absorbs the entire visible spectrum. Another strategy is to dope insulating nitride or oxynitride materials with divalent or trivalent Europium (Eu2+ or Eu3+). Eu2+ or Eu3+ are strong absorbers of ultraviolet light.
In this thesis, optimizing the luminescent properties of these rare-earth-doped materials is researched using combinatorial synthesis methodology and a novel, fast but detailed characterization setup. The combinatorial synthesis methodology implies that a continuum of rare-earth-doped compositions is deposited on a single 5 × 5 cm2 piece of glass. This composition spread is equivalent to many hundreds of individual samples. The novel characterization setup can characterize the luminescence and other optical properties of these compositions in a matter of minutes.
In Chapter 4, this technique is used to form and analyze solid solutions of Eu2+-doped halides. The broad-band Eu2+ emission is highly sensitive to its local environment, unlike the infrared line emission of Tm2+. Researching such solid solutions is of great importance for Tm2+-doped halide LSCs. A solid solution can combine the luminescent properties of its constituents, potentially yielding uniform absorption of the entire visible spectrum, which would make an LSC-window merely dim the incident light, without coloring it. Unfortunately, while these halide materials solve the problem of coloring, they are very sensitive to water and are not used in large-scale industrial production.
This is why the focus is shifted in Chapter 5 to materials composed of silicon (Si), aluminum (Al), oxygen (O) and nitrogen (N): the SiAlON material family. These SiAlONs are chemically stable, scratch-resistant and, because of their likeness to amorphous glass, do not scatter light. These SiAlONs are sputtered on a large scale by industrial glass manufacturers.
Next to fabricating all these materials and characterizing their luminescence, it is also important to predict how they would behave if they were applied as large-scale LSCs. This is done through modeling all optical processes that occur within an LSC, presented in Chapter 3 and Chapter 6. In Chapter 3, a new way of modeling the optical processes within an LSC is presented. The industry-standard is ray-tracing, which can get slow when an LSC absorbs more light, or becomes larger in size. The model presented in Chapter 3 calculates all efficiency steps in an LSC in the same amount of time, regardless of the LSC’s size or transparency. In Chapter 6, we use all these methodologies—fast synthesis and characterization of luminescent thin-films, and modeling of light transport through an LSC—to simulate how efficient an LSC based on AlN:Eu3+,O2– would be. Such an LSC would be transparent in the visible spectrum, as it only absorbs ultraviolet light. AlN:Eu3+,O2– emits red luminescence. Therefore, AlN:Eu3+,O2– will not parasitically absorb the emission that makes its way to the LSC’s edges. The methodology to predict the performance of an LSC used in Chapter 6 is not specific to AlN:Eu3+,O2–, but applicable to all combinatorially synthesized luminescent thin-films.
As mentioned before, halide-type materials doped with Tm2+ have often been suggested as promising materials for LSCs. In the final chapter, Chapter 7, sputtered thin-films of NaI, CaBr2, and CaI2 doped with Tm2+ have therefore been evaluated on their performance as LSCs, both in terms of simulated optical efficiency and in terms of aesthetic appeal. Our Tm2+-based thin-film LSCs absorb the entire visible spectrum and emit a line of near-infrared radiation centered at 1140 nm. Chapter 7 demonstrates the universality of the techniques presented in Chapters 5 and 6. These techniques are adapted to take the hygroscopic nature of the halides into account. The chapter does forgo fully taking industrial compatibility into account: halides are not often sputtered, and the water-sensitivity will be a hurdle for implementation on window glass. By combining theory and modeling, we see that 10 μm thick films which transmit 80 % of the visible spectrum would be able to achieve optical efficiencies of 0.71 %. This efficiency already compares favorably to the maximally achievable optical efficiency of 3.5 % at those transmission constraints. Further research will have to show whether the photoluminescent quantum yield of the sputtered thin-films can be increased to achieve unity photon conversion.","Luminescence; Simulation; Solar concentration; Combinatorial science","en","doctoral thesis","","978-90-8593-456-1","","","","","","","","","RST/Luminescence Materials","","",""
"uuid:bac11b77-b808-4700-8f8e-8179458e19bc","http://resolver.tudelft.nl/uuid:bac11b77-b808-4700-8f8e-8179458e19bc","Dynamics of a supersonic flow over a backward/forward-facing step","Hu, W. (TU Delft Aerodynamics)","Hickel, S. (promotor); van Oudheusden, B.W. (promotor); Delft University of Technology (degree granting institution)","2020","The backward/forward-facing step (BFS/FFS) is one of the canonical geometries in aerospace engineering applications, the flow field over which has attracted extensive attention in the past decades. In a supersonic flow, laminar-to-turbulent transition and shock wave/boundary layer interaction (SWBLI) can occur over these configurations, which considerably affect the performance of aircraft, through, for example, an increase of flight drag and intense localized mechanical loads. In this thesis, a numerical study is carried out to scrutinize the dynamics of a supersonic flow over a backward/forwardfacing step at Ma Æ 1.7 and Re±0 Æ 13718 using well-resolved large eddy simulations (LES). For the transition aspect, our objective is to determine the transition path and the roles of the instabilities involved in this process. Considering the topic of SWBLI, the main objective is to scrutinize the various unsteady phenomena observed in SWBLI and, in particular, identify the origin of the low-frequency unsteadiness.","compressible flow; boundary layer; transition; shock waves","en","doctoral thesis","","978-94-6366-346-5","","","","","","","","","Aerodynamics","","",""
"uuid:ca17d04d-4c40-4856-97cd-8808ac641007","http://resolver.tudelft.nl/uuid:ca17d04d-4c40-4856-97cd-8808ac641007","Simulating quasi-brittle failure in structures using Sequentially Linear Methods: Studies on non-proportional loading, constitutive modelling, and computational efficiency","Pari, M. (TU Delft Applied Mechanics)","Rots, J.G. (promotor); Hendriks, M.A.N. (promotor); Delft University of Technology (degree granting institution)","2020","Sequentially Linear Analysis (SLA) is a proven robust alternative to incremental-iterative solution methods in nonlinear finite element analysis (NLFEA) of quasi-brittle specimen. The core of the method is in its departure from a load, displacement or arc-length driven incremental approach (aided by internal iterations to establish equilibrium) to a damage driven event-by-event approach that approximates the nonlinear response by a sequence of scaled linear analyses. The constitutive relations are discretised into secant-stiffness based saw-tooth laws, with successively reducing strengths and stiffnesses. In each linear analysis, the global load is scaled such that the critical integration point, with the largest stress, jumps to its next saw-tooth representing locally applied damage increments. The use of an event-by-event approach and the secant-stiffness based constitutive model enables SLA to avoid problems such as pushing multiple integration points simultaneously into softening, snap-backs, jumps, bifurcation points, and divergence, that are typically encountered in NLFEA. Despite the advantages of simplicity and numerical robustness in comparison to NLFEA, SLA as a solution procedure needed significant developments to be used in engineering practice as a numerical tool for structural applications, such as the pushover analysis of a masonry structure or the capacity assessment of a shear-critical reinforced concrete slab. 
To this end, this dissertation contributes to extending SLA and similar methods, together referred to as a class of Sequentially Linear Methods (SLM), to 3D applications in both the continuum and discrete damage frameworks under non-proportional loading conditions, and additionally improving on its computational efficiency.","Sequentially Linear Analysis (SLA); Direct & Iterative linear solvers; Quasi-brittle fracture; Non-proportional loading; Pushover Analysis; Force-release method; 3D Fixed Smeared Crack model; Composite Interface model","en","doctoral thesis","","978-94-6366-331-1","","","","","","","","","Applied Mechanics","","",""
"uuid:3010a337-742c-4776-a747-2985085e981d","http://resolver.tudelft.nl/uuid:3010a337-742c-4776-a747-2985085e981d","Flood risk analysis of embanked river systems: Probabilistic systems approaches for the Rhine and Po rivers","Curran, A.N. (TU Delft Hydraulic Structures and Flood Risk)","Kok, M. (promotor); de Bruijn, K.M. (copromotor); Delft University of Technology (degree granting institution)","2020","Flood risk analysis focused on small sections or regions of embanked systems ignores the dy-namics of large-scale floods through time and space. Resulting measures are oftentimes inef-ficient due to an over estimation of risk or be-cause they simply transfer risk downstream due to increased localised protection upstream. This present work addresses this problem through a ‘system behaviour’ analysis, in which the com-plete river-dike floodplain system is assessed using stochastic simulations of flood events and defence failures. Such analysis can provide decision-makers with accurate estimates of local and system-wide risk from which efficient and effective FRM strate-gies can be developed. Furthermore, it creates a repository of realistic flood event simulations that inform emergency responders and provides data for future analyses. System behaviour analyses were implemented on two of the most developed floodplain regions of Europe; the Po River in Italy and the Dutch Rhine-Meuse delta. In both cases, the analysis underlined the importance of assessing risk of complete systems, as it provided more accurate estimates of current flood hazard, defence fail-ure probabilities and risk. The analysis can also be used in the evaluation of new measures such as dike strengthening and detention areas, both in the case studies and in other protected river systems. 
Finally, the development of map-based tools allows for a clear interpretation of the data for decision makers and researchers that wish to further investigate aspects of the system.","flood hazard; Probabilistic; System behaviour; River system; Embankments","en","doctoral thesis","","978-94-6421-121-4","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:229ada6c-6316-4971-814b-8ed0c91c715e","http://resolver.tudelft.nl/uuid:229ada6c-6316-4971-814b-8ed0c91c715e","Spheres vs. rods in fluidized beds: Numerical and experimental investigations","Mema, I. (TU Delft Complex Fluid Processing)","Padding, J.T. (promotor); de Jong, W. (promotor); Delft University of Technology (degree granting institution)","2020","For the past century, fluidized beds have been standard equipment in many branches of industry. In most applications they are used to manipulate granular and powder-like materials, whose particles can roughly be approximated as spheres. Therefore, numerical models and investigations have focused mainly on fluidized beds with spherical particles. Recent decades witnessed an increase in the use of fluidized beds in biomass processing. Unlike other materials typically used in fluidized beds, biomass is characterized by relatively large and elongated particles. For the sake of simplicity, numerical models for simulating fluidization of elongated particles have so far neglected a lot of specifics that can occur during this process and even applied the same models and conclusions that were developed for fluidization of spherical particles. The goal of this thesis is to define what is necessary for performing physically correct Computational Fluid Dynamics - Discrete Element Model (CFD-DEM) simulations of elongated particles fluidization. This thesis emphasizes the difference in fluidization between spherical and elongated particles and looks into ways to include specific particle and fluid interactions related to elongated particles into numerical (CFD-DEM) model. Results fromCFD-DEMsimulationswere validated using two experimental techniques, magnetic particle tracking (MPT) and X-ray tomography (XRT). 
This thesis is part of a larger project on multi-scale modeling of fluidized beds with elongated particles and focuses on the middle scale, bridging fully resolved, direct numerical simulations (DNS) with large-scale, two fluid model (TFM) or multi-phase particle-in-cell (MP-PIC) models, capable of simulating industrial-sized fluidized beds. This thesis first looks into the effect of including shape-induced lift force and hydrodynamic torque, which were so far neglected in CFD-DEM simulations of elongated particles. It is shown that including lift force and hydrodynamic torque leads to considerable changes in the particle vertical velocity and particle preferred orientation in the fluidized bed. Considerable differences were also found in the mixing characteristics, one of the most important parameters of fluidized beds. Further differences in the fluidization behaviour of spherical and elongated particles, as well as the effect of increasing particle aspect ratio, were shown experimentally, using MPT. Clear differences between spherical and elongated particles were found concerning the particle velocity and rotational velocity distributions. The effect of increasing particle aspect ratio and gas inlet velocity on fluidization of elongated particles was shown. Using XRT, the difference in bubbling and slugging fluidization between spherical and elongated particles was shown. In the end, the effect of newly developed multi-particle correlations for hydrodynamic forces and torque was tested, and it is concluded that they can improve the accuracy of simulations of dense fluidized beds containing elongated particles. The findings of this thesis clearly show that the models and assumptions developed for fluidization of spherical particles cannot simply be transferred to the fluidization of elongated particles. The results presented here give new insight into the fluidization of elongated particles.
They are also valuable for the validation and development of larger-scale models capable of simulating industrial-size fluidized beds with elongated particles.","Fluidized bed; Non-spherical particles; CFD-DEM; MPT; XRT; Hydrodynamic forces","en","doctoral thesis","","","","","","","","","","","Complex Fluid Processing","","",""
"uuid:f5496a9d-cd10-4279-b5f0-052cd8a53fc6","http://resolver.tudelft.nl/uuid:f5496a9d-cd10-4279-b5f0-052cd8a53fc6","Graphene-Based Computing: Nanoribbon Logic Gates & Circuits","Jiang, Y. (TU Delft Computer Engineering)","Cotofana, S.D. (promotor); Wong, J.S.S.M. (copromotor); Delft University of Technology (degree granting institution)","2020","As CMOS feature size is reaching atomic dimensions, unjustifiable static power, reliability, and economic implications are exacerbating, thus prompting for research on new materials, devices, and/or computation paradigms. Within this context, Graphene Nanoribbons (GNRs), owing to graphene’s excellent electronic properties, may serve as basic structures for carbon-based nanoelectronics. However, the graphene intrinsic energy bandgap absence hinders GNR-based devices and circuits implementation. As a result, en route to graphene-based logic circuits, finding a way to open a sizable energy bandgap, externally control GNR’s conduction, and construct reliable high-performance graphene-based gates are the main desideratum. To this end, first, we propose a GNR-based structure (building block) by extending it with additional top gates and back gate while considering five GNR shapes with zigzag edges in order to open a sizeable bandgap, and further investigate GNR geometry and contact topology influence on its conductance and current characteristics. Second, we present a methodology of encoding the desired Boolean logic transfer function into the GNR electrical characteristics, i.e., conduction maps, and then evaluate the effect of VDD variation on GNR conductance. Moreover, we find a proper external electric mean (e.g., top gates and back gates) to control the GNR behavior. 
Third, we develop a parameterized Verilog-A SPICE-compatible GNR model based on the Non-Equilibrium Green’s Function (NEGF)-Landauer formalism that builds upon an accurate physics formalization, which makes it possible to symbiotically exploit accurate physics results from Matlab Simulink and optimized SPICE circuit solvers (e.g., Spectre, HSPICE). Subsequently, we construct graphene-based Boolean gates by means of two complementary GNRs, and design a GNR-based 1-bit Full Adder and an SRAM cell. Finally, we extend the NEGF-Landauer simulation framework with the self-consistent Born approximation while taking into account temperature-induced phenomena in GNR electron transport, i.e., electron-phonon interactions for both optical and acoustic phonons, and further explore the performance robustness of graphene-based gates under temperature variations.","Graphene; Graphene Nanoribbon; Graphene-based Computing; Carbon-Nanoelectronics; Graphene-based Gate; Graphene-based Circuit","en","doctoral thesis","","978-94-6384-176-4","","","","","","","","","Computer Engineering","","",""
"uuid:fb938e90-bda7-47bd-ac90-7b1f9887c027","http://resolver.tudelft.nl/uuid:fb938e90-bda7-47bd-ac90-7b1f9887c027","Cooperative Adaptive Cruise Control Vehicles on Highways: Modelling and Traffic Flow Characteristics","Xiao, L. (TU Delft Transport and Planning)","van Arem, B. (promotor); Wang, M. (copromotor); Delft University of Technology (degree granting institution)","2020","Traffic congestion causes detrimental effects on economy and society in terms of travel time delay, increased vehicle collision risk and increased air pollution. To tackle that, new intelligent transportation systems are being developed. One of these emerging technologies comprise automated driving systems. Adaptive Cruise Control (ACC), enabling a vehicle to follow its leader automatically, is an automated driving system which has been available in the vehicle market. Literature shows that traffic flow performance may not significantly benefit from ACC due to traffic instability and large headways. As an extension of ACC, Cooperative ACC is designed to improve traffic stability and throughput by using Vehicle-to-Vehicle (V2V) communication. Thanks to the anticipation of the downstream traffic, a short following gap can be realized by CACC which is highly expected to considerably increase road capacities. Before CACC vehicle is allowed and promoted in the market, it is crucial for policy makers and road operators to gain insights into the traffic flow impacts of CACC systems. Existing studies show that CACC can increase traffic throughput at high vehicle market penetration. However, the realistic effects of CACC on traffic flow have not been adequately revealed because the behaviour of CACC vehicles has not been realistically modelled. The multiple CACC driving modes, the degraded operation to ACC and the human driver take-over control when it is out of the CACC operational design domain, have not yet been explicitly included in existing CACC impact studies. 
A scientific gap remains in modelling the complex CACC behaviour and investigating its actual influence on traffic flow, especially on realistic networks with a single bottleneck or with interacting bottlenecks.","","en","doctoral thesis","TRAIL Research School","978-90-5584-280-3","","","","TRAIL Thesis Series no. T2020/19, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:9e4278f7-6b3b-406c-a6ac-9e1f8d9df0c7","http://resolver.tudelft.nl/uuid:9e4278f7-6b3b-406c-a6ac-9e1f8d9df0c7","Modelling Human-Flood Interactions: A Coupled Flood-Agent-Institution Modelling Framework for Long-term Flood Risk Management","Abebe, Y.A. (TU Delft BT/Environmental Biotechnology; IHE Delft Institute for Water Education)","Brdjanovic, Damir (promotor); Vojinovic, Zoran (copromotor); Delft University of Technology (degree granting institution); IHE Delft Institute for Water Education (degree granting institution)","2020","The negative impacts of floods are attributed to the extent and magnitude of a flood hazard, and the vulnerability and exposure of natural and human elements. In flood risk management (FRM) studies, it is crucial to model the interaction between human and flood subsystems across multiple spatial, temporal and organizational scales. Models should address the heterogeneity that exists within the human subsystem, and incorporate institutions that shape the behaviour of individuals. Hence, the main objectives of the dissertation are to develop a modelling framework and a methodology to build holistic models for FRM, and to assess how coupled human-flood interaction models support FRM policy analysis and decision-making. To achieve the objectives, the study introduces the Coupled fLood-Agent-Institution Modelling framework (CLAIM). CLAIM integrates actors, institutions, the urban environment, hydrologic and hydrodynamic processes and external factors, which affect FRM activities. The framework draws on the complex system perspective and conceptualizes the interaction of floods, humans and their environment as drivers of flood hazard, vulnerability and exposure. The human and flood subsystems are modelled using agent-based models and hydrodynamic models, respectively. 
The two models are dynamically coupled to understand human-flood interactions and to investigate the effect of institutions on FRM policy analysis.","flood risk management; socio-hydrology; agent-based modelling; policy analysis; institutional analysis","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-74886-9","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:72abf99f-dd2d-42a1-8c59-7a83870c9d3c","http://resolver.tudelft.nl/uuid:72abf99f-dd2d-42a1-8c59-7a83870c9d3c","Encoding a Qubit into an Oscillator with Near-Term Experimental Devices","Weigand, D.J. (TU Delft QCD/Terhal Group)","Terhal, B.M. (promotor); Delft University of Technology (degree granting institution)","2020","A universal, large-scale quantum computer would be a powerful tool with applications of high value to mankind. For example, such a computer could significantly speed up the search for new medications or materials. However, the error rates of current qubit designs are simply too large to enable interesting computations. Therefore, both error correction and improved designs of qubits are needed. In 2001, Gottesman, Kitaev and Preskill proposed an encoding (GKP code) where a qubit is stored in a harmonic oscillator — a system that can be controlled and manufactured with high precision, and therefore have comparatively high coherence times. Moreover, the code offers good protection against losses, a simple gate set, and error correction circuits that are comparatively easy to implement. The drawback is that encoding a qubit into a GKP code state is a challenging task. In this thesis, we develop efficient schemes to encode a GKP qubit. Bosonic codes, where a qubit is stored in an oscillator, and in particular the GKP code are still relatively unknown. Therefore, we will start the thesis with an overview of the field, and provide the reader with the tools to analyze a GKP code, as these are quite different from standard error correcting codes. A tool which is important to understand, and that describes a protocol that encodes a GKP qubit is the so-called phase estimation algorithm. This algorithm allows to measure the eigenvalue of any unitary operation, and is one of the cornerstones of quantum information. We will show how phase estimation can be applied to encode a GKP qubit, and what the requirements for
an experiment attempting to do so are. A major advantage of the GKP code over other encodings is that it can tolerate significant photon loss before the encoded information is lost. In addition, states that are closely related to the GKP qubit can be used to violate Bell’s inequalities (i.e., prove the presence of entanglement), even in the presence of large noise. Both these applications make the code particularly interesting in the optical regime, where error correction usually cannot be done while the signal is travelling. In this thesis, we will analyze an encoding protocol originally proposed by H.M. Vasconcelos, L. Sanz, and S. Glancy, Optics Letters 35, 3261 (2010), which relied on post-selection, and show that any output state can be used as a GKP code state with a simple change of frame, providing an exponential speedup. In 2019, two separate experiments generated a GKP code state for the first time: C. Flühmann et al., Nature 566, 513 (2019) realized a GKP qubit in the motional mode of a trapped ion, while P. Campagne-Ibarcq et al., Nature 584, 368 (2020) realized it with a transmon qubit coupled to a microwave cavity. However, both these experiments employ phase estimation, which is slow because it requires many measurements in sequence. We propose a circuit that allows a single-shot measurement of the GKP stabilizers, and analyze the performance of such a measurement as well as the impact of noise.","Quantum error correction; GKP code; bosonic codes; superconducting qubits","en","doctoral thesis","","978-94-6421-139-9","","","","","","","","","QCD/Terhal Group","","",""
"uuid:aadbb312-4596-4072-86f3-b43b3532ab40","http://resolver.tudelft.nl/uuid:aadbb312-4596-4072-86f3-b43b3532ab40","Development of Condition Monitoring System for Railway Crossings: Condition Assessment and Degradation Detection for Guided Maintenance","Liu, X. (TU Delft Railway Engineering)","Dollevoet, R.P.B.J. (promotor); Markine, V.L. (copromotor); Delft University of Technology (degree granting institution)","2020","Railway crossings are essential components of the railway track system that allow trains to switch from one track to another. Due to the complex wheel-rail interaction in the crossing panel, crossings are vulnerable elements of railway infrastructure and usually have short service lives. The crossing damage not only results in substantial maintenance efforts but also leads to traffic disruptions and can even affect traffic safety. In the Netherlands, the annual maintenance cost on railway crossings is more than 50 million euros. Due to the lack of monitoring systems, the real-time information on crossing condition is limited. As a result, the present maintenance actions on railway crossings are mainly reactive that take place only after the occurrence of visible damage. Usually, such actions (repairs) are carried out too late and result in unplanned disruptions that negatively affect track availability. In the Netherlands, around 100 crossings are urgently replaced every year, accompanied by traffic interruptions. Also, there is a considerable number of crossings with the service life of only 2-3 years. The maintenance methods used by the contractors on such crossings are somewhat limited and usually ended up with ballast tamping. In this case, the root causes of the fast crossing degradation are usually not resolved, and the crossings are still operated in degraded conditions after the maintenance. 
In order to improve the efficiency of the current maintenance of railway crossings, aiming for better crossing performance, the goal of this study is to develop a monitoring system for railway crossings with which the crossing condition can be assessed and the sources of the degradation can be detected. Using such a system, timely and proper maintenance of railway crossings can be provided. The main steps in achieving this goal were as follows: Based on the measured dynamic responses of railway crossings due to passing trains, several condition indicators were proposed; To provide the fundamental basis for the proposed indicators, a numerical model for the analysis of vehicle-crossing interaction was developed; The effectiveness of the proposed indicators was demonstrated using the data from long-term monitoring of 1:9 and 1:15 crossings. The railway crossing conditions are reflected in the changes in the dynamic responses due to passing trains. In this study, the responses were obtained from the crossing instrumentation and wayside monitoring system. The responses reflect the wheel-rail interaction and consist of the wheel impact accelerations, impact locations, the rail displacements due to the impacts, etc. Based on the correlation analysis of the responses, indicators related to the wheel impact, fatigue area and ballast support were proposed. The indicators form a basis for a structural health monitoring (SHM) system for railway crossings. To verify the effectiveness of the proposed indicators, and to explain the experimental findings, a numerical vehicle-crossing model was developed using the multi-body system (MBS) method. The model was validated using the measurement results and further verified using the finite element (FE) model. The proposed indicators and the MBS model were applied to the condition stage identification and damage source detection of the crossings. The main outcomes are presented below. 
In the condition monitoring of normally degraded crossings, the proposed indicators were capable of capturing the main degradation stages of the railway crossing, ranging from newly installed to damaged and repaired ones. With the assistance of these indicators, maintenance actions can be applied in time, before the occurrence of severe damage. The proposed indicators can also be used for assessing the effectiveness of the performed maintenance (repair welding and grinding, ballast tamping, etc.). It was demonstrated that ballast tamping has no positive effect on the performance of the monitored 1:9 crossing. The proposed indicators can also help to detect the root causes of the crossing damage. In some cases, the degradation is caused by adjacent structures, and therefore the maintenance should be performed not on the crossing itself but on the track nearby. In this study, the fast degradation of the monitored 1:9 crossing was found to be caused by the lateral track deformation in front of the crossing. The numerical results confirmed that the train hunting motion, activated by the track deviation, was the source of the extremely high impacts recorded by the monitoring system that ultimately resulted in the fast crossing degradation. By knowing the damage sources, proper maintenance can be performed rather than the currently used ineffective ballast tamping. Additionally, it was found that crossing degradation can also result from external disturbances. It was proven that strongly increased rail temperatures due to prolonged sunshine amplify the existing geometry deviation in the turnout. Considering the high sensitivity of wheel-rail interaction in the crossing, higher standards for crossing maintenance and construction are required for better crossing performance. This study contributes to the development of the condition monitoring system for railway crossings. 
The application of the condition indicators is a major step forward for current maintenance philosophies: from damage repair to predictive maintenance, and from “failure reactive” to “failure proactive”. The outcomes of this study will contribute to better performance of railway crossings.","railway crossing; condition monitoring; degradation detection; maintenance guidance","en","doctoral thesis","","9789464190786","","","","","","","","","Railway Engineering","","",""
"uuid:48af4e9b-1487-4402-a1aa-19e302b0eb97","http://resolver.tudelft.nl/uuid:48af4e9b-1487-4402-a1aa-19e302b0eb97","Aeroelastic Tailoring of Composite Aircraft","Natella, M. (TU Delft Aerospace Structures & Computational Mechanics)","De Breuker, R. (promotor); Bisagni, C. (promotor); Delft University of Technology (degree granting institution)","2020","The process of designing an aircraft is generally divided in three phases, namely the conceptual, preliminary and detailed. The whole process is one that involves various disciplines and is subject to multiple constraints. Traditionally the different aspects of the design process are tackled separately by different departments within a company. This approach assumes that there is little to no interaction between the various disciplines. The use of composite materials for aircraft structures has challenged this traditional approach on the account that the interaction between the relevant disciplines within aircraft design and optimization cannot be neglected as easily.","aeroelastic tailoring; structural design; stiffness optimization","en","doctoral thesis","","978-94-6421-117-7","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:a6e229df-d3ea-496f-b682-5a79b6567deb","http://resolver.tudelft.nl/uuid:a6e229df-d3ea-496f-b682-5a79b6567deb","Collaboration in Circular Oriented Innovation: Why, How and What?","Brown, P.D. (TU Delft Circular Product Design)","Balkenende, A.R. (promotor); Bocken, N.M.P. (promotor); Delft University of Technology (degree granting institution)","2020","Our society faces many global sustainability challenges. Many of these challenges we have either created or exacerbated by not thinking about how the scale of our actions impacts the planet. We have however entered the Anthropocene, an epoch in time whereby human activity is now the dominant force upon the planet’s climate and environment. It is abundantly clear that our actions, if not changed, will result in the collapse of many crucial life support systems that will affect our society. A key contributing reason for why our current actions are unsustainable and are ultimately creating negative impacts on the planet is how we produce, use and consume products and services. This highlights that resource flows are out of balance with ecological systems. The way we have structured our economy simply does not account for the finite and limited nature of resources or the ecological systems capacity to renew resource stocks. It is clear we need to change how our production, consumption and economic system functions, especially if we are to avoid the worst or reverse anthropogenic impacts. This requires creativity and the operationalisation of new ideas to come up with new ways of doing things. In another word, it requires us to innovate. But, to do so with increasing sustainable impacts as the key driver and rationale for innovation activities. The role of innovation for stimulating and creating sustainable change is widely recognised in academia and practice. 
Both see that we need to increasingly pursue collaborative innovations that take a systemic perspective to mitigate or solve the sustainability challenges we have created.","Circular Economy; Circular Oriented Innovation; Collaboration; Collaborative Innovation; Circular Collaboration Canvas","en","doctoral thesis","","978-94-6384-182-5","","","","","","","","","Circular Product Design","","",""
"uuid:99d82992-080c-4c5d-8d40-4e62e62285c0","http://resolver.tudelft.nl/uuid:99d82992-080c-4c5d-8d40-4e62e62285c0","Robust nonlinear attitude control of aerospace vehicles: An incremental nonlinear control approach","Acquatella Bustillo, P.J. (TU Delft Control & Simulation)","Mulder, Max (promotor); van Kampen, E. (copromotor); Delft University of Technology (degree granting institution)","2020","Dynamics modeling, simulation, and control have been studied extensively for many applications in robotics, aeronautics, underwater vehicles, and aerospace vehicles (spacecraft, launchers, re-entry vehicles). In that context, this thesis is motivated from two research directions; namely, space launchers guidance and control (G&C) for preliminary design studies and spacecraft nonlinear and agile attitude control systems. The research performed in this thesis focuses on two aspects: 1) attitudemotion and control, which is considered to be one of the classical problems in nonlinear and multivariable control systems; 2) incremental nonlinear control, which is a combined model– and sensor–based control approach and has shown promising results in the aerospace community. The high–performance and robustness of incremental nonlinear control comes from the partial dependency removal of an accurate plant model by just requiring a control effectiveness model to estimate the so–called incremental dynamics, while relying on angular acceleration and actuator output measurements. This approach, integrated with nonlinear control methods, are robust to modeling and parametric uncertainties and allows for aggressive motion control. The objective of this thesis is to develop concepts and methods for nonlinear flight and attitude control design aspects within a multi-disciplinary modeling and simulation approach. 
With this approach, attitude dynamics and control can play a more important role in the outcomes of aerospace vehicle design, and should therefore be given more consideration in the preliminary design studies of these vehicles. The research performed in this thesis can be summarized in the following three main parts...","","en","doctoral thesis","","978-94-6421-120-7","","","","","","","","","Control & Simulation","","",""
"uuid:536563e3-6894-4f3c-95af-c6b365176ce0","http://resolver.tudelft.nl/uuid:536563e3-6894-4f3c-95af-c6b365176ce0","Integrated, adaptive and machine learning approaches to estimate the ghost wavefield of seismic data","Vrolijk, J. (TU Delft Applied Geophysics and Petrophysics)","Blacquière, G. (promotor); Wapenaar, C.P.A. (promotor); Delft University of Technology (degree granting institution)","2020","In exploration geophysics, seismic measurements are used to obtain information about the subsurface. A large proportion of these measurements take place in oceans, seas and lakes, where the sources and the receivers are generally located somewhere between the water bottom and the water surface during data acquisition. The sources emit an acoustic signal into the subsurface and the receivers measure, amongst other things, the reflections of this signal. Some of these signals only reflect within the subsurface, but others may reflect at the water surface one or more times. The signals that reflect at the water surface disturb the reflections from the subsurface and have a destructive effect on the bandwidth. In this thesis the focus is on the removal of signals with the first reflection and/or the last reflection at the water surface. Correctly removing these so-called ghost reflections will improve the bandwidth.
In this thesis, three methods are covered that aim to integrate the removal of ghost reflections into another process, or to improve the removal of ghost reflections under specific conditions. The first method integrates the removal of the receiver ghost into closed-loop surface-related multiple estimation. The results on modeled data and field data show that this is an efficient approach that provides a significant improvement over a sequential workflow. This first method, like many other methods that remove ghost reflections, requires accurate information about the depth of the receivers relative to the surface of the water. Due to a dynamic sea surface or movement of the cables, this information about the depth of the receivers is often not accurate, limiting the removal of the receiver ghost. The second method optimizes the removal of the ghost reflections by estimating and incorporating the depth of the receivers relative to the dynamic water surface in the ghost removal process. On modeled data and field data, we show good results for cases where accurate information about the depth of the receivers relative to a dynamic water surface is not available.
The first two methods address the removal of the receiver ghost, and it is well known that the receiver ghost should be removed in the shot domain. This is different when removing the source ghost, which has to be done in the receiver domain. However, in practice, the receiver domain is often coarsely sampled, complicating the removal of the source ghost in this domain. The third method handles the removal of the source ghost in the coarsely sampled receiver domain by training a convolutional neural network. The training data consist of coarsely sampled shot records with and without the receiver ghost, which can be obtained relatively easily because the corresponding densely sampled shot records are available as well. Using reciprocity, these training data form a representative data set for removing the source ghost in the coarsely sampled receiver domain. The modeled data and field data results show that this machine learning approach is able to accurately remove the source ghost in the receiver domain. The modeled data results also show that this approach significantly improves the removal of the source ghost compared to its removal in the densely sampled shot domain.","","en","doctoral thesis","","978-94-6366-330-4","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:3d0c46e3-e2bf-4ba2-9db3-48b520b4628d","http://resolver.tudelft.nl/uuid:3d0c46e3-e2bf-4ba2-9db3-48b520b4628d","A Conceptual Framework for Regulatory Practice in Mobile Telecommunications Systems","Ubacht, J. (TU Delft Information and Communication Technology)","Janssen, M.F.W.H.A. (promotor); Crompvoets, Joep (copromotor); Delft University of Technology (degree granting institution)","2020","Many of today’s essential services depend on infrastructure-based systems such as the energy, the telecommunications and the public transport system. These systems consist of interdependent subsystems that coevolve over time. The governance of these systems is performed by many different types of public and private actors. In addition these systems are subject to technological innovation. End users of their services rely on the quality and provision of these infrastructure-based systems. In order to mitigate unwanted societal outcomes such as outages, high consumer prices and underperformance of service quality, these systems are regulated. These infrastructure-based systems are defined as complex socio-technical systems (CSTS); a concept that denotes that the system’s functioning is dependent on the interactions between the technical, the social and the institutional components of the system. Due to their large scale size and the required upfront investments in infrastructural elements, these CSTS are not easily changed. However, changes in the institutional context, technological innovation or changes in the actor system do occur. The consequences of these changes are hard to predict and lead to uncertainties for authorities that regulate these systems. 
In this research we study a major change in the mobile telecommunications system to explain the way in which regulators address the (unwanted) consequences and uncertainties due to the ensuing changes within the system.","regulation; mobile telecommunications; regulatory authorities; telecommunication; conceptual framework; exploratory regulatory practice; regulatory practice; Complex socio-technical systems; Grounded theory; CSTS","en","doctoral thesis","","978-90-361-0633-7","","","","","","","","","Information and Communication Technology","","",""
"uuid:8bdf06e6-8e42-463a-af45-20a56e0e2022","http://resolver.tudelft.nl/uuid:8bdf06e6-8e42-463a-af45-20a56e0e2022","Thermal effects on thermoplastic composites welded joints: A physical and mechanical characterisation","Koutras, N. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Villegas, I.F. (promotor); Delft University of Technology (degree granting institution)","2020","The use of high performance thermoplastic composites (TPCs) in the aviation industry has increased over the last years. TPCs have significant advantages such as superior damage tolerance, excellent chemical resistance and the ability to be welded. To date, mechanical fastening, primarily, and adhesive bonding are the two traditional joining methods used in aviation industry. However, welding of TPCs has attracted significant attention due to its advantageous qualities such as the very short cycle times and the minimised stress concentrations. Most of the research published on welding of TPCs concerns process optimization and the evaluation of the mechanical performance at room temperature (RT) conditions. However, aircraft operate in a wide range of temperatures, typically between -50 °C to 70 °C and depending on the conditions, even up to 93 °C. Considering the temperature dependency of polymer composites properties, the weld performance of TPCs at low and high temperatures needs to be addressed. To the author’s knowledge, prior to the year 2014 there was no available literature assessing the influence of temperature on the mechanical performance of TPCs welded joints and since then, only a few publications have addressed this topic. The primary objective of this thesis was to not only fill the literature gap but also to obtain deeper knowledge and clear understanding of the behaviour of thermoplastic composites welded joints at low and high temperatures. 
In other words, to understand the phenomena dictating the weld performance, which, in turn, would pave the way for further improvement of material properties and weld design.","Thermoplastic composites; Temperature Effects; resistance welding; Ultrasonic Welding; Crystallinity; PPS","en","doctoral thesis","","978-94-6421-144-3","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:7be5ba28-0699-408f-be0e-3e4c448cb42c","http://resolver.tudelft.nl/uuid:7be5ba28-0699-408f-be0e-3e4c448cb42c","Cyclist Aerodynamic Drag Analysis through Large-Scale PIV","Terra, W. (TU Delft Aerodynamics)","Scarano, F. (promotor); Sciacchitano, A. (copromotor); Delft University of Technology (degree granting institution)","2020","The use of large-scale particle image velocimetry (PIV) is proposed for cycling aerodynamic study to advance the general understanding of the flow around the rider and the bike, leading to new strategies for cycling aerodynamic drag reduction in the future. The investigation concentrates on the measurement of the wake velocity and its relation to the aerodynamic drag of stationary models in wind tunnels and of transiting models in the field.
In the first part of this work, PIV measurements are conducted in a wind tunnel to capture the wake flow topology of a full-scale cyclist model and determine the cyclist aerodynamic drag. In-house built seeding systems are employed to inject helium-filled soap bubble (HFSB) tracers upstream of an elite time-trial cyclist replica. The obtained flow topology compares well among different experimental repetitions and with literature, demonstrating the robustness of the PIV measurement approach. The aerodynamic drag is obtained by a so-called PIV wake rake approach, which relies on the conservation of momentum in a control volume surrounding the model. Comparison of the PIV wake rake aerodynamic drag against that of a force balance demonstrates that a drag accuracy below 1% of the balance value is achievable.
The PIV wake rake measurements are conducted in a plane downstream of the bike’s rear wheel to avoid shadows and optical blockage. At this distance from the athlete, however, investigation of the separated and reverse flow regions, which are the main driver of the aerodynamic drag, is not possible. In the second part of this dissertation, therefore, robotic volumetric PIV measurements are conducted to retrieve the velocity description close to the cyclist. The near-wake of the cyclist limbs is presented, which resembles that of isolated bluff bodies, such as cylinders, featuring a recirculation region bounded by two shear layers. The size of the recirculation region, however, is not only governed by the width of the limb, but also by the coherent vortical structures emanating from these limbs near the limb junctions (e.g. elbows and knees). Moreover, interaction of the limbs with the wakes of the upstream body parts also plays a role in the local wake properties.
In addition to the measurement of the cyclist’s near wake at typical race speed, Reynolds number effects are also investigated to understand how the aerodynamic drag could be reduced by dedicated skinsuit designs in the future. This is achieved by repeating the robotic volumetric PIV measurements over a wide range of freestream velocities. While reductions of the wake width are observed on both the lower leg and the arm with increasing freestream velocity, the wake of the upper leg follows an opposite trend, increasing in size at higher velocity. These variations of wake width with increasing freestream speed are related to the behaviour of the local drag coefficient, indicating a drag crisis on both leg and arm. The distribution of the so-called critical velocity over these body segments is discussed, as it determines the freestream speed at which the drag reaches a minimum.
The third and last part of this work is dedicated to the development of quantitative flow visualisation and drag determination of cyclists in the field. This so-called Ring-of-Fire system allows, among other things, aerodynamic studies that are practically impossible in the wind tunnel, such as model accelerations and curved-linear model trajectories. A tomographic PIV wake rake is employed to measure the flow around a simplified transiting bluff body, a towed 10 cm sphere. These scaled experiments serve as a proof-of-concept of this novel measurement system.
The aerodynamic drag is obtained by invoking the control volume momentum balance in a frame of reference moving with the object. The expression for the time-average drag consists of three terms (a momentum, a Reynolds-stress and a pressure term), which are individually evaluated at increasing distance downstream of the sphere. It is shown that the aerodynamic drag is most accurately evaluated when the contribution of the momentum term dominates the overall drag, and that the PIV pressure evaluation can be avoided five sphere diameters into the wake. The latter largely simplifies the data reduction procedures of the Ring-of-Fire. Finally, the present system estimates the aerodynamic drag with an accuracy of 20 drag counts. This is evaluated from repeated model passages in a range of Reynolds numbers in which the model’s drag coefficient is constant. This resolution is comparable to that of other field techniques for aerodynamic drag measurement. It is, however, rather poor in comparison to force balance measurements in wind tunnels. In contrast to the latter drag measurement techniques, the Ring-of-Fire also provides information about the flow, yielding advanced insights into cyclist aerodynamics in the future.
making.","Self-regulation; Decision-theoretic planning under uncertainty; Dynamic mechanism design; Serious gaming","en","doctoral thesis","TRAIL Research School","978-90-5584-274-2","","","","TRAIL Thesis Series no. T2020/17, the Netherlands TRAIL Research School","","","","","Algorithmics","","",""
"uuid:1aa13612-f480-4862-9af1-7f3cc5570c62","http://resolver.tudelft.nl/uuid:1aa13612-f480-4862-9af1-7f3cc5570c62","Haptic feedback for flight envelope protection","van Baelen, D. (TU Delft Control & Simulation)","van Paassen, M.M. (promotor); Abbink, D.A. (promotor); Delft University of Technology (degree granting institution)","2020","Improving the safety level of aviation is vital to prevent serious accidents. One key area where improvements can be made is the prevention of loss of control occurrences, by preventing the aircraft state to pass beyond the limits from which no recovery is possible. Such improvements can focus on improved monitoring of the main flight parameters and active automationmodes. The limits of an aircraft are typically expressed in terms of a flight envelope which represents the allowable region of load factor versus velocity. Modern day aircraft can support pilots in monitoring themain flight parameters by employing a flight envelope protection system: the inputs of the pilots are routed to the flight control computers which can impose limits on those inputs. In doing so, the computers are protecting the aircraft state fromleaving the flight envelope. When the control device is linked to the control surfaces, for example using cables and pulleys, any limit imposed by the flight control computer can be felt by the pilot. With the advent of fly-by-wire control devices, the mechanical link is replaced by an electrical connection, resulting in the loss of this information using the sense of touch. This haptic information was initially not included as it requires active control devices which had issues regarding the size, power and stability requirements. The lack of such haptic information on the flight envelope protection system might have been a contributing factor in some accidents. 
Nowadays, active control devices do meet the requirements in terms of size, power and stability, and offer the possibility to re-introduce haptic feedback in fly-by-wire control systems. Therefore, this thesis looked at adding haptic feedback to the control device of a modern aircraft to increase pilot awareness of the flight envelope protection system...","haptic feedback; flight envelope protection; manual control; situation awareness","en","doctoral thesis","","978-94-6366-323-6","","","","","","","","","Control & Simulation","","",""
"uuid:46178824-bb80-4247-83f1-dc8a9ca7d8e3","http://resolver.tudelft.nl/uuid:46178824-bb80-4247-83f1-dc8a9ca7d8e3","Aerodynamics of Propellers in Interaction Dominated Flowfields: An Application to Novel Aerospace Vehicles","Stokkermans, T.C.A. (TU Delft Flight Performance and Propulsion)","Veldhuis, L.L.M. (promotor); Voskuijl, M. (copromotor); Delft University of Technology (degree granting institution)","2020","This research is on the aerodynamics of propellers in interaction. As a propeller is dependent on the flowfield it encounters, any disturbance in that inflow results in changes in loading on the propeller. This can result in propulsive efficiency changes and affect stability through generation of in-plane forces and out-of-plane moments. The loading changes can also impact noise through loading fluctuations and cause vibrations in the structure leading to fatigue. The objective of the research in this dissertation is to get a fundamental understanding of the role of aerodynamic interaction on the loading and performance of primarily the propeller and secondarily the interacting object(s) in typical configurations where interaction dominates the flowfield. For the propeller this refers to situations where the inflow is disturbed by such an amount that the disturbance is a dominating factor in the propeller loading.","Propeller; aerodynamics; performance; interaction; propulsion integration; compound helicopter; tip-mounted propeller; eVTOL; swirlrecovery-vanes; computational fluid dynamics; wind-tunnel testing","en","doctoral thesis","","978-94-6366-332-8","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:228d9463-2c98-4cb6-b7f0-ac274e890edd","http://resolver.tudelft.nl/uuid:228d9463-2c98-4cb6-b7f0-ac274e890edd","High speed electronics for SPAD image sensors used in TimeofFlight applications","Carimatto, A.J. (TU Delft Quantum Circuit Architectures and Technology)","Charbon-Iwasaki-Charbon, E. (promotor); Delft University of Technology (degree granting institution)","2020","Multi Digital Silicon Photon Multipliers (MDSiPM), as image sensors, are utilized to calculate and estimate the properties of the incident light. These properties include spatial location of hits, intensity or number of photons and time of arrival. Some characteristics can be more important than others depending upon the problem at hand. Among endless applications where MDSiPMs are used for, Positron Emission Tomography and LiDAR are the two that this thesis is focused on. Positron Emission Tomography is an imaging technique to monitor functional information about tissue and organs, including early cancer lesions. This constitutes the main difference with structural techniques such as radiography, where, by means of Xrays, a projection of a section of the body under test is obtained.","CMOS; SPAD; TDC; PET; LiDAR; ARRAY; ANN","en","doctoral thesis","","","","","","","","2022-04-01","","","Quantum Circuit Architectures and Technology","","",""
"uuid:a04fe58f-85b2-4592-9a1e-42080b6863d1","http://resolver.tudelft.nl/uuid:a04fe58f-85b2-4592-9a1e-42080b6863d1","West European home ownership sectors and the Global Financial Crisis","Dol, C.P. (TU Delft Architecture OTB)","Elsinga, M.G. (promotor); Hoekstra, J.S.C.M. (copromotor); Delft University of Technology (degree granting institution)","2020","The Global Financial Crisis (GFC) had a severe impact on West and South European economies and in 2009, GDP declines ranged from -5.6% in Germany to -3.6% in Spain. Against the background of strong GDP declines it is quite remarkable that European housing market indicators showed strong international variability. Whereas transaction levels in Germany and Belgium remained quite stable, transactions plummeted in the UK, the Netherlands, Ireland and Spain. Furthermore, repossessions rapidly increased to well over 100,000 cases in the UK and Spain in the first years of the crisis, while in Ireland, Belgium and the Netherlands, approximately 10,000 owner-occupiers lost their homes. In Germany, repossession levels were actually on the decline after economic turmoil of the early 2000’s. This raises questions about the backgrounds of these international variations in the impact of the GFC. What factors play a role in the German and Belgian immunity of housing transactions and repossessions to the GFC? Furthermore, what measures have been taken in those countries where housing markets suffered most from the impact of the GFC in terms of transactions and repossessions? The overarching objective of this thesis is to gain an improved understanding of factors that determine the impact of the Global Financial Crisis on national home ownership markets. A related objective is to find how societies in the most affected countries have responded to the problems...","","en","doctoral thesis","","","","","","","","","","","Architecture OTB","","",""
"uuid:f3f07a75-223a-43be-9da4-d063bee67f56","http://resolver.tudelft.nl/uuid:f3f07a75-223a-43be-9da4-d063bee67f56","Advanced Light Management in Thin-Film Solar Cells","Vismara, R.","Zeman, M. (promotor); Isabella, O. (promotor); Delft University of Technology (degree granting institution)","2020","The coming years will see humanity facing significant challenges to ensure its continued survival. The threat of global warming to humans and the environment – exacerbated by rapidly growing population and energy demand – requires quick and decisive actions. Among them, increasing the generation of electricity from renewable resources is paramount to mitigate the effects of climate change. Photovoltaic (PV) energy conversion can be one of main technologies that propels the transition from fossil fuels towards a more sustainable future.
In recent years, the deployment of photovoltaic systems has increased at an astounding pace, with more than 100 GWp of power installed during each of the last three years. However, further expansion of PV installations cannot solely rely on increasing industrial production, but should also be supported by research aimed at increasing the efficiency of PV devices and reducing their manufacturing costs. One of the key aspects of photovoltaic energy conversion is the absorption of light. By increasing the amount of solar energy that is absorbed inside PV devices, the efficiency of solar cells can be boosted. This is particularly true for thin-film structures, which due to their limited thickness struggle to effectively absorb photons. Light management indicates all the techniques aimed at maximising light absorption inside photovoltaic devices and is the main topic of this manuscript. The goal of this work is to investigate and optimise light management approaches – based on periodic structures – applied to different thin-film device technologies, and through this analysis provide guidelines for the design of photovoltaic devices and gain an insight into their optical performance.
After describing the theoretical background in chapter 1 and the methodology in chapter 2, chapter 3 begins the study of light management approaches by investigating nanowire arrays applied to thin-film nano-crystalline silicon solar cells. A proof-of-concept device was manufactured to ensure the feasibility of the proposed novel approach. Then, simulations were used to optimise the nanowire array structure. Results showed the good anti-reflective and scattering properties of nanowires, which are able to significantly boost absorption in the nano-crystalline silicon active layer.
In chapter 4, the analysis shifts to periodic metasurfaces and the achievement of perfect absorption in amorphous silicon solar cells. By tuning the size and arrangement of the dielectric nanostructures that make up the metasurface, near 100% absorption can be achieved in the spectral region where amorphous silicon struggles to efficiently absorb incident photons. Compared to flat devices, the performance is significantly increased, despite a reduction of more than 50% in the amount of material used.
In chapter 5, a thorough investigation of periodic gratings for Cu(In,Ga)Se2 solar cells (CIGS) is carried out, complete with the selection of appropriate supporting materials to reduce the device thickness with a minimal sacrifice in performance. The accuracy of 3-dimensional rigorous modelling in predicting the performance of real CIGS devices was demonstrated for the first time. Then, a full study of 1-D and 2-D gratings was conducted – together with an analysis of device architectures that employ more transparent materials at the front and highly reflective metals at the back side. Results showed a marked increase in light absorption, mostly owing to a lower device reflectance and to reduced parasitic absorption in all supporting layers. The high optical performance was maintained when the thickness of the CIGS layer was reduced by 60%, which is crucial to reduce the utilisation of indium in the device and the costs associated with it.
In chapter 6, the concept of front/back pyramidal textures with different geometries is introduced and fully explored. Its application to (nano-)crystalline silicon absorbers or to supporting layers is compared, showing a preference for the former. After careful study of the decoupled texture geometry, an optimised
optical performance beyond the traditional Lambertian scattering limit was achieved.
In chapter 7, the concept of double front and back textures is analysed further, by applying it to cheap and abundant barium silicide (BaSi2). The optical potential of this novel PV material was first characterised with spectroscopy measurements, then assessed in both single- and multi-junction configurations with the aid of rigorous optical simulations. Results showed that BaSi2 outperforms thin-film silicon absorbers, owing to its higher absorption coefficient. This highlights the promise of this novel material, which can be an ideal candidate for both single- and double-junction thin-film devices.","Photovoltaic; Thin Film; Solar Cells; Light Management; CIGS; Barium Silicide; Perfect Absorption; Nanowires","en","doctoral thesis","Ipskamp","978-94-6421-111-5","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:67b8a7e6-2a10-49c1-8ec6-c922259ef5d9","http://resolver.tudelft.nl/uuid:67b8a7e6-2a10-49c1-8ec6-c922259ef5d9","Novel adherend laminate designs for composite bonded joints","Kupski, J.A. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Teixeira De Freitas, S. (copromotor); Delft University of Technology (degree granting institution)","2020","Adhesive bonding is one of the most suitable joining technologies in terms of weight and mechanical performance for current carbon fiber reinforced polymer aircraft fuselage structures. However, traditional joint topologies such as single overlap joints induce high peel stresses, resulting in sudden failure and low joint strength when compared to metal adherends. This drawback in using carbon fiber reinforced polymer is hindering their performance and efficiency in full-scale structures where joints are essential. In this thesis, novel design concepts are proposed to tackle the challenge of poor out-of-plane properties of composite adherends that limit the performance of composite single lap bonded joints, by making use of the three design parameters: stacking sequence, ply thickness and overlap stacking. Design parameters of carbon fiber reinforced polymer bonded joints can be classified in three categories: Global topology relates to the global geometry of the joint, for example whether it is a single or a double overlap joint topology. Local topology refers to features that affect only a local region of the entire bond line, for example a certain spew fillet geometry or a tapered edge of the adherend. The third category describes any design parameters which are related to the specific materials of the adhesive and the adherends. The adherends themselves consist of laminated plies, which can be tailored, for example in terms of ply thickness or stacking sequence. These laminate specific design parameters are the core of this work. 
For all adherend laminate designs studied in the context of this thesis, the following approach is chosen: Single lap bonded joints were manufactured varying the design features (stacking sequence, ply thickness and/or overlap stacking). The experimental campaign consisted of quasi-static tensile tests using Acoustic Emission and Digital Image Correlation to monitor the damage and strain evolution of the overlap area during testing. 3D post-mortem failure analysis of the fracture surfaces was conducted using a 3D profiling microscope. Parallel to the experiments, a Finite Element Analysis was performed up to damage initiation, taking into account non-linear geometry and elasto-plastic behaviour of the adhesive. Damage initiation loads and strain fields are numerically predicted and validated with experimental data. Stacking sequence: Single overlap bonded joints with four different composite adherend stacking sequences are tested and numerically simulated, in order to evaluate the effect of the layups on the quasi-static tensile failure of the bonded joints. The results show that increasing the adherend bending stiffness postpones the damage initiation in the joint. However, this is no longer valid for final failure. The ultimate load is influenced by how the damage progresses from crack initiation up to final failure. For similar bending stiffness, a layup that leads to the crack propagating from the adhesive towards the inside layers of the composite increases the ultimate load. The failure mode is highly influenced by the orientation of the interface lamina in contact with the adhesive, such that a 0° interface ply causes failure within the bond line, while a 90° interface ply causes failure inside the composite adherend.
Ply thickness: Another way to improve the out-of-plane properties of the laminate is to decrease its ply thickness. Single lap bonded joints with three different ply thicknesses of 200 μm, 100 μm and 50 μm are tested. Experimental results show an increase of 16% in the lap shear strength and an increase of 21% in the strain energy when using the 50 μm instead of 200 μm ply thicknesses. Acoustic Emission measurements show that the damage initiation is postponed up to a 47% higher load when using 50 μm instead of 200 μm ply thicknesses. Moreover, the total amount of acoustic energy released from initiation up to final failure is significantly less with thin plies. A failure analysis of the numerical results up to damage initiation indicates that with decreasing ply thickness, the damage onset inside the composite is postponed to higher loads and moves away from the adhesive interface towards the mid-thickness of the adherend.
Overlap stacking: In a third approach a change of global joint topology is achieved with multiple stacked overlaps, also referred to as finger joints, by using the ply interleaving technique. The quasi-static tensile behavior of single lap joints with two overlap lengths, 12.7 mm and 25.4 mm, is compared to finger joints with 1 and 2 stacked overlaps through the thickness with a constant 12.7 mm overlap length. Two composite adherend stacking sequences, [(0/90)s]4 and [(90/0)s]4, are tested for each topology. A difference in peak shear and peel stress at the tip of the bonded region can be observed: (i) the peak peel stress in the 1-finger joint is higher than in the single lap joint configurations because the beneficial effect of avoiding eccentricity in the finger joint is outperformed by the detrimental effect of halving the adherend stiffness at the overlap; (ii) for 2 fingers, the stress field changes significantly with a doubled bonding area and leads to a 23% decrease in peak shear and 33% in peak peel stress, compared to the single lap joint topologies. It is concluded that a quasi-isotropic layup may not be the best choice in terms of tensile joint strength. In order to improve tensile strength up to damage initiation, the layup should be optimized for bending stiffness, while up to final failure, a stacking sequence that leads to a complex crack path inside the composite can lead to higher ultimate loads. Decreasing the single ply thickness of laminated composite adherends in a single overlap bonded joint increases the maximum load and delays damage initiation of the joint. However, the damage progression until final failure is more sudden. Comparing single overlap with finger joint topologies, different trends at damage initiation and at maximum load are believed to result from how the damage propagates inside the joint. 
A topology with 2 fingers and layup [(90/0)s]4, which fails entirely inside the adherend, provides the lowest peak shear and peel stress and the highest load at damage initiation. It is however outperformed in maximum load by a single lap joint topology with layup [(0/90)s]4, with mostly cohesive failure. It is found that, unlike in single overlap topologies, the most dominant stress component for damage initiation inside the finger joints is the in-plane tensile stress, at the butt joint resin pockets, rather than peel stresses at the overlap region. If weight efficiency is the main requirement, a finger joint design can effectively replace a single overlap joint design. However, for absolute maximum joint strength, the single overlap joint is a better choice than the finger joint. In total, all three approaches lead to an increase in joint strength, either until damage initiation (Chapters 3, 4, 5) or until final failure (Chapter 3).","CFRP; Adhesive bonding; Lap joining","en","doctoral thesis","","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:6806500b-6ed9-4d94-a4d7-17965cfc9ca0","http://resolver.tudelft.nl/uuid:6806500b-6ed9-4d94-a4d7-17965cfc9ca0","Dynamic, Stochastic, and Coordinated Optimization for Synchromodal Matching Platforms","Guo, W. (TU Delft Transport Engineering and Logistics)","Negenborn, R.R. (promotor); Beelaerts van Blokland, W.W.A. (copromotor); Atasoy, B. (copromotor); Delft University of Technology (degree granting institution)","2020","With the increasing volumes of containers in global trade, efficient global container transport planning becomes more and more important. To improve the competitiveness in global supply chains, stakeholders turn to collaborate with each other at vertical as well as horizontal level, namely synchromodal transportation. Synchromodality is the provision of efficient, effective, and sustainable transport plans for all the shipments involved in an integrated network driven by advanced information technologies. However, the decision-making processes of a global synchromodal transport system is very complex. First, time-dependent travel times caused by traffic congestion need to be considered. Second, a dynamic approach that handles real-time shipment requests in a synchromodal network is required. Third, spot requests received from spot markets are unknown in advance. Fourth, travel time uncertainty is not handled yet for global synchromodal transport networks. Fifth, distributed approaches that stimulate cooperation among multiple stakeholders involved in global container transportation are still missing. This thesis addresses the above-mentioned challenges with dynamic, stochastic, and coordinated models.","","en","doctoral thesis","TRAIL Research School","979-90-5584-273-5","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:11b45415-5342-4efa-aaf5-69592076cb3f","http://resolver.tudelft.nl/uuid:11b45415-5342-4efa-aaf5-69592076cb3f","Creating a New Perspective by Integrating Frames Through Design: An Exploratory Research into the What, Why, and How of Integrated Design","Visser, J.L. (TU Delft Integral Design & Management)","Hertogh, M.J.C.M. (promotor); Badke-Schaub, P.G. (promotor); Delft University of Technology (degree granting institution)","2020","Integrated design is a frequently used term in academics and practice. However, a common ground in the application of terminology, methodology, and description of insights from practice with respect to integrated design is lacking. Therefore, this dissertation describes a framework for integrated design that can be used as a platform for discussion and development. The framework is the result of the exploratory research into the what, why, and how of integrated design, and can be used by anyone who has to deal with integrated design and would like to have more grip on this concept.","","en","doctoral thesis","","978-94-6384-172-6","","","","","","","","","Integral Design & Management","","",""
"uuid:2e0c5ecb-b049-45a6-8175-6b6a47101dc1","http://resolver.tudelft.nl/uuid:2e0c5ecb-b049-45a6-8175-6b6a47101dc1","Fingermarks, beyond the source: What their composition may reveal about the donor","van Helmond, W. (TU Delft OLD ChemE/Organic Materials and Interfaces)","van Esch, J.H. (promotor); Sudhölter, Ernst J. R. (promotor); de Poot, C.J. (promotor); de Puit, M. (copromotor); Delft University of Technology (degree granting institution)","2020","fingerprints are a commonly exploited type of evidence and can be crucial in a criminal investigation. The process of individualization or exclusion of a donor relies on the comparison of ridge detail characteristics between a fingermark, found at a crime scene, and reference fingerprints, collected under controlled conditions (either or not stored in a database). Although this process has been successfully used for over a century, fingermarks found at a crime scene are of limited value for a criminal trial if the corresponding reference fingerprint is not available, or the found fingermark is of poor quality. Fingerprints consist of donor secretion, mainly eccrine and sebaceous, of which the exact composition is likely influenced by many (both endogenous and exogenous) factors, including donor traits, habits and activities. Analysis of the chemical composition could thus potentially lead to the retrieval of donor information from those fingerprints that yielded no information in the traditional comparison process. The main aim of this dissertation was to determine what donor information can reliably and validly be derived from the chemical analysis of the fingerprint composition, in order to be used in forensic investigations...","fingerprints; mass spectrometry; compositional analysis","en","doctoral thesis","","978-94-6421-099-6","","","","","","","","","OLD ChemE/Organic Materials and Interfaces","","",""
"uuid:07641fdf-5c87-4417-ad7c-e4232bd49570","http://resolver.tudelft.nl/uuid:07641fdf-5c87-4417-ad7c-e4232bd49570","Visual Navigation and Optimal Control for Autonomous Drone Racing","Li, S. (TU Delft Control & Simulation)","de Croon, G.C.H.E. (promotor); de Visser, C.C. (copromotor); Delft University of Technology (degree granting institution)","2020","Drones, especially quadrotors, have shown their great value for applications like aerial photography, object delivery and warehouse inspection. At the same time, with the de- velopment of Artificial Intelligence (AI), computers can replace humans and even per- form better than humans in some areas where it was impossible before like the AI pro- gram Alpha Go which beat the human world champion in Go matches and Alpha star which was rated above 99.8% human players in the real-time strategy game StarCraft II. Concerning drones, the question is whether they can fly races completely by themselves and if they can fly even faster than human pilots’ racing drones? Although there exist many technologies for drones to fly autonomously in terms of navigation, guidance and control, autonomous drone racing still sets an enormous chal- lenge for the robotics community. For example, the most commonly used vision camera based navigation technologies such as Simultaneous Localization and Mapping (SLAM) and Visual Inertial Odometry (VIO) suffer motion blur when the drone moves fast and high computational demand which is scarce onboard the drone. Moreover, the com- monly used PID controller has no guarantees of optimality while much parameter tuning is required. Many other challenges like these require new technologies to satisfy more complex and challenging flying scenarios to challenge humans in drone races. This thesis attempts to answer the question mentioned above. First of all, this the- sis presents 2 systematic solutions for autonomous drone racing including navigation, guidance and control techniques. 
The solutions are computationally so efficient that they can run onboard a Bebop 1 quadrotor (made in 2014), without using the GPU, and on a cheap 72-gram quadrotor called the ’Trashcan’. With the constraints of the processing power and cheap onboard sensors, the Bebop can fly through 15 gates in a complex scenario with an average speed of 1.5 m/s, and the Trashcan can fly through a 4-gate racing track for 3 laps with an average speed of 2 m/s. Both solutions helped the MAVLab, TU Delft, participate in the IROS autonomous drone racing in 2017 and 2018. In terms of visual navigation, a computationally efficient gate detection method, ’snake gate’, is developed to detect the racing gate during the flight. Together with a revised version of the Perspective-3-Point (P3P) method, the detection results are used to provide location information for the drone. A Kalman filter is developed to fuse these detections with the onboard IMU readings. Unlike the traditional Kalman filter, this version deduces the velocity from the accelerometer readings by a linear drag model approximation instead of integrating the accelerometer readings. In this way, the Kalman filter has a faster convergence rate. Another filtering method, Visual Model-predictive Localization (VML), is also developed to fuse the vision detections and onboard attitude estimation. The simulation and real-world flight results show that the VML is more robust to outliers than the commonly used Kalman filter, especially when there are invalid measurements. Also, the VML is more efficient than the Kalman filter in handling measurement delays. Finally, a gradient-descent-based parameter estimation method is developed to estimate the quadrotor’s aerodynamic coefficients and the Attitude and Heading Reference System (AHRS) biases using the visual measurements and the onboard state predictions. With the estimated parameters, the quadrotor can have a better state prediction when no visual measurement is available for some time. 
In terms of guidance and control, a novel neural-network-based nonlinear optimal controller, G&CNet, is developed to steer the drone to the target in minimum time. This G&CNet moves the time-consuming nonlinear controller onboard and can run at 200 Hz to map the current states to the optimal control policy calculated offboard. The simulation results show that the flying result is very close to the theoretical nonlinear optimal control solution. Both simulation and real-world flying results show that it achieves faster flights than a commonly used polynomial-based trajectory generation and tracking method. Last but not least, the methods provided can be generalized to other applications. For example, for outdoor flight where the Global Positioning System (GPS) is available for navigation, the vision measurements can be directly replaced by the GPS signals in the proposed navigation strategies, and they should work directly. The proposed G&CNet should work in all scenarios where the guidance and control modules are needed to move the drone from one point to another. In this way, the proposed methods allow drones to move faster in a robust way, extending their mission capabilities.","Autonomous drone racing; visual navigation; nonlinear model-predictive control","en","doctoral thesis","","978-94-6384-175-7","","","","","","","","","Control & Simulation","","",""
"uuid:1d92d61c-c124-4e7b-903e-bce246410bba","http://resolver.tudelft.nl/uuid:1d92d61c-c124-4e7b-903e-bce246410bba","Explaining Robot Behaviour: Beliefs, Desires, and Emotions in Explanations of Robot Action","Kaptein, F.C.A. (TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); Hindriks, K.V. (promotor); Broekens, D.J. (copromotor); Delft University of Technology (degree granting institution)","2020","Social humanoid robots are complex intelligent systems that in the near future
will operate in domains including healthcare and education. Transparency of what robots intend during interaction is important. This helps the users trust them and increases a user’s motivation for, e.g., behaviour change (health) or learning (education). Trust and motivation for treatment are of particular importance in these consequential domains, i.e., domains where the consequences of misuse of the system are significant. For example, rejecting treatment can have a negative impact on the user’s health. Transparency can be enhanced by having the robot explain its behaviour to its users (i.e., when the robot provides self-explanations). Self-explanations help the user to assess to what extent he or she should trust the decision or action of the system.","","en","doctoral thesis","","978-94-6423-040-6","","","","","","","","","Interactive Intelligence","","",""
"uuid:49700510-47b3-4450-85da-c99b4d14878f","http://resolver.tudelft.nl/uuid:49700510-47b3-4450-85da-c99b4d14878f","Design of efficient magnetocaloric materials for energy conversion","You, X. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Brück, E.H. (promotor); van Dijk, N.H. (promotor); Delft University of Technology (degree granting institution)","2020","The magnetocaloric effect (MCE) is a magneto-thermodynamic phenomenon in which a temperature change of a material is caused by exposing the material to a changing magnetic field under adiabatic conditions. There are two main applications based on the MCE. One application is magnetic refrigeration, which can expel heat in a magnetic field cycle. Another application is magnetic energy conversion in thermomagnetic motors/generators, which can transfer waste heat into kinetic/electric energy. Gadolinium metal is the standard reference material for the application of the MCE. However, it has a limited MCE with a second-order magnetic transition. Several intermetallic material systems with first-order magnetic transition resulting in a giant MCE have been discovered, including La(Fe,Si)13 based alloys, MnFeP(As, Ge, Si) alloys and Ni-Mn-based Heusler alloys. To design a magnetocaloric material that is suitable for applications, first of all, requires an estimated recipe, which can be obtained from the phase diagram. Secondly, an appropriate synthesis route should be chosen. Thirdly, the stoichiometry of the material should be optimised to avoid impurity phases. For the energy conversion applications, the desired material should preferentially be in the vicinity of the border between a first-order magnetic phase transition (FOMT) and a second-order magnetic phase transition (SOMT). If it is a FOMT or SOMT, the formula can be adjusted by changing the heat treatment, the element ratios and introducing new elements, until the transition is close to the critical point (CP). 
Finally, the transition temperature needs to be checked to see whether it is in the designed working temperature range. If not, the recipe needs to be adjusted until an optimised material is found. Experimental diagrams of the ferromagnetic transition temperature (TC) and the thermal hysteresis as a function of composition were constructed in the (Mn,Fe)2(P,Si) system as a guide to estimate suitable compositions for applications. The structure change across the magnetic phase transition is coupled with the thermal hysteresis of the magnetic transition in the experimental diagram. Both Mn-rich samples and Fe-rich samples with a low Si concentration were found to show a low hysteresis that can form promising candidates for applications in a thermomagnetic motor. The effect of V substitution for Fe is investigated in the Mn0.7Fex-zVzP0.6Si0.4 alloys. The (Mn,Fe)1.91(P,Si) stoichiometry was chosen as a starting point to obtain the smallest impurity content. For an increasing V content the a axis expands and the c axis shrinks (together with the c/a ratio), whereas the unit-cell volume remains about constant. The ferromagnetic transition temperature TC decreases with increasing V content. In the Mn0.7Fe1.18V0.03P0.6Si0.4 compounds, 93% of the saturation magnetisation at 5 K was reached in an applied magnetic field of 0.5 T, which makes this compound a promising candidate for low-field applications. The heat treatment clearly affects the amount of the impurity phase, and thereby the composition of the main phase. In this case, oven-cooled samples contain a larger impurity phase fraction than the quenched samples, which results in a lower transition temperature. The currently applied methods to classify FOMT and SOMT materials were applied and compared using a series of samples Mn1.3Fe0.7P1-ySiy (y = 0.4, 0.5 and 0.6). The FOMT samples are easy to categorise. Every criterion shows that the y = 0.4 and 0.5 samples are FOMT materials. However, the SOMT and CP samples are problematic. 
In this thesis, different criteria were found to result in different conclusions for the y = 0.6 sample. From the latent heat, the y = 0.6 sample is predicted to undergo a FOMT. From the XRD data and the field dependence of TC, the y = 0.6 sample is right on the CP. However, based on the Arrott plots, the gradual field dependence of the entropy change and the newly proposed field exponent n, the y = 0.6 sample is a SOMT material (but in close proximity to the CP). The structural, magnetic and electronic properties of LaFe11.8-bCobSi1.2 (b = 0.25, 0.69 and 1.13) compounds are studied. With increasing Co content, the material is tuned from a FOMT towards a SOMT, TC increases, and the thermal hysteresis remains negligible. In the unit cell, the most remarkable change in bond length is between the 8b and 96i sites and for one of the bonds between two neighbouring 96i sites. The negative thermal expansion across the transition correlates with the angle change in the orientation of the cage formed by the atoms on the 96i sites within the cubic unit cell. The experimental electron density maps reveal how the cage rotates within the cubic primitive cell. The samples with a smaller Co content show a larger change in the electron density compared to the sample with the highest Co content when TC is crossed. The choice of synthesis method plays an important role in the physical properties of the prepared materials. For lab-scale samples, the most common way to synthesise (Mn,Fe)2(P,Si) compounds is ball milling. For Ni-Mn based Heusler alloys, the most common synthesis route is arc-melting. In this thesis, ball milling was applied to synthesise Ni-Mn based Heusler alloys. The advantage of ball milling is that the annealing time can be shortened. Based on the optimised sample fabrication, the maximum magnetisation can be tuned by adjusting the Ni/Mn and Mn/Sn ratios. 
Introducing small amounts of cobalt and aluminium leads to a significant increase in the magnetisation.","Magnetocaloric materials; phase transition; phase diagram; synthesize; electron density map; synchrotron; charge distribution; Fe2P compounds; LaFeSi compounds; NiMnSn compounds","en","doctoral thesis","","978-94-6384-174-0","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:862e11b6-4018-4f63-8332-8f88066b0c5c","http://resolver.tudelft.nl/uuid:862e11b6-4018-4f63-8332-8f88066b0c5c","Consistent thermosphere density and wind data from satellite observations: A study of satellite aerodynamics and thermospheric products","March, G. (TU Delft Astrodynamics & Space Missions)","Visser, P.N.A.M. (promotor); van den IJssel, J.A.A. (copromotor); Delft University of Technology (degree granting institution)","2020","The German CHAMP, US/German GRACE, and European Space Agency (ESA) GOCE and Swarm Earth Explorer satellites have provided a data set of accelerometer observations allowing the derivation of thermospheric density and wind products for a period spanning more than 15 years. With the advent of highly accurate satellite accelerometer measurements, the neutral density and wind characterization has been significantly improved. These observations provided detailed information on the thermospheric forcing by Solar Extreme Ultraviolet radiation and charged particles, and revealed for the first time the extent of forcing by processes in lower layers of the atmosphere. Because the focus of most of previous research was on relative changes in density, the scale differences between the CHAMP, GRACE, GOCE and Swarm data sets, so far, have been largely ignored. These scale differences originate from errors in the aerodynamic modelling, specifically in the modelling of the gas-surface interactions (GSI) of the satellite. Once detailed 3D geometry models of these satellites are available, the key parameters to describe the satellite aerodynamics can be estimated by cleverly making use of variations in satellite orientation and simultaneous observations by multiple satellites. The first step for obtaining more consistent density and wind data sets consisted of meticulously modelling the satellite outer surface. 
For this dissertation work, this was done by collecting information from technical drawings and pre-launch pictures, and generating a CAD model of the selected satellites. In the following phase, these geometries were given as input to a rarefied gas-dynamics simulator. The Direct Simulation Monte Carlo approach was used with the SPARTA software to compute the force coefficients under different conditions of satellite speed, atmospheric temperature and local chemical composition. Once all the mission scenarios had been simulated, an aerodynamic data set was generated and applied in the processing of satellite accelerations into thermospheric density and wind data products. To this aim, the Near Real-Time Density Model (NRTDM) software, developed at TU Delft, was used. The data were generated from accelerometer observations and, when necessary, with the help of GPS-based accelerations estimated by a Precise Orbit Determination (POD) technique. Multiple comparisons were performed with empirical and physics-based models. This helped in determining under which conditions the models perform better, and also which model features would need further development. In the second step, the interaction between atmospheric particles and satellite surfaces was investigated. The way in which atmospheric particles collide with the satellite surfaces has a large influence on the satellite aerodynamic forces and, if proper assumptions are not implemented, can produce large discrepancies in the final thermospheric products. Initially, the GSI assumptions were selected in agreement with the fully diffusive reflection mode. This assumption was adopted to exclusively investigate the influence of geometry modelling on thermospheric products. Later, to cover this research area as well, multiple simulations described different reflection modes.
A wide range of GSI parameters was investigated, and more optimal values were found allowing the derivation of new consistent thermospheric products. Within this study, the energy accommodation coefficient, which describes the energy exchange between particles and satellite surfaces, played a crucial role. Although the value of 0.93 is used commonly in the literature, in this study lower values were identified as optimal. Indeed, a value of 0.82 for the GOCE satellite, and a value of 0.85 for the Swarm and CHAMP satellites have been found to provide more consistent thermospheric data. This resulted in new improved thermospheric density and wind data sets, which have been made available to the scientific community. Among the possible applications, these data can be used for data assimilation for improving current atmospheric models. Resolving the problem of deriving the true absolute thermosphere density scale from satellite dynamics measurements improves orbit predictions for the space debris population and its long-term evolution. Moreover, the new capabilities for computing more consistent drag, density and wind, can also be exploited for future missions that are currently in the design phase.","Thermosphere; satellite drag; Thermospheric neutral density; Thermospheric wind; Gas-surface interaction","en","doctoral thesis","","978-94-6421-079-8","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:a1bb9cd0-6eef-4665-a910-969d55667f35","http://resolver.tudelft.nl/uuid:a1bb9cd0-6eef-4665-a910-969d55667f35","Optimizing hydrographic operations for bathymetric measurements using multibeam echosounders","Mohammadloo, Tannaz H. (TU Delft Aircraft Noise and Climate Effects)","Simons, D.G. (promotor); Snellen, M. (promotor); Delft University of Technology (degree granting institution)","2020","Detailed information about the sea and river bed is of high importance for a large number of applications, such as marine geology, coastal engineering, safe navigation and offshore construction. Acoustic remote sensing techniques have become extremely attractive for obtaining bathymetry measurements and for mapping the sediment properties, due to their high coverage capabilities and relatively low costs. Among the available tools for remotely mapping the seafloor, the MultiBeam EchoSounder (MBES) is a state-of-the-art technology enabling the acquisition of high-resolution bathymetry measurements within a relatively short time period. Despite the widespread use of MBESs for hydrographic operations and the considerable efforts devoted to optimizing these operations, the existing knowledge with regard to the measurement capabilities of the MBES is lacking in some respects. This can lead to an unreliable and inaccurate representation of the seafloor and/or unrealistic estimates of the measurement uncertainties. Moreover, realistic pre-survey predictions of the contribution of the various uncertainty sources affecting the quality of the bathymetric measurements are of importance to ensure sufficient accuracy of the soundings and a correct interpretation of the sediment properties. This thesis thus aims at bringing the insight into the MBES measurement capabilities to a new stage by addressing these issues.
The contribution of this thesis to the field of MBES bathymetric mapping is to bring the knowledge of the MBES measurement capabilities to a stage at which hydrographic operations can be optimized. This leads to a reliable and accurate representation of the bottom and a realistic expectation of the associated uncertainties. Optimizing hydrographic operations is accomplished by correcting the systematic errors (if present), using a realistic depth uncertainty prediction model and ensuring a proper distribution of the soundings with low measurement uncertainties. Addressing these issues allows for realistic bathymetry maps and needs to be accounted for in survey planning.","Multibeam echosounder; Bathymetric measurements; Bathymetric uncertainty prediction; Bathymetry gridding; Erroneous water column sound speed profile; Doppler effect; Baseline decorrelation","en","doctoral thesis","","","","","","","","","","","Aircraft Noise and Climate Effects","","",""
"uuid:c84d75e5-6e49-466e-8ed6-9beb183d0d34","http://resolver.tudelft.nl/uuid:c84d75e5-6e49-466e-8ed6-9beb183d0d34","Effectiveness of bank filtration for water supply in arid climates: a case study in Egypt","Abdelrady, Ahmed (TU Delft Water Resources; TU Delft Sanitary Engineering)","Kennedy, M.D. (promotor); Sharma, S.K. (copromotor); Delft University of Technology (degree granting institution)","2020","In many developing countries, water demand is increasing while surface- and groundwater resources are threatened by pollution and overexploitation. Hence, a more sustainable approach to water resources management and water treatment is required. In this context, bank filtration (BF) is a natural treatment process that makes use of the storage and contaminant attenuation capacity of natural soil. However, BF is site-specific and a significant knowledge gap exists regarding the design and management of bank filtration systems, particularly in developing countries. This research aimed to address these gaps and contribute to the transfer of bank filtration to developing countries. This study comprised both column and batch laboratory-scale experiments to determine the effect of environmental variables such as temperature, raw water organic composition and redox conditions on the removal of chemical pollutants such as organic matter, micro-pollutants and heavy metals, as well as the mobility of iron, manganese and arsenic under anaerobic conditions. Ultimately, the effectiveness of BF in improving the quality of drinking water was assessed in a case study in Egypt. The study showed that more than 80% of biodegradable organic matter was removed during BF at temperatures between 20 and 30 °C. However, post-treatment is required to remove humic compounds that were enriched during infiltration.
Moreover, infiltrating water with a high concentration of humic compounds reduced the removal of heavy metals and promoted the release of metals into the infiltrating water, rendering it more feasible to install BF wells in surface water systems with low levels of organic matter. Moderately hydrophobic organic micropollutants were most persistent and required infiltration times in excess of 30 days for complete elimination, even at high temperatures (>20 °C). Finally, design parameters such as the number of infiltration wells, should be configured to minimise the proportion of polluted groundwater in the pumped water. Overall, this study provides insight into the effectiveness of BF in removing chemical pollutants from surface water and proposes guidelines for the successful application of BF in developing countries where arid conditions and high temperatures prevail.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-74673-5","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of IHE Delft Institute for Water Education.","","","","","Water Resources","","",""
"uuid:31377889-a90e-4539-8837-da7aa51098e6","http://resolver.tudelft.nl/uuid:31377889-a90e-4539-8837-da7aa51098e6","Towards an Architecture of Self-reliance: Developing and Testing a Support Tool for Inhabitants and Practitioners in Mt-Elgon, Kenya","Smits, M.W.M. (TU Delft Situated Architecture)","Avermaete, T.L.P. (promotor); Quanjel, E.M.C.J. (promotor); Delft University of Technology (degree granting institution)","2020","This research project focuses on how decisions made by practitioners, articulating rural housing in Sub-Saharan Africa, contribute to the decreasing level of self-reliance inhabitants have regarding their housing. Multiple case studies on Mt. Elgon proved that inhabitants have a significantly higher level of self-reliance in traditional housing than in modern housing. To study this phenomenon in practice and to articulate suitable design support, the Design Research Methodology was chosen. The research clarification pinpointed inhabitant capacities as the key contributor to self-reliant housing. Household survey outcomes proved that large numbers of rural inhabitants desire housing for which they have insufficient capacities. This indicates that the inhabitants experience a disparity between existing and desired housing capacities, are unable to bridge this disparity independently, and consequently require external help. Architects seemed most appropriate to offer this help, as it consists of sociocultural, engineering and design tasks. Architects are not trained in inhabitant capacity evaluation and, as no suitable design tools existed, this research project developed the required design support, its application requirements and the impact measurements. These were then tested in a pilot project on Mt. Elgon. The findings were used to evaluate the support’s impact and suitability. The support tool users found it suitable to assess and integrate inhabitant capacities into housing solutions.
The impact shows that the support group families have sustained their family’s level of self-reliance unlike the control group. The developed technological design, with modifications, could be used not only in rural Kenyan communities, but also help others around the continent.","Self-reliance; Rural Housing; Inhabitant capacities; Design support","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-334-2","","","","A+BE I Architecture and the Built Environment No 19 (2020)","","","","","Situated Architecture","","",""
"uuid:b5958370-d1b3-434e-ad26-4132b10caf9b","http://resolver.tudelft.nl/uuid:b5958370-d1b3-434e-ad26-4132b10caf9b","Land Degradation in the Dinder and Rahad Basins: Interactions Between Hydrology, Morphology and Ecohydrology in the Dinder National Park, Sudan","Hassaballah, K.E.A. (TU Delft Water Resources)","Uhlenbrook, S. (promotor); Abbas Mohamedali, Y. (copromotor); Delft University of Technology (degree granting institution)","2020","The spatial and temporal variability of the hydro-climate as well as land use and land cover (LULC) changes are among the most challenging problems facing water resources management. Understanding the interaction between climate variability, land use and land cover changes and their links to hydrology, river morphology and ecohydrology in the Dinder and Rahad basins in Sudan is confronted by the lack of climatic, hydrological and ecological data.
This book investigated the impacts of land degradation on the Dinder and Rahad hydrology and morphology, and their interlinkage with the ecohydrological system of the Dinder National Park (DNP) in Sudan. It used an ensemble of techniques to improve our understanding of the hydrological processes and LULC changes in these basins. This included long-term trend analysis of hydroclimatic variables, LULC change analysis, field measurements, rainfall-runoff modelling, and hydrodynamic and morphological modelling of the Dinder river and its floodplain, with a special focus on the Mayas wetlands. Moreover, this research is the first study to investigate the eco-hydrology of the DNP. It is expected that the results of the study will be beneficial to all stakeholders concerned and will support decision-making processes for better management of water resources and ecosystem conservation in the area and possibly beyond.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-68355-9","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of IHE Delft Institute for Water Education.","","","","","Water Resources","","",""
"uuid:6708acd8-92a7-449b-9275-d311bbfb06aa","http://resolver.tudelft.nl/uuid:6708acd8-92a7-449b-9275-d311bbfb06aa","Escherichia coli metabolism under dynamic conditions: The tales of substrate hunting","Vasilakou, E.","van Loosdrecht, Mark C.M. (promotor); Wahl, S.A. (copromotor); Delft University of Technology (degree granting institution)","2020","Dynamic environmental conditions govern microbial metabolism and affect cellular growth. Many applications in biotechnology require cultivating microorganisms in large-scale bioreactors. These environments are commonly characterized by physicochemical gradients, due to imperfect mixing, and have been the cause of reduced performance of cell factories in industry. Changes in substrate and gas concentrations, pH and temperature are some examples of the generated gradients. The aim of this thesis is to unravel and understand the effects of repetitive substrate fluctuations on the cellular behaviour of Escherichia coli K12 MG1655, using experimental and modelling approaches. Chapter 1 is a general introduction to biotechnology and its applications, with a focus on upstream bioprocesses. In addition, the role of the bacterium Escherichia coli as a model organism, as well as a workhorse of biotechnology, is discussed. In Chapter 2, the quantitative experimental and kinetic modelling approaches currently used for studying microbial metabolism under dynamic conditions are summarized and discussed. Current challenges and future perspectives finalize this chapter. In the experimental Chapter 3, a block-wise feeding regime was applied to an aerobic E. coli culture, with the aim of growing cells under substrate (glucose) gradients, following a reference chemostat (steady-state) growth. This regime was called “fast feast-famine”, as cells experienced periods of substrate excess, limitation and depletion on a time-scale of seconds.
The regime was characterized by repetitive cycles of 20 s feeding and 380 s without feeding. The perturbations were applied for at least 8 generations, allowing the cells to adapt to the dynamic environment (highly reproducible cellular response). The specific substrate and oxygen consumption (average) rates increased during the feast-famine regime, compared to the reference steady-state cultivation. The increased rates at the same (average) growth rate led to a reduced biomass yield (30% lower), while there was no significant by-product formation. Such an observation suggests the emergence of energy-spilling reactions. With the increase in extracellular substrate concentration, the cells rapidly increased their uptake rate. Within 10 seconds after the beginning of the feeding, the glucose uptake rate was higher (4.68 μmol/gCDW/s) than reported during batch growth (3.3 μmol/gCDW/s). The high uptake led to an accumulation of several intracellular metabolites during the feast phase, accounting for up to 34% of the carbon supplied. Although the intracellular metabolite concentrations changed rapidly, the cellular energy charge remained homeostatic, suggesting a well-controlled balance between ATP-producing and ATP-consuming reactions. The importance of combining experimental perturbation studies and kinetic modelling, in order to reveal metabolic strategies for coping with dynamic conditions, is highlighted in Chapter 4. There, a published kinetic model for central carbon metabolism by Peskov et al. was used to investigate whether the experimental observations from Chapter 3 could be reproduced with a model originating from steady-state calibration. Only after parameter optimization, with significant changes, could the data be reproduced, highlighting significant alterations in the enzymatic kinetics of glycolysis during feast-famine compared to steady-state growth.
Post transcriptional modifications were assumed to explain the sudden decrease in the substrate uptake rate, observed while glucose was still in excess. To reflect such change in the modelling approach, the feast-famine cycle was split into two phases and the experimental uptake rate was used as fixed input. Nevertheless, this was not yet sufficient to fully reproduce the experimental observations. The time course of the glycolytic intermediates could only be reproduced when introducing glycogen synthesis and assimilation in the model. Here, glycogen acted as a storage pool, providing carbon and energy to reinitiate growth during famine conditions. Furthermore, ATP spilling reactions were needed to reproduce the observed adenylate energy homeostasis. Additionally, a continuous draining of ATP supported the hypothesis of increased maintenance during the feast-famine regime. In Chapter 5, multi-omics approaches, i.e. shotgun cellular proteomics and 13C-labelled metabolomics were used for untargeted analysis and generation of new hypotheses on cellular regulatory mechanisms, when cells were subjected to fluctuations in substrate availability. The extracellular dynamics were expected to trigger global stress responses, in line with the observed reduced biomass yield. Surprisingly, this was not the case – stress related proteins did not alter from steady-state to feast-famine conditions. On the other hand, the cellular proteome adjusted for specific functional categories, including biosynthesis and translation processes (ribosomes). This increase can be explained by either increased protein production to support the rapid growth changes, during the short time of substrate availability, or ribosome stalling due to amino acid limitation during the famine phase. 
During substrate-limited growth (constant feeding) cells have an overcapacity of metabolic enzymes (involved in central carbon pathways), which is used under a nutrient up-shift to handle the rapid increase in metabolic fluxes. The down-regulation of several enzymes in glycolysis, the TCA cycle and the pentose phosphate pathway, as well as transporter proteins, revealed that cells respond more to the substrate excess period than to the starvation period during the block-wise feeding regime. This is also in accordance with the observed down-regulation of the glyoxylate-shunt enzymes. Moreover, the increased levels of polyphosphate kinase indicated the use of a polyphosphate pool as a putative buffer for energy homeostasis. Glycogen production and degradation were verified by the proteomic and 13C tracing analysis and are suggested to contribute to the ATP spilling (biomass yield losses), along with the increased protein turnover, which was identified by an increased fraction of the cellular proteasome. The insights generated throughout the thesis are summarized in Chapter 6. Additionally, open questions are discussed. The future challenges include scale-down experiments, research on the effects of dynamics on production hosts, the use of mutant strains for validation experiments and data integration toward multi-scale modeling.","","en","doctoral thesis","","978-94-6366-329-8","","","","","","","","","OLD BT/Cell Systems Engineering","","",""
"uuid:5594067b-1646-46b1-a3d0-95aec9f5af7a","http://resolver.tudelft.nl/uuid:5594067b-1646-46b1-a3d0-95aec9f5af7a","Advances in Aerosol Instrumentation for Atmospheric Science","Barmpounis, K. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Schmidt-Ott, A. (promotor); Biskos, G. (promotor); Delft University of Technology (degree granting institution)","2020","Understanding the processes related to particle formation in the atmosphere is crucial in order to quantify their effect on climate more accurately. New particle formation is a global-scale phenomenon, through which significant amounts of new particles are introduced into the atmosphere. A main mechanism of new particle formation is ion-induced nucleation, where vapor molecules nucleate on pre-existing atmospheric ions. It follows that in order to understand the mechanism of ion-induced nucleation, research must be focused on the very first steps of formation, which normally lie in the sub-2 nm size range. The work of this thesis is focused on facilitating the research on new particle formation by advancing the state of the art in instrumentation, but also on performing experimental work on the dynamics of ion-induced nucleation.","Aerosol instrumentation; condensation particle counter; differential mobility analyzer; heterogeneous nucleation; sign-preference in ion-induced nucleation","en","doctoral thesis","","","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:7ca82a4e-cb46-458c-a6cd-52c631aa7ed9","http://resolver.tudelft.nl/uuid:7ca82a4e-cb46-458c-a6cd-52c631aa7ed9","Learning from our projects: Evaluating and Improving Risk Management of the Flood Protection Program (HWBP)","Hoseini, E. (TU Delft Integral Design & Management)","Hertogh, M.J.C.M. (promotor); Bosch-Rekveldt, M.G.C. (copromotor); Delft University of Technology (degree granting institution)","2020","The Netherlands has a long history of protecting against flooding and high water levels. Due to climate change and sea level rise, the next flood could have a high impact on this small country. The Hoogwaterbeschermingsprogramma (HWBP), the current largest flood defence program in the Netherlands, makes sure that all the flood defence facilities (dikes, pumps, dunes) in the Netherlands comply with the safety norms. HWBP has two objectives: 1. Increasing the pace of improving the flood defence facilities to 50 km per year. 2. Reducing the average costs of the flood defence improvement to 7 million euro per kilometre. To reach these objectives, special attention should be given to the management of risks in HWBP projects. Current risk management literature mentions that risk management contributes to project success. Despite the benefits of risk management, projects still lack a proper application of risk","","en","doctoral thesis","","978-94-6375-817-8","","","","","","","","","Integral Design & Management","","",""
"uuid:6d23307d-01f1-44ce-9b86-0f1de3b81c3a","http://resolver.tudelft.nl/uuid:6d23307d-01f1-44ce-9b86-0f1de3b81c3a","Climate change and development impacts on groundwater resources in the Nile delta aquifer, Egypt","Ahmed, M.B.M. (TU Delft Water Resources)","Uhlenbrook, S. (promotor); Oude Essink, G.H.P. (promotor); Delft University of Technology (degree granting institution)","2020","Climate change (CC), as predicted by several global climate models, is very likely to have severe impacts in the future, on top of all other global changes. These impacts may have significant influence on natural resources especially surface and groundwater. This influence is particularly problematic for the Mediterranean coastal areas, and especially the northern Nile Delta Aquifer (NDA), where both natural and socio-economic resources of significant values are located. Moreover, population increase and development imperatives create additional pressure on the available water resources. These conditions may eventually lead to insufficient coverage of the needed water demands for agriculture, domestic usage as well as urban and industrial development,
unless adaptation and mitigation measures are developed ahead of time.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-68345-0","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of IHE Delft Institute for Water Education.","","","","","Water Resources","","",""
"uuid:45e88cc3-3802-4169-a916-8ff7d170506c","http://resolver.tudelft.nl/uuid:45e88cc3-3802-4169-a916-8ff7d170506c","Mathematical modelling of Fast, High Volume Infiltration in poroelastic media using finite elements","Rahrah, M. (TU Delft Numerical Analysis)","Vermolen, F.J. (promotor); Vuik, Cornelis (promotor); Delft University of Technology (degree granting institution)","2020","As demand for water increases across the globe, the availability of fresh water in many regions is likely to decrease due to a changing climate, an increase in human population and changes in land use and energy generation. On the other hand, climate scenarios predict extreme periods of drought and rainfall. Heavy rainfall regularly leads to flooding, which damages infrastructure, and to erosion of valuable topsoil. A simple and cheap solution for both global problems is emerging: storing rainwater in wet periods for use in dry periods. Fast, High Volume Infiltration (FHVI) is a recently discovered method to quickly infiltrate high volumes of fresh water; it was originally discovered in the field of construction. This research entails the mathematical modelling and the numerical simulation of FHVI in poroelastic media using the finite element method.","","en","doctoral thesis","","978-94-6380-979-5","","","","","","","","","Numerical Analysis","","",""
"uuid:51989f8f-f672-4f4b-a059-86233869ff47","http://resolver.tudelft.nl/uuid:51989f8f-f672-4f4b-a059-86233869ff47","Methods for Efficient Integration of FPGA Accelerators with Big Data Systems","Peltenburg, J.W. (TU Delft Computer Engineering)","Hofstee, H.P. (promotor); Al-Ars, Z. (promotor); Delft University of Technology (degree granting institution)","2020","Because of fundamental limitations of CMOS technology, computing researchers and the computing industry are focusing on using transistors in integrated circuits more efficiently towards obtaining a computational goal. At the architectural level, this has led to an era of heterogeneous computing, where various types of computational components are used to solve problems. In this dissertation, we focus on the integration of one such heterogeneous component, the FPGA accelerator, with one of the main drivers behind the increasing need for computational performance: big data systems. With the increased availability of these FPGA accelerators in data centers and clouds, and with an increasing amount of I/O bandwidth between accelerated systems and their host, the industry is trying to push these components into more widespread usage in big data applications. For big data systems, three related challenges are observed. First, the software systems consist of many layered run-time systems that have been designed to raise the level of abstraction, often at the cost of potential performance. Second, hardware-unfriendly in-memory data structures, and (to the accelerator) uninteresting metadata, may complicate designs required to integrate FPGA accelerators with big data systems software. Last, serialization is applied to address the second challenge, but the rate at which serialization is performed is much lower than the rate at which accelerators may absorb data. For FPGA accelerators, we also observe three challenges.
First, highly vendor-specific styles of designing hardware accelerators hamper the widespread reuse of existing solutions. Second, developers spend a lot of time on designing interfaces appropriate for their data structure, since they are typically provided with just a byte-addressable memory interface. Third, developers spend a lot of time on the infrastructure or ‘plumbing’ around their computational kernels, while their focus should be the kernel itself. We describe a toolchain named Fletcher, based on the Apache Arrow in-memory format for tabular data structures, that uses Arrow to deal with the challenges on the big data systems software side, and also deals with the challenges on the FPGA accelerator development side. The toolchain makes it possible to rapidly generate platform-agnostic FPGA accelerator designs where kernels operate on tabular data sets, requiring the developer to only implement the kernel, and automating all other aspects of the design, including hardware interfaces, hardware infrastructure, and software integration. We describe applications in regular expression matching, k-means clustering, Hidden Markov Models with the posit numeric format, and decoding Parquet files. Finally, we apply the lessons learned from the work on the Fletcher framework in a new interface specification for streaming dataflow designs, named Tydi. We introduce a hardware-oriented type system that makes it possible to express the complex, dynamically sized data structures often found in the domain of big data analytics. The type system helps to increase productivity when designing hardware that transports such data structures over streams, abstracting their use in hardware without losing the ability to make common design trade-offs.","Big Data; FPGA; accelerators","en","doctoral thesis","","978-94-6366-333-5","","","","","","","","","Computer Engineering","","",""
"uuid:d06dbcdc-b1dc-4442-9862-9a3fbf740203","http://resolver.tudelft.nl/uuid:d06dbcdc-b1dc-4442-9862-9a3fbf740203","Skin spectroscopy and imaging for cosmetics and dermatology","Ezerskaia, A. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); Pereira, S.F. (promotor); Delft University of Technology (degree granting institution)","2020","Skin is one of the most significant parts of the human body. It connects us with the environment and has a vast number of functions, among which the defensive function is of high importance. Skin structure and its layers may vary with a number of factors such as site, age, sex, race and the overall health state of the individual. The latter affects the skin water-to-lipids ratio and their depth profile in the skin. Smaller changes in the water-to-lipids ratio may result in skin type variations. In both cases, skin appearance will change along with variations of skin conditions. Given the great importance of the state of the skin, a number of methods and devices for measuring water and lipids content were developed over the years. The research presented in this thesis proposes methods to achieve simultaneous measurements of the water and lipids content of the skin and their ratio. We also analysed the impact of these measurements on determining the skin condition. Skin appearance is also addressed through measurement of skin gloss, using several methods such as the ratio of the specular to the diffuse component of the image, the slope of the gradient intensity of the image from the specular to the diffuse component, and an approach based on the number of weighted pixels. The method proposed for simultaneous water and lipids content measurement is described in Chapter 2, and is based on light measurements comprising three wavelengths that are sensitive primarily to lipids, primarily to water, and equally to both; these wavelengths are 1720 nm, 1770 nm, and 1750 nm, respectively.
We benchmarked our measurements against those obtained with a corneometer and a sebumeter (benchmark devices) on induced skin conditions corresponding to combinations of high, low and neutral levels of water and lipid content in the skin. The study showed good agreement. The state of the protective function of the stratum corneum (SC) and the depth distribution of skin lipids and water are addressed by means of short-wave infrared spectroscopy. The method does not give information as a function of depth. This obstacle was overcome by tape stripping of one SC layer at a time. A comparative measurement was performed with Raman confocal microscopy and is described in Chapter 3. Our proposed method showed a similar depth profile for water as obtained with the corneometer and with Raman confocal microscopy, while trans-epidermal water loss measurement indicated the barrier breaking point. Lipid measurements obtained with our method also showed similar trends to Raman confocal microscopy. As expected, water concentration increased and lipid concentration decreased with increasing depth into the stratum corneum. Additionally, a low-cost method for quantifying skin appearance by measuring skin gloss is proposed in Chapter 4. The method has proven reliable for skin gloss measurements via comparison with benchmark devices, and it also shows great potential for other gloss measurements over a wide range, i.e., from an almost completely matte surface to a mirror-like one. The proposed method comprises surface imaging by a hand-held low-cost camera with ring illumination, along with image post-processing based on weighting the specular and diffuse components of the image. A gloss value is assigned as the result of the processing. Looking ahead, we discuss in Chapter 5 how the methods developed in this thesis could potentially be combined in one hand-held device. 
There will be several challenges, such as the presence of other chromophores in the skin along with the low absorption coefficients of water and lipids in the spectral region suitable for the camera. The abovementioned obstacles can be overcome by measuring the absorption and scattering coefficients separately by means of illumination with spatial frequency modulation. The presence of several chromophores will also require separating their contributions to the absorption coefficient, potentially using more extensive data processing algorithms than those used in this research.","","en","doctoral thesis","","","","","","","","","","ImPhys/Optics","","",""
"uuid:21922fff-e385-4de1-9957-8423221ef5a0","http://resolver.tudelft.nl/uuid:21922fff-e385-4de1-9957-8423221ef5a0","An investigation into the formation of squats in rails: modelling, characterization and testing","Naeimi, M. (TU Delft Railway Engineering)","Li, Z. (promotor); Dollevoet, R.P.B.J. (promotor); Delft University of Technology (degree granting institution)","2020","Rolling contact fatigue (RCF) is an important form of damage in wheels and rails that typically involves surface and subsurface cracks. Squats are one of the major RCF defects that occur in the running band of rails; they can create high dynamic forces and cause rail fracture if they are not detected and treated in time. In the current research, three advanced methods are developed in order to obtain a better understanding of the formation mechanism of RCF defects and, especially, squats in rails: 1) a new thermomechanical tool for numerically modelling the wheel–rail contact, 2) a new experimental setup for physically simulating the wheel–rail interaction and 3) a new computed tomography (CT) procedure for characterizing wheel–rail defects. The first part presents a coupled thermomechanical modelling procedure for the wheel–rail contact problem and computes the flash–temperature and stress–strain responses when thermal effects are included. The contact temperature and thermal stresses could be driving factors for squat initiation. A three–dimensional (3D) elasto–plastic finite element model is built considering the wheel–track interaction. When the wheel is running on the rail, frictional energy is generated in the contact interface. The model is able to convert this energy into heat by using a coupled thermomechanical approach. The numerical models calculate the flash–temperature and thermomechanical stresses in the wheel and rail. 
In the second part, a new downscaled test setup is designed and built for investigating the interaction between wheel and rail, especially under impact–like loading conditions, which are thought to be often associated with rail squats. The test rig is intended to remedy the lack of dynamic similarity between the actual railway and the existing laboratory testing capability, by considering the factors that contribute to the high–frequency dynamics of the wheel–track system. This part of the thesis further presents the results of some experiments carried out using the newly built setup to verify the ideas behind its development. The third part presents the development of a computed tomography (CT) scanning technique to reconstruct the 3D geometry of the RCF cracks in the railhead. Squat defects are associated with complex crack networks at the subsurface. Sample rails having squats of different severities are taken from the Dutch railway network. Various specimens of different sizes are prepared and investigated with the CT scanner. A detailed procedure of the CT experiment and post–processing is described. The proposed 3D visualization method, together with the necessary geometric definitions, is then used for enabling effective measurement and characterization of the squat cracks.
Based on this research, the main new insights into the formation of rail squats are as follows: i) white etching layer (WEL) formation via martensitic phase transformation turns out to be possible; this is confirmed through the thermomechanical wheel–rail contact modelling; ii) the impact–like loading conditions and high–frequency dynamic characteristics of the wheel–track system appear to be essential for squat formation; this is confirmed through the vehicle–track testing using the new test rig; and iii) the occurrence of different crack orientations followed by the primary and secondary V–shaped cracks turns out to be important in squat formation; this is confirmed through the CT scanning and metallographic observations.
In this thesis, a smart control system consisting of three subsystems is proposed for safe smart offshore heavy lifting, which aims to replace or assist human operators during offshore heavy lift construction. To develop this smart system, a robust switching Dynamic Positioning (DP) controller to stabilize the position of the vessel, a nonlinear model-based mode detection system to detect the mode switching, and a backstepping crane tension controller to stabilize the load are designed.","Smart Systems; Offshore Constructions; Heavy Lift Vessels; Position Control; Mode Detection","en","doctoral thesis","","","","","","","","","","Marine and Transport Technology","Transport Engineering and Logistics","","",""
"uuid:f02096b5-174c-4888-a0a7-dafd29454450","http://resolver.tudelft.nl/uuid:f02096b5-174c-4888-a0a7-dafd29454450","Outsourcing Cybercrime","van Wegberg, R.S. (TU Delft Organisation & Governance)","van Eeten, M.J.G. (promotor); Klievink, A.J. (promotor); Delft University of Technology (degree granting institution)","2020","Many scientific studies and industry reports have observed the emergence of so-called cybercrime-as-a-service. The idea is that specialized suppliers in the underground economy cater to criminal entrepreneurs in need of certain capabilities – substituting specialized technical knowledge with “knowing what to buy”. The impact of this trend could be dramatic, as technical skill becomes an insignificant entry barrier for cybercrime. Forms of cybercrime motivated by financial gain make use of a unique configuration of technical capabilities to be successful. Profit-driven cybercrimes, as they are called, range from carding to financial malware, and from extortion to cryptojacking. Given their reliance on technical capabilities, these forms of cybercrime in particular benefit from a changing crime paradigm: the commoditization of cybercrime. That is, standardized offerings of technical capabilities, supplied through structured markets by specialized vendors, that cybercriminals can contract to fulfil the tools and techniques used in their business model. Commoditization enables the outsourcing of components used in cybercrime - e.g., a botnet or cash-out solution - thus lowering entry barriers for aspiring criminals and potentially driving further growth in cybercrime. As many cybercriminal entrepreneurs lack the skills to provision certain parts of their business model, this incentivizes them to outsource these parts to specialized criminal vendors. With online anonymous markets - like Silk Road or AlphaBay - these entrepreneurs have found a new platform to contract vendors and acquire technical capabilities for a range of cybercriminal business models. 
A configuration of technical capabilities used in a business model reflects the value chain of resources: not the criminal activities themselves, but the technical enablers of all these criminal activities are depicted. To create a comprehensive understanding of how business models in profit-driven cybercrime are impacted by the commoditization of cybercrime, we investigate how outsourced components can fulfil the technical capabilities needed in profit-driven cybercrime. Here we use an economic lens to deliver an overview of criminal activities, resources and strategies in profit-driven cybercrime. In turn, knowing how outsourcing fulfils parts of the value chain can help law enforcement exploit ‘chokepoints’ – i.e., target the weakest link in the value chain, where criminals appear to be vulnerable.","Cybercrime; Online anonymous markets; Outsourcing; Policing","en","doctoral thesis","","978-94-6419-036-6","","","","","","","","","Organisation & Governance","","",""
"uuid:f7a44425-27ee-4b59-a514-55df183b3c0c","http://resolver.tudelft.nl/uuid:f7a44425-27ee-4b59-a514-55df183b3c0c","Value conflicts in energy systems","de Wildt, T.E. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Kunneke, R.W. (promotor); Chappin, E.J.L. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis introduces an approach to support the long-term social acceptance of energy systems by addressing value conflicts embedded in regulatory and technical designs. When designing energy systems, the realisation of some values can conflict with the realisation of other values. The decision to deploy energy systems therefore inevitably entails a prioritisation of some values over others. Societal groups that do not agree with this prioritisation may decide to oppose or not to support the deployment and use of these systems. Lack of social acceptance may occur during the planning phase, but also at a later point in time as a result of value change. This can be caused by a growing mismatch between the values prioritised in energy systems and how societal groups are affected. To support the social acceptance of energy systems, value conflicts embedded in energy systems need to be addressed. Methods to do so were, however, lacking. This thesis provides a methodological contribution by demonstrating how the literature on data science and the complexity sciences can be used to address value conflicts. This thesis answers the following research question: How can value conflicts embedded in energy systems be addressed in support of social acceptance?
We use probabilistic topic modelling to explore how the academic literature addresses value conflicts. Identified tactics can be used to specify design requirements and policy guidelines in support of the social acceptance of energy systems. Agent-based modelling is used to identify value conflicts embedded in energy systems that result from the heterogeneous properties of the affected population. Agent-based models provide insights about the type of population affected by value conflicts and hence about the severity of the resulting lack of social acceptance. This thesis contributes to the literature on social acceptance by demonstrating how long-term acceptance can be supported by drawing on insights from ethics of technology. Additionally, we provide a systematic and practical approach to integrate human values in the regulatory and technical design of infrastructures, which is critical for supporting the ongoing energy transition.","value conflicts; value change; moral acceptability; social acceptance; agent-based modelling; exploratory modelling; probabilistic topic models; capability approach","en","doctoral thesis","","","","","","","","","","","Energie and Industrie","","",""
"uuid:64f8f06e-5cc6-40cd-8c8d-722da6304b06","http://resolver.tudelft.nl/uuid:64f8f06e-5cc6-40cd-8c8d-722da6304b06","In-situ Visual Quantification of Corrosion and Corrosion Protection","Denissen, P.J. (TU Delft Novel Aerospace Materials)","Garcia, Santiago J. (promotor); van der Zwaag, S. (promotor); Delft University of Technology (degree granting institution)","2020","The main objective of the work described in this dissertation is to explore the route towards new anticorrosion coatings for the protection of aerospace aluminium alloys using alternative strategies that can replace the currently used toxic chromate corrosion inhibitors. As such, each chapter of this dissertation is devoted to relevant scientific and industrial challenges whereby the research on corrosion, inhibition and coating systems is combined with a newly developed in-situ optical-electrochemical technique.","","en","doctoral thesis","","978-94-6366-327-4","","","","","","2020-10-23","","","Novel Aerospace Materials","","",""
"uuid:e7ef8e46-c941-4bd7-a34d-69d78d0df115","http://resolver.tudelft.nl/uuid:e7ef8e46-c941-4bd7-a34d-69d78d0df115","Wheel Load Reconstruction for Intelligent Vehicle Control","Kerst, S.M.A.A. (TU Delft Intelligent Vehicles)","Happee, R. (promotor); Shyrokau, B. (copromotor); Delft University of Technology (degree granting institution)","2020","After decades of incremental change in the automotive industry, we now face an era of disruption as environmental concerns and social change propel the introduction of electric vehicles and vehicle automation. Besides the clear benefit of zero-emission transport for society, there is a strong commercial incentive for automated driving, as it will lead to more efficient and safer mobility. A vast amount of research and development is therefore dedicated to its realization.
As human drivers are progressively taken out of the loop, intelligent vehicles impose increasing demands on the highly complex control loop, from measurement and perception to vehicle control. Of particular interest are limit and critical conditions, as optimal performance in these situations is paramount to maximize safety. Accurate real-time knowledge of the wheel forces is therefore essential, since they represent the tire-road interaction of the individual wheels, which determines vehicle behaviour and its handling limits. However, no commercially feasible method is available for the measurement of these important vehicle states.
Current vehicle control systems circumvent this measurement issue by focusing on downstream effects, such as wheel slip and body accelerations. Due to this focus on secondary effects, these systems are overly complex and lead to sub-optimal performance. For optimal vehicle control of future intelligent vehicles, therefore, the development of wheel force measurement is considered invaluable. By providing direct access to the most important control variables for dynamics control, such measurement allows for less complex control algorithms with improved performance and robustness, and hence will lead to safer mobility.
Although various approaches for the reconstruction of wheel forces have been developed, no cost-effective method is yet available. This can be explained by the fact that load measurement approaches generally require mechanical load decoupling to avoid crosstalk, something that is difficult to achieve on a wheel-end suspension setup that is already complex in itself. In this thesis, a novel method for wheel force reconstruction via load measurement at bearing level is proposed.
The concept of bearing load measurement dates back to the early ’70s and has been investigated by all major bearing manufacturers ever since. This has led to various measurement approaches based on relative ring displacement and outer-ring deformation. Despite all efforts, no accurate and robust approach for multi-dimensional load reconstruction is currently available. The state of the art provides unsatisfactory results due to the complexity of bearing behaviour and the inability of the currently applied data-driven methods to leverage unique bearing characteristics.
In this thesis a novel approach to bearing load reconstruction is proposed, based on outer-ring deformation measurement and real-time simulation of bearing physics. The novel approach includes an explicit description of important physical effects, such as the rearrangement of rolling elements and the one-dimensional nature of their load transfer. As such, it captures the bearing behaviour and makes it possible to exploit its unique characteristics. The proposed approach is based on Kalman filtering and includes two independent physical models: a bearing strain model and a bearing load model.
The bearing strain model defines the outer-ring surface strain variation as a function of the local rolling element loading and location. The proposed model provides a simple though effective continuous and parameterized description of this behaviour. The model is implemented in an Extended Kalman Filter as a means of signal conditioning to estimate local rolling element forces from the measured outer-ring strain. By considering the change of strain due to the reallocation of rolling elements over time, a differential measurement is performed that results in invariance to thermal effects.
The proposed bearing load model extends traditional rigid bearing modelling with a semi-analytical description of outer-ring flexibility. The latter is achieved by static deformation shapes and a Fourier series-based compliance approximation. The proposed model thereby provides a computationally efficient yet highly accurate description of rolling element forces for common bearing designs, in which significant raceway deformation occurs. Included in an Unscented Kalman Filter, the model provides the relationship between the estimated rolling element forces and the bearing loading, and as such serves as a load reconstruction method. By explicitly describing the individual one-dimensional element forces, the approach accounts for the internal load decoupling effect and thereby limits cross-coupling of the estimated loads.
The wheel load reconstruction algorithm has been validated in both laboratory and field conditions on a production vehicle wheel-end bearing instrumented with
strain gauges. The study in laboratory conditions was performed on a bearing test setup at our industrial partner, whereas the field validation was performed on a dedicated test vehicle prepared as part of this thesis. Besides the proposed approach, a state-of-the-art algorithm and a variant including the model-based signal conditioning method are evaluated to properly assess the results.
The experimental results show that the proposed approach leads to a considerable improvement in accuracy, reproducibility and robustness in comparison to the state-of-the-art data-driven approach. The proposed strain model-based conditioning approach leads to higher reproducibility and an accuracy improvement of up to 5 percent full scale, due to its invariance to thermal effects and its ability to discriminate between in- and outboard rolling element forces. Additionally, the model-based load reconstruction method further improves accuracy by leveraging the internal bearing load decoupling behaviour to avoid crosstalk. This results in an improvement of over 5 percent full scale for combined loading conditions. Furthermore, the approach is more robust, as important relationships are captured by modelling. This is well observed for loading conditions outside the calibration domain, where an accuracy improvement of 6.8 to 18.4 percent full scale is achieved for the various reconstructed loads. The application of modelling also leads to a significant reduction in the number of parameters subject to calibration and provides physical meaning to these parameters.
Finally, an application study on anti-lock braking was performed to investigate both the load reconstruction performance in dynamic loading conditions and the advantages of load information for vehicle dynamics control. The study shows that sufficient signal bandwidth is provided and confirms the value of direct wheel force measurement for anti-lock braking control: traditional difficulties such as velocity estimation and slip threshold determination are circumvented, whilst the effects of road friction fluctuations and brake efficiency are minimized.
By providing an accurate, robust and scalable solution for reconstructing the bearing loading from bearing outer-ring strain, this thesis takes an important step towards a commercially viable solution for wheel-end load measurement. In addition, it is shown how this new information could push the boundaries of vehicle dynamics control. The next step is the development of a suitable hardware setup to apply these results in a commercial solution, a topic currently pursued by the author.","","en","doctoral thesis","","978-94-6419-056-4","","","","","","","","","Intelligent Vehicles","","",""
"uuid:31022b5a-3d04-47fc-b3a6-62af4e3687b4","http://resolver.tudelft.nl/uuid:31022b5a-3d04-47fc-b3a6-62af4e3687b4","Design and Analysis of On-Demand Mobility Systems","Narayan S., Jishnu (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Cats, O. (promotor); van Oort, N. (copromotor); Delft University of Technology (degree granting institution)","2020","The past decade has seen vast advancements in various ICT (Information and Communication Technology) platforms. These advancements have enabled the rise of innovative mobility solutions, e.g. on-demand transport services. Such solutions offer flexible transport services in which users can receive tailor-made mobility solutions. Increasing evidence from the literature points at the potentially disruptive effects of such innovative mobility solutions on urban mobility. The effects range from traditional modes, such as privately owned cars and public transport, losing their market share to on-demand services, to the subsequent need for public transport systems to evolve to stay relevant. Modelling tools for the design and assessment of such on-demand transport services therefore need to account for their implications for urban mobility by considering their interaction with other travel modes. Existing studies on the design and analysis of on-demand services have largely overlooked the impact of these services on other travel modes and vice versa, and hence their overall impact on urban mobility. This study attempts to fill this research gap by developing an approach to the design and analysis of on-demand services in an urban mobility context. Two types of on-demand services are considered in this thesis, namely private and pooled. 
Both private and pooled on-demand services are characterised by a fleet of vehicles controlled by a central dispatching unit. They provide door-to-door service to passengers’ travel requests in real time. While the private on-demand service provides taxi-like service to passengers, pooled on-demand service allows multiple passengers to share the service.","","en","doctoral thesis","TRAIL Research School","978-90-5584-271-1","","","","TRAIL Thesis Series no. T2020/15, the Netherlands TRAIL Research School","","","","","Transport and Planning","","",""
"uuid:70f6704f-30e4-4e1a-8c74-9fe2b699a80d","http://resolver.tudelft.nl/uuid:70f6704f-30e4-4e1a-8c74-9fe2b699a80d","Formal synthesis of analytic controllers: An evolutionary approach","Verdier, C.F. (TU Delft Team Tamas Keviczky)","Mazo, M. (promotor); Babuska, R. (promotor); Delft University of Technology (degree granting institution)","2020","Control design for modern safety-critical cyber-physical systems still requires significant expert-knowledge, since for general hybrid systems with temporal logic specifications there are no constructive methods. Nevertheless, in recent years multiple approaches have been proposed to automatically synthesize correct-by-construction controllers. However, typically these methods either result in enormous look-up tables, require online optimization, or are highly dependent on expert-knowledge. The goal of this thesis is to propose a novel approach that overcomes these limitations, i.e. to propose a framework for automatic controller synthesis, capable of synthesizing closed-form controllers for hybrid systems with temporal logic specifications, without a heavy reliance on expert-knowledge. To this end, we draw inspiration from the human design process and utilize two methods that show great similarities to it, namely evolutionary algorithms and counterexample-guided inductive synthesis (CEGIS). Specifically, we use genetic programming (GP), an evolutionary algorithm capable of evolving entire programs. This makes it possible to automatically discover the structure of a solution. Moreover, it enables the synthesis of compact closed-form controllers, circumventing the need for look-up tables or online optimization. In combination with GP, we use the concept of CEGIS to refine candidate solutions based on counterexamples, until the controller is guaranteed to satisfy the desired specification. 
In this thesis we propose two CEGIS-based synthesis frameworks, which differ in the employed verification paradigms, namely utilizing either (co-synthesized) Lyapunov-like functions or reachability analysis. Both frameworks result in correct-by-construction compact closed-form controllers, where the use of expert-knowledge is optional. Both frameworks are capable of synthesizing sampled-data controllers, enabling implementation in embedded hardware with limited memory and computation power, forming a stepping stone towards faster automation.","Formal controller synthesis; Hybrid systems control; Temporal logic; genetic programming; Lyapunov methods; reachability analysis","en","doctoral thesis","","","","","","","","","","","Team Tamas Keviczky","","",""
"uuid:ec128e53-e78a-4550-8aa5-0fafb36a7763","http://resolver.tudelft.nl/uuid:ec128e53-e78a-4550-8aa5-0fafb36a7763","Exemplifying smart functions for a next generation data analytics toolbox","Abou Eddahab-Burke, F. (TU Delft Cyber-Physical Systems)","Horvath, I. (promotor); Delft University of Technology (degree granting institution)","2020","","Data analytics; middle-of-life data; white goods designer; data analytics toolbox; user identification; data streams merging; recommender system; axiomatic theory fusion","en","doctoral thesis","","978-94-6384-162-7","","","","","","","","","Cyber-Physical Systems","","",""
"uuid:4cd6d858-5f09-4567-86e8-f9f61ca7941f","http://resolver.tudelft.nl/uuid:4cd6d858-5f09-4567-86e8-f9f61ca7941f","Modular engineering of synthetic glycolytic pathways in Saccharomyces cerevisiae","Boonekamp, F.J. (TU Delft BT/Industriele Microbiologie)","Daran-Lapujade, P.A.S. (promotor); Pronk, J.T. (promotor); Delft University of Technology (degree granting institution)","2020","Microbial fermentation has been used for millennia for the production of dairy products, alcoholic beverages and bread. In the last decades, the field of biotechnology has expanded tremendously, and nowadays a wide range of compounds, from biofuels to chemicals and pharmaceuticals, is produced using microbial cell factories. The development of genetic engineering tools has greatly contributed to this rapid development. Catalysing the conversion of renewable carbohydrate feedstocks into fuels and chemicals, microbial cell factories offer a sustainable alternative to production based on fossil resources, and thereby contribute to reducing greenhouse gas emissions. The yeast Saccharomyces cerevisiae plays an important role in industrial biotechnology. Its popularity for applied research and industrial production can be attributed to several factors, such as its fast fermentative metabolism, its tolerance to low pH and to high sugar and alcohol concentrations, and its genetic tractability. S. cerevisiae possesses one of the best-equipped molecular toolboxes, which makes it possible to assemble complex heterologous pathways, as was recently illustrated by the successful biosynthesis of opioids in yeast. Despite this great progress, extensive genetic remodelling of native pathways remains challenging. This can largely be explained by the high genetic redundancy of the yeast genome, in which multiple genes encode proteins with redundant functions, and by the fact that the genes belonging to a pathway are scattered over the entire genome. 
The goal of this thesis was to design, set up and validate a strategy aiming at facilitating the remodelling of (essential) pathways, based on simplifying and reorganizing the yeast genome. The starting point of this research is the central carbon metabolism and in particular, as proof of concept, the glycolytic pathway.","","en","doctoral thesis","","978-94-6421-020-0","","","","","","2020-12-31","","","BT/Industriele Microbiologie","","",""
"uuid:3c85713e-7158-4d67-b93b-54f02e213c12","http://resolver.tudelft.nl/uuid:3c85713e-7158-4d67-b93b-54f02e213c12","Accurate structural health monitoring in composites: With fibre Bragg grating sensors","Rajabzadeh, Aydin (TU Delft Signal Processing Systems)","Groves, R.M. (promotor); Hendriks, R.C. (promotor); Heusdens, R. (promotor); Delft University of Technology (degree granting institution)","2020","Compared to metals, composite materials offer higher stiffness, more resilience to corrosion and lighter weight, and their mechanical properties can be tailored by their layup configuration. Despite these features, composite materials are susceptible to a variety of damage types, including matrix cracks, delamination, and fibre breakage. If these damages are not detected and repaired, they can spread and result in the failure of the whole structure. In particular, when the structure is subjected to fatigue and vibrations during flight, this process can accelerate. Moreover, if such damages occur in the internal layers of the composite material, they are difficult to detect and to characterise. There is thus a huge demand for reliable and accurate structural health monitoring methods to identify these defects. Such methods either monitor the structural integrity of the composite during service, or are used to study a desired configuration of a composite material during fatigue and tensile tests. This thesis provides structural health monitoring solutions that can potentially be used for both these categories. The structural health monitoring applications developed in this thesis range from accurate strain and displacement measurement to the detection of cracks and the identification of damage in composites. In this thesis, fibre Bragg grating (FBG) sensors were chosen for this purpose. 
The miniature size and small diameter of these sensors make them an ideal candidate for embedding between composite layers, without severely altering the mechanical properties of the host composite material. They can thus provide us with direct information about the current state of the laminated composite, potentially at any depth. This is especially useful for acquiring information about the internal layers of the composite material, as barely visible impact damage and micro-cracks often form beneath the surface of the material without being visible on its exterior. In spite of their interesting physical characteristics, applications of FBG sensors are typically limited to point strain or temperature sensing. Further, it is often assumed that the strain field along the sensor length is uniform. For this reason, there is currently a gap in the field of structural health monitoring in retrieving meaningful information about the non-uniform strain field to which the FBG sensor is subjected in damaged structures. The focus of this thesis is on analysing the response of FBG sensors to highly non-uniform strain fields, which are a characteristic of the existence of damage in composites. To tackle this problem, first a new model for the analysis of FBG responses to non-uniform strain fields is presented. Using this model, two algorithms are presented to accurately estimate the average of such non-uniform axial strain fields, which conventional strain estimation algorithms fail to deliver. In fact, it is shown that state-of-the-art strain estimation methods using FBG sensors can lead to errors of up to a few thousand microstrains, and that the algorithms presented in this thesis can compensate for such errors. It is also shown that these methods are robust against spectral noise from the interrogation system, which can pave the way for more affordable FBG-based strain estimation solutions. 
Another contribution of this thesis is the demonstration of two new algorithms for the detection of matrix cracks, and for accurate monitoring of the delamination growth in composites, using conventional FBG sensors. These algorithms are particularly useful for studying the mechanical behaviour of laminated composites in laboratory setups. For instance, the matrix crack detection algorithm is capable of characterising internal transverse cracks along the FBG length during tensile tests. Along the same lines, the delamination growth monitoring algorithm can accurately localise the delamination crack tip along the FBG length in mode-I tensile and fatigue tests. These algorithms can perform in real-time, which makes them ideal for dynamic measurement of crack propagation under fatigue, and their spatial resolution and accuracy are superior to other state-of-the-art damage detection techniques. Finally, to enhance the precision of the damage detection schemes presented in this thesis, two different methods are proposed to accurately determine the active gauge length of the FBG sensor, and its position along the optical fibre. This information is generally not provided for commercial FBG sensors with such accuracy, which can adversely affect the precision of crack tip localisation algorithms. Following the algorithms provided in this thesis, the sensor position can be marked on the optical fibre with micrometer accuracy.","fiber optic sensing; fiber bragg gratings; damage detection; aerospace composites; smart structures; structural health monitoring; optimization; algorithms","en","doctoral thesis","","978-94-6384-155-9","","","","","","","","","Signal Processing Systems","","",""
"uuid:a6c6ee3d-55a0-4a2a-8ac9-b6e837e4862e","http://resolver.tudelft.nl/uuid:a6c6ee3d-55a0-4a2a-8ac9-b6e837e4862e","Physico-chemical characterization of the extracellular polymer matrix of biofilms in membrane filtration systems","Pfaff, N.M. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Kleijn, J.M. (copromotor); Kemperman, A.J.B. (promotor); Delft University of Technology (degree granting institution)","2020","Biofilms are the prevalent form of bacterial life on earth. Bacteria aggregate and embed themselves in a hydrogel matrix of extracellular polymeric substances (EPS), often spread over surfaces in thin films. The EPS matrix of biofilms is receiving more and more attention from scientists for several reasons. On the one hand, it has been identified as the component of biofilms that is responsible for many of the adverse technological impacts of biofouling, for example for the increase of hydraulic resistance in membrane filtration systems. It has also been shown to provide structural integrity to biofilms and shield the embedded bacteria from chemicals, hampering removal in technological as well as medical environments. On the other hand, the same properties are interesting features for application as a biomaterial. Properties like water retention or the resilience against mechanical and chemical interference are defined by the molecular interactions between the different components of the EPS matrix. Therefore, a targeted biofouling cleaning strategy needs to start with understanding those molecular interactions. Owing to the high complexity and the still largely undisclosed molecular composition of biofilm EPS, research on these properties requires the use of models. 
In this work, several experimental and physical models were applied in order to unravel correlations between chemical composition, structure and mechanical properties of biofilm EPS in membrane filtration systems.","","en","doctoral thesis","","978-94-6366-319-9","","","","","","2020-10-21","","","BT/Environmental Biotechnology","","",""
"uuid:41c32a8f-2db7-4091-abb5-4d6a5e596345","http://resolver.tudelft.nl/uuid:41c32a8f-2db7-4091-abb5-4d6a5e596345","Molecular Simulation of Phase and Reaction Equilibria: Software and Algorithm Development","Hens, R. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Delft University of Technology (degree granting institution)","2020","In the past decades, molecular simulation has become an important tool for studying phase and reaction equilibria. In this dissertation, we work on improvements of the Continuous Fractional Component (CFC) method in Monte Carlo simulations. We also develop a software package for molecular simulations that uses this method. In Chapter 2, we briefly introduce partial molar properties that we want to compute from molecular simulations. The CFC method is introduced and we explain how it can be modified to calculate properties in the NPT ensemble. After that, we focus on the Reaction Ensemble and modify the CFC method such that it becomes suitable for the calculation of chemical potentials and fugacity coefficients. We briefly point out the applicability of the CFC method in the Gibbs Ensemble and the software, BrickCFCMC, that was developed and used for this research. In Chapter 3, we study the vapor-liquid equilibria of hydrogen sulfide, methanol and carbon dioxide. We use the CFC method for simulations in the Gibbs Ensemble and the Wolf method for calculations of the electrostatic interactions. The Wolf method is a computationally cheaper method than the commonly used Ewald method but has the same accuracy, provided that it is parametrized correctly. In Chapter 4, we test our new formulation of the CFC method in the Reaction Ensemble. For different systems of Lennard-Jones particles, we compare the efficiency with the previous variant of the CFC method and the conventional method. Our formulation of the CFC method is more efficient and can directly check if the system has reached equilibrium. 
We continue our study by using this method for simulations of the Haber-Bosch process for the production of ammonia from nitrogen and hydrogen. In Chapter 5, we use the CFC method for computation of partial molar enthalpies and partial molar volumes. We start with simple systems of Lennard-Jones particles and compare with a different method (similar to Widom’s method for obtaining chemical potentials). We calculate partial molar properties of nitrogen, hydrogen and ammonia in the stoichiometric compositions that were obtained in Chapter 4. From these results, we obtain the enthalpy of reaction. In Chapter 6, we combine the Gibbs Ensemble with the Reaction Ensemble for simulations of the esterification of methanol with acetic acid. We obtain a clear phase separation and calculate equilibrium compositions, chemical potentials, activity coefficients, and equilibrium constants. We distinguish two cases: one where the molecules are treated as rigid objects, and one where the molecules are flexible. No significant difference is observed between the results for the different cases. The simulations in Chapters 3 to 6 were performed with the software that was written as part of this research. This has led to the software package BrickCFCMC, which is available (open source) from: https://gitlab.com/ETh_TU_Delft/BrickCFCMC.","Molecular Simulation; Monte Carlo Simulation; Phase Equilibrium; Reaction Equilibrium","en","doctoral thesis","","978-94-6366-303-8","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:752ebddf-82e9-4494-b7a7-d7ebeea5f5d9","http://resolver.tudelft.nl/uuid:752ebddf-82e9-4494-b7a7-d7ebeea5f5d9","anchoring the design process: A framework to make the designerly way of thinking explicit in architectural design education","van Dooren, E.J.G.C. (TU Delft Architectural Engineering)","Asselbergs, M.F. (promotor); van Dorst, M.J. (promotor); Boshuizen, H.P.A. (promotor); Van Merriënboer, J.J.G. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis proposes a framework to address the design process in design education. Building upon the assumption that teachers, being professional designers, do not discuss the design process in the architectural design studio and do not have a vocabulary to do so, five generic elements or anchor points are defined which represent the basic design skills. The validity of the framework and the assumption is tested respectively in interviews with a variety of designers and in observations of dialogues between teachers and students. In the final test the design process is addressed in the design studio: the first experiences show that students’ understanding and self-efficacy may increase.
The five elements enable teachers and students to address the designerly attitude. The way designers reason consists of: (1) experimentation; an experimentation-based way of thinking; how to explore and reflect, (2) the frame of reference; a knowledge-based way of thinking; how to work with common and proven ‘professional’ knowledge, and (3) the guiding theme; a value-based way of thinking; how to take a position in the design process. In addition, (4) the laboratory is the (visual) language or set of means designers use to think designerly, and (5) the domains are the playing field of the designer, the product aspects s/he should address.","design education; design process; generic elements; making explicit; architectural design","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-299-4","","","","A+BE I Architecture and the Built Environment No 17 (2020)","","","","","Architectural Engineering","","",""
"uuid:1ffe3bd6-9592-40be-9a2a-7830778db093","http://resolver.tudelft.nl/uuid:1ffe3bd6-9592-40be-9a2a-7830778db093","Additive Manufacturing for Design in a Circular Economy","Sauerwein, M.","Balkenende, A.R. (promotor); Bakker, C.A. (promotor); Doubrovski, E.L. (copromotor); Delft University of Technology (degree granting institution)","2020","This PhD project explored how the use of 3D printing can support design in a circular economy. 3D printing is an emerging technology and is viewed as a promising production process for the circular economy because of its unique additive and digital character. The aim of design in a circular economy is to preserve the value of products and materials through lifetime prolongation or high value reuse and recovery. Product integrity and material integrity are relevant for this, because they represent the quality of products and materials to remain whole and complete over time. In this research, we studied how 3D printing can support product integrity and material integrity in a circular economy. Research through design (RtD) was the main research method. In this method the design process is used to generate knowledge and we used a prototyping process to develop 3D printing in the new context of a circular economy. The main contributions of this research can be summarised as follows: • We helped establish a new research direction by exploring design approaches for product integrity and material integrity in a circular economy. • We developed a circular 3D print process flow for product integrity. This is demonstrated by showing that the digital and additive character of 3D printing can be harnessed to develop reversible connections that enable products to be disassembled and reassembled without loss of quality. We developed reversible joints and demonstrated these with a proof-of-concept of a lamp and vase. • We established a design approach for developing reprintable materials. 
This was demonstrated by producing reprintable materials from locally available bio-based resources, i.e. ground mussel shells with two different binders (sugar and alginate). We designed a lampshade and hairpin and 3D printed them using these materials. • We contributed to the domain of ‘research through design’ by using the prototyping process for knowledge generation, a less common use. The design goal in the prototyping process was used to obtain relevant information (from other disciplines) for developing technology in a new context. This resulted in an iterative process between experimental prototyping processes and scientific knowledge generation. We would like to conclude by noting that, in spite of all the optimism about the way the use of 3D printing can accelerate the transition to a circular economy, there are currently few 3D print applications that actually support and enable the circular economy. Our exploration shows that to successfully print for product integrity and material integrity, both in-depth knowledge and understanding of the AM production technique are required.","Circular economy; Additive manufacture; 3D Printing; Product Design; Bio-based materials; Product Integrity; Material Integrity; Research through Design (RtD)","en","doctoral thesis","Delft University of Technology","978-94-6384-166-5","","","","","","","","","Design for Sustainability","","",""
"uuid:47218911-c93d-4295-a3de-231d023c1743","http://resolver.tudelft.nl/uuid:47218911-c93d-4295-a3de-231d023c1743","Modelling and managing massive 3D data of the built environment","Kumar, Kavisha (TU Delft Urban Data Science)","Stoter, J.E. (promotor); Ledoux, H. (promotor); Delft University of Technology (degree granting institution)","2020","A 3D city model is a digital representation of the spatial features in an urban
environment. Buildings, terrain, vegetation, water bodies, etc. all form an integral part of a 3D city model. The possibility to enrich these city models with additional application-specific information, whether new semantics or geometry, further increases their usability. However, in practice, the applications of 3D city models are mainly focused on buildings. The majority of standards available for representing 3D city models, such as IFC and CityGML, have well-defined specifications for modelling buildings, but often none for other city features. In addition, there are several other issues associated with the development and use of 3D city models of large cities, such as massive size of 3D city models, interoperability issues for 3D data from heterogeneous sources, harmonisation of different 3D standards, etc.
In this thesis, I investigate how to better model these massive and semantically
enriched 3D city models, and I focus on their use in different applications. I make five contributions. First, I explain how CityGML, the international standard for semantic 3D city modelling, is not efficient for storing massive TIN terrains, and present an improved solution to compactly store massive terrains in CityGML. Second, I describe how to model terrains at different LODs in CityGML, since the current CityGML data model lacks the specifications for modelling different terrain LODs at the geometric and semantic levels. Third, I explain how CityGML lacks precise specifications for modelling metadata of 3D city models and present an ISO 19115-compliant solution to add metadata. Fourth, I describe how the development of the new standards LandInfra and InfraGML and their integration with the existing popular standards (IFC and CityGML) can
contribute to the BIM and 3D GIS interoperability and bring the two domains to a common footing. Fifth, I demonstrate my approach for the development of a harmonised semantic 3D city model based on CityGML for use in urban noise simulations. In addition, I have developed open source prototypes to help practitioners with the use of 3D city models. In this way, I also contribute to the open source community for 3D city modelling.
The thesis proposes additional research for future work. For example, since this research focuses specifically on the LODs of terrain models, it would be worthwhile to extend the research to explore the LOD concept for other urban features such as vegetation and land use. Furthermore, LandInfra is a relatively young standard with low community support. This too requires more attention. Tools such as parsers, validators, visualisers, DBMS support, APIs, and so on are still lacking for LandInfra (and InfraGML). It would be interesting to see how the standards evolve and whether they can be applied in practice when such support is available. Interoperability of LandInfra with IFC and other standards is also an area that requires further investigation.","3D city models; CityGML; ADE; LandInfra; metadata","en","doctoral thesis","","978-94-6366-316-8","","","","","","","","","Urban Data Science","","",""
"uuid:c9020486-82a6-4e4a-a9f5-f2b00ebc432c","http://resolver.tudelft.nl/uuid:c9020486-82a6-4e4a-a9f5-f2b00ebc432c","Predicting sequence variant deleteriousness in genomes of livestock species","Groß, C. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); de Ridder, D. (promotor); Delft University of Technology (degree granting institution)","2020","Illuminating the functional part of the genome of livestock species has the potential to facilitate precision breeding and to accelerate improvements. Identifying functional and potentially deleterious mutations can provide breeders with crucial information to tackle inbreeding depression or to increase the overall health of their populations and animal welfare. By performing Genome Wide Association Studies (GWAS), the genome can be interrogated for mutations that co-occur with a phenotype of interest. However, every GWAS delivers a large number of potentially functionally important single nucleotide polymorphisms (SNPs). The exact effect of each of these SNPs is often not known, especially for SNPs in noncoding sequences. Investigating each candidate SNP variant in detail is laborious and, eventually, infeasible, given the sheer number of variants. Thus, there is a strong need for approaches to select the most promising SNP candidates. Prioritizing variants, in particular SNPs, has seen major developments in recent years, which led to several discoveries and insights into heritable diseases in humans. Despite their great economic value, for livestock and other non-human species this development is lagging behind. A major contributing factor to the deficit in prioritization tools for non-human species is a lack of genomic annotations. 
In this thesis, we translated one of the currently popular SNP prioritization tools, CADD (Combined Annotation-Dependent Depletion), to mouse (mCADD) and performed an experiment in which we simulated a decrease in the number of available genomic annotations. These results showed that following the CADD approach to predict the putative deleteriousness of SNPs is meaningful in a non-human species, even when fewer genomic annotations are available than for the human case. This motivated us to build various CADD-like SNP prioritization tools for livestock species, in particular for pig (pCADD) and chicken (chCADD). We validated the pig prioritization tool on a set of well-known functional pig variants. Further, we showed how functional and non-functional parts of the pig genome are scored differently by pCADD. In collaboration with the breeding industry, we built upon the pCADD scores and implemented them in a pipeline to identify likely causal variants in GWAS. To this end, we utilized SNPs that were found significant in GWAS based on SNP-array data and found variants with high pCADD scores in whole genome sequence data that are in linkage disequilibrium with high GWAS-scoring SNPs. Thus, these pCADD-identified SNPs are likely (causal) functional candidates for the phenotypes tested. We also identified several expression quantitative loci (eQTL) variants, SNPs that explain observed differences in gene expression, which we were able to validate using RNA-seq data. This demonstrated the power of this new tool and its usefulness in identifying novel, functional variants. For chicken, we used chCADD to interrogate highly conserved elements in the chicken genome. Here we found that, despite being highly conserved, not all parts of these elements might be functionally active. chCADD differentiates between regions within each conserved element that are predicted to be functionally different. 
Taken together, the results presented in this thesis demonstrate that SNP prioritization can successfully be done in non-human species, which can greatly assist breeders and animal geneticists in their work to illuminate the functional genome.","","en","doctoral thesis","","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:a5c7c12b-482b-48ca-a486-c29a4b254fe6","http://resolver.tudelft.nl/uuid:a5c7c12b-482b-48ca-a486-c29a4b254fe6","African New Towns: An adaptive, principle-based approach","Keeton, R.E. (TU Delft Urban Design)","Meyer, Han (promotor); Nijhuis, S. (promotor); Delft University of Technology (degree granting institution)","2020","Since the economic shifts of the 1990s, New Towns have become an increasingly popular approach to urban development across the African continent. While New Towns are not a new development model, their contemporary materialisation often targets middle- and high-income buyers, leaving no space for low-income residents. Strict regulations in these exclusive developments often impede spatial appropriations by the informal sector such as fresh markets, unregulated housing, street kiosks and ‘public’ transit options. As a result, this approach may exacerbate spatial segregation and increase the visibility of economic inequality.
This research addresses contemporary African New Towns as a group through the lens of urban design, identifying shared spatial challenges across a dataset of 146 New Towns. Through three case studies (Sheikh Zayed City, Egypt; BuraNEST, Ethiopia, and Kilamba, Angola) it takes a deeper look at the idiosyncrasies of individual New Towns, and the diversity of examples within this group. By bringing together wider trends with the case studies, this study translates challenges into potentials for future New Towns in the form of adaptive planning and design principles. Through a series of semi-structured interviews, transdisciplinary workshops and Research Through Design exercises, the principles are tested, refined, and validated by peer review. The study concludes that these principles can be an effective starting tool for developers, planners, and decision-makers initiating New Towns in Africa. It also concludes that the principles must be adapted locally according to geographic, political, and social contexts and urgencies.","New Towns; African Cities; Urbanization; Adaptive planning; planning principles","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-313-7","","","","A+BE I Architecture and the Built Environment No 18 (2020)","","","","","Urban Design","","",""
"uuid:e4e9fc11-dfca-4b5a-a988-eabe56928e07","http://resolver.tudelft.nl/uuid:e4e9fc11-dfca-4b5a-a988-eabe56928e07","Design with Symbolic Meaning: Introducing well-being related symbolic meaning in design","Casais, Mafalda (TU Delft Design Aesthetics)","Desmet, P.M.A. (promotor); Mugge, R. (promotor); Delft University of Technology (degree granting institution)","2020","This doctoral thesis focuses on the positive design strategy of designing with symbolic meaning as a way to support people’s well-being. It investigates the concept of well-being related symbolic meaning, and proposes design directions to inspire and inform designers to introduce it in the design process. This concept of well-being related symbolic meaning is defined as the intangible quality products have that links to and affects people’s psychological well-being (their mental fortitude, their sense of self-worth and belonging, their sense of purposefulness, etc.) and, in turn, affects their subjective well-being (their perception of how happy they are). It originates in representations of personally significant things (memories, people, places, ideals, achievements, goals, etc.) and in meaningful interactions (rituals, mediation of relationships, etc.) and subsequent representation of these interactions, linked to determinants of well-being – purpose in life, personal growth, self-acceptance, positive relations with others, autonomy, and environmental mastery. This meaning implies a process of cultivation and can evoke different types of emotions, often complex.
Positive design aims to develop design that is pleasurable, virtuous, and personally significant, having explored different strategic paths to achieve that: design as a direct source of well-being, as a facilitator of experiences and activities that are well-being conducive, as an indirect cue or nudge towards well-being, focusing on experiences, focusing on emotions, etc. Our approach contributes to that goal by investigating design as a symbol of well-being. Consequently, design with well-being related symbolic meaning – i.e., design with the deliberate intention to represent, anticipate, preserve, or revisit significant aspects of life linked to people’s well-being – can be a strategy to develop products that support people’s well-being, and that are potentially relevant and emotionally durable, have continued value and appreciation, and that stimulate deeper and longer-lasting person-product relationships.
This research fulfils three goals: to understand, to translate, and to communicate. The first goal refers to understanding the phenomenon of well-being related symbolic meaning in material possessions. We addressed it by investigating ‘lived’ products with personal meaning, in people’s homes. The second goal concerns the translation of knowledge about well-being related symbolic meaning into actionable design directions that are understandable and usable by designers. The third goal is about communicating the design directions in an engaging and usable format for designers.
Chapter 1 explains the concept of well-being and describes its link to products and to design, specifying our approach within the field of positive design. Chapter 2 reviews literature to present types of product meaning, and to characterize and differentiate symbolic product meaning. Chapter 3 reports a study that looks at determinants of psychological well-being in cherished material possessions, resulting in six well-being related symbolic meanings that can be designed for. Chapter 4 reports a study that resulted in sixteen design directions from the six symbolic meanings, as a way to design for well-being. Chapter 5 explores a means to communicate the developed design directions through an iterative process and proposes a toolkit for designers (i.e., the SIM toolkit). Chapter 6 reports the application of the developed toolkit in workshops and in industry cases, and results in insights about its use, format, and impact. Chapter 7 summarizes the main insights and respective implications of this thesis, discusses them, and presents limitations and possible future avenues for research.","","en","doctoral thesis","","","","","","Mafalda Casais was born on November 25th, 1984, in Lisbon, Portugal. She did a three-year illustration course, followed by an undergraduate degree in Design, from the Lisbon School of Architecture. During the undergraduate degree, Mafalda spent one year in Italy, at the University of Genoa, within the European exchange programme Erasmus. She then was awarded a Master degree in Product Design by the Lisbon School of Architecture, University of Lisbon. In her Master thesis, Mafalda investigated the green kitchen as a central part of an emerging type of consumer. This work laid the foundation for what it meant to conduct user research and design things that are relevant for longer – a seed of meaningful relations between products and people, which then inspired the PhD project. 
In 2011, Mafalda received a doctoral grant awarded by the Foundation for Science and Technology (FCT), a public agency part of the Portuguese Ministry for Science, Technology and Higher Education, to conduct a PhD project. This project was developed at the Delft Institute of Positive Design, at the Faculty of Industrial Design Engineering, TU Delft. The research focused on well-being related symbolic meaning in products.","","","","","Design Aesthetics","","",""
"uuid:9d197709-c5a2-4de4-86c0-f350c41b1660","http://resolver.tudelft.nl/uuid:9d197709-c5a2-4de4-86c0-f350c41b1660","MEMS Solutions For More Than Illumination","Li, X. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Sarro, Pasqualina M (promotor); Delft University of Technology (degree granting institution)","2020","The development of solid state lighting (SSL) has involved improving the luminaire efficiency and integrating non-luminous functions, also known as More than Illumination. The objective of this thesis is to investigate different solutions for the concept of More than Illumination for SSL applications, to combine general lighting with additional functions. Several distinct topics were explored to serve this purpose, in different forms of SSL packaging. At a higher packaging level, we studied an integrated tunable optical system with existing lighting sources, to achieve desired beam shaping, which can be helpful for dynamic lighting applications. At a lower level, a sensor that can be integrated with the lighting system was also explored. Furthermore, we developed a new form of Fresnel lens which can be mounted on the LED chip.
The thesis starts with exploring the tunable optics for lighting applications in chapter 2. A tunable optical system with an electromagnetic actuator fabricated on a flexible substrate was demonstrated. The electromagnetic actuator consists of a copper coil and polyimide beams, with a ring-shaped permanent magnet as the magnetic flux source. When a DC voltage is applied, the Lorentz force generated on the coil drives the polyimide substrate along with the mounted optics, which in turn controls the beam shape. The working principle was simulated in Tracepro to estimate the light distribution change for the light source. The simulation was validated by the following tests on the optical system, which demonstrated that the outgoing angle of the light changed accordingly with the applied voltage.
Apart from the general lighting applications, lighting is also expected to play an essential role in the sensory network for IoT applications. In chapter 3, a sensor for particulate matter (PM) detection which can be integrated into the lighting products was demonstrated. The sensor chip is made by microfabrication methods. It works by capturing the scattered light triggered by particles flowing through a microchamber. The microchamber consists of two submounts with cavities, assembled with a laser diode and a photodiode separately. The chip is also accompanied by an external commercial air flow generator to help the air flow through the microchamber. The principle of this work is validated by exposing the sensor to cigarette smoke, one of the most common sources of PM2.5. The sensor output is higher in the presence of cigarette smoke than in clean air.
While the previous chapters were focusing on the external applications, in chapter 4, a micro size optical component was developed, which can be mounted on the LED chip for optical beam shaping. The proposed optics is a micro Fresnel lens, fabricated by encapsulating lithographically defined vertically aligned carbon nanotube (CNT) bundles inside a polydimethylsiloxane (PDMS) layer. The composite material combines the excellent optical absorption properties of CNT with the transparency and stretchability of PDMS. By stretching the elastomeric composite in the radial direction, the focal length of the Fresnel lens is tuned accordingly. A good focusing response was demonstrated and a broad focus range was achieved by stretching the lens radially.
In chapter 5, we continued to explore the properties of the CNT/PDMS composite. With the same format of vertically aligned CNT infiltrated in PDMS as in the previous chapter, the electrical properties of the composite are investigated for potential application in flexible interposers. It is based on PDMS as the support material, while the embedded vertically aligned CNT bundles serve as conducting vias. The composite combines the flexibility of the elastic material and the conductivity of the CNTs. The resistivity of the composite is much smaller than the resistivity of PDMS, yet its electrical performance falls short of expectations, so further research is required to improve the electrical properties, including coating the CNT with conductive materials. Furthermore, other material properties, such as the mechanical properties, also need to be investigated in future work.","tunable optic; PM 2.5 detector; PDMS; CNT; Fresnel optics","en","doctoral thesis","","978-94-6421-064-4","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:538558fb-ac9a-414d-8a59-4b523d8ff74c","http://resolver.tudelft.nl/uuid:538558fb-ac9a-414d-8a59-4b523d8ff74c","Adaptive prognostics for remaining useful life of composite structures","Eleftheroglou, N. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Zarouchas, D. (copromotor); Delft University of Technology (degree granting institution)","2020","Prognostics is an emerging field of research that enables the real-time health assessment of an engineering system and the prediction of its future state based on up-to-date information. This field integrates various scientific disciplines including physics/mechanics, computational statistics and probabilistic modeling, machine learning and sensing technologies. The main goal is the prediction of the remaining useful life (RUL) of the engineering system while it is in-service. Recently, efforts have been made to study and predict the future status of engineering systems that exhibit a complex degradation process. The availability of condition monitoring (CM) data, the constantly increasing computational power, the development of machine learning algorithms and the advancements in the physics/mechanics of several engineering systems form a solid foundation to achieve that goal. Among the engineering systems that exhibit a complex degradation process are composite structures. Composite structures have made a significant mark in numerous industries, driven by advantages in structural efficiency, performance, versatility and cost. It is well known that the damage accumulation process of composite structures depends on several parameters, e.g. the type of material and the lay-up, the loading frequency and sequence, and the manufacturing process. Additionally, the multi-phase nature of composites and the variation of defects result in a stochastic activation of the different failure mechanisms.
So, one expects that the long-term behaviour of two comparable composite structures, subjected to comparable environmental and loading conditions, will differ, which makes the fatigue damage analysis, and consequently the prediction of RUL, very complex tasks. This difference is especially profound when unexpected phenomena occur. The goal of this research is to develop a new RUL prediction model that is able to learn from unexpected phenomena and adapt its parameters accordingly. The model is composed of three elements: 1) sensing techniques to acquire online CM data, 2) a machine learning algorithm for developing a damage modelling strategy, and 3) stochastic modelling for uncertainty quantification. Based on the literature review, it was concluded that a frequentist data-driven model has the potential to fulfil the research goal and that an extension of the Non-Homogeneous Hidden Semi-Markov Model (NHHSMM) is a good candidate. The first step was to design the structure of the RUL prediction model and define its elements. The next step was to develop the extension of the NHHSMM and verify its correctness and robustness utilizing simulated Monte Carlo (MC) data. A series of assumptions was necessary in order to frame the applicability of the model towards composite structures and to achieve an efficient prediction process.","structural health monitoring; prognostics; remaining useful life; outlier analysis; adaptive prognostics; data-driven model; condition monitoring","en","doctoral thesis","","978-94-028-2151-2","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:d7e85d9b-9ef2-4d16-b9d7-2882a6177174","http://resolver.tudelft.nl/uuid:d7e85d9b-9ef2-4d16-b9d7-2882a6177174","From Silicon toward Silicon Carbide Smart Integrated Sensors","Middelburg, L.M. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Delft University of Technology (degree granting institution)","2020","","harsh environment; smart sensors; system integration; wide-bandgap semiconductors; silicon carbide; MEMS; pressure sensors; particle sensors; suspended membranes; micromachining; SiC CMOS; monolithic integration","en","doctoral thesis","","","","","","","","2021-09-29","","","Electronic Components, Technology and Materials","","",""
"uuid:0a2ba212-f6bf-4c64-8f3d-b707f1e44953","http://resolver.tudelft.nl/uuid:0a2ba212-f6bf-4c64-8f3d-b707f1e44953","Control for Programmable Superconducting Quantum Systems","Rol, M.A. (TU Delft QCD/DiCarlo Lab)","DiCarlo, L. (promotor); Vandersypen, L.M.K. (promotor); Delft University of Technology (degree granting institution)","2020","The discovery of quantum mechanics in the 20th century forms the basis of many of the technologies that define our lives today. The ability to engineer and manipulate individual quantum systems -- even create artificial atoms -- promises a similar revolutionary leap in technology. A quantum technology of particular interest is quantum computing, which has the potential to solve problems that are intractable for classical computers, opening up new domains of computation. Meanwhile, an attractive approach to creating engineered quantum systems is circuit quantum electrodynamics (cQED). Where research initially focused on understanding the physics of cQED devices, focus has shifted to building systems capable of performing useful computations. However, this remains extremely challenging, in part due to the inherently fragile nature of the individual quantum bits, but also due to difficulties in controlling and scaling up these systems. This thesis focuses on the control aspects of building an extensible full-stack quantum computer based on superconducting transmon qubits. We define the demonstration of quantum fault-tolerance as our target application to give focus to our efforts. The QuSurf architecture for a full-stack quantum computer presented in this thesis is designed with this application in mind. We provide a detailed study of the error sources present in this system and give an overview of the relevant characterization techniques. In the second part of this thesis, we address several key challenges in the control of a quantum computer. 
To realize high-fidelity coherence-limited gates, we present a novel tuneup protocol that achieves a tenfold speedup over the state of the art. This is realized by eliminating the need for qubit initialization. We demonstrate this protocol by calibrating single-qubit gates to a coherence-limited Clifford fidelity of 99.9% in one minute. Performing repeated parity checks, as is required for quantum error correction, requires reusing qubits quickly after they have been measured. By introducing a numerically optimized depletion pulse, we are able to speed up the depletion of measurement photons in a readout resonator without having to rely on specific symmetry conditions. Using this technique, we speed up photon depletion by more than six inverse resonator linewidths, reducing the error rate in an emulated ancilla parity check by a factor of 75. Flux-pulsing-based two-qubit gates are the fastest two-qubit gates. However, they are also very technically demanding. The key challenge in performing these gates is addressing the distortions that control signals experience as they traverse various electrical components. We have developed Cryoscope (short for cryogenic oscilloscope) to characterize and correct these distortions. Cryoscope is an in-situ technique that uses the qubit to sample control pulses of arbitrary shape. Even when correcting distortions to within $\sim 0.1\%$, two-qubit gates are history-dependent due to the long timescale on which some of these distortions act. We have invented Net-Zero, a new type of flux-pulsing-based two-qubit gate, to address this problem. It makes use of a symmetry condition of the transmon to have a net-zero integral, making the gate resilient to long-timescale distortions. The gate suppresses leakage out of the computational subspace to 0.1% by making use of leakage interference, and has a built-in echo effect that enhances the coherence of the gate, achieving a two-qubit gate fidelity of 99.1%.
Custom software is required to perform the physics experiments needed to build and operate a quantum computer. PycQED is an open-source software framework we have developed for this purpose. We discuss the design choices and concepts of PycQED before turning our focus to characterization and calibration. Here we introduce dependency graphs as a useful abstraction and system emulation as an essential development tool for automating the characterization and calibration process. We conclude the thesis by reflecting on the limitations of our architecture and providing an outlook on the grand challenges of building a useful kilo-qubit-sized quantum computer. We define these challenges as The Application Problem, The Fabrication Problem, and The Calibration Problem.","","en","doctoral thesis","","978-90-8593-451-6","","","","","","","","","QCD/DiCarlo Lab","","",""
"uuid:f4b5a89a-4466-40aa-b28a-ee26e5906696","http://resolver.tudelft.nl/uuid:f4b5a89a-4466-40aa-b28a-ee26e5906696","Doping on Demand: Permanent electrochemical doping of colloidal quantum dots and organic semiconductors","Gudjónsdóttir, S. (TU Delft ChemE/Opto-electronic Materials)","Houtepen, A.J. (promotor); Savenije, T.J. (promotor); Delft University of Technology (degree granting institution)","2020","Control over the charge carrier density of semiconductor materials is essential for various electronic devices. Unfortunately, common electronic doping methods have not always been successful for new generations of semiconductors, such as organic semiconductors and colloidal quantum dots. Therefore, a new doping method that offers great control over the charge carrier density is needed. Electrochemistry is a powerful way of doping porous semiconductor films, where the charge carrier density can be controlled by a button on a potentiostat. Unfortunately, when the semiconductor film is disconnected from the potentiostat, the injected charges leave the film. The work performed in this thesis is aimed at understanding electrochemical doping and this instability, with the final goal of producing stable electrochemically doped semiconductor films at room temperature for use in devices.","","en","doctoral thesis","","978-94-6332-667-4","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:2411b615-c28b-459a-8ddf-1c13c670a0f7","http://resolver.tudelft.nl/uuid:2411b615-c28b-459a-8ddf-1c13c670a0f7","Centrifuge modelling of the behaviour of buried pipelines subjected to submarine landslides","Zhang, W. (TU Delft Geo-engineering)","Jommi, C. (promotor); Askarinejad, A. (copromotor); Delft University of Technology (degree granting institution)","2020","The assessment of the potentially destructive impacts of subaqueous landslides on offshore pipelines is required when the pipeline route passes through zones with a risk of mass movements. Therefore, quantifying and evaluating the ultimate load/pressure acting on the pipeline is one of the key factors in the geotechnical safety design of the pipeline. One of the triggers of subaqueous soil mass movements is monotonic loading, which induces relative displacement between a soil layer and a pipe under both drained and (partially) undrained conditions. Two approaches, based on geotechnical and fluid dynamics perspectives, have been proposed for estimating the ultimate load/pressure for different stages of a submarine landslide. Traditionally, the former focuses on the analysis of pipelines installed under a flat seabed experiencing relative movements to the surrounding soil, whereas the latter focuses on the behaviour of pipelines laid on the surface of the seabed and subjected to debris flows. However, offshore pipelines are often buried under the seabed, which is not always flat and in some cases has a modest inclination. This engineering condition normally differs from the simplifying assumptions and boundary conditions (such as seabed inclination and soil strength) commonly imposed on the geotechnical and fluid dynamics approaches. Accordingly, a better understanding of the soil-pipeline interaction when the pipelines are buried in subaqueous slopes is essential for evaluating the ultimate load/pressure that would be caused by slope failures.
This thesis presents a research effort investigating the soil-pipeline interaction during subaqueous slope failures using advanced physical modelling. In this research, the experiments can be divided into two main groups according to the soil drainage conditions. The first group of tests was carried out in the drained condition by using dry sand as the soil material for the slopes. The pipe was buried at 5 different locations inside the slopes to study the effects of the pipe burial position and the pipe embedment ratio on the ultimate pressure during slope instability. Particle image velocimetry analysis was conducted to study the pipe movement and slope failure mechanisms. The results of these tests reveal that the slope angle and the pipe distance to the slope crest play significant roles in the ultimate loads acting on the pipe.","Landslides; Static liquefaction; Soil-pipeline interaction; Centrifuge modelling; Image analysis; Scaling laws","en","doctoral thesis","","978-94-6421-049-1","","","","","","","","","Geo-engineering","","",""
"uuid:2f0cdf01-3dd7-4085-8e10-3eee84936453","http://resolver.tudelft.nl/uuid:2f0cdf01-3dd7-4085-8e10-3eee84936453","Let It Go: Designing the Divestment of Mobile Phones in a Circular Economy from a User Perspective","Poppelaars, F.A. (TU Delft Circular Product Design)","Bakker, C.A. (promotor); van Engelen, J.M.L. (promotor); Delft University of Technology (degree granting institution)","2020","In a circular economy, the collection of devices is essential to enable reuse, refurbishment, remanufacturing and/or recycling at a system level. Yet, even though collection programmes are in place, users often store their mobile phones after use. This dissertation provides a better understanding of closing the loop from a user perspective in both access-based consumption and ownership-based consumption. It studies how to potentially enhance collection rates. The research first results in a conceptual model of user behaviour regarding the return of mobile phones in these two consumption modes. As the return of phones is contractual in access-based consumption, influencing factors and design interventions were identified to improve user acceptance and support practitioners in the development of access services. To increase the collection rates in ownership-based consumption (i.e., where the return is voluntary), the lack of attention to the last phase of the consumption cycle – called divestment – is addressed. This dissertation explores the new research field of design for divestment. It defines the concept of divestment in design, structures this phase in six stages, offers design insights on smartphone divestment experiences, and proposes design for divestment principles.","","en","doctoral thesis","","978-94-6384-159-7","","","","","","","","","Circular Product Design","","",""
"uuid:25b01015-559c-418b-b52e-0c92a6b84531","http://resolver.tudelft.nl/uuid:25b01015-559c-418b-b52e-0c92a6b84531","Information Diffusion on Temporal Networks","Zhan, X. (TU Delft Multimedia Computing)","Hanjalic, A. (promotor); Wang, H. (promotor); Delft University of Technology (degree granting institution)","2020","As an important carrier of information diffusion, social media has experienced a huge increase in the number of users and also has a big effect on the way information diffuses. For example, Facebook and Youtube had attracted more than 1.6 and 1.3 billion users, respectively, as of 2020. The use of the internet and online social networks has largely reduced the cost of information propagation and sharing. Besides users and content-based features, social network properties are critical factors that may affect information diffusion. In this thesis, we focus on the influence of temporal network properties on information spreading. As researchers have shown that similar users tend to spread similar content, we further study how to design network representation learning algorithms that better capture node similarity in a network. The first part of the thesis mainly concerns how the local properties of nodes and links affect information spreading on temporal networks. Chapter 2 studies which links are likely to appear in an information diffusion trajectory. We simulate the information diffusion process by a susceptible-infected (SI) model on various empirical temporal networks. An information diffusion backbone is proposed to characterize the probability of a link appearing in the diffusion trajectory. Due to the high complexity of constructing the diffusion backbone, we further propose a time-scaled weight to identify which links would appear in the diffusion backbone. Compared to centrality metrics derived from static networks, the time-scaled weight shows better identification performance.
The conclusions in this chapter may inform how to maximize information diffusion on temporal networks by deliberately choosing links to transmit information. Chapter 3 investigates which links should be temporarily blocked in order to suppress information diffusion on temporal networks. We rank the links by different blocking strategies based on link properties on static and temporal networks, including those derived from the information diffusion backbone. We remove the links with high ranking values based on the blocking strategies for a given time period. We show that four link blocking strategies outperform the others in suppressing information diffusion. The results show that the effectiveness of the metrics in suppressing information diffusion largely depends on the network properties. In chapter 4, we study how to identify influential nodes, i.e., nodes that, when serving as the seed, can spread information widely, on temporal networks. The information diffusion process is simulated by a susceptible-infected-recovered (SIR) model on various empirical temporal networks. We propose a temporal information gathering process (Tig-process), which iteratively gathers neighboring information through temporal paths, to identify influential nodes. Compared to the benchmark metrics, the Tig-process can better identify influential nodes across different temporal networks at a small cost. The experimental designs and results in these three chapters further inspire us to study the local surrounding properties of nodes and links for other spreading processes as well as other types of networks. In the second part of the thesis, we work on designing network embedding algorithms that embed nodes into a low-dimensional space, such that similar nodes are close in the embedding space. Chapter 5 designs a degree-biased random walk, i.e., DiaRW, to sample walks from a static network. If the source node of a random walk has a higher degree, the walk length tends to be longer.
Also, if a random walker walks to a low-degree node, the probability of backtracking to the former high-degree node is higher. The node pairs generated from the walks are further used as input for a learning model, i.e., the Skip-Gram model. We show that DiaRW performs better than baseline embedding algorithms on tasks such as link prediction and node classification. Chapter 6 proposes SI-spreading-based network embedding algorithms. We apply the SI model on static and temporal networks to sample trajectories. The node pairs generated from the trajectories are also used as input for the Skip-Gram model. We show that SI-spreading-based network embedding algorithms perform better than random-walk-based network embedding algorithms on the missing link prediction task. Both chapters consider node heterogeneity in designing embedding algorithms. The last chapter summarizes the insights of the thesis based on the research questions and outlines possible future directions related to our research.","Temporal Networks; Information Diffusion; Network Representation Learning; Link Prediction; Node Classification","en","doctoral thesis","","978-94-6416-144-1","","","","","","","","","Multimedia Computing","","",""
"uuid:04152eb5-52f3-49e9-acc9-2a88ed2ac98c","http://resolver.tudelft.nl/uuid:04152eb5-52f3-49e9-acc9-2a88ed2ac98c","Entropy Foundations for Stabilized Finite Element Isogeometric Methods: Energy Dissipation, Variational Multiscale Analysis, Variation Entropy, Discontinuity Capturing and Free Surface Flows","ten Eikelder, M.F.P. (TU Delft Ship Hydromechanics and Structures)","Huijsmans, R.H.M. (promotor); Akkerman, I. (copromotor); Delft University of Technology (degree granting institution)","2020","Numerical procedures and simulation techniques in science and engineering have progressed significantly during the last decades. The finite element method plays an important role in this development and has gained popularity in many fields including fluid mechanics. A recent finite element solution strategy is isogeometric analysis. Isogeometric analysis replaces the usual finite element basis functions by higher-order splines. This leads to significantly more accurate results and equips the numerical method with several desirable properties. By naively applying the finite element isogeometric method one may obtain solutions that are seriously perturbed and are as such not physically relevant. The reason is often linked to the stability of the method; a finite element method is not a priori stable. The overall objective of this thesis is centered around this point. The aim is to develop numerical techniques that inherit the stability properties of the underlying physical system. In particular we are interested in finite element techniques that can be applied to free-surface flow simulations. Stability issues in free-surface flow computations may already appear in single-fluid flow problems. Other causes of instabilities are steep layers or discontinuities and instabilities arising from the numerical treatment of the interface that separates the fluids. This thesis addresses each of these topics. 
Several stabilized finite element methods","","en","doctoral thesis","","978-94-6384-164-1","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:4f7b1f79-2d77-4a14-aa65-9303e44e7170","http://resolver.tudelft.nl/uuid:4f7b1f79-2d77-4a14-aa65-9303e44e7170","Urban Renewal Decision-Making in China: Stakeholders, Process, and System Improvement","Zhuang, T. (TU Delft Housing Quality and Process Innovation)","Visscher, H.J. (promotor); Elsinga, M.G. (promotor); Qian, QK (copromotor); Delft University of Technology (degree granting institution)","2020","To meet the growing demand for urban housing, urban renewal has played a significant role and has greatly promoted urban prosperity in China. However, at the same time, many problems have occurred in large-scale urban renewal projects. To avoid the unintended consequences that have occurred in urban renewal, how these decisions were made is one key focus. To better achieve the goal of sustainability, this research aims to deepen the understanding of urban renewal decision-making in China and to recommend strategies to improve the system. Based on participatory decision-making theory and the characteristics of urban renewal, a conceptual framework is built to achieve the aim of this research. According to the research framework, this research first conducted an empirical study of stakeholders’ expectations in urban renewal projects. Eighteen factors are identified and compared among the main stakeholder groups. Secondly, this research explores the stakeholders and their participation in the decision-making of urban renewal in China. Stakeholder Analysis and Social Network Analysis are combined as the research methodology. In the third step, transaction cost theory is adopted to improve the understanding of the urban renewal decision-making process in China. Based on the results of the above three steps, the last step of this research systematically determines a set of strategies for improving urban renewal decision-making in China by adopting the Analytic Network Process.
The findings of this research add new knowledge on the exploration of the decision-making of public projects and can be directly adopted by the authority in practice.","Urban Renewal; Decision-Making; China","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-312-0","","","","A+BE I Architecture and the Built Environment No 16 (2020)","","","","","Housing Quality and Process Innovation","","",""
"uuid:546b6c38-f280-4d97-8ad7-13fd609acdf6","http://resolver.tudelft.nl/uuid:546b6c38-f280-4d97-8ad7-13fd609acdf6","Circular Business Models for Consumer Markets","Tunn, V.S.C. (TU Delft Circular Product Design)","Schoormans, J.P.L. (promotor); Bocken, N.M.P. (promotor); van den Hende, E.A. (copromotor); Delft University of Technology (degree granting institution)","2020","Over the last decade, the circular economy has gained traction as a concept to transform society and the economy into more sustainable systems. In this context, research into circular business models arose to implement circular economy strategies at the company level. In this thesis, a consumer perspective on circular business models is taken. The research explored how business models can help achieve a circular economy and lead to sustainable consumption.","","en","doctoral thesis","","","","","","All research data supporting the findings described in this thesis are available in the 4TU.Centre for Research Data at: http://doi.org/10.4121/uuid:4be81795-bb09-44bab900-27036b643b19","","","","","Circular Product Design","","",""
"uuid:f0be1724-3041-4214-bb7e-7be1c473b17f","http://resolver.tudelft.nl/uuid:f0be1724-3041-4214-bb7e-7be1c473b17f","Investigation of MPM inaccuracies, contact simulation and robust implementation for geotechnical problems","Gonzalez Acosta, J.L. (TU Delft Geo-engineering)","Hicks, M.A. (promotor); Vardon, P.J. (promotor); Delft University of Technology (degree granting institution)","2020","The material point method (MPM) is a numerical technique which has been demonstrated to be suitable for simulating numerous mechanical problems, particularly large deformation problems, while conserving mass, momentum and energy. MPM discretises material into points and solves the governing equations on a background mesh which discretises the domain space. The points are able to move through the mesh during the simulation. MPM is an improvement over other well-established numerical techniques, such as the finite element method (FEM), as it is able to simulate large deformations and therefore can simulate mechanical problems from initiation to the final outcome. It has the potential to become the preferred numerical tool to analyse many engineering problems. Nonetheless, it has been demonstrated throughout this thesis that the performance of MPM has often been far from the levels of accuracy desired in order to be considered a reliable technique for providing quantitative analyses for engineering problems. In this thesis, the implicit solution version of MPM has been taken as the starting point to investigate and solve its current main drawbacks, i.e. (i) the lack of accuracy when computing stresses (stress oscillations), and (ii) interaction between bodies, e.g. soil and structures. The stress oscillation problem is well-known in the MPM community, and is attributed mostly to material points crossing background cell boundaries, termed the cell-crossing problem. It has been shown in this thesis that cell-crossing is indeed one of the primary sources of oscillation.
However, there are also other aspects contributing to the observed inaccuracies. In the literature, cell-crossing has been addressed by creating a particle domain, e.g. in the generalised interpolated material point (GIMP) method. It has been shown in this thesis that major problems also include (i) the use of linear shape function (SF) gradients to calculate (material point) strains and (ii) non-Gauss numerical quadrature to integrate material stiffness. The integration is made worse when using GIMP. In order to reduce the inaccuracies caused by integration, a double mapping (DM) technique has been developed, which reduces the errors when integrating nodal stiffnesses. This is shown to also work well with GIMP (DM-G method). Additionally, DM has been combined with a Lagrangian interpolation technique, which uses a larger solution domain (through the combination of background cells to form patches) to enhance the stresses computed at the material points (DM-C or DM-GC methods). The developed methods have been able to significantly improve the accuracy and stability of the simulated problems. This improvement will allow more robust use of more advanced constitutive models. The interaction of bodies is of benefit in large deformation simulations, although MPM can roughly simulate contact without special treatment. An MPM contact algorithm was initially proposed by other researchers for explicit time integration schemes, but no method was available for the implicit time integration scheme. An implicit contact scheme has been developed based on the original (explicit) contact formulation in order to calculate the change of nodal velocity during the Newton–Raphson iterative procedure. The results obtained with this contact methodology are shown to be as accurate as those computed using the explicit scheme, although generally with a larger time step.
Additionally, it has been observed that, in most of the cases, implicit contact simulations are analysed faster than explicit simulations. However, the contact loads computed with this technique and the internal forces developed are inconsistent (i.e. not equal), reducing energy conservation; this remains an issue to be solved. An analysis of the problem is presented as a first step towards a solution. One challenge is that any method using consistent contact and internal forces is sensitive to stress oscillations, which can lead to highly unrealistic contact forces. Using the improvements developed in this thesis (i.e. DM-GC combined with the contact algorithm), soil-structure interaction problems and landslides have been successfully simulated. Incorporating the contact algorithm into the model has allowed the simulation of complex failure mechanism development during slope failure. The impact on neighbouring structures was realistic, and captured expected behaviours such as the sliding and rotation of the rigid elements. It has been demonstrated that (i) the accuracy of MPM has been improved via the combination of several (existing and novel) techniques, (ii) techniques developed for the explicit scheme (or other numerical methods) can be converted and introduced in implicit MPM, maintaining as much as possible the consistency of the formulation, and (iii) by improving diverse aspects of the formulation, more realistic simulations can be obtained. The work presented in this thesis makes several steps contributing to the improvement of MPM, which will lead towards it being used in engineering practice.","Double mapping; Implicit contact; Landslide; Large displacements; Material point method; Soil-structure interaction; Stress oscillations","en","doctoral thesis","","978-94-6366-310-6","","","","","","","","","Geo-engineering","","",""
"uuid:feb1b467-f601-489d-87cf-a99e4cbbb055","http://resolver.tudelft.nl/uuid:feb1b467-f601-489d-87cf-a99e4cbbb055","Adaptive data-driven reduced-order modelling techniques for nuclear reactor analysis","Alsayyari, F.S. (TU Delft RST/Reactor Physics and Nuclear Materials)","Kloosterman, J.L. (promotor); Lathouwers, D. (promotor); Perko, Z. (copromotor); Delft University of Technology (degree granting institution)","2020","Large-scale complex systems require high-fidelity models to capture the dynamics of the system accurately. For example, models of nuclear reactors capture multiphysics interactions (e.g., radiation transport, thermodynamics, heat transfer, and fluid mechanics) occurring at various scales of time (prompt neutrons to burn-up calculations) and space (cell and core calculations). The complexity of these models, however, renders their use intractable for applications relying on repeated evaluations, such as control, optimization, uncertainty quantification, and sensitivity studies.","Proper Orthogonal Decomposition; Locally adaptive sparse grids; Greedy; Nonintrusive; Machine learning; Uncertainty quantification; Sensitivity analysis; Molten Salt Reactor; Large-scale systems","en","doctoral thesis","","978-94-6421-022-4","","","","","","","","","RST/Reactor Physics and Nuclear Materials","","",""
"uuid:952c6c51-911b-4bab-a844-af23210dcfcf","http://resolver.tudelft.nl/uuid:952c6c51-911b-4bab-a844-af23210dcfcf","Opto-electrical modelling of CIGS solar cells","Rezaei, N. (TU Delft Photovoltaic Materials and Devices)","Isabella, O. (promotor); Zeman, M. (promotor); Delft University of Technology (degree granting institution)","2020","One of the key approaches to slow down and eventually prevent dramatic climate change is direct electricity generation from sunlight. Thin-film copper indium gallium (di)selenide (CIGS) is an excellent candidate for highly efficient and stable solar cells. A tuneable and direct bandgap as well as a high absorption coefficient allow for CIGS solar cells to be nearly 100 times thinner than their crystalline silicon (c-Si) counterparts; a feature suitable for flexible photovoltaic (PV) applications. In this thesis, light management for sub-micron CIGS solar cells is studied with the help of opto-electrical simulations. In Chapter 2, the theoretical optical limits for CIGS solar cells as well as the various available opto-electrical modelling platforms are briefly discussed. We study the Green absorption benchmark as a function of thickness and bandgap. Our modelling tools of choice, namely Ansys HFSS for the optical simulations, and Sentaurus TCAD for the electrical simulations, are introduced in more detail. The interface between CIGS and the molybdenum (Mo) back contact is subject to a considerable amount of optical and electrical loss. This issue is investigated in Chapter 3, where we firstly discuss the plasmonic nature of the optical losses. Later, we introduce a double-layer dielectric spacer consisting of MgF2 and Al2O3 with periodic point contacts to quench the Mo-associated losses. We optimize the spacer thickness and the point contact area coverage for maximal photo-current density (Jph) in a CIGS solar cell with a 750-nm thick absorber. 
The front reflection losses, contributing roughly 10% of the optical losses, are addressed in Chapter 4. We show that an MgF2-based double-layer porous-on-compact anti-reflection coating (ARC) allows for a gradual refractive index change from air to CIGS and, therefore, according to the Rayleigh effect, leads to a wideband antireflection effect. This is done by means of Bruggeman's effective medium approximation and sequential nonlinear programming (SNLP) for the optimization process. Our models suggest that the proposed ARC surpasses the conventional single-layer ARC in resiliency against the angle of incidence. A hybrid light management scheme, employing both the suggested ARC at the front side and the MgF2 / Al2O3 dielectric spacer at the rear side, proves to increase Jph of a 750-nm thick CIGS solar cell beyond that of a 1600-nm thick absorber (without light management). In the rest of the thesis, we take an approach beyond the state-of-the-art architecture of CIGS solar cells and, for the first time, introduce the inter-digitated back-contacted (IBC) structure for CIGS technology. This structure, which no longer suffers from parasitic absorption (associated with the buffer and window layers), is optically studied in Chapter 5. We compare the results with a reference front- and back-contacted (FBC) solar cell with the same absorber volume, and take the Green limit as the benchmark. Two ARC schemes are studied: (i) high-aspect-ratio features at the front side of the absorber and (ii) the as-grown CIGS morphology with optimized MgF2 / Al2O3 layers. Once the optical potential of the IBC CIGS solar cells is realized, we continue our studies with an opto-electrical analysis in the TCAD Sentaurus environment (Chapter 6). We not only optimize the geometry of the electron- and hole-contacts, the gap between them and the contacts’ period, but also study the CIGS bandgap grading and its defect density. 
The electric field map around the gap region is used to highlight the importance of electrical passivation in achieving a high performance. Our models (calibrated with real FBC solar cells fabricated at Solliance at the High Tech Campus in Eindhoven) show the high potential of IBC CIGS solar cells for high-efficiency PV applications.","","en","doctoral thesis","","978-94-6384-161-0","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:21829001-1f0f-40e8-a3cd-31ec07d30071","http://resolver.tudelft.nl/uuid:21829001-1f0f-40e8-a3cd-31ec07d30071","Response of monopiles subjected to combined vertical and lateral loads, lateral cyclic load, and scour erosion in sand","Li, Q. (TU Delft Geo-engineering)","Gavin, Kenneth (promotor); Askarinejad, A. (copromotor); Delft University of Technology (degree granting institution)","2020","Although wind energy capacity has increased significantly in the last few decades, the installed capacity of offshore wind turbines still lags far behind that of onshore wind turbines due to installation and foundation costs. The aim of this research project has been to clarify the influence of combined vertical and lateral loads, lateral cyclic load, and scour erosion on monopile foundations, in order to achieve more realistic and cost-beneficial solutions for offshore wind turbine foundations and thereby increase the competitiveness of offshore wind when compared with other energy sources. Monopiles are the most popular foundation system today for offshore wind turbines installed in shallow to medium water depths. These relatively light structures (low vertical load) need to resist substantial lateral and moment loads. There has been a dearth of studies investigating the influence of vertical load on the lateral response of these rigid monopiles, and the few available have drawn contradictory conclusions. In addition, the lateral and moment loading exerted on monopiles due to wind, waves, and water currents is cyclic in nature. This type of loading can lead to the accumulation of lateral displacement/rotation and possible degradation of soil resistance over time. This evolution of pile head displacement and the change in soil stiffness with increasing cycles of load are poorly understood. 
Cylindrical structures, like monopiles, founded in offshore regions are commonly subjected to scour erosion caused by flowing water and currents, which induces loss of soil support around the pile, reducing the lateral load capacity and causing increased pile displacement. As a result, the system dynamics of the structure might be adversely affected. The results of numerical models suggest that the shape of the scour hole affects the loss of pile lateral capacity; however, there is a shortage of experimental test data that measure this effect. More than 60 centrifuge tests, categorized into three groups, are presented in this thesis; they consider the influence of combined vertical and lateral loads, lateral cyclic load, and scour erosion on the behaviour of rigid monopiles. The tests have been performed in homogeneous dry Geba sand in order to mimic simplified drained offshore soil conditions.","Centrifuge modelling; monopile; combined vertical and lateral loads; cyclic load directional characteristic & amplitude; scour shape & depth; p-y reaction curves","en","doctoral thesis","","978-94-6384-171-9","","","","","","","","","Geo-engineering","","",""
"uuid:26655b53-2aab-4fa2-943d-943ebd037c5e","http://resolver.tudelft.nl/uuid:26655b53-2aab-4fa2-943d-943ebd037c5e","Just Energy? Designing for Ethical Acceptability in Smart Grids","Milchram, C. (TU Delft Economics of Technology and Innovation)","Kunneke, R.W. (promotor); Hillerbrand, R.C. (promotor); van de Kaa, G. (promotor); Doorn, N. (promotor); Delft University of Technology (degree granting institution)","2020","Smart grids within the transition to sustainable energy systems: Smart grid systems are widely considered crucial in the energy transition, because they allow for greater flexibility in bridging temporal gaps between electricity supply and demand in renewable energy systems. To do so, the systems make use of information and communication technologies to measure and monitor supply and demand in real-time, on the basis of which the use of renewable electricity can be optimized. Despite this important role in future renewable energy systems, the introduction of smart grids comes with serious moral repercussions, for example for data privacy and security, autonomy and control, or distributive justice. This dissertation analyzes the moral implications of smart grid systems, and provides guidance for designers and policymakers on how to address these implications in smart grid technologies and institutions, with the ultimate aim of increasing the systems’ ethical acceptability. Interdisciplinary in nature, the research contributes to value-sensitive design, institutional analysis, and energy justice. It is in line with academic endeavors to enrich energy research with insights from the social sciences and humanities. It thereby adds to a literature that is dominated by technological approaches and presents smart grids as a technical ‘fix’ to make electricity systems more sustainable. The main body of this dissertation consists of four papers that, collectively, address the ethical acceptability of smart grids. 
It combines conceptual insights with empirical investigations. Conceptual investigations draw from ethics of technology, value-sensitive design and theories of justice used in the energy justice literature. Empirical methods involve qualitative content analysis and case study research to understand affected stakeholders’ value conceptions and perceptions of a technology...","Energy systems; Energy transition; Sustainability; Energy policy; Smart grids; Smart energy; Ethical acceptability; Social acceptance; Values; Value-sensitive design; Design for values; Energy justice; Design for justice; Institutional analysis; Institutional Analysis and Development framework; IAD framework; Responsible research and innovation","en","doctoral thesis","","978-94-6384-163-4","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:6d00bdc4-f985-4b3a-9238-38be68cb3f2f","http://resolver.tudelft.nl/uuid:6d00bdc4-f985-4b3a-9238-38be68cb3f2f","Urban Climate at Street Scale: Analysis and Adaptation","Schrijvers, P.J.C. (TU Delft Atmospheric Remote Sensing)","Jonker, H.J.J. (promotor); Kenjeres, S. (promotor); de Roode, S.R. (copromotor); Delft University of Technology (degree granting institution)","2020","It is well known that the urban environment changes the local climate inside the city. This change of the local climate manifests itself mainly through differences in air temperature, where cities remain warmer than the rural environment during the night. This phenomenon is called the Urban Heat Island (UHI) effect, and is defined as the difference in air temperature between the urban and rural environment. The UHI effect is found in many cities of different sizes around the world, and ranges between 1 and 10 °C during the night. The combination of increasing urbanisation, global warming and the impact of increasing temperature on human health makes the urban heat island a topic that is gaining more and more attention. This thesis focuses on the urban micro-climate, which treats individual buildings and their direct surroundings. A numerical modelling approach is used in this thesis, such that the local urban climate can be investigated and perturbed in a systematic way. The developed 2D model, called URBSIM, combines computation of radiative transfer by a Monte-Carlo model, conduction of energy into the urban material and a Computational Fluid Dynamics (CFD) model to compute air flow and air temperature. With this model, it is shown that the main source of energy to the urban heat budget is radiative transfer. During the night, the long-wave trapping effect (defined in this thesis as radiation emitted by one surface and absorbed by another) and absorbed long-wave radiation emitted from the sky are of the same order of magnitude for a building height (H) over street width (W) ratio of H/W = 0.5. 
With increasing building height, long-wave trapping becomes the main source of energy to the urban energy budget. During the daytime, absorbed shortwave radiation is the main source of energy, followed by the long-wave trapping effect. The relative contribution of these radiative components decreases with increasing building height, and the conductive heat flux becomes more important. The large impact of radiation sparked the question of which high-albedo adaptation measure (white surfaces) is best suited to reduce the Urban Heat Island effect. This thesis shows that there is a clear distinction between the atmospheric UHI (air temperature) and pedestrian heat stress. Lower air temperatures can be achieved by using high-albedo materials, whereas thermal comfort at street level can be improved by using low-albedo materials. By using a low-albedo material, less radiation is reflected back inside the canyon, thereby reducing the mean radiant temperature. The lowest pedestrian heat stress is found by using a vertical albedo gradient from a high albedo at the bottom part to a low albedo at the top part of the wall for H/W = 1.0. This study indicated that using a high-albedo material can decrease the UHI effect, but increases pedestrian heat stress, which might not be the desired effect. The developed micro-scale model is also compared to a large-scale urban parametrisation scheme that is used in meso-scale models. In this parametrisation, a 2D geometry is used to compute the fluxes of the 3D environment. Results indicate that radiative transfer is well captured in the parametrisation. Canyon wind speeds and the sensible heat flux showed much larger differences between the two models, which is most likely due to the 2D geometry that is used as a basis for the parametrisation. These parametrisations will very likely have to be adapted to better represent the 3D urban environment. 
The result of this thesis is an advanced numerical model that includes most processes relevant to the urban environment. Despite the fact that the model is limited to 2D cases, the studies presented in this thesis have aided the understanding of the elementary processes that control urban air temperature, the feedback processes and interactions between the different mechanisms in the urban surface energy balance.","","en","doctoral thesis","","978-94-6332-656-8","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:5d46cffe-032b-4084-8c09-88ab7f0f767d","http://resolver.tudelft.nl/uuid:5d46cffe-032b-4084-8c09-88ab7f0f767d","Turbulent shear flow over complex surfaces: An experimental study","Greidanus, A.J. (TU Delft Fluid Mechanics)","Westerweel, J. (promotor); Picken, S.J. (promotor); Delfos, R. (copromotor); Delft University of Technology (degree granting institution)","2020","This thesis describes the investigation of the dynamics of turbulent shear flows over non-smooth surfaces. The research was conducted in two parts, related to the experimental facility used in combination with the applied functional surface. The first part describes the experiments on a turbulent Taylor-Couette flow over a riblet surface. The Taylor-Couette facility proves to be an accurate measurement device to determine the frictional drag of surfaces under turbulent flow conditions. Sawtooth riblets are applied on the inner cylinder surface and have the ability to reduce the total measured drag by 5.3% at Res = 4.7x10^4. Under these conditions, a small shift is observed in the azimuthal velocity profile that indicates the change in the net system rotation, which in turn affects the quantity of drag change, the so-called rotation effect. A model based on the angular momentum balance is proposed and quantifies the drag change due to the rotation effect. Using the total measured drag change, the model accurately predicts the velocity shift in the azimuthal direction. In addition to the steady operational conditions, periodically driven Taylor-Couette flows were investigated by modulating the velocity between the two cylinders as a sinusoidal function, while maintaining RΩ = 0. The main scaling parameters are the shear Reynolds number Res, the oscillation Reynolds number Reosc and the Womersley number Wo, such that the required power to overcome the frictional drag becomes equal to <Pd> = f(Res, Reosc, Wo). 
Large velocity amplitudes A = Reosc/Res > 0.10 induce the growth of frictional drag due to the additional turbulent fluctuations. The required power to overcome the frictional drag is given by <Pd> = <Pd,0>(f(A) + K* Wo^4 A^2). The first term represents the analytical quasi-steady state solution with the accompanying velocity modulation, while the second term involves the magnitude of the boundary acceleration with the associated velocity fluctuation, where K* is the conditional scaling factor between the additional drag and the dimensionless acceleration. Riblets are still able to reduce the frictional drag under small accelerations of the periodically driven boundaries, but the effect declines drastically or even enhances the frictional drag when the boundary acceleration becomes more significant. The second part of this thesis describes the assessment of the applied water tunnel and the interactional behavior between a compliant coating and a turbulent boundary layer flow in the tunnel. In the assessment of the water tunnel, the Clauser chart method proved to be a suitable procedure to quantify the local wall shear stress τw. The interaction between a compliant wall and the near-wall turbulent flow was examined by applying in-house produced visco-elastic coatings with three different stiffnesses. Two typical flow-surface interaction regimes were identified: the one-way coupled regime and the two-way coupled regime. The one-way coupled regime is valid when the turbulent flow initiates moderate coating surface deformation, while the fluid flow remains undisturbed. All three coatings exhibited the one-way coupled interactional behavior, where the surface modulations ζ were smaller than the viscous sublayer thickness δv and scale with the turbulent pressure fluctuations over the coating shear modulus, i.e. ζrms ~ prms/|G*|. 
In this regime, the surface waves have a propagation velocity in the order of cw = 0.70-0.80 Ub, indicating a strong correlation with the high-intensity pressure fluctuations in the turbulent boundary layer away from the wall. The two-way coupled regime has only been observed for the coating with the lowest shear modulus when Ub > 4.5 m/s, indicating significant surface deformation accompanied by additional fluid motions (u',v') and an increase in the local Reynolds stresses. The velocity profile shifts downwards by Δu+ in the log region, which verifies the drag increase due to the significant surface undulations. The visualizations of the surface deformation showed the formation of wave-trains with high amplitudes originating from the initial surface undulations caused by the pressure fluctuations in the turbulent boundary layer (i.e. one-way coupling). When these early surface undulations start to protrude beyond the viscous sublayer, the turbulent flow is capable of transferring more energy towards the coating and initiates the wave-trains with high amplitudes. The wave-trains dominate the coating surface incrementally with increasing bulk velocity and propagate with a wave velocity of cw = 0.17-0.18 Ub. The one-way/two-way regime transition is estimated to occur around ζrms > δv/2. The turbulent flow along the slow-moving wave-trains resembles the classical phenomenon of a turbulent flow over a rigid wavy surface, with a local acceleration and deceleration of the fluid. When the wave-trains start to dominate the coating surface, a linear correlation determines the above-mentioned downward shift Δu+, based on the wall-normal velocity component dζ/dt. No frictional drag reduction under turbulent flow conditions was found in this study with this type of visco-elastic compliant coating. 
","Turbulent flow; Compliant wall; riblets; Drag reduction; fluid-structure-interaction; Taylor–Couette flow","en","doctoral thesis","","978-94-6416-152-6","","","","","","","","","Fluid Mechanics","","",""
"uuid:33c7fc94-50cc-4b3f-be18-5439a9e29b43","http://resolver.tudelft.nl/uuid:33c7fc94-50cc-4b3f-be18-5439a9e29b43","Tightly Focused Spot Shaping and its Applications in Optical Imaging and Trapping","Meng, P. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); Pereira, S.F. (promotor); Delft University of Technology (degree granting institution)","2020","The Rayleigh criterion explains the diffraction limit and provides guidance for improving the performance of an imaging system, namely by decreasing the wavelength of the illumination and/or increasing the numerical aperture (NA) of the objective lens. If the wavelength and NA are set, is it possible to improve the spatial resolution further? This question motivates the research work of this thesis. Polarization is an important property of light and it cannot be ignored in a tightly focusing system. It is demonstrated both theoretically and experimentally that radially polarized light can produce a sharper focal spot in a high NA focusing system because of the tight longitudinal field component. Based on this, in this thesis, we start our investigation on the unique focusing properties of the radially polarized beam with the vectorial diffraction theory. We show that the amplitude of the focal field can be shaped by engineering the pupil field of the radially polarized beam. The shaped focal spot is smaller than the unmodulated one, which can be used to improve the resolution of optical systems. Here, we consider a confocal scanning imaging system, offering several advantages over conventional widefield microscopy. In the simulation, longitudinal electric dipoles are regarded as the objects to make full use of the optimized longitudinal component. An experimental proof is also given, showing that higher spatial resolution can be achieved when the modulated radially polarized light is applied in the confocal imaging set-up as compared to the non-modulated case. 
Radially polarized light can be obtained with a liquid crystal based polarization convertor, starting with a linearly polarized beam. Amplitude modulation of the pupil, such as the annular pupil field and the designed pupil field where the amplitude increases gradually with the radius, can be realized with a spatial light modulator (SLM). The substrate is essential for supporting the sample to be imaged. Usually, the material of the substrate is glass. In the near field, when the object interacts with the light field, it may produce evanescent waves which decay very quickly and have little influence on the imaging. However, the evanescent wave carries higher spatial frequencies than the propagating wave. A well-designed substrate with a thin TiO2 layer on top can enhance the evanescent wave in the near field. The enhanced field is converted into a propagating wave with the help of the object deposited on the substrate and it can be detected in the far field. The principle can be explained with a dipole model, and simulated using nanospheres. It is demonstrated that the designed structure helps to improve the imaging quality, including contrast and resolution. In addition, such a sample model can be combined with other imaging techniques, e.g. confocal scanning microscopy, widefield imaging systems, etc. Besides amplitude and polarization, focal fields can also be shaped in phase. Unlike the specific radially or azimuthally polarized vector beam, the cylindrical vector beam is a more general form. The focusing properties and the spin-orbit interactions of cylindrical vector vortex beams in high NA focusing systems are theoretically studied. An absorptive nanosphere can be trapped at the hot-spot of the focused field, even when the field has its axial symmetry broken. 
The analysis of the influence of parameters such as the initial phase of the vortex beam, the topological charge, or the size and the material of the trapped sphere on the interplay between spin and orbital angular momentum may be helpful for optical trapping, particle transport and super-resolution.","super-resolution imaging; pupil engineering; optical trapping; angular momenta","en","doctoral thesis","","978-94-6416-164-9","","","","","","","","","ImPhys/Optics","","",""
"uuid:6e3ae33c-e95d-474f-8d6b-d8c0f8aa4788","http://resolver.tudelft.nl/uuid:6e3ae33c-e95d-474f-8d6b-d8c0f8aa4788","Constitutive modelling of cyclic sand behaviour for offshore foundations","Liu, H. (TU Delft Geo-engineering)","Hicks, M.A. (promotor); Pisano, F. (promotor); Delft University of Technology (degree granting institution)","2020","The wind energy industry has gained a key role in the global fight against greenhouse gas emissions. Although fossil fuels still have the largest share in the global energy mix, the production of wind energy, especially offshore, has rapidly grown since the 1990s. According to the International Energy Agency, Europe targets offshore wind to become the main electricity source by 2040. To meet such a goal, significant developments in research and technology are ongoing to reduce/optimise the high costs of materials, manufacturing, transportation, and installation of offshore wind farms. A significant cost item for offshore wind turbines (OWTs) is represented by their supporting structures and foundations, especially as the rush towards bigger OWTs and larger water depths poses a serious threat to cost reduction targets. To make offshore wind competitive in the energy market, reducing investment costs is mandatory. Since OWT foundations mobilise up to around 18% of the total investment in typical offshore wind projects, the importance of reconsidering/improving geotechnical design approaches for offshore foundations is self-apparent.","sand; cyclic; constitutive model; Finite Element; monopile","en","doctoral thesis","","978-94-6384-157-3","","","","","","","","","Geo-engineering","","",""
"uuid:b803bf1d-cf2d-4049-baab-c5de098bdeb9","http://resolver.tudelft.nl/uuid:b803bf1d-cf2d-4049-baab-c5de098bdeb9","Experimental Investigation of Ciliary Flow with C. reinhardtii: Hydrodynamics, Ultrastructure, and Ciliary difference","Wei, D. (TU Delft BN/Marie-Eve Aubin-Tam Lab)","Aubin-Tam, M.E. (promotor); Tam, D.S.W. (copromotor); Delft University of Technology (degree granting institution)","2020","The microscopic world is surprisingly busy with swimming micro-organisms. In a droplet of pond water, there can be tens of thousands of microbes. They are predators and prey, producers and consumers, and they form the bottom levels of the ecology of our world. Swimming micro-organisms can in general be classified into prokaryotes and eukaryotes. Eukaryotes come later in evolution. They are higher organisms than the prokaryotes as they possess more complex cellular structures such as the nucleus and mitochondria. Motile eukaryotic micro-organisms all use an active hair-like structure for swimming, known as flagella or cilia. Although nuanced differences exist between flagella (cilia) of different species, their general internal structure and driving mechanisms are mostly the same. In some sense, they are one of the best-selling machines for locomotion on the micron scale. Although flagella are the first-ever documented organelles in cell biology, our understanding of them is still limited. For example, we have only begun to appreciate how the conformational change of single protein motors results in the waveform on the scale of a flagellum. Our understanding of the flow generated by even a single flagellum is rudimentary: resolving the temporal features of such a flow field remains experimentally challenging. On a larger scale, how thousands of cilia interact with each other to facilitate fluid transport is still elusive: theoretical models and simulations are waiting for experimental verification. 
In this thesis, I explore different topics centering around flagellar/ciliary motility by employing novel experimental and numerical techniques, and hence advance our understanding. My experimental investigation starts by resolving the flow generated by the beating cilia of single cells. Due to the high beating frequency, high temporal resolution is required to map the time-varying flow field, which conventional tracer particle-based flow velocimetry techniques cannot provide. To tackle this challenge, I implemented an optical tweezers-based flow velocimetry (OTV) technique. In this technique, a bead is trapped and placed at a particular location by a focused laser beam. The local flow displaces the bead from the trapping center. This displacement, although small, can be accurately monitored by laser interferometry and converted into an electrical signal by photoelectric detectors. Essentially, we gain the desired accuracy and temporal resolution by exploiting the high resolution and large bandwidth of interferometric and electrical measurements. With this technique, I revealed that the ciliary flow deviates fundamentally from how it is often modeled by the Stokes equations. More specifically, the flow’s amplitude decays faster spatially, and its phase shifts over distance. These discrepancies are resolved by adding a linear unsteady term to the Stokes equations. Furthermore, I systematically characterized the ciliary flow field created by captured C. reinhardtii cells. The flow field in different directions and over the ciliary beating plane is measured experimentally, modelled numerically, and analyzed theoretically. The results displayed excellent agreement with each other, and altogether increased our knowledge of ciliary flow. With the OTV measurements, I not only studied the basic hydrodynamics of ciliary flow but also addressed a long-standing hypothesis regarding the function of a ciliary appendage. Many cilia have fibrous ultrastructures called mastigonemes. 
These fibrous appendages are believed to help cells swim faster by increasing the ciliary surface area. Our experiments, together with numerical studies, completely refute this hypothesis: such fibrous hairs do not show any hydrodynamic significance in C. reinhardtii. Instead, their absence in genetically modified mutants appeared to result in some behavioral changes, causing the cells to turn abruptly more often than usual. Therefore, I have re-opened the question about the function of the fibrous mastigonemes. Future investigation in this direction is needed and is likely to lead to more exciting findings. Lastly, I attempted to bridge the physics of ciliary flow with the biology of ciliary beating. I focused on the ciliary difference and investigated it by selectively loading each cilium of C. reinhardtii with external flows. The ciliary difference is critical for the steering of biflagellates (micro-organisms swimming with two flagella/cilia). I observed an unreported functional difference between the two cilia, as I found that the coupling between the two cilia is unilateral. One cilium serves as the coordinator of beating, and a cell is coupled to external hydrodynamic forces mostly through this coordinating cilium. Altogether, by introducing the OTV technique and incorporating different numerical methods, I was able to elucidate the ciliary flow in a time-resolved way, updating the current understanding of these unsteady flows. The effectiveness of this methodology was demonstrated again by its application in studying the function of the fibrous ultrastructures. By further moving on to the biological aspect of ciliary beating, we found a new type of difference between the two cilia, which enriches our knowledge of inter-ciliary coupling.","ciliary flow; C. reinhardtii; flagella; mastigoneme; synchronization","en","doctoral thesis","","978-90-8593-450-9","","","","","","","","","BN/Marie-Eve Aubin-Tam Lab","","",""
"uuid:361ba18e-298a-483c-bfb9-0528a4ee6119","http://resolver.tudelft.nl/uuid:361ba18e-298a-483c-bfb9-0528a4ee6119","Magnetic fluid bearings & seals: Methods, design & application","Lampaert, S.G.E. (TU Delft Mechatronic Systems Design)","van Ostayen, R.A.J. (promotor); Spronck, J.W. (copromotor); Delft University of Technology (degree granting institution)","2020","The bearing and the seal are two commonly used tribological components, since nearly all moving machinery relies on them for proper operation. Even a small improvement in these components can have a big impact on both the market and the environment. The two main problems of these components are wear and friction. In addition, seals suffer from the problem of leakage, which is fundamental to both their functioning and their performance. The application of magnetic fluids has the potential to be beneficial for these systems. Magnetic fluids consist of a suspension of magnetic particles in a carrier fluid. This gives them the unique properties of being attracted to a magnetic field and changing their rheological behaviour in the presence of a magnetic field. These special properties can give bearing and sealing systems unique behaviour, potentially improving their performance. Therefore, this thesis has two main objectives. The first objective is to further investigate the potential of magnetic fluids in bearing and sealing systems. This part consists of exploratory, fundamental and early-stage research. The unique properties of magnetic fluids are eventually used in ferrofluid bearings, ferrofluid seals, bearings with rheological textures and self-healing bearings. The second objective is to develop the necessary knowledge to bring these concepts to society in the form of applications. 
The research on ferrofluid bearings as described in this thesis consists of different experimentally validated models for the load capacity, torque capacity, out-of-plane stiffness, rotational stiffness, friction and operational range for both ferrofluid pocket bearings and ferrofluid pressure bearings. These models are suitable to determine the most important bearing parameters, which makes it possible to design a bearing system according to desired specifications. The most recent demonstrator shows specifications that are competitive with conventional bearing systems. The research on ferrofluid seals as described in this thesis has resulted in the first seal concept that does not show any leakage over time. The concept relies on a replenishment system that makes sure that degraded ferrofluid is removed and replaced by fresh ferrofluid. This gives the system a theoretically infinite lifetime, as long as the replenishment system continues to work. The research on bearings with magnetorheological fluids as described in this thesis has resulted in the new design concept of rheological textures. A rheological texture is defined as a local change in the rheological behaviour of the lubricant in the lubricating film such that a local change in lubricant transport and flow resistance occurs. The idea is to replace the geometrical surface textures of more traditional bearings with these rheological textures to enhance the performance of the bearing. This work has in addition provided a new lubrication theory for Bingham plastics that properly models their behaviour. Furthermore, a new experimental method is developed to characterize the rheology of magnetic fluids at high shear rates. 
The last significant result of this research is the new concept of a self-healing bearing using a lubricant with a suspension of magnetic particles: the application of a local magnetic field with high gradients in the lubricating film causes the particles to settle at locations where the magnetic field gradient is high. This creates geometrical surface textures that will regrow when worn away. In this way we have a bearing with a surface texture that has a theoretically infinite lifetime.","ferrofluid; magnetorheological fluids; rheology; lubrication theory; self-healing; load capacity; stiffness; friction; hydrodynamic bearing; hydrostatic bearing","en","doctoral thesis","","9789464190427","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:67ba116d-6770-468a-ad5a-5135ac646290","http://resolver.tudelft.nl/uuid:67ba116d-6770-468a-ad5a-5135ac646290","Assessing Balance Control After Minor Stroke: Moving from Laboratory towards Clinic","Schut, I.M. (TU Delft Biomechatronics & Human-Machine Control)","van der Kooij, H. (promotor); Schouten, A.C. (promotor); Weerdesteyn, V. (copromotor); Delft University of Technology (degree granting institution)","2020","In this thesis, we determined how system identification can be integrated into the clinic to assess human balance control. Adequate balance assessment is important to detect people with a high fall risk and to optimize balance training, which would ultimately result in better rehabilitation and thus fewer falls. Current clinical tests suffer from ceiling effects, are subjective and do not provide insight into the underlying mechanisms. System identification techniques seem promising, but they depend on large, expensive and complex devices such as motion platforms and motion capture cameras, and are therefore less suitable for clinical use. In part 1, we first focussed on the technical and methodological characteristics of the system identification techniques. In part 2, we used the resulting system identification method, whereby the treadmill applied support surface perturbations, on a minor stroke population to evaluate subtle changes in balance control of the paretic leg.","","en","doctoral thesis","","978-94-6375-955-7","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:cbb5165a-7e9f-4416-97e1-010599c75324","http://resolver.tudelft.nl/uuid:cbb5165a-7e9f-4416-97e1-010599c75324","The role of asymmetries in Nordic Seas dynamics","Ypma, S.L. (TU Delft Environmental Fluid Mechanics)","Pietrzak, J.D. (promotor); Katsman, C.A. (copromotor); Delft University of Technology (degree granting institution)","2020","The oceanic transport of heat and salt from the equator northward is one of the main reasons for the mild climate of Europe. This transport occurs in the upper layer of the ocean. In the north, strong cooling occurs due to the large difference in temperature between the ocean surface and the atmosphere. The cooled watermass has a higher density and therefore sinks and returns toward the south at depth. This so-called Atlantic Meridional Overturning Circulation is driven in part by the wind and in part by the difference in temperature and salinity between the equator and the poles. Polar climate change will result in warmer and fresher oceans, which will likely weaken this global overturning circulation. Especially processes that concern the transformation from the light (warm) watermasses to dense (cold) watermasses are sensitive to changes in buoyancy forcing. This thesis focuses on an area where a large part of this transformation from light to dense watermasses takes place: the Nordic Seas. The Nordic Seas are located between Greenland and Norway and consist of several sub-basins, such as the Lofoten Basin, the Greenland Basin and the Norwegian Basin. The main aim of this thesis is to better understand the dynamical processes involved in the watermass transformation in the Nordic Seas.","","en","doctoral thesis","","978-94-6384-156-6","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:b772802d-b738-4312-8903-9eedfab62e6e","http://resolver.tudelft.nl/uuid:b772802d-b738-4312-8903-9eedfab62e6e","Seat Comfort Objectification: A new approach to objectify the seat comfort","Wegner, M.B. (TU Delft Internet of Things)","Vink, P. (promotor); Kortuem, G.W. (promotor); Delft University of Technology (degree granting institution)","2020","The seat is the largest contact area between a human and the car. Optimizing this contact area is therefore highly relevant for long-term customer satisfaction. The share of the interior in the total production costs can be between 20% and 30%, of which almost 40% is for the seats. The current literature on seat research is mostly about the properties of foam related to comfort and discomfort. In this PhD, a study is done to show that the perception and the (dis-)comfort of an automotive seat are not only influenced by the foam properties and contour. Other seat components, like the seat cover, lamination and the seat suspension, might play a role as well. To study the effect of different elements, a measurement tool was developed which records properties that are relevant for certain sensors in the skin. The stimuli sensed by these mechanoreceptors in the skin are shear forces, elongation, friction and pressure. The tool is a stamp in the form of a half sphere. The half sphere is equipped with pressure and elongation sensors. This makes it possible to measure pressure and pressure distribution, elongation and shear forces. With the tool, 648 samples of seats with different seat components were tested. Additionally, 98 participants tested various seats with properties comparable to the 648 samples. The tests showed that it was possible to measure differences in elongation, pressure and shear force.","","en","doctoral thesis","","978-94-6384-151-1","","","","","","2020-09-29","","","Internet of Things","","",""
"uuid:fd01db04-7432-443f-9c00-49d66db1ab2c","http://resolver.tudelft.nl/uuid:fd01db04-7432-443f-9c00-49d66db1ab2c","Optimisation of photon detector tynode membranes using electron-matter scattering simulations","Theulings, A.M.M.G. (TU Delft RST/Neutron and Positron Methods in Materials)","van der Graaf, H. (promotor); Hagen, C.W. (promotor); Delft University of Technology (degree granting institution)","2020","The object of this thesis work was to develop a (Monte Carlo) simulation package that can be used to aid in the design of the Timed Photon Counter (TiPC). The TiPC is a single photon detector whose working principle is based upon the multiplication of an electron signal by transmission dynodes, or tynodes. For TiPC to be feasible, it is necessary to develop tynodes that have a secondary electron yield of more than 4, preferably with a primary electron energy as low as possible.","Tynodes; ultra-thin membranes; timed photon counter; transmission; secondary electron emission; Monte Carlo simulations","en","doctoral thesis","","978-94-028-2163-5","","","","","","","","","RST/Neutron and Positron Methods in Materials","","",""
"uuid:8c8e04ec-b697-4ebd-aee4-46a01434021c","http://resolver.tudelft.nl/uuid:8c8e04ec-b697-4ebd-aee4-46a01434021c","Characterization of a Fracture-Controlled Enhanced Geothermal System (EGS) in the Trans-Mexican-Volcanic-Belt (TMVB): Predictive Mechanical Model for Fracture Stimulation in an Enhanced Geothermal System Context (EGS)","Lepillier, B.P. (TU Delft Reservoir Engineering)","Bruhn, D.F. (promotor); Bertotti, G. (promotor); Delft University of Technology (degree granting institution)","2020","In 2020, as world energy demand keeps rising (International Energy Agency (IEA), 2019), and with global climate warming a reality (The Organisation for Economic Co-operation and Development (OECD), 2020), reducing our societal impact on Earth is of utmost importance. Energy and climate have always been intrinsically related. Therefore, solving the energy-climate problem is a challenge where not one but several solutions should come together. Part of this global solution is the potential of geothermal resources. Geothermal energy is a renewable energy resource with large potential to reduce the dependency on fossil fuels. Among the several uses of geothermal resources, a promising technique is the Enhanced Geothermal System (EGS). Beyond being renewable, this method has the potential to be sustainable. An EGS consists of an originally low-permeability reservoir rock whose permeability is artificially enhanced. The enhancement can be achieved by different stimulation techniques, such as mechanical, chemical, thermal, or a combination thereof. This thesis focuses on mechanical EGS stimulation, where the opening of existing fractures and the creation of new ones is achieved by injecting a pressurized fluid into the reservoir rock formation. Such a process results in a propagating hydraulic fracture. The complexity of the EGS technique lies in predicting the hydraulic fracture propagation phenomena. EGS research and development is part of the GEMex goals. 
The GEMex project is a collaboration between Mexican institutions and the European Commission, dedicated to the development of non-conventional geothermal techniques. The Acoculco geothermal field, located in Puebla, is foreseen as a potential EGS. Because this field has been explored with two geothermal wells, and because an analogue exhumed system is available nearby, in the Las Minas area, this system constitutes a great research site for developing knowledge on EGS.","Geothermal reservoir; Enhanced Geothermal System; Natural fractures; Discrete fracture network; thermo-hydro-mechanical modelling; hydraulic fracture modelling; phase-field; Finite Element Method","en","doctoral thesis","","978-94-6384-168-9","","","","","","","","","Reservoir Engineering","","",""
"uuid:446183ec-7974-46d9-a23b-bdcd0ffcde00","http://resolver.tudelft.nl/uuid:446183ec-7974-46d9-a23b-bdcd0ffcde00","Systematic Design Methodology for Prognostic and Health Management Systems to Support Aircraft Predictive Maintenance","Li, R. (TU Delft Air Transport & Operations)","Hoekstra, J.M. (promotor); Verhagen, W.J.C. (copromotor); Delft University of Technology (degree granting institution)","2020","With the rapid development in the past century, the air transport network has become one of the most important infrastructure networks for both the domestic and global economy. Within an airline, the maintenance area is responsible for planning and executing all preventive actions, including maintenance tasks, required to meet safety standards for each aircraft, demanding skilled jobs, e.g., aircraft mechanics, avionics systems experts, electricians, and cabin experts. Aircraft maintenance concerns the maintenance, repair, and overhaul (MRO), inspection or modification needed to retain an aircraft and its systems, components and structures in an airworthy condition. A variety of strategies are available to guide the determination, planning, and execution of appropriate maintenance actions for given capital assets. These include Condition-Based Maintenance (CBM), where the detection of an abnormal condition directly triggers a maintenance task, and predictive maintenance, where the optimal maintenance interval is predicted based on condition, time, usage or loads.","Aircraft Predictive Maintenance; Prognostic and Health Management; Design Methodology; System Engineering; Remaining useful life","en","doctoral thesis","","978-94-6366-318-2","","","","","","","","","Air Transport & Operations","","",""
"uuid:110b0119-1b0f-436d-b9a5-81445c17d542","http://resolver.tudelft.nl/uuid:110b0119-1b0f-436d-b9a5-81445c17d542","Uncoupling yeast growth and product formation in chemostat and retentostat cultures","Liu, Y. (TU Delft OLD BT/Cell Systems Engineering)","Pronk, J.T. (promotor); van Gulik, W.M. (copromotor); Delft University of Technology (degree granting institution)","2020","The progress of modern biotechnology has enabled the development of fermentation processes for the production of fuels and chemicals from renewable feedstocks. The current fermentation processes for bio-based production commonly start with a growth phase of the microorganisms followed by a production phase. This implies that biomass formation competes with the production of the desired product in terms of consumption of the feedstock. In industrial fermentations, maximizing the product yield (in other words, minimizing the substrate flux to biomass, CO2 and byproducts) is the primary goal. To reach this objective, the uncoupling of microbial growth from product formation seems like a feasible approach, provided that the microbial host maintains high productivity in the absence of growth. The research presented in this thesis aims to improve understanding of microbial physiology at near-zero growth rates and thereby provide insights for the design of industrial fermentation processes based on the zero-growth concept. Specifically, S. cerevisiae was applied as the microbial cell factory, and succinic acid, a non-catabolic product whose synthesis from sugar requires a net input of ATP, was chosen as a model product. The microbe was cultivated in chemostat and retentostat modes under industrially relevant conditions, as reflected by aerobic cultivation at low pH and at a high dissolved CO2 level. To this end, the quantitative physiology of yeast at slow and near-zero growth states was investigated, and uncoupling S. 
cerevisiae growth and succinic acid production was achieved with considerable succinic acid productivity. This study illustrates the potential for high-yield production of non-dissimilatory products at near-zero growth rates, with growth being limited by nutrients other than the carbon and energy source. In addition, it highlights a requirement for further research into enhancing strain robustness under industrial conditions, with specific attention for low-pH tolerance.","Fermentation; S. cerevisiae; Succinic acid; Retentostat; Physiology; Near-Zero Growth","en","doctoral thesis","","978-94-6421-013-2","","","","","","","","","OLD BT/Cell Systems Engineering","","",""
"uuid:db706644-1397-4e97-99a1-6ed748fe4ed4","http://resolver.tudelft.nl/uuid:db706644-1397-4e97-99a1-6ed748fe4ed4","Incentivizing renewables and reducing grid imbalances through market interaction: A forecasting and control approach","Lago, Jesus (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Delft University of Technology (degree granting institution)","2020","As the penetration of renewable energy sources (RESs) increases, so does the dependence of electricity production on weather and, in turn, the uncertainty in electricity generation, the volatility in electricity prices, and the imbalances between production and consumption. In this context, while RES integration does complicate grid balance and increase price volatility, it also opens up opportunities for flexible market agents to reduce grid imbalances. In particular, by using the nature of the interactions between electricity markets and grid balance, market agents can reduce grid imbalances while increasing their profit. However, despite this obvious win-win situation, traditional research in this field has focused on balancing mechanisms that do not always exploit these relations between electricity markets and grid balance.","","en","doctoral thesis","","978-94-6402-444-9","","","","","","","","","Team Bart De Schutter","","",""
"uuid:532c57be-a57d-4234-8bd7-1970f4f33428","http://resolver.tudelft.nl/uuid:532c57be-a57d-4234-8bd7-1970f4f33428","Community-based monitoring initiatives of water and environment: Evaluation of establishment dynamics and results","Gharesifard, M. (TU Delft Water Resources)","van der Zaag, P. (promotor); Wehn, U. (promotor); Delft University of Technology (degree granting institution)","2020","Citizen participation in water and environmental management via community-based monitoring (CBM) has been praised for the potential to facilitate better informed, more inclusive, transparent, and representative decision making. However, methodological and empirical research trying to conceptualize and evaluate the dynamics at play that might enable or hinder these initiatives from delivering on their potential is limited. This research contributed to the conceptualization of CBMs through the development of a conceptual framework that is suitable for Context analysis, Process evaluation and Impact assessment of CBMs – the CPI Framework. This conceptualization provides an interpretation of what ‘community’ means in the context of a CBM initiative. In addition, this research contributed to the existing empirical knowledge about the establishment, functioning and outcomes of CBMs by testing the CPI Framework for studying two real-life CBMs throughout the lifetime of an EU-funded project – the Ground Truth 2.0. The first CBM, called Grip op Water Altena, focuses on the issue of pluvial floods in ‘Land van Heusden en Altena’ in the Netherlands. The second CBM, the Maasai Mara Citizen Observatory, aims at contributing to a better balance between biodiversity conservation and sustainable livelihood management in the Mara ecosystem in Kenya.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-67401-4","","","","","","","","","Water Resources","","",""
"uuid:1489bacb-4e60-47fa-9409-3866a164efcd","http://resolver.tudelft.nl/uuid:1489bacb-4e60-47fa-9409-3866a164efcd","Large deviations for stochastic processes on Riemannian manifolds","Versendaal, R. (TU Delft Applied Probability)","Redig, F.H.J. (promotor); van Neerven, J.M.A.M. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis is concerned with large deviations for processes in Riemannian manifolds. In particular, we study the extensions of large deviations for random walks and Brownian motion to the geometric setting. Furthermore, we also consider large deviations for random walks in Lie groups. The additional group structure allows for improvements over the general case. Finally, we also study large deviations for Brownian motion in evolving Riemannian manifolds. For this, we use the so-called 'rolling without slipping' construction of Riemannian Brownian motion, adapted to the time-inhomogeneous setting.","","en","doctoral thesis","","","","","","","","","","","Applied Probability","","",""
"uuid:edddffc4-b87b-496f-b0f8-619f1c3971c7","http://resolver.tudelft.nl/uuid:edddffc4-b87b-496f-b0f8-619f1c3971c7","Adaptive Marchenko internal multiple attenuation","Staring, M. (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Delft University of Technology (degree granting institution)","2020","Curiosity regarding what we cannot see has always driven research. Science has helped us to uncover many of those hidden secrets. In particular, geophysics has helped us to image the inside of the Earth. By sending a seismic signal into the Earth and recording the signal that comes back, geophysicists can characterize the layers of the subsurface. Nowadays, geophysics is used for many purposes, for example, the localization of fossil fuels, the characterization of the subsurface for the construction of wind farms and the evaluation of reservoirs for geothermal energy. In order to decrease the risk and cost involved in these activities, we need images of the subsurface that are as accurate as possible.
These images can only be obtained if we fully understand the propagation of the seismic signal in the subsurface. A long-standing problem in geophysical imaging is the presence of internal multiple reflections. When imaging the subsurface, we assume that the signal only reflects once when there is a contrast in velocity and/or density (for example, when changing from sand to rock). However, in reality, the signal can reflect many times inside the subsurface before being recorded at the surface. When treating the arrivals that have reflected many times as arrivals that have only reflected once, we incorrectly image the subsurface and create ghost reflectors that do not exist. This problem is particularly strong in geological settings that have a complex structure with many strong velocity and/or density contrasts above an area of interest. This may happen, for example, when there is a reservoir of oil below a thick stratified salt layer. In such cases, the image of the area of interest is unreliable due to the presence of many ghost reflectors. Therefore, we have to use knowledge of wave propagation to predict and attenuate the internal multiples in the data prior to imaging.
In this thesis, I further develop the data-driven and wave-equation-based Marchenko method to make it suitable for the attenuation of internal multiples in seismic field data. In addition, I evaluate the performance of suitable methods by applying them to field datasets recorded in different geological settings. I start this evaluation by demonstrating that what we call the conventional Marchenko method is perhaps not the most suitable Marchenko method for the application to field data. I develop an alternative Marchenko method instead: the adaptive double-focusing method. I show that this method indeed produces improved results compared to the conventional Marchenko method when applying it to a line of 2D data of the Santos Basin, Brazil.
Since the 2D results show promise, I continue with the extension to 3D applications. I first identify the key acquisition parameters that affect the result of our Marchenko method on 3D synthetic data and conclude that the limited crossline aperture and the coarse sail line spacing have the strongest effect on the quality of the result. Based on this evaluation, I interpolate the sail line spacing on 3D field data acquired in the Santos Basin and use the adaptive double-focusing method to predict and subtract internal multiples. I conclude that 3D Marchenko internal multiple attenuation seems to be sufficiently robust for the application to narrow azimuth streamer data in a deep marine setting, provided that there is sufficient aperture in the crossline direction and that the sail lines are interpolated. In addition, the adaptive double-focusing method is suitable for the attenuation of internal multiples generated by a complex overburden and for simultaneously redatuming to a level below this overburden.
Next, I modify the adaptive double-focusing method to obtain an adaptive double dereverberation method that is suitable when only aiming to attenuate internal multiples generated in an overburden without redatuming. Moreover, this method does not require a velocity model. I apply this method to a 2D line of data acquired in the very shallow Arabian Gulf. Also, I assess how to meet the data requirements for the Marchenko method in shallow water environments (e.g., the removal of surface-related multiples, the deconvolution of the source signature) and demonstrate that the state-of-the-art Robust Estimation of Primaries by Sparse Inversion (R-EPSI) method is capable of producing the correct input data for the Marchenko method in such settings.
Subsequently, I discuss the role of the adaptive filter in the application of the Marchenko method to field data. I argue that developments in seismic data processing allow us to predict internal multiples with more accuracy, such that only a conservative adaptive filter is needed to correct for the unavoidable minor amplitude and phase discrepancies between the internal multiples in the data and the predicted internal multiples. I demonstrate this by using a conservative adaptive filter to subtract internal multiples that were predicted by applying an adaptive Marchenko multiple elimination method to a 2D line of field data acquired in the Norwegian North Sea.
Finally, based on the results presented in this thesis, I conclude that the Marchenko method is an effective, data-driven and robust method for the prediction of internal multiples in marine seismic data. Different Marchenko methods are suitable for different purposes. There are two key elements for the successful application of a Marchenko method to field data: 1) the acquisition geometry needs to be sufficiently dense and 2) a careful processing workflow needs to be constructed that accounts for the specifics of the geological setting at hand, with significant emphasis on amplitude and phase preservation.","seismic; internal multiples; adaptive subtraction","en","doctoral thesis","","","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:34e0eb58-9ba4-4fa3-9f25-e91c136b97f7","http://resolver.tudelft.nl/uuid:34e0eb58-9ba4-4fa3-9f25-e91c136b97f7","In-situ interfacial approaches on chemisorption and stability of buried metal oxide-polymer interfaces","Fockaert, L.I. (TU Delft (OLD) MSE-6)","Mol, J.M.C. (promotor); Terryn, H.A. (promotor); Delft University of Technology (degree granting institution)","2020","Until today, interfacial bond formation and degradation between polymer coatings and metal substrates are still far from fully understood, whilst they are a limiting factor for the durability of metal-polymer hybrid systems. To improve the corrosion resistance and adhesion properties of metal substrates, a chemical surface treatment is applied prior to painting. However, due to ecological and health-related issues, traditional well-established surface treatments containing hexavalent chromate or high phosphate loads are being replaced by a new generation of ecologically justified surface treatments. This comes with the need to gain fundamental insights into the impact of substrate and pretreatment variations on the (chemical) adhesion of polymers to guarantee the lifetime of newly developed metal-polymer hybrid systems. A challenge in this regard is the hardly accessible buried interface, which until today requires the use of model systems when using non-destructive surface-sensitive techniques. Yet, industrial metal-polymer hybrid systems are typically highly heterogeneous, creating a distinct gap between model and industrial systems. This dissertation aims to close this gap starting from simplified model systems to which complexity is gradually added. This has been done using the thin organic film approach on the one hand, and the thin (thermally vaporized) metal substrate approach on the other hand, allowing non-destructive access to the metal-polymer interface from the polymer side and the metal side, respectively. 
Complementary use of both approaches allows systematic comparison of model systems to industrially relevant paint and metal substrates.","","en","doctoral thesis","","978-94-6366-309-0","","","","","","","","","(OLD) MSE-6","","",""
"uuid:defe4405-a1c1-4e18-af48-7a93fcd55152","http://resolver.tudelft.nl/uuid:defe4405-a1c1-4e18-af48-7a93fcd55152","The hydrodynamics of rowing propulsion: An experimental study","Grift, E.J. (TU Delft Fluid Mechanics)","Westerweel, J. (promotor); Tummers, M.J. (copromotor); Delft University of Technology (degree granting institution)","2020","The aim of this thesis is to analyse the hydrodynamics of rowing propulsion and to enhance this propulsion. This requires insight into both the flow phenomena and the generated hydrodynamic forces. In (competitive) rowing, athletes generate a propulsive force by means of a rowing oar blade. During propulsion the oar blade is submerged close to the surface and the athlete exerts a force on the handle of the oar. This causes a reaction force from the water at the other end of the oar, the oar blade, which together with the force at the handle generates the propulsive force at the oar lock, the pivot point on the boat. For optimal performance it is essential to maximise the propulsion caused by this hydrodynamic reaction force at the blade. To achieve this, understanding of the flow field around the oar blade during this propulsive phase is vital. In chapter 2 the results are presented on the drag on, and the flow field around, a submerged rectangular normal flat plate, which is uniformly accelerated to a constant target velocity along a straight path. The plate aspect ratio is chosen to be AR = 2 to resemble an oar blade in (competitive) rowing. The plate depth, i.e. the distance from the top of the plate to the air–water interface, the plate acceleration and the plate target velocity are varied, resulting in a plate-width-based Reynolds number of 4×10^4 ≤ Re ≤ 8×10^4. In the analysis three phases are distinguished: (i) the acceleration phase during which the plate drag is increased, (ii) the transition phase during which the plate drag decreases to a constant steady value, upon which (iii) the steady phase is reached. 
The plate drag force is measured as a function of time, which showed that the steady-phase plate drag at a depth of 1/5 plate height (20 mm depth for a plate height of 100 mm) increased by 45% compared to the plate top at the surface (0 mm). Also, it is shown that the drag force during acceleration of the plate increases over time and is not captured by a single added mass coefficient for prolonged accelerations. Instead, an entrainment rate is defined that captures this behaviour. The formation of starting vortices and the wake development during the time of acceleration and transition towards a steady wake are studied using hydrogen bubble flow visualisations and particle image velocimetry. The formation time, as proposed by Gharib et al. (J. Fluid Mech., vol. 360, 1998, pp. 121–140), appears to be a universal time scale for the vortex formation during the transition phase. These findings serve as the basis for defining a best practice during the start of a rowing race as described in chapter 4. In chapter 3 the results are presented of experiments in which the flow around a realistic rowing oar blade, in combination with realistic kinematics, was measured using concurrent force measurements and PIV measurements. The aim of these experiments is to identify which flow phenomena govern rowing propulsion and subsequently to adjust the oar blade configuration to optimise rowing propulsion. The oar blade moves along a cycloidal path, and due to the large accelerations and decelerations replicating the oar blade path is all but trivial. The oar blade and kinematics are scaled by a factor of 0.5 due to limitations of the experimental set-up. The flow field around the oar blade during the drive phase is measured and several flow phenomena such as the generation of leading and trailing edge vortices are linked to the generation of lift and drag, which both contribute to rowing propulsion.
The oar blade performance is defined by the energetic and impulse efficiencies ηE and ηJ, where the latter can be seen as the alignment of the generated impulse with the propulsive direction. It is found that when using a standard configuration of a rowing oar blade, the generated impulse is not aligned with the propulsive direction. This suggests that the propulsion is not optimal. By adjusting the angle at which the blade is attached to the oar, an optimal oar blade angle of 15° was found that aligns the generated impulse with the propulsive direction. At this angle the generation of leading and trailing edge vortices changes such that the overall hydrodynamic efficiency of the propulsion is optimised.","","en","doctoral thesis","","978-94-6416-068-0","","","","","","2021-02-01","","","Fluid Mechanics","","",""
"uuid:48460103-30a0-4338-8c4a-ebc80c73c2d3","http://resolver.tudelft.nl/uuid:48460103-30a0-4338-8c4a-ebc80c73c2d3","Geotechnical uncertainties and reliability-based assessments of dykes","Varkey, D. (TU Delft Geo-engineering)","Hicks, M.A. (promotor); Vardon, P.J. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis utilises the random finite element method (RFEM) to provide practical guidance and tools for geotechnical engineers to account for the influence of soil spatial variability. This has involved: (a) practical insight and guidance on the choice of characteristic soil property values and scales of fluctuation; (b) a robust approach to reliability assessment and design that obviates the need for explicit calculation of characteristic values; and (c) the benchmarking and improving of simpler analysis tools.","Hetereogeneity; Reliability; RFEM; Slope stability; Statistical analysis","en","doctoral thesis","","978-94-6366-298-7","","","","","","","","","Geo-engineering","","",""
"uuid:18ebf770-feb0-4835-bd0e-04173a346308","http://resolver.tudelft.nl/uuid:18ebf770-feb0-4835-bd0e-04173a346308","The anammox house: On the extracellular polymeric substances of anammox granular sludge","Boleij, M. (TU Delft Sanitary Engineering)","van Loosdrecht, Mark C.M. (promotor); Lin, Y. (copromotor); Delft University of Technology (degree granting institution)","2020","In biofilms, microorganisms are embedded in a hydrated matrix that provides a stable structure and protection against influences fromthe environment. Thismatrix is formed by extracellular polymeric substances (EPS) that are produced by the microorganisms of the biofilm. A major part of the microorganisms in nature lives in aggregated forms like biofilms. Yet, knowledge about biofilm formation, composition and structure is limited. A specific formof biofilm is granular sludge. A granule is a spherical biofilmthat is not attached to a surface or carrier. In wastewater treatment, granular sludge systems are used for efficient wastewater treatment. Due to the high settling velocity of granules, granular sludge-based plants can be built smaller, compared to conventional plants (with flocculent sludge). Anaerobic ammonium oxidizing (anammox) bacteria are applied in granular sludge systems in wastewater treatment. Anammox bacteria are important players in the nitrogen cycle in wastewater treatment, as well as in the natural environment. Although the formation of granular sludge is not completely understood, EPS are the key components in the formation of the matrix that provides a stable structure wherein the bacteria are embedded. The aim of this thesis was to characterize the EPS composition of anammox granular sludge.","","en","doctoral thesis","","978-94-6380-739-5","","","","","","","","","Sanitary Engineering","","",""
"uuid:e13606bc-e466-44c6-8a98-dd89aac8fdc4","http://resolver.tudelft.nl/uuid:e13606bc-e466-44c6-8a98-dd89aac8fdc4","Using Social Media to Characterise Crowds in City Events for Crowd Management","Gong, X. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Daamen, W. (promotor); Bozzon, A. (copromotor); Delft University of Technology (degree granting institution)","2020","Using social media data, such as Twitter and Instagram, I identify relevant information, develop data models, estimate and analyse crowds’ characteristics in city events, including demographic and city-role composition, Spatial-temporal distribution, sentiment estimation, Points of Interest preferences, word use, crowd size and density estimation, which help crowd managers make better decisions.","Social media; crowd management; Data analysis","en","doctoral thesis","TRAIL Research School","978-90-5584-270-4","","","","TRAIL Thesis Series no. T2020/14, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:34f174d5-f8ad-4bf2-86f7-47660a84fe64","http://resolver.tudelft.nl/uuid:34f174d5-f8ad-4bf2-86f7-47660a84fe64","On the role of microtubules in cell polarity: A reconstituted minimal system","Vendel, K.J.A. (TU Delft BN/Marileen Dogterom Lab)","Dogterom, A.M. (promotor); Delft University of Technology (degree granting institution)","2020","Every living organism consists of cells. Even for the simplest single-cell organism, this cell is extremely complex. Thousands of components (such asDNA, cytoskeletal filaments, proteins, lipids, nutrients and energy) are organized both spatially and temporally to ensure proper functioning of vital cellular processes. One of those processes is pattern formation, or cell polarity. Cell polarity is defined as the morphological and functional differentiation of cellular compartments in a directional manner. This directionality is crucial for processes like cell division, cell migration and cell growth, which require an asymmetric action of the cell. When polarity is inhibited by silencing of cell polarity proteins, cells become deformed and have trouble to function, if viable at all. It is well-known that cell polarity is the result of reaction-diffusion and cytoskeleton-based mechanisms, but the exact mechanisms remain unknown. In this thesis, we focus on the latter, and more specifically on one type of cytoskeletal filaments: microtubules (MTs). Our goal is to study the role of MTs in cell polarity establishment.","cell polarity; emulsion droplets; microtubules; reconstitution","en","doctoral thesis","","978-90-8593-447-9","","","","","","","","","BN/Marileen Dogterom Lab","","",""
"uuid:0c9f992c-7390-4d8f-aa6d-d89f0e7866a0","http://resolver.tudelft.nl/uuid:0c9f992c-7390-4d8f-aa6d-d89f0e7866a0","Electronic Properties of (Pseudo-) Two-Dimensional Materials","Janssen, V.A.E.C. (TU Delft QN/van der Zant Lab)","van der Zant, H.S.J. (promotor); van der Molen, S.J. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis describes research into the interaction between electrons and various (pseudo) two-dimensional materials. This research is using two approaches: in Chapters 3 and 4 a low-energy electron microscope is used, and in Chapters 5 and 6 transport properties are studied. Chapter 1 introduces the concept of a two-dimensional material. First, the various kinds of such materials are illustrated. Secondly, the specific materials used in this thesis will be treated. We will see that two-dimensionality can be achieved in different ways: first of all top-down in a method where layers are peeled off a crystal until a single atomic layer remains. Secondly: bottom-up, in a method where a single layer is created from smaller components. Chapter 2 introduces the setup which was used for the measurements in Chapters 3 and 4. In these chapters, we will look at materials using electrons, in a low-energy electron microscope (LEEM). A regular microscope works by illuminating a sample with light. In a microscope, we observe bright and dark patches (corresponding to reflection and absorption of the light, respectively), as well as colors (corresponding to reflection and absorption of different wavelengths or energies of the light). We can also magnify objects using lenses. The LEEM works in a very comparable way, with the major difference that we do not use light (i.e. photons) but electrons to image the sample. An image is formed by electrons after interaction with the sample has taken place. 
This image can also be magnified, and contains bright and dark patches, from which the interaction of the material with the electrons can be established. Besides this, it is possible to change the electron energy in the setup, which makes it possible to measure the interaction at different energies. In the third Chapter we use the LEEM’s ability to measure the atomic orientation of thin layers of crystal. We look at graphene, a two-dimensional lattice of carbon atoms. This graphene was grown on a wafer. Contrary to peeling a crystal to atomically thin layers, this growth method is compatible with industrial processes, which require large slabs of graphene in predictable shapes. In developing these growth methods, it turns out to be difficult to grow large pieces of single-crystal material. With LEEM we look at differences in angular orientation in a layer of graphene. The motivation for this is that boundaries between such domains have a negative influence on the conductive properties of the material. In the fourth Chapter a method is extended to measure and visualize band structures in two-dimensional materials. We look specifically at molybdenum disulfide (MoS2) and hexagonal boron nitride (hBN). The method (scanning ARRES) rapidly scans the electron beam across the first Brillouin zone. This gives a complete image of the band structure of these materials at energies above the Fermi level plus work function. The fifth and sixth Chapters concern single layer superstructures built out of nanocrystals. The building blocks are lead selenide (PbSe) single crystals in the form of a truncated cube, with a diameter of about 5 nm. By allowing these crystals to organize on a fluid surface, a single layer of crystals emerges. These crystals bond covalently in the direction of the atomic lattice. The material which emerges from this process can have multiple shapes; in this thesis we study the square structure.
In Chapter 5 we study the conductance properties of such a structure at room temperature, under the influence of an ionic-fluid gate. This gate makes it possible to achieve high charge densities in these structures. We measure high mobilities for these systems, on the order of 1 cm2/Vs. In the sixth Chapter these samples are cooled to approximately 4 K. Despite the high mobilities measured in Chapter 5, the dependence of the conductance on temperature shows that transport is dominated by a hopping process and not by band transport, at the length scale of these samples.","2D materials; low energy electron microscopy (LEEM); angle-resolved reflected-electron spectroscopy (ARRES); nano-crystal superlattices; transport; ionic-liquid gate","en","doctoral thesis","","978-90-8593-448-6","","","","","","","","","QN/van der Zant Lab","","",""
"uuid:e0fa631a-29a5-4aca-b245-dad04db03904","http://resolver.tudelft.nl/uuid:e0fa631a-29a5-4aca-b245-dad04db03904","Transferases in Biocatalysis","Mestrom, L. (TU Delft BT/Biocatalysis)","Hanefeld, U. (promotor); Hagedoorn, P.L. (copromotor); Delft University of Technology (degree granting institution)","2020","Enzymes are nature’s catalyst of choice for the highly selective and efficient coupling of carbohydrates. Enzymatic sugar coupling is a competitive technology for industrial glycosylation reactions, since chemical synthetic routes require extensive use of laborious protection group manipulations and often lack regio- and stereoselectivity.","Transferases; Biocatalysis; trehalose transferase; glycosyl transferase; immobilization,; acyl transferase; carbohydrate","en","doctoral thesis","","9789464023886","","","","","","","","","BT/Biocatalysis","","",""
"uuid:155ea22f-6686-4196-a270-743613ad1c33","http://resolver.tudelft.nl/uuid:155ea22f-6686-4196-a270-743613ad1c33","Methods for Multi-Parametric Quantitative MRI","van Valenberg, W. (TU Delft ImPhys/Medical Imaging; TU Delft ImPhys/Computational Imaging)","van Vliet, L.J. (promotor); Vos, F.M. (promotor); Poot, D.H.J. (copromotor); Delft University of Technology (degree granting institution)","2020","Magnetic resonance imaging (MRI) is the primary modality for the imaging of soft tissues (e.g. brain, muscle, liver). Therefore, it is an essential radiological tool for diagnosis and surgical planning. The contrast in MR images is due to tissues responding differently to the magnetic fields generated by the scan-ning system. This response can be de-scribed by the physical properties of the tissue (e.g. proton density, magnetic relaxation) and the magnetic fields. These physical properties are represent-ed by multiple parameters that can be estimated through quantitative MRI (qMRI) methods. The parameters are considered more reproducible than con-ventional MR images, which simplifies the comparison of MR data from differ-ent subjects or scanning systems. Esti-mating multiple parameters simultane-ously is needed to reduce error from system imperfections and deliver accu-rate estimates of the physical tissue pa-rameters...","","en","doctoral thesis","","","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:c46af1d3-77b7-40f8-9e0a-3ad634bbdb47","http://resolver.tudelft.nl/uuid:c46af1d3-77b7-40f8-9e0a-3ad634bbdb47","Effects Assessment for Targeting Decisions Support in Military Cyber Operations","Maathuis, E.C. (TU Delft Information and Communication Technology)","van den Berg, Jan (promotor); Pieters, W. (promotor); Delft University of Technology (degree granting institution)","2020","Cyber Warfare is perceived as a radical shift in the nature of warfare. It can represent a real alternative next to other types of Military Operations to achieve military and/or political goals in front of adversaries. To this end, Cyber Operations use specific technologies i.e. cyber weapons/capabilities/means. With a short but intense history of incidents/ events labelled as Cyber Operations or Cyber Warfare incidents, their potential and scale of impact has proven to cross geographical and digital borders. In this way, their effects are impacting not only their engaged targets, but also other collateral actors and systems, at local, national, regional, and global scale.","cyber security; cyber operations; cyber warfare; cyber weapons; artificial intelligence; intelligent systems; fuzzy logic; ontology; military operations; war; targeting; collateral damage; laws of war","en","doctoral thesis","","","","","","","","","","","Information and Communication Technology","","",""
"uuid:0e828acb-5ade-4de2-b5a6-e5ae38d2aa9c","http://resolver.tudelft.nl/uuid:0e828acb-5ade-4de2-b5a6-e5ae38d2aa9c","Self–healing Thermal Interface Materials","Zhong, N. (TU Delft Novel Aerospace Materials)","van der Zwaag, S. (promotor); Garcia, Santiago J. (promotor); Delft University of Technology (degree granting institution)","2020","Thermal interface materials (TIMs) are widely used as gap–filler materials between electronic devices such as LEDs and ICs and their heat sink and aim to control the heat dissipation and to provide mechanical anchoring. As the rated power densities of electronic devices are increasing rapidly, the TIMS are exposed to higher thermal and mechanical loads. The resulting aging of the TIMs may lead to delamination and internal crack formation causing loss of heat transfer as well as mechanical integrity leading to premature device failure. Therefore there is an increased demand for efficient and reliable TIM which can avoid, or even better, mitigate this thermomechanical damage. This thesis aims to contribute to the introduction of self–healing concepts into polymer–based TIMs to increase their reliability and service lifetime. As such, each chapter targets one of the scientific issues that are related to the development of commercial self–healing TIMs.","Thermal interface materials; Functional composites; Reliability; Disulfide","en","doctoral thesis","","978-94-028-2155-0","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:f00eb0bb-6a04-44ba-a7ed-89127a4b3029","http://resolver.tudelft.nl/uuid:f00eb0bb-6a04-44ba-a7ed-89127a4b3029","Self-sensing of coil springs and twisted and coiled polymer muscles","van der Weijde, J.O. (TU Delft Biomechatronics & Human-Machine Control)","Vallery, H. (promotor); Babuska, R. (promotor); van Ostayen, R.A.J. (promotor); Delft University of Technology (degree granting institution)","2020","The need to integrate robots in society grows, as several socioeconomic issues put pressure on our current level of productivity and prosperity. This requires robots to safely interact with unpredictable and fragile stakeholders, such as humans. Compliant actuation can facilitate such safe physical interaction.
The Series Elastic Actuator (SEA) and the Twisted and Coiled Polymer Muscle (TCPM) constitute two compliant actuators with favorable properties. However, both need sensors to be able to perform closed-loop control. This complicates design and integration of SEAs, and negates two major benefits of TCPMs. This problem can be solved by determining the state of the actuator via structures or materials that are already part of the actuator, i.e. self-sensing....","Self-Sensing; Compliant Actuation; Coil Springs; Twisted and Coiled Polymer Muscles","en","doctoral thesis","","978-94-6402-484-5","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:9b8243b3-cc41-4c70-a95d-158c7ad96bb8","http://resolver.tudelft.nl/uuid:9b8243b3-cc41-4c70-a95d-158c7ad96bb8","Cavitation implosion loads from energy balance considerations in numerical flow simulations","Schenke, S. (TU Delft Ship Hydromechanics and Structures)","van Terwisga, T.J.C. (promotor); Westerweel, J. (promotor); Delft University of Technology (degree granting institution)","2020","Cavitation erosion is a problem in the design of a wide range of fluid machinery involving liquid flows. Ship propellers, rudders, hydro pumps and turbines or diesel injectors are some of the most prominent examples. Cavitation occurs at locations of high local flow velocity, where the pressure may drop so low that the liquid phase vaporizes. The violent collapse of cavitating structures in regions of pressure recovery can result in high pressure loads and severe damage of such devices. Erosive cavitation is typically encountered when the hydrodynamic efficiency of fluid machinery is optimized. In order to find an appropriate balance in the design trade-off between hydrodynamic efficiency and the risk of cavitation erosion damage, there is a need for computational tools that can predict the risk of cavitation erosion in the early design and optimization process. The prediction of cavitation erosion risk using Computational Fluid Dynamics (CFD), however, is a major challenge because the local erosion damage is the result of extreme pressure loads forming at the final stage of cavity collapses at extremely small scales in both space and time. Due to limited computational resources, such small scales can usually not be resolved for flow problems relevant to engineering applications...","cavitation erosion; implosion; energy balance","en","doctoral thesis","","978-94-6366-305-2","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:35022242-b3ba-48d9-b3f6-c6ab20b8cc19","http://resolver.tudelft.nl/uuid:35022242-b3ba-48d9-b3f6-c6ab20b8cc19","Closing the loop in model-based wind farm control","Doekemeijer, B.M. (TU Delft Team Jan-Willem van Wingerden)","van Wingerden, J.W. (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2020","Open-loop wake steering inconsistently improves the energy yield of wind farms in field experiments in the literature. This thesis matures wind farm control technologies for power maximization in a model-based closed-loop framework towards real-world practical applicability. Firstly, open-loop wake steering is evaluated on a commercial onshore wind farm in this thesis. While a net increase in energy yield is measured, the experiment highlights the inaccuracy of the simplified wind farm model, leading to situational losses in energy yield. Secondly, this thesis demonstrates that wind turbine power and wind vane measurements can be used to calibrate the simplified model online to maintain a high model accuracy. This closed-loop concept is tested in high-fidelity simulation, showing a consistent energy yield increase of 1.4%. Finally, this thesis explores the usage of dynamic wind farm models for control, which would further push the accuracy of simplified models yet is held back by the large computational costs. The contributions in this dissertation greatly advance the status quo of wind farm control solutions and their applicability in commercial wind farms.","closed-loop wind farm control; wake steering; yaw-based wake redirection; power maximization; model-based optimization; real-time model adaptation; nonlinear state estimation; FLORIS; WFSim; SOWFA","en","doctoral thesis","","","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:706a49f8-f86d-49ef-aee8-22ab105b8175","http://resolver.tudelft.nl/uuid:706a49f8-f86d-49ef-aee8-22ab105b8175","The role of ppGpp in E. coli cell size control","Büke, F. (TU Delft BN/Greg Bokinsky Lab)","Tans, S.J. (promotor); Bokinsky, G.E. (promotor); Delft University of Technology (degree granting institution)","2020","Bacteria have been an integral part of human life since the ancient times either as cooperative tenants living in and around us or as constant threats to our health and wellbeing. Since prehistoric times we unknowingly used them as tools of fermentation and fought against them with haphazardly discovered natural remedies. Today, after more than 3 centuries since they were first observed with a microscope, our understanding of their functions has increased immensely along with our ability to alter it. We have discovered on a molecular level how life stores and transfers information, how this information is used to build biochemical machines with a myriad of functions, namely proteins, and how these proteins undertake their functions. Along with a better understanding came the ability to alter the biological information within DNA and to create new proteins that does not occur in nature.","Single cell; live cell microscopy; microfluidics; metabolism; regulation; E. coli; cell size; growth; ppGpp; Guanosine tetraphosphate","en","doctoral thesis","","978-94-6402-501-9","","","","","","","","","BN/Greg Bokinsky Lab","","",""
"uuid:25a762bf-f782-4ac2-b997-c91e95605c4f","http://resolver.tudelft.nl/uuid:25a762bf-f782-4ac2-b997-c91e95605c4f","Protecting quantum entanglement by repetitive measurement","Bultink, C.C. (TU Delft ALG/General; TU Delft QCD/DiCarlo Lab)","DiCarlo, L. (promotor); Vandersypen, L.M.K. (promotor); Delft University of Technology (degree granting institution)","2020","Information processing based on the laws of quantum mechanics promises to be a revolutionary new avenue in information technology. This emerging field of quantum information processing (QIP) is however challenged by the fragile nature of the quantum bits (qubits) in which quantum information is stored and processed. An error in even a single qubit makes the quantum processor go off-track, corrupting the calculation as a whole. Therefore, the chance for an erroneous outcome increases with the number of qubits in the processor. Large-scale QIP thus hinges on the ability to correct for these errors. Classical information processing often uses error correction algorithms to identify errors by checking whether information is consistent in multiple copies. This strategy is unfortunately not applicable to QIP as quantum states cannot be copied. Moreover, direct measurements on qubits collapse their quantum states, reducing them to classical information. Fortunately, the theory of quantum error correction (QEC) overcomes these complications by encoding quantum information in entangled states of many qubits and performing parity measurements to identify errors in the system without destroying the encoded information. Implementing these codes is challenging as it requires many qubits and quick interleaving of operations and measurements. Moreover, to not introduce more errors in the system than QEC can solve for, these operations and measurements need to be of sufficient fidelity and speed...","","en","doctoral thesis","","978-90-8593-449-3","","","","","","","","","ALG/General","","",""
"uuid:0710d631-fcba-496d-8436-48961bcaaf21","http://resolver.tudelft.nl/uuid:0710d631-fcba-496d-8436-48961bcaaf21","Countercurrent Heat Exchange Building Envelope Using Ceramic Components","Vollen, J.O. (TU Delft Design of Constrution)","Knaack, U. (promotor); Klein, T. (promotor); Delft University of Technology (degree granting institution)","2020","Research and development in building envelope design have promoted the convergence of two system types, Thermo-Active Building Systems and Adaptive Building Envelopes, that re- conceptualize the envelope as a distributed energy transfer function that captures, transforms, stores, and even re-distributes energy resources. The widespread deployment of Thermo-Active Building Systems as a building envelope will depend on several factors. These factors include the value of the design attributes that impact energy transfer in relation to the performance of the building envelope assembly and the return on investment that these attributes individually or in the aggregate can provide as a reduction in Energy Use Intensity. The research focus is on the design development, testing, and energy reduction potential of a Thermo-Active Building System as an adaptive countercurrent energy exchange envelope system using ceramic components: the Thermal Adaptive Ceramic Envelope.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-268-0","","","","A+BE I Architecture and the Built Environment No 5 (2020)","","","","","Design of Constrution","","",""
"uuid:3c6817fd-9d04-4461-9253-f02f0ca78a6a","http://resolver.tudelft.nl/uuid:3c6817fd-9d04-4461-9253-f02f0ca78a6a","An experimental approach into the quantification of steering and balance behaviour of bicyclists","Dialynas, G. (TU Delft Biomechatronics & Human-Machine Control)","Happee, R. (promotor); Schwab, A.L. (copromotor); Delft University of Technology (degree granting institution)","2020","The aim of this thesis is to derive bicycle rider control models, based on experimental data, that mimic the rider in his balance control task at various forward speeds. These rider control models can help to understand cyclists falls, improve training techniques, assess the handling properties of new bicycle designs and create active balance control systems (e.g. steer assist). This thesis consists of 9 chapters; Chapter 1 introduces relevant background theory and identifies the research gap.
Chapter 2 presents some effects of crosswind on the lateral dynamics of a bicycle and on rider control. The chapter gives insight into how rider control modelling can be used to assess crosswind-related falls. Simulations indicated that crosswind has a considerable effect on the stability and control of the bicycle. Increasing wind speed can make an uncontrolled bicycle resonate for all forward speeds. The rider control effort increases considerably and a constant steer torque is required to keep the bicycle at a straight heading.
Chapter 3 investigates the dynamic response of the bicycle rider’s body during vertical, fore-and-aft and lateral perturbations in order to understand how riders are using postural control to restrain excessive movements and prevent falling off the seat. The analysis is presented by means of apparent mass (APMS) and seat-to-sternum transmissibility (STST) functions in the frequency domain. Measured forces at saddle, steer and pedals revealed that for each individual motion the rider applied forces in all three directions. Heave and surge motion interacted with each other and had similar responses. Sway showed totally different responses and weak interaction with the other two motions. Resonant frequencies were considerably higher in the vertical direction as compared to the longitudinal direction. Lateral measurements showed no resonance, and trunk postural control was evident in the APMS. The results of this chapter can be used to identify the parameters of biodynamic lumped human-machine models. Such models can support the development of more comfortable and safe bicycle designs and suspension systems.
Chapter 4 presents the design and implementation of an instrumented steer-by-wire bicycle (SBW) that was designed and built at the TU Delft bicycle laboratory. The SBW was used as a versatile experimental platform to capture the rider’s responses with (haptics on) and without (haptics off) steering torque feedback during lateral perturbation experiments. Simulations and testing of the steer-by-wire system indicated good tracking performance between 0-2.5 Hz and almost identical steer stiffness to the Carvallo-Whipple bicycle model in a frequency range of 0-3 Hz and a forward speed range of 0-10 m/s. The bicycle served its purpose successfully: the responses of the rider’s control actions under lateral perturbations were captured by means of impulse response functions (IRFs) in Chapter 5. Results failed to indicate any statistically significant difference between the two steering configurations (haptics on/off).
Chapter 6 presents and validates a parametric rider control model using data presented in Chapter 5 and uses this model to further assess the effect of haptic feedback in the balance task of bicycling. Bicycle and rider mechanics have been modelled using the Carvallo-Whipple bicycle model extended with rider inertia. A balancing and heading controller was added, capturing visual, vestibular and proprioceptive sensory information using feedback of roll angle, roll angle rate, heading angle, heading angle rate, steering angle and steering torque, taking into account muscular activation dynamics. Non-parametric and parametric model responses failed to indicate any statistically significant difference between the haptics on/off configurations. However, further analysis of the haptics-off configuration made it apparent that the rider still receives relevant torque feedback due to the inertia of the handlebars. This reduced feedback proved adequate for the rider to control the bicycle without any major steering discrepancies. To further evaluate the effect of torque feedback in simulations, we disconnected the handlebar torque feedback loop of the parametric rider model. In addition, we also disconnected the handlebar position and velocity feedback. Results showed that handlebar torque feedback is highly important during the riding process. This knowledge might be crucial for the development of new safety systems that could further optimize bicycle handling and assist the rider’s steer control actions in critical situations, preventing falls.
Chapter 7 outlines the design and hardware selection for a bicycle simulator. The design requirements, together with a detailed description of the hardware selection and testing, are presented. The simulator was designed to explore human control behaviour in a safe environment. Preliminary tests showed that all subjects could balance and manoeuvre the bicycle when a simplified bicycle model was used to generate haptic feedback and project the dynamics in the virtual environment. Visual roll of the horizon turned out to be an effective tool for creating the illusion of physical roll, but motion sickness was reported.
This thesis ends with the discussion and conclusion (Chapters 8 and 9), which highlight the developed experimental facilities and the main findings of the research. The chapters herein investigate the effects of external perturbations on bicycle stability and human control using numerical modelling and experimental bicycles capable of measuring kinematics and rider-applied forces. This interdisciplinary approach delves into the foundations of human control modelling from both a biomechanical and a biomechatronics engineering perspective in an effort to improve cycling safety and reduce falls.","bicycle dynamics; bicycle control; human biomechanics","en","doctoral thesis","","","","","","","2021-09-30","","","Biomechatronics & Human-Machine Control","","",""
"uuid:780736bf-c2ae-45a0-9fb0-059bbbca9ce7","http://resolver.tudelft.nl/uuid:780736bf-c2ae-45a0-9fb0-059bbbca9ce7","Multiscale spatial contexts and neighbourhood effects","Petrović, A. (TU Delft Urban Studies)","van Ham, M. (promotor); Manley, D.J. (copromotor); Delft University of Technology (degree granting institution)","2020","This thesis has developed alternative methods of operationalising neighbourhoods at multiple spatial scales and used them to advance our understanding of spatial inequalities and neighbourhood effects. The underlying problem that motivated this thesis is that many empirical studies use predefined administrative units, and often this does not align with the underlying theory or geography. Despite the extensive literature on neighbourhood effects and, more generally, on sociospatial inequalities, spatial scale remains an under-analysed concept. As a response to this research gap, this thesis takes a multiscale approach to both theory and the empirical analysis of neighbourhood effects, highlighting the multitude of spatial processes that may affect individual outcomes of people. To operationalise this, we created bespoke areas (centred around each residential location) at a range of one hundred scales representing people’s residential contexts, primarily in the Netherlands but also in multiple European capitals. Using microgeographic data and a large number of scales combined with small distance increments revealed subtle changes in sociodemographic characteristics across space. In doing so, we provided new insights into ethnic segregation, potential exposures to poverty, and neighbourhood effects on income, all in light of the fundamental issue of spatial scale: The analyses of sociospatial inequalities are substantially affected by the scale used to operationalise spatial context, and this varies within and between cities and urban regions. 
The aim of this thesis was therefore not to find a single, ‘true’ scale of neighbourhood, but to acknowledge, operationalise, and better understand the multiplicity of spatial scales.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-307-6","","","","A+BE I Architecture and the Built Environment No 15 (2020)","","","","","Urban Studies","","",""
"uuid:37c8ec51-59f0-4dec-a3fa-bdd3c69e09db","http://resolver.tudelft.nl/uuid:37c8ec51-59f0-4dec-a3fa-bdd3c69e09db","Modelling Safety Impacts of Automated Driving Systems in Multi-Lane Traffic","Mullakkal-Babu, F.A. (TU Delft Transport and Planning)","van Arem, B. (promotor); Happee, R. (promotor); Wang, M. (copromotor); Delft University of Technology (degree granting institution)","2020","The past three decades have witnessed the emergence of several automotive applications that take over the task of vehicle driving on a sustained basis. The most advanced class of such applications is known as Automated Driving Systems (ADSs). ADS can autonomously operate the vehicle on road stretches that fall under its operational design domain. Industry and governments expect that such systems will be technologically feasible shortly and the traffic will be mixed with system-driven and human-driven vehicles. Even though ADSequipped vehicles will have an impact on traffic safety, there is no clarity on if they would enhance or detriment traffic safety and at what conditions and magnitude. A human and an ADS apply fundamentally different processes to acquire information, make decisions, and operate the vehicle. Therefore, our current insights on the relationship between driving behaviour and safety may not be sufficient to predict the possible impacts of ADS systems. Hence there is an urgent need to study the impacts of ADS functionalities and design factors on traffic safety.","","en","doctoral thesis","TRAIL research school / School of transportation of SEU research school","978-90-5584-265-0","","","","TRAIL Thesis Series no. T2020/6, The Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:a5428de8-dc6b-4a45-b0de-d4ac1ad54697","http://resolver.tudelft.nl/uuid:a5428de8-dc6b-4a45-b0de-d4ac1ad54697","Convective heat transfer in coarse-grained porous media: A numerical investigation of natural and mixed convection","Chakkingal, M. (TU Delft ChemE/Transport Phenomena)","Kleijn, C.R. (promotor); Kenjeres, S. (promotor); Delft University of Technology (degree granting institution)","2020","Heat transfer by the motion of fluids, referred to as convective heat transfer, is ubiquitous. Convective heat transfer in enclosures packed with solid obstacles, is of great importance in various engineering and real-life applications, such as in blast furnaces, refrigeration devices, distribution transformers, nuclear waste disposal, energy storage, etc. Both natural convection, where the heat transfer occurs due to the flow induced by density differences, and mixed convection, where the combined influence of natural convection and forced convection is of importance, play an important role in these applications.","Natural convection; mixed convection; packed bed; simulations","en","doctoral thesis","","978-94-6416-089-5","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:5cac3120-16ac-4b06-977e-c703d37bb342","http://resolver.tudelft.nl/uuid:5cac3120-16ac-4b06-977e-c703d37bb342","PC-MRI Blood-Flow measurements: Visualization and Data Assimilation","de Hoon, N.H.L.C. (TU Delft Computer Graphics and Visualisation)","Vilanova Bartroli, A. (promotor); Eisemann, E. (promotor); Jalba, A.C. (promotor); Delft University of Technology (degree granting institution)","2020","Heart and vessel diseases, or cardiovascular diseases (CVDs), are globally the main cause of mortality and morbidity. The blood flow plays an important role in their occurrence and progression. Therefore, knowledge of the blood flow is of key importance to reduce and threat these diseases. This knowledge requires both high-quality data and an insightful visual representation. Using advanced imaging techniques, for example phase-contrast magnetic resonance imaging (PC-MRI), the blood flow in a patient can be measured. This technique provides patient-specific 3D data over time, that is, for every measured position, a so-called voxel, the velocity of the blood is measured in all three directions. From these three values per voxel, a vector can then be reconstructed that tells us how fast and in which direction the blood was flowing at the measured location. By doing this for multiple moments one can obtain this data throughout a heart cycle. Since this data is three dimensional and changing over time generating a visual representation of this data is challenging for multiple reasons. One such reason is occlusion where part of the visualization is hidden by the rest of the visualization. Another is visual clutter where the visualization is ``too busy'' and therefore unclear. Moreover, the measured data is subject to noise and artefacts which further hinder the visualization. In this dissertation, novel visualization approaches are presented that address these and other visualization challenges of PC-MRI data. 
Besides PC-MRI data, blood flow data can be created using computer simulation models, for example computational fluid dynamics (CFD) models, which are based on physical models. For this, the shape of the blood vessel is usually measured using imaging techniques and then used to simulate the blood flow inside the vessel. However, both measuring and modelling the blood flow have their own advantages and disadvantages. For example, PC-MRI measurements suffer from the inevitable effects of measurement noise, which causes the data to deviate from the actual blood flow in the patient. Simulations, on the other hand, require detailed input information and are based on model assumptions, and hence result in data that is not always representative of the patient; the resulting data does, however, correspond to the physical model. In addition to visualization approaches, this work also presents novel methods that combine PC-MRI measurements and simulations such that the resulting data is both physically plausible and patient-specific.","visualization; Medical imaging; MRI; simulation; CFD; Data assimilation","en","doctoral thesis","","","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:521577f0-a361-4f92-94c5-02a3bc61ef44","http://resolver.tudelft.nl/uuid:521577f0-a361-4f92-94c5-02a3bc61ef44","Wind turbine control: Advances for load mitigations and hydraulic drivetrains","Mulders, S.P. (TU Delft Team Jan-Willem van Wingerden)","van Wingerden, J.W. (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2020","In the last decades, tremendous efforts have been put in advancements of wind turbine technology by scientific research and industrial developments. One of the focal areas has been the upscaling of turbines to increase power capacity. However, by enlarging turbine sizes, the square-cube law dictates rising costs per unit of power capacity. To break this trend of increased expenses, more advanced control techniques are key in facilitating load reductions and system level advances. The synthesis of novel controller designs, and advancements of existing strategies, are in this thesis effectuated by leveraging well-established control theory. This method resulted in analysis tools, that gave rise to practical applicable implementations, of which some are evaluated on real-world setups. The employed approach has thereby shown to stimulate further advancements of wind turbine technology...","wind turbine control; open-source controller; individual pitch control; model predictive control; hydraulic drivetrain","en","doctoral thesis","","978-94-6402-183-7","","","","Due to the Corona epidemic, this promotion has been moved from March 31, 2020 to September 11, 2020. The PDF still contains the original promotion date.","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:0fdce774-854d-4b2e-a391-758479dd5abc","http://resolver.tudelft.nl/uuid:0fdce774-854d-4b2e-a391-758479dd5abc","Co-design in the coastal context","d’Hont, Floortje (TU Delft Policy Analysis)","Slinger, J (promotor); Thissen, W.A.H. (promotor); Delft University of Technology (degree granting institution)","2020","In this research, we set out to investigate the phenomenon of ‘co-design’ and explore the applicability of co-design in the complex coastal context. We have turned to investigating design-oriented, collaborative activities aimed at innovative coastal solutions (co-design) and how they strengthen the development of solutions for coastal problems. Co-design provides a means of realising various ends. We observed, for instance, co-design activities with the goal of collaboratively designing engineering solutions to coastal problems. In other situations, the goals of the co-design activities were wider or were aimed at adapting policy. In general, we see that co-design activities aid in identifying (value) dilemmas, clarifying the diversity in actor perspectives, and broadening the potential space for solutions. Co-design activities ideally give different actors the room to work together in a creative and open manner. Local, scientific, practice-based and other forms of knowledge are ideally used in an egalitarian fashion in the search for solutions to coastal management problems. The thesis ‘co-design in the coastal context’ contributes to insights in how to design the co-design activities, reflects upon insights offered by a broad range of literature on co-design in complex systems, and offers usable and practical methods to evaluate such co-design activities.","co-design; coastal management; systems thinking; participation; Stakeholder engagement; Transdisciplinary; Social-ecological systems","en","doctoral thesis","","9789065624475","","","","","","","","","Policy Analysis","","",""
"uuid:2544d1be-5c42-4eea-b360-9e9273f4f218","http://resolver.tudelft.nl/uuid:2544d1be-5c42-4eea-b360-9e9273f4f218","Holographic wavefield imaging for surface reconstruction and 3d tomography","van Rooij, J. (TU Delft ImPhys/Computational Imaging)","Kalkman, J. (promotor); van Vliet, L.J. (copromotor); Delft University of Technology (degree granting institution)","2020","Optical imaging is the imaging of objects with visible light. It is a tool often used for diagnostic purposes, such as in biomedical and material sciences. Digital holography is an optical imaging technique that captures and images the amplitude as well as the phase (the complex amplitude) of the lightwave. An advantage is that the complex amplitude can be calculated in different propagation planes. The goal of this thesis is to use digital holography to image depth of a reflecting surface aswell asmake 3D images of biological samples, and to contribute to the theoretical understanding in this regard.","Holography; Tomography; Imaging; Interferometry; Metrology; Zebrafish; Microscopy; 3D Imaging; Biomedical imaging; Digital holography; Phase imaging; Polarization; Surface characterization; Talbot effect; Inverse problems; Optics: optical devices and systems","en","doctoral thesis","","978-94-6421-006-4","","","","","","","","","ImPhys/Computational Imaging","","",""
"uuid:52b8e925-b619-45f8-9056-39454e82fe02","http://resolver.tudelft.nl/uuid:52b8e925-b619-45f8-9056-39454e82fe02","Acoustic mapping and monitoring of the seabed: From single-frequency to multispectral multibeam backscatter","Gaida, T.C. (TU Delft Aircraft Noise and Climate Effects)","Simons, D.G. (promotor); Snellen, M. (promotor); Delft University of Technology (degree granting institution)","2020","With the increasing human activities in the marine environment, such as fisheries, dredging, coastal protection or construction of marine infrastructure, seabed sediment and habitat mapping have become highly relevant for the development of sustainable marine management strategies. Compared to traditional mapping methods, primarily based on bed sampling, multibeam echosounding belongs to the cutting-edge technology to time-efficiently acquire high-resolution bathymetric and backscatter (BS) data over large areas. Using classification methods to combine the acoustic data with ground-truthing, large-scale maps can be automatically and objectively produced, that enables to describe the distribution of benthic habitats or quantify marine resources. However, acoustic sediment classification still does not allow to discriminate between the entire heterogeneity of the seabed and is generally applied to a single multibeam echosounder dataset by means of revealing the seabed state only at a given time instant. 
Two challenging issues addressed within the scope of this thesis are summarized as: (1) Investigation of the applicability of repetitive multibeam (single-frequency) BS measurements for monitoring the seabed; and (2) Evaluation of the potential of multispectral BS to increase the acoustic discrimination between different seabed environments.","Multibeam echosounder; acoustic backscatter; sediment classification; habitat mapping; multispectral; coastal monitoring","en","doctoral thesis","","978-94-028-2129-1","","","","","","","","","Aircraft Noise and Climate Effects","","",""
"uuid:6b6f6ab4-0849-4bda-a024-9b06305e3b3c","http://resolver.tudelft.nl/uuid:6b6f6ab4-0849-4bda-a024-9b06305e3b3c","A simulation study for future satellite gravimetry missions","Miragaia Gomes Inacio, P. (TU Delft Physical and Space Geodesy)","Klees, R. (promotor); Ditmar, P.G. (copromotor); Delft University of Technology (degree granting institution)","2020","The Gravity Recovery and Climate Experiment (GRACE), launched in 2002, was the first low-low satellite-to-satellite tracking (ll-SST) satellite gravity mission. One of its primary objectives was to monitor the redistribution of mass in the Earth's system, which is of vital importance not only to the scientific community, but also to society in general. GRACE allowed for the mass redistribution monitoring at much smaller spatial scales than ever before. The data collected by the mission lead to a proliferation of researches in many scientific domains. The GRACE mission, completed in 2017, was considered as an outstanding success. Consequently, the GRACE Follow-On (GFO) mission was launched in 2018 to continue its legacy. With the GFO mission underway, it is now timely to look into the future of satellite gravimetry. The major goal of this thesis was to design and benchmark a set of ll-SST mission concepts with the potential to deliver unprecedented accuracy of mass redistribution estimates. The approach taken was to develop a simulation tool capable of handling arbitrarily complex satellite mission designs. In the first instance, this tool was used to analyze the error budget of the GRACE mission. A combination of simulated errors from various sources showed a very good agreement with observed noise in the GRACE inter-satellite acceleration data. Noise in the frequency range between 1 and 9 mHz, the origin of which was previously unknown, was explained by a combination of positioning, acceleration and ranging errors and errors in the atmosphere and ocean de-aliasing model. 
A good agreement between simulated and actually observed noise was only possible by properly accounting for the propagation of errors through the computed reference orbits. I called this error propagation mechanism the indirect effect. I formally defined the indirect effect and demonstrated that it propagates differently in different types of ll-SST missions. Next, the error budget of future missions that replicate GRACE was simulated. I confirmed that temporal aliasing errors are the ones that limit the performance of these missions. Better instrumentation will not significantly improve the performance of those missions. New mission concepts are required in order to surpass the performance level of the current ones. Afterwards, the tool was used to run small-scale simulations in order to gain insight into the mission design aspects which determine the performance of the mission. Small-scale simulations consider relatively short timespans (between 2 and 5 days) and the obtained solutions are typically computed up to a relatively low maximum SH degree (normally between 40 and 60). Using small-scale simulations, I could identify mission design aspects which impact the temporal and spatial resolution of ll-SST missions. Considering different gravity gradient directions as observables, I have shown that collecting multiple observables from a single formation greatly increases the spatial resolution of the mission compared to the single-observable case. This discovery motivates the consideration of formations consisting of more than two satellites in order to maximize the spatial resolution. I have also considered missions consisting of multiple formations. For these, I have shown that temporal aliasing errors can be minimized by orienting the polar orbital planes of the satellite formations such that they equipartition 3-D space. 
Specifically, for two-formation missions, the orbital planes should be perpendicular, while for three-formation missions they should be set 60° apart. On the basis of the small-scale simulations, I have proposed a set of satellite missions, which were benchmarked with full-scale simulations. The missions were designed to combine multiple observables in a single formation or in multiple formations. In the latter case, the orbital planes were oriented so as to minimize temporal aliasing errors. Of the proposed concepts, missions which considered along-track/pendulum (which I called gamma) and along-track/cartwheel (which I called sigma) combinations were found to yield the lowest total errors. Of those, I selected the single-formation along-track/pendulum combination (gamma) as the most promising for future ll-SST missions. I have shown that this concept yields large improvements in terms of spatial and temporal resolutions. At the same time, the gamma mission avoids the complexities of the cartwheel pair of satellites and, given that it considers a single satellite formation, is potentially cheaper and less complex than the alternatives which considered two. The gamma mission shows substantially lower errors compared to existing ll-SST missions, which may be further reduced when it is used as the basis for a multi-formation constellation of satellites.","GRACE; Temporal aliasing errors; Satellite formations; Satellite geodesy; Future gravity missions","en","doctoral thesis","","978-94-6366-311-3","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:056d0084-880b-438f-9589-e41a108b425a","http://resolver.tudelft.nl/uuid:056d0084-880b-438f-9589-e41a108b425a","Multi-modal Whole Slide Imaging","van der Graaff, L. (TU Delft ImPhys/Computational Imaging)","Stallinga, S. (promotor); van Vliet, L.J. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis explores a novel optical architecture for Whole Slide Imaging (WSI). This new architecture allows for multi-focal (3D) image acquisitions in a single scan pass. The multi-focal imaging capability is used to demonstrate 3D phase imaging and 3D imaging of thick tissue sections on a prototype scanner. Further, instrumentation for the extension of WSI to fluorescence imaging is developed: a technologically robust and cost effective method based on LEDs and a highly efficient method based on a multi-line laser illumination source. A WSI system is an optical instrument aimed at creating digital images of bio- logical samples mounted on a microscopy slide at a high throughput. WSI systems image tissues over large fields of view (∼ few cm), in 2D or in 3D (up to hundred layers of μm thickness), and at cellular resolution (∼1 μm). They are applied in high throughput screening in biology, and for novel computer aided medical diagnoses in the field of digital pathology. The optical architecture explored in this thesis is based on a tilted multi-line image sensor concept, originating from Philips. The goal of multi-line image ac- quisition is to enable closed-loop autofocus scanning, but it also allows for multi- focal image acquisitions. The core of this scanner concept lies in a novel design for a multi-line image sensor. The sensor is experimentally characterized for gain and noise, and a system model is developed to find the optimum signal-to-noise ratio (SNR) given the available photo-electron flux. This showed that images with a very high SNR of 292 can be acquired, provided that a sufficiently high photon flux can be realized. 
Two major issues with the sensor were found in our exper- imental characterization. At high line rates, the sensor showed missing symbols, leading to non-linearities in the read out. Further, the sensor showed high frequent fluctuations in the gain. 3D phase imaging and 3D imaging of thick specimens are two novel contrast modalities based on computational imaging techniques and are enabled by the availability of multi-focal images. For both techniques simplified algorithms were developed compatible with parallel processing at very high speeds. For 3D imaging of thick specimens a deconvolution technique is developed for improving the in- herently low axial contrast. 3D phase imaging is realized by a simplified algorithm for Quantitative Phase Tomography (QPT). QPT imaging is found to be able to im- age the sites labeled for Fluorescence in situ Hybridization (FISH) imaging, and provide additional structural information on unlabeled tissues or tissues stained for immunofluorescence. A system design study is presented showing that the in- plane transfer function has the character of a band-pass spatial frequency filter. The major opportunity for WSI systems to become compatible with fluores- cence imaging is addressed by the development of two imaging modalities. First, a widefield fluorescence WSI system with an LED illumination source is developed and built. A color sequential illumination strategy in combination with multi-band dichroics is used for multi-color imaging using a single monochromatic sensor. The main speed limitation is formed by the exposure time required to capture enough photo-electrons for a decent SNR. Based on the experimental results, a system with 96 Time Delayed Integration (TDI) lines is estimated to achieve a rea- sonable throughput of about 130 kPixel/s. This makes scanning possible of an area of 15 × 15 mm2 in three colors in about 23 min. 
Second, a novel optical architecture for multi-focal fluorescence image acqui- sitions based on a laser illumination source is proposed and realized in a proto- type. Illumination PSF engineering using diffractive optics is applied to generate a set of parallel scan lines in object space, that span a plane conjugate to a tilted im- age sensor. An important new element in the design is the use of higher order astig- matism to improve the uniformity of peak intensity and line width along the scan lines. Focusing the illumination on the sample provides a very high illumination efficiency and a confocal suppression of background. This optical architecture is projected to ultimately achieve a throughput of several hundreds MPixel/s, which would enable scanning an area of 15 × 15 mm2 in 8 layers in less than a minute. This thesis is concluded with an outlook to opportunities for future research in WSI systems. The challenges and some potential solutions for using a general purpose scientific CMOS (sCMOS) camera for multi-line scanning of a tilted ob- ject plane, and some opportunities for extension of WSI techniques to Light Sheet Microscopy (LSM) and Structured Illumination Microscopy (SIM) are discussed. In summary, this thesis investigates the imaging qualities and extension to computational imaging modalities of a brightfield WSI system and describes two approaches for fluorescence WSI.","","en","doctoral thesis","","978-94-6384-158-0","","","","","","","","","ImPhys/Computational Imaging","","",""
"uuid:11fc3020-2e13-4825-92ca-d4c9f7fd6815","http://resolver.tudelft.nl/uuid:11fc3020-2e13-4825-92ca-d4c9f7fd6815","Rational Design of Afterglow and Storage Phosphors","Lyu, T. (TU Delft RST/Luminescence Materials)","Dorenbos, P. (promotor); Delft University of Technology (degree granting institution)","2020","In this thesis, we have studied two types of charge carrier capturing and detrapping processes: (a) electron capturing and electron liberation; (b) hole
capturing and hole liberation. Both the (a) and (b) processes can be utilized for the rational design of afterglow and storage phosphors in different compounds.","afterglow; storage phosphor; charge carrier trapping processes; trap depth engineering; lanthanides; energy storage; Bi2+; Bi3+","en","doctoral thesis","","978-94-6380-906-1","","","","","","","","","RST/Luminescence Materials","","",""
"uuid:5b7f0c06-9664-4fa1-a1b8-0e5e64b500bb","http://resolver.tudelft.nl/uuid:5b7f0c06-9664-4fa1-a1b8-0e5e64b500bb","Making sense of cancer mutations: Looking into the wilderness beyond genes","Rashid, M.M. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); de Ridder, J. (promotor); Delft University of Technology (degree granting institution)","2020","Cancer is an umbrella terminology that binds hundreds of complex genetic diseases based on a set of common phenotypic hallmarks. Each cancer and their sub-types have their unique genomic profiles. The common factor that binds them all together is that they all arise from changes in the DNA. Theses changes range from single nucleotide levels variation to large scale chromosomal aberrations. The consequences of these changes also have distinct impacts on disease development and progression depending on their ability to alter the protein function. Changes in the DNA of a protein-coding gene might have a directly quantifiable impact while quantifying the impact of a change in the regulatory DNA (viz. noncoding) element is a non-trivial task. A better understanding of the complex interplay between coding and noncoding genetic variation will lead to a better understanding of the diseases and improve diagnostics and patient care.
This thesis proposes a novel framework for reliable prediction of somatic point
mutations in cancer genomes. The framework was applied to several whole-genome and exome-sequenced cancer datasets. Our findings suggested that a consensus-based approach produces more reliable results than individual mutation detection tools. We also proposed an in-silico post-processing workflow and an in-vitro validation guideline to improve detection accuracy using orthogonal techniques. Different cancers have distinct mutational burdens and profiles, and understanding these genomic sub-types will lead to better patient stratification and clinical management. Using mutational signature analysis, we investigated inter- as well as intra-tumour heterogeneity in colon adenomas and skin adnexal tumours. By comparing the mutational signatures as well as the mutation burdens of adult and paediatric patients, we identified striking genomic similarities between them. Based on these findings, we recommend that, as for many adult patients, the genomic profiles of paediatric patients should also be routinely taken into consideration when deciding the therapeutic course.
Mutations that give selective survival advantages to cancer cells are commonly
referred to as driver mutations. These mutations can occur either within the protein-coding region of the genome or beyond it. This thesis reviewed several available driver mutation detection tools and identified a few areas with considerable scope for improvement. We proposed a novel machine-learning-based framework to prioritize noncoding driver mutations in cancer genomes.","","en","doctoral thesis","","978-94-6416-052-9","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:ee8c4d2e-2c63-4f11-8124-4d67fbffd859","http://resolver.tudelft.nl/uuid:ee8c4d2e-2c63-4f11-8124-4d67fbffd859","The Privatisation of a National Project: The settlements along the trans-Israel Highway since 1977","Schwake, G. (TU Delft History, Form & Aesthetics)","Hein, C.M. (promotor); van Bergeijk, H.D. (copromotor); Delft University of Technology (degree granting institution)","2020","The settlements along the Trans-Israel Highway illustrate the privatisation of the national settlement enterprise. To understand this process, this dissertation focuses on the settlement production mechanism, which consists of the reciprocal interests of the government and various private groups to develop and domesticate the border area between the State of Israel and the occupied West-Bank - the Green-Line. Centring on the spatial privileges the state granted to diverse spatial agents, this dissertation examines the manner in which different favoured groups were given the power to colonise, plan, develop and market space as a means to enhance the state’s power over it. Investigating the gradual transformation of this production mechanism, this dissertation explores the increasing privatisation of the local economy and culture, as well as how this was manifested in the built environment. Examining the modifications in the architectural and urban products this mechanism produced, this dissertation analyses the materialisation of the privatised national settlement project and how it transformed together with the changing political and economic interests.","history; architecture; urbanism; privatisation; housing; conflict; frontiers; Israel/Palestine","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-304-5","","","","A+BE I Architecture and the Built Environment No 14 (2020)","","","","","History, Form & Aesthetics","","",""
"uuid:0da54248-4739-4bd2-af52-08109aa37113","http://resolver.tudelft.nl/uuid:0da54248-4739-4bd2-af52-08109aa37113","Quantum resource-saving protocols for early quantum networks","Lipinska, V. (TU Delft QID/Wehner Group)","Wehner, S.D.C. (promotor); Delft University of Technology (degree granting institution)","2020","The Internet as we know it has had an immense impact on the way we communicate. We can now do it faster and more securely than ever before. Enabling quantum communication between any two points on Earth is the next step towards even more secure communication. This is the goal of the quantum internet. Although it is hard to predict all of the applications for the quantum internet, many protocols running on a network connecting nodes able to process qubits have already been identified. Typically, these applications require many qubits to be realized, a requirement which will not likely be met in the early quantum internet.","quantum networks; quantum internet; quantum cryptography; quantum communication; distributed quantum computation","en","doctoral thesis","","978-94-6402-485-2","","","","","","","","","QID/Wehner Group","","",""
"uuid:d58185c6-1630-4e4a-a60a-be878f54e7fa","http://resolver.tudelft.nl/uuid:d58185c6-1630-4e4a-a60a-be878f54e7fa","On the dynamics of non-planar thin liquid films","Shah, M.S. (TU Delft ChemE/Product and Process Engineering)","Kreutzer, M.T. (promotor); Kleijn, C.R. (promotor); Delft University of Technology (degree granting institution)","2020","Thin liquid films are fluid structures whose perpendicular length scale, typically of O(< 10 μm), is much smaller than their lateral length scale, typically of O(> 1 mm). From foams and emulsions to tear films on eyes, they widely occur in industrial processes and natural phenomena. Depending on the wetting energies between its different interfaces, a thin film is susceptible to developing an instability which can lead to its subsequent rupture. Thin films are a great example of how dynamics at the microscopic scale influence large-scale physical behaviour, with instabilities at the micron scale influencing a foam collapse or the blinking action of an eye. This thesis focuses on non-planar thin liquid films that are found, for instance, in between two foam bubbles or in partial wetting systems in microfluidic channels. The dynamics of such non-planar films is governed by two thinning mechanisms. The first mechanism involves drainage due to curvature differences, and results in a localized depression, commonly referred to as a dimple, at the connection between the planar and curved regions. The second thinning mechanism involves the growth of a fluctuation-originated instability arising from the competition between a stabilizing surface tension and destabilizing van der Waals forces. For this second thinning mechanism to manifest, the film’s lateral length (radius) needs to be large enough for unstable waves to fit within the film.
We study thin film dynamics by performing numerical simulations that incorporate all these crucial physical processes in the thin film equation...","thin liquid films; foams; Stochastic simulations","en","doctoral thesis","","978-94-028-2159-8","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:8389892d-f54c-4a35-ab18-7809f011c1f6","http://resolver.tudelft.nl/uuid:8389892d-f54c-4a35-ab18-7809f011c1f6","Effect of micro-cracking and self-healing on long-term creep and strength development of concrete","Lyu, W. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Delft University of Technology (degree granting institution)","2020","When concrete is subjected to sustained load, it first deforms elastically and then continues to deform with time. The stress-induced time-dependent deformation is, by definition, creep. Creep plays an important role in view of the serviceability, durability and sometimes even the safety of concrete structures. Prediction of the long-term creep is still a challenge. Apart from the time-dependent deformation, the microstructure, strength and elasticity of concrete are also continuously changing under sustained load. This will, in turn, have an influence on the creep deformation. Micro-cracking has been detected experimentally by acoustic emission techniques during creep tests for concrete loaded at different stress levels. It could contribute to both an extra deformation and a reduction in the strength and elasticity. This is somewhat contradictory to the experimental observation that there is an extra increase in strength (and elastic modulus) of concrete under sustained load, especially at low and medium stress levels, compared to load-free concrete. There must be another phenomenon during the creep process that ""resolves"" this contradiction. Although a few theories have been proposed in the past to explain the extra increase in strength of concrete under sustained load, the mechanism behind this phenomenon is not yet fully understood. Besides, how this extra increase in strength influences the long-term creep deformation has rarely been studied. In this research self-healing is considered as a promising mechanism to explain the extra increase in strength of concrete under sustained load.
The main aim of this research is to study the effect of micro-cracking and self-healing on the long-term creep and strength development of concrete under sustained load and to gain a better understanding of the behaviour of concrete under sustained load.","Long-term creep; Micro-cracking; Self-healing; Strength increase; Lattice model","en","doctoral thesis","","978-94-6366-283-3","","","","","","","","","Materials and Environment","","",""
"uuid:21b4b06f-31c9-4381-8490-31cad9b3c04f","http://resolver.tudelft.nl/uuid:21b4b06f-31c9-4381-8490-31cad9b3c04f","Investment planning for flexibility sources and transmission lines in the presence of renewable generation","Khastieva, D. (TU Delft Energie and Industrie)","Amelin, M. (promotor); De Vries, Laurens (promotor); Delft University of Technology (degree granting institution); KTH Royal Institute of Technology (degree granting institution); Comillas Pontifical University (degree granting institution)","2020","Environmental and political factors determine the long-term development of renewable generation around the world. The rapid growth of renewable generation requires timely changes in power system operation planning, investments in additional flexible assets and transmission capacity. The development trends of restructured power systems suggest that the current tools and methodologies used for investment planning lack coordination between transmission and flexibility sources. Moreover, a comprehensive analysis is required for efficient investment decisions in new flexibility sources or transmission assets. However, the literature does not provide an efficient modeling tool that allows such a comprehensive analysis. This dissertation proposes mathematical modeling tools as well as solution methodologies to support efficient and coordinated investment planning in power systems with renewable generation. The mathematical formulations can be characterised as large-scale, stochastic, disjunctive, nonlinear optimization problems. The corresponding solution methodologies are based on a combination of linearization and reformulation techniques as well as tailored decomposition algorithms. The proposed mathematical tools and solution methodologies are then used to provide an analysis of transmission investment planning, energy storage investment planning, as well as coordinated investment planning.
The analysis shows that to achieve a socially optimal outcome, transmission investments should be regulated. Also, the simulation results show that coordinated investment planning of transmission, energy storage and renewable generation will result in much higher investments in renewable generation as well as more efficient operation of renewable generation plants. Consequently, coordinated investment planning with regulated transmission investments results in the highest social welfare outcome.","energy storage; wind generation; regulation; Incentive mechanism; transmission; investment planning; coordinated investments; decomposition techniques; Benders decomposition; large scale optimization; disjunctive programming","en","doctoral thesis","","978-91-7873-572-3","","","","","","","","","Energie and Industrie","","",""
"uuid:e88a1897-0033-47e3-8b4f-84fd9cd5eec0","http://resolver.tudelft.nl/uuid:e88a1897-0033-47e3-8b4f-84fd9cd5eec0","Model-based control for hybrid and uncertain smart energy systems","Pippia, T.M. (TU Delft Team Tamas Keviczky)","De Schutter, B.H.K. (promotor); Sijs, J. (copromotor); Delft University of Technology (degree granting institution)","2020","Energy systems influence many aspects of society, from the residential sector to the commercial one. Improving the performance and efficiency of energy systems and guaranteeing their stability is a fundamental task of control engineers. In this regard, this thesis presents modeling and control solutions for energy systems, with a focus on both electric and thermal ones. The thesis is divided into three parts. Firstly, we consider an online partitioning and stability problem of a network applied to frequency regulation. Secondly, we present algorithms for the energy management system of an electrical microgrid. In particular, we focus on providing a trade-off between computational complexity and performance of the obtained solution. Lastly, we focus on thermal energy systems by designing an algorithm for room temperature control in commercial buildings. In the first part of the thesis, we consider a linear switching large-scale system and we focus on the problem of partitioning the system into smaller subsystems. We assume that the different modes of the switching system are not known a priori, but they can be detected. We propose an online scheme that can partition the system when the mode switches, thereby adapting the partition to the mode of the switching system. The goal of the partitioning algorithm is, on the one hand, to minimize the coupling between subsystems, in order to facilitate the task of a distributed/decentralized controller, and on the other hand, to obtain subsystems with similar sizes, in order to distribute the control effort equally.
Moreover, after the system has been partitioned, we apply a decentralized state-feedback control scheme to stabilize the overall system. In order to prove stability, we apply a dwell time stability scheme such that the closed-loop system remains stable even after both the mode and partition changes. The online partitioning method, together with the control algorithm, is applied to an automatic generation control problem of frequency regulation in a large-scale power network. In the second part of the thesis, we consider the energy management system problem in a microgrid. We present several Model Predictive Control (MPC) approaches for optimally managing the power flows in the microgrid from an economic point of view. The microgrid is modeled using the Mixed Logical Dynamical (MLD) framework. We provide three different strategies that yield a trade-off between computational complexity and performance by parameterizing the inputs to the system. First, we propose a parametric MPC approach, in which the continuous inputs are expressed as parametric functions and the binary variables are heuristically parameterized. Next, we propose an if-then-else parametrization of the binary variables in the MLD model, so that they are assigned a value before the optimization takes place, therefore yielding a real-valued optimization instead of a mixed-integer one. Finally, we use past optimization results obtained from simulations to develop two machine learning methods, i.e. decision trees and random forests, that can provide a binary variable configuration so as to, once again, remove the binary variables from the optimization problem. Simulation results show that the developed methods provide a very large reduction in computation time while incurring almost no loss in performance.
Lastly, in the third part we focus on thermal networks. We propose a scenario-based MPC approach to control the room temperature in office buildings. The building is modeled using the tool Modelica, which yields a better model description compared to linearized models. The adopted scenario generation method improves upon the current literature by considering that the marginal distributions depend both on the prediction time steps and on time itself, and that the distributions of the disturbances are not stationary. By combining scenario-based MPC with Modelica, we can improve the performance of the controller of the building, and we show this by comparing our method against a deterministic method using a Modelica model description, but also against the same controllers with a linearized model.","model predictive control; system partitioning; building heating systems; microgrid; energy management system; scenario-based control; energy systems; hybrid systems","en","doctoral thesis","","978-94-6402-435-7","","","","","","","","","Team Tamas Keviczky","","",""
"uuid:25de1e90-f586-4973-8f04-f2c744609959","http://resolver.tudelft.nl/uuid:25de1e90-f586-4973-8f04-f2c744609959","The effect of stray current on hardening and hardened cement-based materials","Susanto, A. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Koleva, D.A. (copromotor); Delft University of Technology (degree granting institution)","2020","Stray current has been a major concern for many years due to its effect on (reinforced) concrete structures and underground infrastructure. It has been reported that stray current affects not only steel reinforcement embedded in concrete, but can also induce degradation of the cement-based matrix. Stray current causes an increase of temperature in hardening concrete due to Joule heating, which accelerates cement hydration. The accelerated cement hydration results in faster evolution of material properties (e.g. stiffness and strength) and a faster decrease of the capillary porosity. The microstructural change due to stray current flow will affect transport properties, as well as the service life performance of cement-based materials. In case the concrete is exposed to water, leaching of alkali ions will decrease compressive strength and increase the permeability and diffusion coefficient of concrete. Under stray current, leaching of alkali ions in concrete is accelerated, which will increase the level of structural degradation. Deterioration of concrete due to stray current involves many mechanisms, including ion and mass transport, electrical conduction, heat transfer and the corresponding occurrence of mechanical stresses. However, the study of the effect of stray current on material properties (e.g. microstructural, mechanical, electrical properties) and the long-term performance/durability of cement-based materials is still lacking. The aim of this thesis is to investigate the effects of stray current on the long-term performance of cement-based materials.
The results of this project will contribute to a better understanding of the beneficial (positive) and/or detrimental (negative) effects of stray current on cement-based materials, which is a point of significant importance for real practice...","Stray current; Joule heating; cement-based materials; temperature; hydration process; microstructure; diffusion coefficient; insulation; service life","en","doctoral thesis","","978-94-92597-47-2","","","","","","","","","Materials and Environment","","",""
"uuid:b463325f-e22c-48e5-b665-ab487e7eff77","http://resolver.tudelft.nl/uuid:b463325f-e22c-48e5-b665-ab487e7eff77","Modifying TiO2 Nanoparticles by Atomic Layer Deposition for Enhanced Photocatalytic Water Purification","Benz, D. (TU Delft ChemE/Product and Process Engineering)","van Ommen, J.R. (promotor); Kreutzer, M.T. (promotor); Hintzen, H.T.J.M. (copromotor); Delft University of Technology (degree granting institution)","2020","Photocatalysts, contrary to conventional catalysts, utilize light to drive a reaction. By absorption of photons, electrons are excited and reach higher states to initiate a redox reaction. This principle can be broadly used to produce chemicals such as hydrogen via the reduction of water or transform carbon dioxide into solar fuels, i.e., methane, methanol, and formaldehyde. Besides chemical production, the reductive/oxidative potential of the excited electrons/holes can be utilized to degrade molecules. This is especially useful for self-cleaning materials or for cleaning waste water streams from molecules such as dyes, pesticides, or pharmaceuticals, as water pollution levels increasingly harm human health, especially in developing countries. Despite the great potential and vast research activities over the last decades, the development of cheap and efficient photocatalysts and especially the evaluation of the mechanism of photocatalytic degradation pathways are still lacking, which hampers the translation from lab to application. TiO2 (P25) is a cheap and non-toxic photoactive material, yet too inefficient for water cleaning. This thesis focuses on the surface modification of TiO2 (P25), minimizing various drawbacks of TiO2 (P25) and on the evaluation of the photocatalytic mechanisms. 
Atomic layer deposition (ALD) allows us to precisely control the deposition of many different materials at the atomic level, an advantage that could improve the photocatalyst design and optimize the activity.","photocatalysis; water cleaning; TiO2; nanoparticles; atomic layer deposition","en","doctoral thesis","","978-94-028-2143-7","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:b11a2e72-eb71-4720-bd9b-c0d926e6f7a2","http://resolver.tudelft.nl/uuid:b11a2e72-eb71-4720-bd9b-c0d926e6f7a2","Control Shift: European Industrial Heritage Reuse in review, Volume 1 and 2","Chatzi Rodopoulou, T. (TU Delft Heritage & Values)","Kuipers, M.C. (promotor); Zijlstra, H. (promotor); Belavilas, N. (promotor); Delft University of Technology (degree granting institution)","2020","This dissertation focuses on Industrial Heritage Reuse practice in Europe, with special emphasis on the United Kingdom, the Netherlands, Spain and Greece. This vastly complex yet fascinating topic has not been studied holistically under the circumstances of the contemporary era. In the 21st century, Industrial Heritage Reuse is required to be more responsive, more sustainable, more inclusive and more value-driven than before. An enhanced approach for the transformation of industrial relics is therefore urgently needed. The aim of this dissertation is to explore the potential for enhancement of Industrial Heritage Reuse by identifying and analysing its influencing Aspects in light of contemporary theoretical conservation concepts, the current demands of the field of practice and the rising challenges of the 21st-century context. Drawing upon both theory and practice on an international level, this research gives a holistic and multi-levelled view of the subject under investigation. Industrial Heritage Reuse and its stakeholders are investigated in the setting of the four selected countries through the detailed analysis of 20 case studies of best practice. Volume 1 introduces the research problem and explains the thesis’ rationale; it presents the research methodology, the academic analysis and it finally offers the research products.
Volume 2 presents the analysis and evaluation of the 20 selected case studies, varying from Ironbridge in Shropshire, to the Technological and Cultural Park of Lavrion and from Westergasfabriek in Amsterdam to the 22@ district of Barcelona.","industrial heritage; reuse; regeneration; stakeholders; participation; European heritage","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-292-5","","","","A+BE I Architecture and the Built Environment No 13 (2020)","","","","","Heritage & Values","","",""
"uuid:a9ed38e9-4cf5-4b35-a43c-68c1b283e938","http://resolver.tudelft.nl/uuid:a9ed38e9-4cf5-4b35-a43c-68c1b283e938","Data-driven Patient Profiles: Definition, validation, and implementation for tailored orthopaedic healthcare services","Dekkers, T. (TU Delft Applied Ergonomics and Design)","de Ridder, H. (promotor); Melles, M. (copromotor); Delft University of Technology (degree granting institution)","2020","In order to provide patients with the highest possible quality of care, healthcare institutions often standardize the way they provide healthcare. Yet, there are also more and more calls for tailored healthcare services that are intended for one specific person and based on characteristics that are unique to that person. This dissertation investigates tailored healthcare services and does so specifically in the orthopaedic context. Orthopaedic patients, in particular patients who have undergone joint replacement surgery of the hip or knee joint, are relatively dissatisfied with the current healthcare service provided to them. Specifically, the communication with total joint replacement patients (including the way in which patients are informed about the surgery, its risks and the treatment plan, but also the emotional support they receive from healthcare providers) often leaves something to be desired. In examining tailored healthcare as a potential solution to dissatisfaction with patient-provider communication, this dissertation focuses on the definition, validation and implementation of so-called patient profiles. Patient profiles represent the common characteristics of a specific subgroup of patients that are unique compared to the overall patient population. The patient profiling approach is derived from the principles of mass customization and assumes that representations of the common and unique preferences, needs, and competences of different groups of patients can be used to design tailored healthcare services. 
These tailored healthcare services can then be offered to individual patients based on their profile. It is expected that tailored healthcare services will lead to improvements in patient experience. This dissertation examines patient profiles and the effect of the patient profiling approach on patient experience through four questions: (1) what are relevant patient characteristics for patient profiling?, (2) which data-driven patient profiles can be distinguished?, (3) which orthopaedic healthcare services are suitable for tailoring?, and (4) what is the effect of tailored healthcare services on patient experience? These questions are approached using the biopsychosocial model. The biopsychosocial model assumes that biomedical factors (such as pain and physical functioning) as well as psychological and social factors (such as coping mechanisms and communication preferences and competences) influence how someone experiences their illness, and therefore, what type of healthcare service would suit them. A combination of research methods including observations, interviews, questionnaires, machine learning, systematic literature reviews and experiments was used to answer the specific research questions...","health psychology; service design; total joint arthroplasty; personalisation","en","doctoral thesis","","978-94-028-2001-0","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:cd8011ea-8f77-4127-9e51-b2574c4cc3e2","http://resolver.tudelft.nl/uuid:cd8011ea-8f77-4127-9e51-b2574c4cc3e2","DC Distribution Systems: Modeling, Stability, Control & Protection","van der Blij, N.H. (TU Delft DC systems, Energy conversion & Storage)","Bauer, P. (promotor); Spaan, M.T.J. (promotor); Ramirez Elizondo, L.M. (copromotor); Delft University of Technology (degree granting institution)","2020","Historically speaking, alternating current (ac) has been the standard for commercial electrical energy distribution. This is mainly because, in ac systems, electrical energy was easily transformed to different voltage levels, increasing the efficiency of transmitting power over long distances. However, technological advances in, for example, power electronics, and societal concerns such as global warming indicate that a re-evaluation of the current distribution systems is timely. Direct current (dc) distribution systems are foreseen to have advantages over their ac counterparts in terms of efficiency, distribution lines, power conversion and control. Moreover, most renewable energy sources and modern loads produce or utilize dc, or have a dc link in their conversion steps. However, the stability, control, protection and standardization of these systems, and the market inertia of ac systems, are major challenges for the broad adoption of dc distribution systems.
Steady-State, Dynamic and Transient Modeling
Adequate models of dc distribution grids are required for the analysis, design and optimization of these systems. In this thesis, new and improved methods are proposed for steady-state and dynamic modeling. Two novel steady-state methods are presented, which are shown to be better than the methods in the existing literature with respect to convergence, computational effort and accuracy. Furthermore, a dynamic state-space model is proposed that can be efficiently applied to any system topology, and can be used for the stability analysis of these systems. Moreover, an improved symmetrical component decomposition method is presented, which enables simplified (fault) analysis. Transient models for dc distribution systems are briefly discussed, but the development of transient models is outside the scope of this thesis.
Algebraic and Plug-and-Play Stability
As a result of the decreasing conventional generation, the inertia of electrical grids is significantly decreased. Furthermore, more and more tightly regulated load converters that have a destabilizing effect on the system's voltage (and frequency) are proliferating throughout the grid. Consequently, the stability of systems with substantial renewable generation is more challenging. In this thesis a method to algebraically derive the stability of any dc distribution system is presented. Moreover, utilizing a Brayton-Moser representation of these systems, two simple requirements are derived for plug-and-play stability (i.e., stability requirements that can be applied to any system, even systems that are subjected to uncertainty or change).
Decentralized Control Strategy and Algorithm
Decentralized control is essential to deal with the trend to decentralize generation and segment the distribution grid, and to manage the potential absence of a communication infrastructure. In this thesis a decentralized control scheme is proposed that ensures global stability and voltage propriety for dc distribution grids. The control scheme divides the acceptable voltage range into demand response, emission, absorption and supply response regions, and specifies the behavior of converters in these regions. Furthermore, it is shown that inadequate energy utilization can occur when voltage-dependent demand response is utilized. Therefore, the Grid Sense Multiple Access (GSMA) scheme is proposed, which improves the system and energy utilization by employing an exponential backoff routine.
Decentralized Protection Framework and Scheme
Because of the absence of a natural zero crossing, low inertia, meshed topologies and bi-directional power flow, the protection of low voltage dc grids is more challenging than that of conventional ac grids. In this thesis a decentralized protection framework is presented, which partitions the grid into zones and tiers according to their short-circuit potential and provided level of protection, respectively. Furthermore, a decentralized protection scheme is proposed, which consists of a modified solid-state circuit breaker topology and a specified time-current characteristic. It is experimentally shown that this protection scheme ensures security and selectivity for radial and meshed low voltage dc grids.","","en","doctoral thesis","","978-94-6384-152-8","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:d3e7f749-08a3-47a8-a3a3-22fe90b5df4e","http://resolver.tudelft.nl/uuid:d3e7f749-08a3-47a8-a3a3-22fe90b5df4e","Homogeneous Rotating Turbulence: Inverse Energy Cascade and the Dissipation Scaling Law","Pestana, T. (TU Delft Aerodynamics)","Hickel, S. (promotor); Delft University of Technology (degree granting institution)","2020","Using direct numerical simulations, this thesis primarily studies fundamental aspects of homogeneous rotating turbulence. Within this context, we first investigate the transition from a split to a forward kinetic energy cascade system with a parametric study covering large aspect ratio domains, which are in the direction of rotation up to 340 times larger than the initial eddy size, and a broad range of rotation rates. This unprecedented database shows that, for fixed geometrical dimensions, the Rossby number governs the amount of energy that cascades to large scales, whereas, for a fixed Rossby number, the control parameter is given by the product of the domain size along the rotation axis and the forcing wavenumber. Second, we quantify the growth rate of the columnar eddies typical of rotating flows and seek a scaling law for the energy dissipation rate. Our results indicate that the growth rate of the columnar eddies varies exponentially with the Rossby number, while, for the dissipation scaling law, an analysis based on timescales yields a power law dependence on the Rossby number. Additionally, we also examine an inertia-gravity wave breaking in the middle-upper mesosphere. We show that optimal perturbations lead to an almost instantaneous wave breaking and secondary bursts of turbulence, a process marked by the formation of fine flow structures around the wave's least stable point.
Further, we find that during the breaking events the energy dissipation rate tends to be an isotropic tensor and the local energy transfer, which is predominantly from mean to fluctuating field, is in balance with the pseudo kinetic energy dissipation rate. The latter is relevant to atmospheric flows and a case where rotation and stratification effects coexist.","Turbulence; Homogeneous Rotating flows; Energy Cascade; Dissipation Scaling Law","en","doctoral thesis","","978-94-6366-306-9","","","","","","","","","Aerodynamics","","",""
"uuid:f2947dfb-1482-44f0-8f87-5fa3f5a27b6c","http://resolver.tudelft.nl/uuid:f2947dfb-1482-44f0-8f87-5fa3f5a27b6c","Angles-only relative navigation in low earth orbit","Ardaens, J.H. (TU Delft Space Systems Engineering)","Gill, E.K.A. (promotor); Fonod, R. (copromotor); Delft University of Technology (degree granting institution)","2020","Rendezvous in orbit has recently regained considerable attention, as it is required to enable on-orbit servicing or active debris removal activities. The pressing need for the realization of such missions falls within the more general societal attempt to make human activities more sustainable, avoiding wasting valuable resources and ensuring that the environment remains clean after exploitation. Despite the technical heritage of decades of experience, space rendezvous faces, with these new prospects, additional challenges due to the possibly noncooperative nature of the target of the rendezvous. A successful and safe approach has to be ensured with limited relative navigation capabilities while reducing the overall mission costs. This quest for cost-effectiveness is indeed required to eventually reach an economically viable large-scale solution able to mitigate the threat posed by the ever-growing population of orbiting space debris.
This dissertation demonstrates that the first part of a rendezvous to a noncooperative object, starting from large separations of several tens of kilometers down to a few hundred meters, can be safely and reliably performed using line-of-sight navigation and relying solely on a single spaceborne camera. More specifically, this research shows that it is possible to use a simple, low-cost, computationally light and autonomous camera-based embedded navigation system to perform the far- to mid-range approach, thus greatly reducing the necessary onboard equipment and the operational costs. In order to demonstrate this assertion, the dissertation is articulated around three Research Questions:
How to design a reliable and accurate spaceborne real-time angles-only relative navigation system? How does it behave under real conditions? How can future angles-only relative navigation systems be improved?","space rendezvous; angles-only navigation; noncooperative spacecraft","en","doctoral thesis","","978-94-028-2153-6","","","","","","","","","Space Systems Engineering","","",""
"uuid:e553e8ae-73be-4718-ab93-81f466db7347","http://resolver.tudelft.nl/uuid:e553e8ae-73be-4718-ab93-81f466db7347","Augmented Fine-Grained Defect Prediction for Code Review","Pascarella, L. (TU Delft Software Engineering)","van Deursen, A. (promotor); Bacchelli, A. (promotor); Delft University of Technology (degree granting institution)","2020","Code review is a widely used technique to support software quality. It is a manual activity, often subject to repetitive and tedious tasks that increase the mental load of reviewers and compromise their effectiveness. The developer-centered nature of code review can represent a bottleneck that does not scale in large systems, with the consequence of compromising firms’ profits. This challenge has led to an entire line of research on code review improvement.
In this thesis, we present our results and remarks on the effectiveness of using fine-grained defect prediction in code review, while investigating the information needs that lead to a proper code review. We started by reimplementing the state of the art of defect prediction to understand its replicability; then, we evaluated this model in a more realistic scenario than is typically considered. To improve defect prediction techniques, we developed a fine-grained just-in-time defect prediction model that anticipates the prediction to commit time and reduces the granularity to the file level. After that, we explored how to further improve prediction performance by using alternative sources of information. We conducted a comprehensive investigation of code comments written by both open and closed source developers. Finally, to understand how to improve code review further, we explored, from a reviewers’ perspective, the information that reviewers need to conduct a proper code review.
Our findings show that the state of the art of defect prediction, when evaluated in a realistic scenario, cannot be directly used to support code review. Furthermore, we assessed that alternative sets of metrics, anticipated feedback, and fine-grained suggestions represent independent directions to improve prediction performance. Finally, we discovered that research must create intelligent tools that, beyond predicting defects, satisfy actual reviewers’ needs, such as expert selection, splittable changes, real-time communication, and self-summarization of changes.","Code review; defect prediction; software analytics","en","doctoral thesis","","","","","","","","","","","Software Engineering","","",""
"uuid:1e3cedbf-acf7-412f-9927-25ad5f5f1de3","http://resolver.tudelft.nl/uuid:1e3cedbf-acf7-412f-9927-25ad5f5f1de3","From Radar to Reality. Associating persistent scatterers to corresponding objects","Yang, M. (TU Delft Mathematical Geodesy and Positioning)","Hanssen, R.F. (promotor); Lopez Dekker, F.J. (copromotor); Delft University of Technology (degree granting institution)","2020","Multi-epoch Synthetic Aperture Radar Interferometry (InSAR) is widely used to estimate displacements of selected scatterers from phase observations. However, interpreting these displacements requires a connection to objects in the real world.
To associate InSAR scatterers to their corresponding geo-objects, it is necessary to (i) accurately estimate the phase center of radar scatterers in radar coordinates, (ii) precisely position the scatterers in 3D geographic coordinates, and (iii) satisfy the constraint that these positions need to be physically realistic. This study addresses these three requirements.
The effective phase center of a scatterer is not situated at the nominal position of the pixel. As a result, scatterers are evaluated at the wrong position, and the reference phase calculated at that location will be biased. We evaluate the influence of this sub-pixel position on the geolocation of the scatterer and its deformation quality for various satellite platforms. A method to locate the phase center of the dominant scatterer is developed and applied to a stack of TerraSAR-X, Radarsat-2, and Sentinel-1 images. The sub-pixel correction proves to be significant for improving the geolocation, by up to a few meters, especially for planar (horizontal) precision. It is of only limited influence for the displacement estimation and is more relevant in the case of large orbital baselines.
Even after sub-pixel correction, the positioning accuracy of scatterers in an earth-centered, earth-fixed geodetic datum is often in the order of a few meters, which is not always sufficient to physically link the scatterer to a geo-object. We evaluate four approaches for correcting this positioning bias, i.e., (i) an advanced geophysical correction, (ii) the single-epoch deployment of a corner reflector (CR), (iii) a multi-epoch deployment of a CR, and (iv) a correction using a high-precision digital surface model (DSM). The positioning performance of these approaches is analyzed from the aspects of practicability, reliability, and precision with TerraSAR-X and Sentinel-1 data. We show that while the multi-epoch CR approach achieves the best positioning results, the DSM-assisted correction is able to obtain comparable results if a high-precision DSM, better than DTED-4, is available.
The position of the estimated geometric phase center may differ from the position of the physical phase center. We use ray-tracing to predict the position of point scatterers using generic 3D models, and match them with the detected point scatterers from a stack of TerraSAR-X images. We find that the majority of detected scatterers appears to be positioned at their correct physical location. Moreover, many point scatterers correspond to multiple scattering mechanisms—more than half of the identified scatterers correspond to double- or triple-bounce scatterers. The mismatch between the geometrically estimated position and the signal source occurs mainly for multiple scattering: fourfold and more. This shows that the bounce levels of the scatterers are a relevant attribute to understand and interpret the displacements of persistent scatterers.
In general, we conclude that sub-pixel correction and positioning bias correction should be included in default InSAR data processing, and that the majority of detected scatterers are positioned at physically realistic locations.","satellite radar interferometry; time series InSAR technique; sub-pixel correction; precise point positioning; corner reflectors; digital surface model; multiple scattering; ray-tracing","en","doctoral thesis","","978-94-6384-128-3","","","","","","","","","Mathematical Geodesy and Positioning","","",""
"uuid:f636539f-64a5-4985-b77f-4a0b8c3990f4","http://resolver.tudelft.nl/uuid:f636539f-64a5-4985-b77f-4a0b8c3990f4","A Markov-based vulnerability assessment of distributed ship systems in the early design stage","Habben Jansen, A.C. (TU Delft Ship Design, Production and Operations)","Hopman, J.J. (promotor); Kana, A.A. (copromotor); Delft University of Technology (degree granting institution)","2020","Naval ships are designed to operate in hostile environments. As such, vulnerability reduction is an important aspect that needs to be assessed during the design. With the increased interest in electrification and automation on board naval ships, the vulnerability of distributed systems has become a major topic of interest. However, assessing this is not trivial, especially in early stage design, where the level of detail is limited but the consequences of design decisions are large. As such, a new method for assessing the vulnerability of distributed systems in early stage design has been developed. This method not only evaluates the vulnerability of a pre-defined ship concept, but also provides direction for finding other, potentially better concepts. This is done from the perspective of operational capabilities. The method helps ship designers and naval staff in setting vulnerability requirements, developing new concepts, and identifying trade-offs in operational capabilities. The method uses a discrete Markov chain and the eigenvalues of the associated transition matrix. A test case considering the vulnerability of a notional Ocean-going Patrol Vessel (OPV) with two different powering concepts illustrates the method. Furthermore, the new method is discussed in terms of design knowledge, including a comparison with other early stage vulnerability reduction methods.
In addition to that, an improvement of an existing early stage design procedure for distributed ship systems is made, which shows how the various methods, including the new method, are envisioned to be applied in practice.","early stage ship design; vulnerability reduction; Markov chain; eigenvalues","en","doctoral thesis","","978-94-6384-145-0","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:2b392951-3781-4aed-b093-547c70cc581d","http://resolver.tudelft.nl/uuid:2b392951-3781-4aed-b093-547c70cc581d","Intertidal Flats in Engineered Estuaries: On the Hydrodynamics, Morphodynamics, and Implications for Ecology and System Management","de Vet, P.L.M. (TU Delft Coastal Engineering)","van Prooijen, Bram (promotor); Wang, Zhengbing (promotor); Delft University of Technology (degree granting institution)","2020","Intertidal flats — regions of estuaries that emerge every tide from the water — form unique ecosystems. Benthic communities living in the bed are a valuable food source for wading birds. Salt marshes present on these flats further enhance the biodiversity. Through the damping of waves, intertidal flats also contribute to the safety of the hinterland against flooding. In engineered estuaries, human interventions such as storm surge barriers, navigation channels, dams, and levees affect these ecologically valuable intertidal flats and may even threaten their existence. Therefore, these systems should be managed with care, requiring a thorough understanding of the mechanisms shaping intertidal flats. This dissertation aims to identify and quantify the natural and anthropogenic processes driving hydrodynamics and morphodynamics of intertidal flats, and to reveal the implications for ecology and system management. The Eastern Scheldt and Western Scheldt estuaries (the Netherlands) were selected for this study. These were chosen because of the extensive datasets measured in both estuaries and the different types of human interventions affecting these systems. In the Eastern Scheldt, a storm surge barrier closes during storm conditions and reduces tidal flow velocities inside the estuary at normal conditions. Tidal velocities are also reduced by dams in the branches of this estuary. 
In the Western Scheldt, sediment is being relocated from too-shallow parts of the navigation channel to other parts of the estuary, enabling navigation to economically important harbors. In this dissertation it is shown that it is the aggregated system of natural forces and human interventions that drives the eco-morphological evolution of intertidal flats in estuaries. Intertidal flats respond to local as well as to system-wide changes in sediment availability and hydrodynamics due to human interventions. Even under major human interventions, the natural forces remain relevant. Due to the many spatial and temporal scales involved in the eco-morphological response of intertidal flats to changing natural and anthropogenic forces, estuaries require adaptive management strategies.","Intertidal flats; Estuaries; Human interventions; Natural processes; Morphodynamics; Hydrodynamics; Ecology; Numerical modeling; Field measurements","en","doctoral thesis","","978-94-6384-123-8","","","","","","","","","Coastal Engineering","","",""
"uuid:7c413cdf-1cd0-44e8-b1f5-347a8f888166","http://resolver.tudelft.nl/uuid:7c413cdf-1cd0-44e8-b1f5-347a8f888166","Fog from the Ground Up: Investigating the Conditions Under Which Fog Forms and Evolves Within the Nocturnal Boundary Layer","Izett, J.G. (TU Delft Atmospheric Remote Sensing)","van de Wiel, B.J.H. (promotor); Russchenberg, H.W.J. (copromotor); Delft University of Technology (degree granting institution)","2020","Forecasting fog accurately is of critical importance, not least because of the hazard it presents to human safety. Yet, while weather forecasts have improved significantly over recent decades and continue to improve, fog remains a particularly challenging phenomenon to predict. The research presented within this thesis takes a step back from prediction and aims to better understand the conditions under which fog forms and deepens. Topics investigated include the observational likelihood of fog, the near-surface conditions during the infancy of a fog layer, the spatial variability of fog (and the influences thereon), and the growth and evolution of a fog layer.","Analytical analysis; Atmospheric science; CESAR; Clouds; Fog; Meteorology; Mist; Observational analysis; Stable boundary layer; Weather","en","doctoral thesis","","978-94-6366-295-6","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:d8dd6f55-d4f8-4f3e-bc15-09c324ea0860","http://resolver.tudelft.nl/uuid:d8dd6f55-d4f8-4f3e-bc15-09c324ea0860","Josephson junctions in superconducting coplanar DC bias cavities: Fundamental studies and applications","Schmidt, F.E. (TU Delft QN/Steele Lab)","Steele, G.A. (promotor); Akhmerov, A.R. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis investigates fundamental properties of Josephson junctions embedded in microwave circuits, and an application arising from this hybrid approach. We used the versatility of superconducting coplanar DC bias cavities to extract previously inaccessible information on phase coherent and subgap mechanisms of graphene Josephson junctions. Chapter 1 gives an introduction to the technology of Josephson field effect transistors, among which graphene junctions show promise for future improvements in quantum computation. Together with an overview of the Josephson effect in superconducting-semiconducting systems, we introduce the concept of coplanar DC bias cavities for probing Josephson junctions at gigahertz frequencies. In chapter 2, we describe the experimental methods developed for carrying out the subsequent measurements. We include details on fabrication, material properties and measurement setup. Results of graphene Josephson junctions embedded in DC bias microwave resonators are presented in chapters 3 and 4. By following the resonance frequency and losses of the circuit, we are able to extract the junctions’ Josephson inductance and subgap resistance. Studying the nonlinear power and bias current response reveals further information on the underlying loss mechanisms and current phase relation. We turn to an application of our hybrid bias cavity – Josephson junction devices to detect small, low-frequency currents in chapter 5. 
Our device is competitive with state-of-the-art techniques for microwave radiation detection and, with minor modifications, should be able to outperform existing technologies by orders of magnitude. Finally, we conclude the presented work in chapter 6 and provide an outlook on potential future research.","Graphene; Josephson junctions; Josephson inductances; superconducting microwave circuits; current detection; DC bias","en","doctoral thesis","","978-90-8593-442-4","","","","","","","","","QN/Steele Lab","","",""
"uuid:cde75cae-91c8-4f4d-9b65-51bee023cd08","http://resolver.tudelft.nl/uuid:cde75cae-91c8-4f4d-9b65-51bee023cd08","Bayesian Nonparametric Estimation with Shape Constraints","Pang, L. (TU Delft Statistics)","Jongbloed, G. (promotor); van der Meulen, F.H. (copromotor); Delft University of Technology (degree granting institution)","2020","This thesis deals with a number of statistical problems where either censoring or shape constraints play a role. These problems have mostly been treated from a frequentist statistical perspective. Over the past decades, the Bayesian approach to statistics has gained popularity, and this is the approach adopted in this thesis. We consider nonparametric statistical models, i.e., models indexed by a parameter that is not of finite dimension. For three different models we investigate the asymptotic properties of the posterior distribution under a frequentist setup. We derive either posterior consistency or posterior contraction rates. Such results are relevant, as they provide a frequentist justification for using point estimators derived from the posterior. Besides theoretical results, we develop computational methods for obtaining draws from the posterior. Overall, this work is at the intersection of the research areas ""estimation under shape constraints and censoring"", ""Bayesian nonparametrics"" and ""Bayesian computation"".","","en","doctoral thesis","","","","","","","","","","","Statistics","","",""
"uuid:7f2b90c3-d6a9-4034-9304-7e520fd992c9","http://resolver.tudelft.nl/uuid:7f2b90c3-d6a9-4034-9304-7e520fd992c9","Model Predictive Control of Water Level and Salinity in Coastal Areas","Aydin, B.E. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); Abraham, E. (copromotor); Oude Essink, G. H. P. (copromotor); Delft University of Technology (degree granting institution)","2020","Polders are low-lying and artificially drained areas surrounded by water storage canals. In low-lying delta areas such as the Mississippi delta in Louisiana (USA), the Ganges-Brahmaputra delta (Bangladesh), or the Rhine-Meuse delta (The Netherlands), polders experience surface water salinization problems due to saline groundwater exfiltration, which is the upward flow of saline groundwater from the subsurface. A significant increase in surface water salinization is expected globally, driven by rising sea levels and leading to decreasing freshwater availability. Land subsidence, climate-change-induced decreases in precipitation, and sea level rise are expected to accelerate salinization of groundwater and surface water systems. To counteract surface water salinization, freshwater diverted from rivers is used for flushing the canals and ditches in coastal areas.
Sustaining freshwater-dependent agriculture in such areas will entail an increased demand for flushing, while demand for better water quality will also increase. At the same time, freshwater usage is not explicitly accounted for in polder operation, which results in excessive use. Decreasing the amount of freshwater used for polder flushing can create additional supply opportunities for industrial users, drinking water companies, or other irrigation systems. To meet the increasing demand for flushing due to the expected increase in salinization while freshwater availability is decreasing, new operational designs are required for polders that will use the available freshwater resources efficiently.","Salinity control; Groundwater exfiltration; Model Predictive Control; Irrigation; Freshwater; Operational water management; Sensor Placement","en","doctoral thesis","","","","","","","","","","","Water Resources","","",""
"uuid:2b5f24d3-48ec-49a8-a20a-b6fd13a84dc9","http://resolver.tudelft.nl/uuid:2b5f24d3-48ec-49a8-a20a-b6fd13a84dc9","Collective dynamics and pattern formation in systems of communicating cells: Theory and Simulations","Dang, Y. (TU Delft OLD BN/Hyun Youk Lab)","Dogterom, A.M. (promotor); Youk, H.O. (copromotor); Delft University of Technology (degree granting institution)","2020","How a system of genetically identical biological cells organizes into spatially heterogeneous tissues is a central question in biology. Even when the molecular and genetic underpinnings of cell-cell interactions are known, how these lead to multicellular patterns is often poorly understood. Of particular interest are dynamic patterns such as traveling waves, which confer spatiotemporal control over key developmental processes such as differentiation, segmentation and cell division. Theoretical approaches based on mathematical descriptions of underlying physical and chemical processes provide a promising avenue to explore biological pattern formation. In particular, theoretical models connect processes on the molecular scale to biological function on the tissue level and may provide mechanistic descriptions of how patterns are generated and maintained.","pattern formation; multicellular systems; cell-cell communication; self-organization; complex systems; cellular automata; gene networks","en","doctoral thesis","","978-90-8593-435-6","","","","Casimir PhD Series, Delft-Leiden 2020-08","","","","","OLD BN/Hyun Youk Lab","","",""
"uuid:b597524f-3321-465e-90da-9963bd9f7754","http://resolver.tudelft.nl/uuid:b597524f-3321-465e-90da-9963bd9f7754","Human-centered design in laparoscopic skills acquisition: Shifting paradigms in the age of technology","Ganni, S. (TU Delft Applied Ergonomics and Design)","Jakimowicz, J.J. (promotor); Goossens, R.H.M. (promotor); Botden, SMBI (copromotor); Delft University of Technology (degree granting institution)","2020","Minimal Access Surgery (MAS) has multi-faceted implications on the different stakeholders involved in implementation. It requires the surgeon to cope with the ergonomic and cognitive challenges required to perform a surgical procedure. It needs the surgical team to work coherently for the smooth functioning of the technologically complex operating room (OR). It needs educators and policy makers to embrace reform into the novel practices of training skills in MAS. It needs the industry to engage in research to bring out newer tools and technologies akin to training tools. This thesis explores the premise of implementation of MAS in the areas of curriculum design, implementation of training protocols, assessment tools and emerging technologies and trends that improve the learning curve and the transfer of skills from a skills lab setting to the OR. Several surgeons were included from different parts of Europe and India with expertise levels ranging from novice to expert.
To study the unique requirements of individual surgeons and the determinants of performance, the Laparoscopic Surgical Skills (LSS) Grade 1 Level 1 was used as a common training curriculum. First, a worldwide survey was conducted on the variety of training modalities and tools. Then, self-assessment and the implications of “reflection-before-practice” on skills progression were studied. Automated assessment tools were devised using motion-tracking software, and thresholds for optimal performance were proposed. Physiological markers were studied during OR performance to determine factors that influence immersion in virtual reality surroundings, replicated in physical space using 3D projection for team training. Evolving technology in MAS requires a multifaceted approach, from training to customization, to ensure optimal and efficient patient and ergonomic outcomes.
Forest succession is one factor affecting the classical partitioning in Tropical Deciduous Broadleaf Forest. Using cumulative daily collectors in three different stages of Tropical Dry Forest in Costa Rica, we were able to depict how the increase in forest complexity affects the interception of precipitation. Also, the Plant Area Index was the only structural parameter significantly correlated with the estimates of both interception and effective precipitation. The other parameters (e.g., tree densities, tree heights, number of species) were not sufficient to describe the effect of a growing forest on the interception of precipitation.
Tropical forests with less water stress during the dry season allocate more biomass to their canopies. This increases the forest complexity in terms of the number of species, canopy height, and plant types. Tropical Evergreen Broadleaf forests have a more complex canopy structure than the deciduous ones. The tropical wet forest in Costa Rica has a canopy 45 m in height and a large number of plant species, including trees, lianas, palms, and bushes, that provide a completely different canopy structure than mono-specific forests. Here, we were able to define three canopy layers according to canopy height (overstory, lower and upper understory) and monitor the evaporation process during one dry season. Applying conventional micro-meteorological measurements, we were able to determine that the lower and upper understory layers contributed 9 % and 15 % of the evaporation, respectively. Meanwhile, the use of stable water isotopes did not allow us to determine the contribution of transpiration using the Keeling plot method. However, the signatures of the stable water isotopes allowed us to determine that the source of water used by a plant depends on its type (liana, tree, palm, or bush). Also, we quantified the evaporation during precipitation events as one-third of the amount measured during dry sunny days; this proportion did not change per canopy layer during rain events. This water vapor was produced by the ""splash droplet evaporation"" process, which, together with energy convection and low air temperature, produced the visible vapor plumes. We were able to identify the conditions under which the visible vapor plumes can be spotted: the presence of precipitation, air convection, and a lifting condensation level at the top of the canopy with values lower than 100 m.
Plants growing in arid environments have developed strategies that help them cope with the scarcity of water. Usually, these plants grow clumped in patches, and the introduction of tree species to fight desertification has changed the landscape by introducing a forest-like land cover. In a Temperate Shrubland in China, we evaluated the effect of Willow trees (Salix matsudana) and Willow bushes (Salix psammophila) on the soil water after summer. Using stable water isotopes, we identified the redistribution of groundwater beneath the plants through the hydraulic lift process.
Mono-specific forest ecosystems such as the Temperate Evergreen Needleleaf Forest may modify the micro-meteorological conditions beneath their canopies. In Speulderbos, we monitored the evaporation process through eddy-covariance and stable water isotope techniques in a Douglas-Fir (Pseudotsuga menziesii) stand. The evaporation process in the forest floor layer was also analyzed in detail under laboratory conditions. Different forest floor layers evaporate up to 1.5 mm d-1 in the laboratory, differing from field conditions, where the evaporation from these layers does not exceed 0.2 mm d-1. This evaporation represents only 5.5 % of the total measured during the monitoring period. However, there is no evidence that the forest floor evaporation moves upwards to contribute to the total evaporation measured above the overstory. This was confirmed by the eddy-covariance footprint and the stable water isotope signatures of the air measured continuously in the forest. Finally, the partitioning of evaporation based on canopy structure is suitable for complex ecosystems with a large number of species and a multilayered canopy. This leaves the classical partitioning for more homogeneous ecosystems, where it can be carried out with a smaller monitoring investment.","evaporation; ecosystems; water stable isotopes; canopy","en","doctoral thesis","","978-94-6366-300-7","","","","","","","","","Water Resources","","",""
"uuid:cd3da8fa-fd23-4d27-823c-b34eb99ad07d","http://resolver.tudelft.nl/uuid:cd3da8fa-fd23-4d27-823c-b34eb99ad07d","Antenna Array Synthesis and Beamforming for 5G Applications: An Interdisciplinary Approach","Aslan, Y. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Delft University of Technology (degree granting institution)","2020","Realization of the future 5G systems requires the design of novel mm-wave base station antenna systems that are capable of generating multiple beams with low mutual interference, while serving multiple users simultaneously using the same frequency band. Besides, small wavelengths and high packaging densities of front-ends lead to overheating of such systems, which prevents safe and reliable operation. Since the strict cost and energy requirements of the first phase 5G systems favor the use of low complexity beamforming architectures, computationally efficient signal processing techniques and fully passive cooling strategies, it is a major challenge for the antenna community to design multibeam antenna topologies and front-ends with enhanced spatial multiplexing, limited inter-beam interference, acceptable implementation complexity, suitable processing burden and natural-only/radiative cooling.
Traditionally, array design has been performed based on satisfying the given criteria solely on the radiation patterns (gain, side lobe level (SLL), beamwidth etc.). However, in addition to the electromagnetic aspects, multibeam antenna synthesis and performance evaluation in 5G systems at mm-waves must combine different disciplines, including but not limited to, signal processing, front-end circuitry design, thermal management, channel & propagation and medium access control aspects.
Considering the interdisciplinary nature of the problem, the main objective of this research is to develop, evaluate and verify innovative multibeam array techniques and solutions for 5G base station antennas, not yet used nor proposed for mobile communications. The research topics include the investigation of (i) new array topologies, compatible with IC passive cooling, including sparse, space-tapered arrays and optimised subarrays, meeting key requirements of 3-D multi-user coverage with frequency re-use and power-efficient side-lobe control, (ii) adaptive multiple beamforming strategies and digital signal processing algorithms, tailored to these new topologies, and (iii) low-cost/competitive and sufficiently generic implementation of the above array topologies and multibeam generation concepts to serve multiple users with the same antenna(s) with the best spectrum and power efficiencies.
This doctoral thesis consists of three parts. Part I focuses on the system-driven aspects, which cover the system modeling (including the link budget and precoding), propagation in mm-wave channels and statistical assessment of the Quality of Service (QoS). Although separate comprehensive studies exist both in the field of propagation/system modeling and antennas/beamforming, the link between the two disciplines is still weak. In this part, the aim of the study is to bridge the gap between the two domains and to identify the trade-offs between the complexity of beamforming, the QoS and the computational cost of precoding in the 5G multibeam base station arrays for various use cases. Based on the system model developed, a novel quantitative relation between the antenna SLLs/pattern nulls and the statistical QoS is established in a line-of-sight (LoS) dominated mm-wave propagation scenario. Moreover, the potential of using smart (low in-sector side-lobe) array layouts (with simple beam steering) in obtaining sufficiently high and robust QoS, while achieving the optimally low processing costs, is highlighted. For a possible pure non-line-of-sight (NLoS) scenario, the system advantages (in terms of the beamforming complexity and the interference level) of creating a single, directive beam towards the strongest multipath component of a user are explained via ray-tracing based propagation simulations. The insightful system observations from Part I lead to several fundamental research questions: Could we simplify the multiple beamforming architecture while maintaining a satisfactory QoS? Are there any efficient yet effective alternative interference suppression methods to further improve the QoS? How should we deal with the large heat generation at the base station? These questions, together with the research objectives, form the basis for the studies performed in the remaining parts.
Part II of the thesis focuses on the electromagnetism-driven aspects which include innovative, low-complexity subarray based multibeam architectures and new array optimization strategies for effective SLL suppression. The multibeam 5G base stations currently proposed in the literature for beamforming complexity reduction either use a hybrid array of phased subarrays, which limits the field-of-view significantly, or employ a fully-connected analog structure, which increases the hardware requirements considerably. Therefore, in the first half of this part, the aim is to design low-complexity hybrid (or hybrid-like) multiple beamforming topologies with a wide angular coverage. For this purpose, two new subarray based multiple beamforming concepts are proposed: (i) a hybrid array of active multiport subarrays with several digitally controlled Butler Matrix beams and (ii) an array of cosecant subarrays with a fixed cosecant shaped beam in elevation and digital beamforming in azimuth. Using the active (but not phased) multiport subarrays, the angular sector coverage is widened as compared to that of a hybrid array of phased subarrays, the system complexity is decreased as compared to that of a hybrid structure with a fully-connected analog network, and the effort in digital signal processing is reduced greatly. The cosecant subarray beamforming, on the other hand, is shown to be extremely efficient in serving multiple simultaneous co-frequency users in the case of a fairness-motivated LoS communication thanks to its low complexity and power equalization capability. Another critical issue with the currently proposed 5G antennas is the large inter-user interference caused by the high average SLL of the regular, periodic arrays. Therefore, in the second half of Part II, the aim is to develop computationally and power-efficient SLL suppression techniques that are compatible with the 5G’s multibeam nature in a wide angular sector.
To achieve this, two novel techniques (based on iterative parameter perturbations) are proposed: (i) a phase-only control technique and (ii) a position-only control technique. The phase-only technique provides peak SLL minimization and simultaneous pattern nulling, which is more effective than the available phase tapering methods in the literature. The position-only technique, on the other hand, yields uniform-amplitude, (fully-aperiodic and quasi-modular) irregular planar phased arrays with simultaneous multibeam optimization. The latter technique combines interference-awareness (via multibeam SLL minimization in a predefined cell sector) and thermal-awareness (via uniform amplitudes and minimum element spacing constraint) for the first time in an efficient and easy-to-solve optimization algorithm.
Part III of the thesis concentrates on the thermal-driven aspects which cover the thermal system modeling of electronics, passive cooling at the base stations and the role of antenna researchers in array cooling. The major aim here is to form a novel connection between the antenna system design and thermal management, which is not yet widely discussed in the literature. In this part, an efficient thermal system model is developed to perform the thermal simulations. To effectively address the challenge of thermal management at the base stations, fanless CPU heatsinks are exploited for the first time for fully-passive and low-cost cooling of the active integrated antennas. To reduce the size of the heatsinks and ease the thermal problem, novel planar antenna design methodologies are also proposed. In the case of having a low thermal conductivity board, using a sparse irregular antenna array with a large inter-element spacing (such as a sunflower array) is suggested. Alternatively, for the densely packed arrays, increasing the equivalent substrate conductivity by using thick ground planes and simultaneously enlarging the substrate dimensions is proven to be useful.
The performed research presents the first-ever irregular/sparse and subarray based antennas with wide-scan multibeam capability, low temperature, high-efficiency power amplifiers and a low level of side lobes. The developed antenna arrays and beam generation concepts could also have an impact on a broad range of applications, where they should help overcome the capacity problem by the use of multiple adaptive antennas, improve reliability and reduce interference.","antenna array synthesis; Beamforming; 5G base station; Wireless Communications; Cooling","en","doctoral thesis","","978-94-6366-297-0","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:7e914dd9-2b53-4b2c-9061-86087dbb93b9","http://resolver.tudelft.nl/uuid:7e914dd9-2b53-4b2c-9061-86087dbb93b9","Design Inquiry Through Data","Kun, P. (TU Delft Design Conceptualization and Communication)","Kortuem, G.W. (promotor); Mulder, I. (promotor); Delft University of Technology (degree granting institution)","2020","The emergence of the internet and subsequent massive data collection and storage is creating vast opportunities for design research and practice. In this dissertation, we investigate the interrelationship between design and data science practices and explore data as a new creative lens for design inquiry. While digital data has been increasingly used by designers, such as using A/B testing to drive design decisions for internet products, data has been less explored as a resource for inquiry about the world. Despite how data-connected artifacts increasingly facilitate human interactions, designers’ repertoire still primarily relies on practices established for inquiring in the physical world. The current industry practice of integrating data scientists into the design team is neither affordable nor feasible to apply across the vast majority of contexts and cases where design operates. To address these problems, in this dissertation, we aim to deepen the theoretical and practical knowledge on the intersection of design and data science, and to develop methodological contributions to support future data-rich design practices. The main research question we pursue in this dissertation is “How can designers integrate data practices into design inquiry?” We address this question through conducting a Research-through-Design program to gain, on the one hand, a better understanding of how the fields of design and data science intersect, and on the other hand, to develop methodological contributions for future data-rich design practices. 
The resulting conceptual framework of Design Inquiry Through Data has been constructed throughout a series of empirical studies in which data-rich design practices are studied. For each study, practical data methods and techniques have been curated and/or developed...","","en","doctoral thesis","","978-94-6384-154-2","","","","","","","","","Design Conceptualization and Communication","","",""
"uuid:852a8fb0-5dd6-4759-9510-fa18a0708a29","http://resolver.tudelft.nl/uuid:852a8fb0-5dd6-4759-9510-fa18a0708a29","Information availability and data breaches: Data breach notification laws and their effects","Bisogni, F. (TU Delft Organisation & Governance)","van Eeten, M.J.G. (promotor); Delft University of Technology (degree granting institution)","2020","In response to evolving cybersecurity challenges, global spending on information security has grown steadily, and could eventually reach a level that is inefficient and unaffordable. A better understanding of new socio-technical-economic complexities around information security is urgently needed, which requires both reconsideration of traditional cybersecurity issues and investigation of new and unexplored research directions. In recent times, interdisciplinary research has elucidated the many economic and behavioural dimensions of security. This research is rooted in the field of Information Security Economics, and primarily addresses disclosure policy and specifically, data breach notification laws. Data breach notification laws require any business that suffers a data breach, or believes that it suffered a data breach, to notify customers about the incident that entails the unauthorised acquisition of unencrypted and computerised personal information. Such laws offer incentives to the party who owes the notification duty to minimise the number of triggering events and also enable the affected third parties to diminish the consequences, namely identity theft, and to make prudent choices in the future. Public policy that seeks to improve the effects of data breach notification legislation must be informed by a comprehensive understanding of the behaviour and incentives of the organisations and individuals involved in the notification flow. 
Thus, this dissertation poses the following research question: What are the effects of the provisions of data breach notification laws on (1) communications issued by breached organisations to their customers; (2) the timing of breach detection and reaction; (3) the number of data breaches reported; and (4) the volume of identity theft stemming from data breaches? As we live in the era of big data, it was possible to access and utilise data on the number of breaches and the number of notifications sent. However, it was also necessary to examine further the types of breaches that occurred as well as the types of communication sent and how individuals perceived them. This analysis allows the development of specific metrics, activating critical thinking about the measurement and the underlying phenomenon. This dissertation examines these notions and answers the research question through one theoretical peer-reviewed paper and four peer-reviewed empirical studies, each addressing a separate aspect related to the implementation of notification mechanisms, specifically data breach notification laws. Chapter one studies the role of information availability in the cybersecurity landscape and describes a theoretical model for evaluating data breach notification laws as a solution to tackle information asymmetries in the digital arena. Chapter two focuses on the tangible tools needed to implement such laws, specifically the notification process itself, and analyses the extent to which each organisation has leeway to ensure compliance with the law. Drawing on the variation in time for data breach detection and notification and letter content analysis, chapter four discusses the necessity to implement superseding law in order to bring coherence to the diverse approaches used in different geographical areas. Chapter five then addresses underreporting of data breaches. Finally, chapter six explores the relationship between data breaches and identity theft. 
The dissertation concludes by reflecting on the shared elements across the studies. The conclusion reflects on the role of disclosure policies in the information security arena and on the implications, given the results of these studies, for European data breach notification policies.","Cybersecurity; Economics; Governance; Privacy; Disclosure policy; Data breaches; Identity thefts; Data breach notification laws","en","doctoral thesis","","978-94-6402-415-9","","","","","","","","","Organisation & Governance","","",""
"uuid:1e51acf6-6203-44f3-b6f9-c370f16f5b5d","http://resolver.tudelft.nl/uuid:1e51acf6-6203-44f3-b6f9-c370f16f5b5d","Energetische upgrading van Nederlandse Wederopbouw flats","Schultheiss, F.G. (TU Delft Building Product Innovation)","Eekhout, A.C.J. (promotor); Mohammadi, M. (promotor); Delft University of Technology (degree granting institution)","2020","Introductie | De gebouwde omgeving wordt energieneutraal en circulair, daarvoor zijn renovatieconcepten nodig. Voor hoogbouw woningen is hierover weinig kennis beschikbaar. Het onderzoek richt zich op Nederlandse Wederopbouw hoogbouw systeemwoningen uit de periode 1950-1975 met focus op het ruimtelijk energetische deel van een industrieel gericht upgradeconcept. - Methoden | Het onderzoek is ingedeeld in Flat 1.0 (bestaande hoogbouwsystemen), Flat 2.0 (comfort upgradingen) en Flat 3.0 (gebouwmodel, upgrading en overcladding met ontwerpprincipes). - Resultaten | Uit Flat 1.0 toont technische, bouwfysische, sociale en functionele onvolkomenheden. De geslotenheid van woon- en ontsluitingsgevels voor energieopwekking varieert per bouwmethode: voor stapelbouw 36-68 %, voor zware montagebouw 20-48 % en voor gietbouw 10-26 %. Flat 2.0 laat oplossingen zien die echter vaak suboptimaal zijn om tot een energieneutraalgebouw te kunnen komen. Flat 3.0 bewijst met een energetisch gebouwmodel dat bij woningen met een geslotenheid van de woon- en ontsluitingsgevel van 40 % en de kopse gevels en dak van 100 % op jaarbasis het gebouw en gebruiker in haar eigen energie kan voorzien. Boven de 10 woonlagen is daarvoor een geslotenheid van 50 % vereist. Een elektrische auto als mogelijke toevoeging aan gebruikersgebonden energie is niet haalbaar. Een circulaire overcladding met industriële modulaire tiny active flat house modules inclusief een vernieuwde balkon- en toegankelijke galerijstructuur is een kans voor energetische en functionele upgrading en voor extra wooneenheden voor kleine huishoudens. 
- Conclusie | Een circulaire industriële overcladding met de tiny active flat house modules in combinatie met een vernieuwde balkon- en galerijstructuur met installatietechniek en upgrading van de bestaande gevels is een nieuw technisch energieneutraal upgrade concept.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-302-1","","","","A+BE I Architecture and the Built Environment No 12 (2020)","","","","","Building Product Innovation","","",""
"uuid:62af27c9-20be-4ad4-97a8-52ea7632742e","http://resolver.tudelft.nl/uuid:62af27c9-20be-4ad4-97a8-52ea7632742e","Architectural Record: 1942-1967: Chapters from the history of an architectural magazine","Panigyrakis, Phoebus I. (TU Delft History, Form & Aesthetics)","Hein, C.M. (promotor); van Bergeijk, H.D. (copromotor); Delft University of Technology (degree granting institution)","2020","The Architectural Record during its midcentury years of 1942 to 1967, was a riveting centre of architectural journalism following and participating in the changing development of the architectural profession. Through the Second World War and the Korean War that brought functionalist modernism to the foreword and through the emerging consumer market of the 1950s, the magazine’s editors’ mission was one of “helping this new-born architectural infant to learn to walk, talk, and attain his full power.” Through archival research, this study deals with the particular history of the Record editors, publishers and contributors along the course of US midcentury modernism and the developing “image of the architect”.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-301-4","","","","A+BE I Architecture and the Built Environment No 11 (2020)","","","","","History, Form & Aesthetics","","",""
"uuid:12a7e93e-34f1-41b5-864f-156ac0f60d30","http://resolver.tudelft.nl/uuid:12a7e93e-34f1-41b5-864f-156ac0f60d30","The application of particle image velocimetry for the analysis of high-speed craft hydrodynamics","Jacobi, G. (TU Delft Ship Hydromechanics and Structures)","Huijsmans, R.H.M. (promotor); Akkerman, I. (copromotor); Delft University of Technology (degree granting institution)","2020","As soon a ship operates at high forward speeds its weight is pre-dominantly supported by hydrodynamic, rather than hydrostatic forces. Small changes in the dynamic pressure distribution on the ship hull can have a significant influence on the ship’s running attitude in calm water, but also on its seakeeping performance. In order to further improve these vessels it is important to experimentally and numerically investigate the flow in the vicinity of the ship hull and to accurately determine global as well as local pressure distributions. In contrast to traditional experimental techniques, which often lack spatial resolution, this thesis presents an alternative experimentalmethod for the analysis of the flow field and the reconstruction of hydrodynamic pressures from particle image velocimetry (PIV). This is a non-intrusive, laser-optical measurement technique where the velocity field of an entire region within the flow is measured simultaneously. The thesis discusses to what extend the PIV technique can be used to analyse the hydrodynamics of high-speed ships during model tests in towing tanks. The research particularly focusses on the influence of high towing tank carriage velocities, that can result in structural vibrations and high out-of-plane velocities, on the quality of the measured velocity fields. Furthermore it is focussed on the reconstruction of hydrodynamic pressures from these, and the propagation of measurement uncertainties towards the final hydrodynamic pressure fields. 
Hereby, the spatial variation of uncertainties within the measurement region is taken into account. The analysis is done by means of two practical applications with a towed underwater stereo-PIV system. A first test case analyses the flow in the transom region of a generic planing hull and the influence of an interceptor on the local pressure distribution. A second test case focusses on the analysis of the flow field in the bow region of a semi-displacement hull. Results from both cases show that measurements can be obtained in regions where high spatial resolution is necessary but cannot be provided by traditional techniques. Being interested in time- or phase-averaged results, multi-plane PIV measurements are used to extend the observed region to capture the three-dimensional velocity and pressure fields. The obtained experimental results are in good agreement with results from numerical simulations.","particle image velocimetry (PIV); underwater PIV; underwater PIV uncertainty; pressure from PIV; fast ships; interceptor","en","doctoral thesis","","978-94-6402-176-9","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:f59f8f98-ebdd-4266-99ab-e70e8e6433ee","http://resolver.tudelft.nl/uuid:f59f8f98-ebdd-4266-99ab-e70e8e6433ee","Development of a high-fidelity multi-physics simulation tool for liquid-fuel fast nuclear reactors","Tiberga, M. (TU Delft RST/Reactor Physics and Nuclear Materials)","Kloosterman, J.L. (promotor); Lathouwers, D. (promotor); Delft University of Technology (degree granting institution)","2020","The Molten Salt Reactor (MSR) is one of the six Generation-IV nuclear reactor designs. It presents very promising characteristics in terms of safety, sustainability, reliability, and proliferation resistance. Numerous research projects are currently carried out worldwide to bring this future reactor technology to a higher maturity, and in Europe efforts are focused on developing a fast-spectrum design: the Molten Salt Fast Reactor (MSFR).
Numerical simulations are essential to develop MSR designs, given the scarce operational experience gained with this technology and the current unavailability of experimental reactors. However, modeling an MSR is a challenging task, due to the unique physics phenomena induced by the adoption of a liquid fuel that is also the coolant: transport of delayed neutron precursors, strong negative temperature feedback coefficient, distributed generation of heat directly in the coolant. Moreover, the geometry of the core cavity of fast-spectrum designs often induces complex three-dimensional flow effects. For these reasons, legacy codes traditionally used in the nuclear community often prove unsuitable to accurately model MSRs, in particular fast-spectrum designs, and must be replaced by dedicated tools.
This thesis presents the development of one of these multi-physics codes, which aims at accurately modeling the three-dimensional neutron transport, fluid flow, and heat transfer physics phenomena characterizing a fast-spectrum liquid-fuel nuclear reactor. The coupling is realized between an incompressible Reynolds-Averaged Navier-Stokes model and a discrete ordinates neutron transport solver, both based on a discontinuous Galerkin Finite Element space discretization, which guarantees a high-quality solution.
As the research was carried out in the context of the Euratom SAMOFAR project, the MSFR is taken as the reference case study. We extensively analyze its behaviour at steady state and during several transient scenarios, assessing the safety of the current design and thus deriving useful information for its further development.
manufacturing and distribution. This technology facilitates the production of complex, customized products without the need for any specific tooling, thereby enabling products to be delivered at a lower cost than with traditional manufacturing. From a design perspective, AM allows designers to selectively place (multi-)material where it is needed to achieve the designed functionality. However, despite remarkable progress in the domain of AM, a variety of challenges – like support structures, staircase effects, and mechanical performance [1] – should be investigated in depth to fully explore the potential of AM. On the other hand, these challenges limit the designers’ freedom to realize their creativity...","Additive Manufacturing; 3D Printing; Robotics","en","doctoral thesis","","978-94-028-2096-6","","","","","","","","","Materials and Manufacturing","","",""
"uuid:8bf72125-9e0f-4ed7-90e8-a8d3d1a82615","http://resolver.tudelft.nl/uuid:8bf72125-9e0f-4ed7-90e8-a8d3d1a82615","Modeling CO2 dissociation in microwave plasma reactors through quasi-steady state non-equilibrium vibrational kinetics","Moreno Wandurraga, S.H. (TU Delft Intensified Reaction and Separation Systems)","Stankiewicz, A.I. (promotor); Stefanidis, Georgios (promotor); Delft University of Technology (degree granting institution)","2020","Plasma reactors emerge as a promising alternative to cope with some of the biggest challenges currently faced by humanity, the global warming, the increasing global energy demand and the need for efficient storage of electricity from renewable energy sources. Plasma reactors have the potential to enable the storage of green renewable electricity into fuels and chemicals through processes whereby CO2 can be used as a feedstock. Owing to these potential benefits there is a need to investigate this technology from a chemical and process engineering perspective. Big challenges are still hindering the development of plasma reactors into a feasible industrial technology. Despite its limitations, computer modelling is an excellent tool to tackle such challenges. It is well known that the chemistry of non-thermal plasmas is usually the most challenging and complex part of plasma modelling due to the large number of species and reactions involved, which can reach hundreds and thousands ones, respectively; hence, there is need for practical approaches to study, design and optimize plasma reactors. This thesis summarizes the research performed towards the development of engineering approaches to study and model plasma reactors by taking CO2 dissociation in a non-thermal microwave plasma reactor as the case study. The vibrational kinetics of CO2 under the typical conditions of non-thermal microwave plasma are studied through an isothermal reaction kinetics model. 
The importance of the different collisional processes is evaluated throughout the conditions and timescales at which the CO2 dissociation takes place. The long timescale behavior of the vibrational-to-translational temperature ratio at different conditions is discussed and it is shown that its behavior at increasing gas temperatures can be fitted to an algebraic expression. This temperature ratio has been identified as a key variable to achieve an energetically efficient dissociation. The vibrational-to-translational temperature ratio is shown to be useful for the reduction of vibrational kinetics, enabling their implementation in practical engineering models. A novel reduction methodology is developed and demonstrated for the case of CO2 by lumping relevant vibrationally excited states within a single group. Through this methodology, the dissociation and vibrational kinetics of CO2 can be captured in a reduced set of reactions, dramatically decreasing the calculation time of the model. A two-step modelling approach for plasma reactors is also developed. The approach is applied for the case of CO2 dissociation in a surface wave microwave plasma reactor. The reduction methodology is applied to incorporate the vibrationally enhanced dissociation of CO2 in the chemistry of the model. The model predictions are compared to experimental results to validate the model and obtain insight into the performance of the reactor. The reduction methodology and the modelling approach are the result of studying the CO2 dissociation in a non-thermal microwave plasma reactor. Nonetheless, these are based on general fundamentals that apply to other types of discharges and chemistries as well. The modelling approach can be used for process engineering applications involving the design, optimization and verification of plasma reactors and their performance. 
The reduction methodology can be implemented in the modelling approach when the vibrationally enhanced dissociation is considered relevant.","","en","doctoral thesis","","978-94-6402-172-1","","","","","","","","","Intensified Reaction and Separation Systems","","",""
"uuid:3cef9da8-d432-4d6a-8805-4c094440bd56","http://resolver.tudelft.nl/uuid:3cef9da8-d432-4d6a-8805-4c094440bd56","Replacement optimisation for public infrastructure assets: Quantitative optimisation modelling taking typical public infrastructure related features into account","van den Boomen, M. (TU Delft Integral Design & Management)","Bakker, H.L.M. (promotor); Kapelan, Z. (promotor); Delft University of Technology (degree granting institution)","2020","Ageing infrastructures and shortage of financing induce the need for optimising public infrastructure replacements. From an economic perspective, classical net present value comparison is traditionally the method of choice to decide on investments and replacements. The current research observes that typical infrastructure related features make the classical net present value comparison less suitable in its application for optimising infrastructure replacements. Especially the low discount rate of public sector organisations, price increases and price uncertainty contribute to this phenomenon in which the application of classical net present value comparison leads to suboptimal timing and costs. This observation led to the development of six dedicated replacement optimisation models for common types of infrastructure replacement challenges. A decision support guideline is provided to assist in selecting an appropriate model based on the sequence of intervention strategies, the development of forecasted cash flows and whether uncertainty is involved. The quantitative replacement optimisation models function as blueprints for similar challenges and support a wider decision-making context.","replacement; optimisation; public infrastructure; reliability; real options; uncertainty; Markov decision process","en","doctoral thesis","","978‐94‐028‐1965‐6","","","","","","2020-03-25","","","Integral Design & Management","","",""
"uuid:43d5107c-020d-4a54-896f-37ea759fad4f","http://resolver.tudelft.nl/uuid:43d5107c-020d-4a54-896f-37ea759fad4f","Agent Interactions & Mechanisms in Markets with Uncertainties: Electricity Markets in Renewable Energy Systems","Methenitis, G. (TU Delft Intelligent Electrical Power Grids)","la Poutré, J.A. (promotor); Kaisers, Michael (copromotor); Delft University of Technology (degree granting institution)","2020","Electricity consumption is highly correlated with the level of human development,
which alongside electrification is expected to significantly increase global demand
for electricity in the coming decades. In current electricity systems, most of the electricity is generated by large fossil-fuel power plants on-demand and it is distributed by centrally-managed electricity grids. The increasing demand for electricity, however, should not go hand in hand with the simultaneous intensification of fossil-fuel mine and use, which is a driving cause of rising average temperatures on Earth’s surface. Natural sources such as the sun and wind are expected to replace conventional sources of electricity, such as coal and gas power plants, in the near future, providing a key measure to address climate change and abate the effects of global warming. However, the intermittent and distributed nature of renewable electricity sources requires a redesign of conventional electricity grids that were originally designed following a top-down approach.","","en","doctoral thesis","","978-946-40236-33","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:6ed94a0a-200d-470c-80be-f6c7ab56f2af","http://resolver.tudelft.nl/uuid:6ed94a0a-200d-470c-80be-f6c7ab56f2af","Optimising marine seismic acquisition: Source encoding in blended acquisition and target-oriented acquisition geometry optimisation","Wu, S. (TU Delft ImPhys/Medical Imaging; TU Delft ImPhys/Computational Imaging)","Blacquière, G. (promotor); Verschuur, D.J. (promotor); Delft University of Technology (degree granting institution)","2020","Seismic data acquisition is a trade-off between cost and data quality subject to operational constraints. Due to budget limitations, 3D seismic acquisition usually does not have a dense spatial sampling in all dimensions. This causes artefacts in the processed images, velocity models, or other physical properties. However, we rely on, for example, the accurate images in determining the location of oil and gas-bearing geological structures, and the accurate elastic properties to characterise the reservoir. In this thesis, we propose new methods to improve existing technologies that can optimise marine seismic acquisition. In Part I, we aim at obtaining dense data in less time by improving the so-called blended seismic acquisition techniques. In Part II, we aim at obtaining an improved target illumination with a limited number of sources and receivers by developing an acquisition optimisation framework.","acquisition design; optimisation; deblending; simultaneous source; inversion; genetic algorithm; parameterisation","en","doctoral thesis","","978-94-6384-130-6","","","","","","","","","ImPhys/Medical Imaging","","",""
"uuid:3609236e-ec53-4157-9cdc-e461a9297b71","http://resolver.tudelft.nl/uuid:3609236e-ec53-4157-9cdc-e461a9297b71","On Ship Structure Risk and Total Ownership Cost Management Assisted by Prognostic Hull Structure Monitoring","Stambaugh, K.A. (TU Delft Ship Hydromechanics and Structures)","Kaminski, M.L. (promotor); Ayyub, B. (copromotor); Walters, C.L. (copromotor); Delft University of Technology (degree granting institution)","2020","Ships must perform their missions with a high degree of reliability to maximize availability through their service life. The ultimate safety of the hull structure is time-dependent with degradation caused by the operational environment. Achieving the fore mentioned reliability and mission availability requirements are complicated because ships operate in random seaways producing random loading on the hull structure. The subsequent strength degradation also involves random processes including the material properties themselves. Furthermore, the models used to estimate the loading and responses are not perfect and result in additional randomness and related uncertainty. The potential Risks involved are very high, given the combination of uncertainties and high value of the assets, crews, and related resources. The primary research questions posed by this dissertation include; 1) what approaches are needed to make Risk informed decisions in Ship Structure Life Cycle Management (SSLCM) and, 2) how can Hull Structural Monitoring (HSM) be used effectively to support these decisions? This dissertation addresses these research questions by building on the fundamentals of hull structural loading and failure mechanisms on both component and systems-levels that are unique to ship structure. This fundamental research includes a correlation analysis of the system loading to support new definitions of ship structural system response. 
This new definition of structural system response provides insights into definitions of serviceability failure, reserve strength, and redundancy. Following the structural systems definition development, this dissertation proposes a Risk and Total Ownership Cost (TOC) trade-space perspective for making informed decisions and managing both Risk and costs associated with SSLCM, and a fundamental characterization of Risk and uncertainty. The development of the Risk-TOC approach provides tangible and relatable benefits for understanding uncertainty in Risk terms required to make informed decisions. The Risk-TOC approach provides a more informed perspective than prior proposals for Decision Theory-based Optimal Inspection approaches, whose assumptions and parameters do not fully quantify the uncertainties involved in the SSLCM processes. The Risk-TOC approach also provides a quantitative means for assessing the consequences of different failure modes (i.e., fatigue cracking and corrosion). The Risk-TOC approach provides a quantified basis for comparing Risk and costs, given the magnitude of resources at Risk, by monetizing uncertainty. In this manner, the Risk-TOC approach provides a framework for fundamental definitions, including monetized uncertainty, analysis of alternatives (AoAs), Return on Investment (RoI), and Value of Information (VoI). The benefits of prognostic HSM are presented in the context of the reduction of uncertainty in the SSLCM processes, thereby reducing Risk and TOC with favorable RoI and VoI. The Risk-TOC approach is verified through example applications involving a US Coast Guard Cutter. A discussion is provided on the implications of the Risk-TOC approach for SSLCM and sustainability. 
Conclusions and recommendations are presented for further development of the Risk-TOC approach for SSLCM.","Ship Structure; Risk; Total Ownership Cost; Reliability; Fatigue; Corrosion; Hull Structure Monitoring","en","doctoral thesis","","","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:31eff784-f51f-4389-95c5-e47429c43088","http://resolver.tudelft.nl/uuid:31eff784-f51f-4389-95c5-e47429c43088","Hydraulic functioning of permeable pile groins: Numerical simulation","Zhang, R. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Aarninkhof, S.G.J. (promotor); Delft University of Technology (degree granting institution)","2020","Beach erosion, the loss of sand from a beach due to longshore and/or cross-shore sediment transport mechanisms, is a challenging problem. In order to stabilize the beach and to slow down the rate of beach erosion, the construction of hard hydraulic structures is a traditional option. Groins are one of the oldest man-made hydraulic structures designed to intercept the longshore sediment transport and to stimulate sediment deposition within the groin compartments. However, erosion is likely to appear at the downdrift beach stretch of a groin system, due to a lack of sufficient sand from the updrift groined beach reaching the downdrift beach. To alleviate sand starvation at the downdrift beach of a groin system, groins are suggested to be gradually shorter and more permeable approaching the downdrift terminal groin. The primary advantage of permeable groins, compared to impermeable groins, is that they do not entirely block longshore currents. The large openings of permeable groins allow littoral drift to flow through. The shoreline response to permeable groins approximates a straight line, rather than the zig-zag shape that develops in response to impermeable groins. Nevertheless, even though the benefits of permeable groins seem obvious, research on the hydrodynamics of permeable groins in coastal waters is limited.","groin; Longshore currents; groyne; numerical simulation","en","doctoral thesis","","978-94-6416-000-0","","","","","","","","","Coastal Engineering","","",""
"uuid:d292bb78-062d-4f58-a112-e951d7297878","http://resolver.tudelft.nl/uuid:d292bb78-062d-4f58-a112-e951d7297878","Individually controlled noise reducing devices to improve IEQ in classrooms of primary schools","Zhang, D. (TU Delft Indoor Environment)","Bluyssen, P.M. (promotor); Tenpierik, M.J. (promotor); Delft University of Technology (degree granting institution)","2020","It is well-known that the indoor environmental quality (IEQ) at schools affects the health, comfort and performance of school children. Considering the need for a more effective way to improve both the IEQ in primary school classrooms and children’s satisfaction, along with the positive potential of individual control, this thesis aimed to propose a new way - individual control - to improve the IEQ in classrooms of primary schools and to increase children’s satisfaction in the Netherlands. First, the main IEQ problem in classrooms as well as the IEQ perceptions and preferences of the school children were identified through literature and field studies. The outcome showed that noise was the main IEQ problem in classrooms of Dutch primary schools, that children could be clustered according to their IEQ perceptions and preferences, and that the reported IEQ-improving actions of the teachers could not effectively improve the IEQ for each child. As a follow-up, lab studies were performed in the SenseLab to explore the effect of background sound on children’s sound perception and performance. Together with the outcome of the field studies, the results suggested that individual control is a better way to improve IEQ in classrooms. Therefore, to address the main problem - noise - in classrooms, an individually controlled noise-reducing device was designed, prototyped and tested with school children in the SenseLab. 
The results obtained from the simulations, measurements, and children’s feedback on the prototype of the device demonstrated the feasibility of such devices in classrooms at primary schools.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-287-1","","","","A+BE | Architecture and the Built Environment No 10 (2020)","","","","","Indoor Environment","","",""
"uuid:bf37535b-3f93-42ab-a337-22aec9cdf981","http://resolver.tudelft.nl/uuid:bf37535b-3f93-42ab-a337-22aec9cdf981","On the Development of Wideband Direct Detection Focal Plane Arrays for THz Passive Imaging Applications","van Berkel, S.L. (TU Delft Tera-Hertz Sensing)","Llombart, Nuria (promotor); Neto, A. (promotor); Cavallo, D. (copromotor); Delft University of Technology (degree granting institution)","2020","In the design of millimeter and sub-millimeter wave radiometric imaging systems, a persistent goal is to increase the speed of image acquisition while maintaining a high sensitivity. Typically, the highest sensitivity is achieved by cryogenically cooling the detectors, specifically in astronomical applications. However, for the purpose of low-cost imaging applications it is desirable to operate at room temperature. Without cryogenic cooling, the electronic noise introduced by the detectors becomes dominant, making the detectors less sensitive. Resorting to detection architectures containing amplification circuitry might be impractical for implementation in large focal plane arrays (FPAs) fabricated in integrated technologies. This work derives the focal plane architecture that maximizes the imaging speed of radiometers operating at room temperature without using any amplification circuitry. It is shown that in such a scenario a practical image acquisition speed can still be achieved when a very broad portion of the THz-band is exploited. Ultimately, the imaging speed is maximized when the FPA is undersampled, implying a trade-off in the size of the optics. The analysis is substantiated by a case study using wideband leaky lens antenna feeds operating over a 3:1 relative frequency band. 
","millimeter-waves; submillimeter-waves; Terahertz; ultrawideband; passive imaging; radiometry; schottky barrier diodes; leaky-wave antenna; double slot; connected array; CMOS; focal plane arrays (FPAs)","en","doctoral thesis","","978-94-028-2093-5","","","","","","","","","Tera-Hertz Sensing","","",""
"uuid:ba4563e2-2a45-4443-9c5f-8903b0881236","http://resolver.tudelft.nl/uuid:ba4563e2-2a45-4443-9c5f-8903b0881236","Wind driven circulation in large shallow lakes: Implications for Taihu Lake","Liu, S. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Wang, Zhengbing (promotor); Ye, Qinghua (copromotor); Delft University of Technology (degree granting institution)","2020","Large shallow lakes play a significant role in the rapid urbanization process. A series of problems has occurred due to urbanization, including water quality degradation, increased flood intensity, and ecological and environmental issues. One of the most important threats comes from eutrophication, as it deteriorates water quality, introduces harmful algal blooms, harms lake ecosystems, affects human health and hinders socio-economic development.
This thesis presents a series of studies focusing on wind-induced hydrodynamic circulation in large shallow lakes, with implications for Taihu Lake, ranging from a lake-scale hydrodynamic study to lake-scale water quality implications and basin-scale implications. The proposed modelling approach could serve as a basis and provide information on lake-scale wind effects on hydrodynamic circulation, and on the water-environment implications of catchment-scale urbanization, for the management and planning of Taihu Lake and Taihu Basin.
","","en","doctoral thesis","TRAIL Research School","978-90-5584-268-1","","","","TRAIL Thesis Series no. T2020/11, the Netherlands Research School TRAIL","","","","","Transport and Logistics","","",""
"uuid:39e7cf46-5d0e-49b0-8718-e8adac9cb8bb","http://resolver.tudelft.nl/uuid:39e7cf46-5d0e-49b0-8718-e8adac9cb8bb","From Visionary Tale to Application: The Bright and Dark Side of Photo-Biocatalysis","Höfler, G.T. (TU Delft BT/Biocatalysis)","Hollmann, F. (promotor); Arends, I.W.C.E. (promotor); Paul, C.E. (copromotor); Delft University of Technology (degree granting institution)","2020","Redox biocatalysis is a promising approach to carry out redox reactions in industry. The chemical nature of redox reactions requires a stoichiometric supply of redox equivalents. The outstanding class of enzymes, the “Oxidoreductases”, requires specific redox equivalents for its function. We envision more environmentally friendly sources for these redox equivalents, in order to promote the application of various sorts of oxidoreductases.","photo-biocatalysis; chloroperoxidase; NADH regeneration","en","doctoral thesis","","978-94-6384-143-6","","","","","","","","","BT/Biocatalysis","","",""
"uuid:c333497d-05ac-422f-9688-31246a6fa7b1","http://resolver.tudelft.nl/uuid:c333497d-05ac-422f-9688-31246a6fa7b1","On the Coupling of Orbit and Attitude Determination of Satellite Formations from Atmospheric Drag: Observability and Estimation Performance","Chaves Jimenez, A. (TU Delft Discrete Mathematics and Optimization)","Gill, E.K.A. (promotor); Guo, J. (copromotor); Delft University of Technology (degree granting institution)","2020","Spacecraft orbit and attitude dynamics have classically been seen as two separate subjects, since the effect of attitude on orbit dynamics was deemed too small to be considered, or was simply modeled as noise in the estimation process for most satellites. In the eighties, a study by [1] showed that for a very large spacecraft, approximately the size of the International Space Station, the effect of the attitude on the orbit dynamics should be considered.","Orbit attitude coupling; estimation; spacecraft relative dynamics; Observability Gramian; Extended Kalman Filter","en","doctoral thesis","","978-94-028-2095-9","","","","","","","","","Discrete Mathematics and Optimization","","",""
"uuid:93bba631-f675-4b28-8780-3bd9d686680a","http://resolver.tudelft.nl/uuid:93bba631-f675-4b28-8780-3bd9d686680a","Gaming Simulation and Human Factors in Complex Socio-Technical Systems: A Multi-Level Approach to Mental Models and Situation Awareness in Railway Traffic Control","Lo, J.C. (TU Delft Organisation & Governance)","de Bruijn, J.A. (promotor); Meijer, S.A. (promotor); Delft University of Technology (degree granting institution)","2020","The dominant reason for the Dutch railways to innovate has been the expected increase in railway passenger and freight demand. To accommodate this, a large-scale (re)design process of the railways as a complex socio-technical system is needed. Since infrastructural expansion alone is not a sustainable option, solutions are also sought in innovative process optimizations.
Train traffic and network controllers are the operators responsible for the operational management of the railway system. With their experience, these operators hold unique knowledge about the system’s actual characteristics and its dynamics. Therefore, it is crucial to test and train any new system design with them, for example through gaming simulations.
This dissertation focuses on the role of the human operator as part of a large-scale socio-technical system (re)design process of the Dutch railway system.","","en","doctoral thesis","","978-94-028-2078-2","","","","","","","","","Organisation & Governance","","",""
"uuid:8707309f-b9a3-4e09-916d-8fb64328a138","http://resolver.tudelft.nl/uuid:8707309f-b9a3-4e09-916d-8fb64328a138","Sailing Efficiency and Course Keeping Ability of Wind Assisted Ships","van der Kolk, N.J. (TU Delft Ship Hydromechanics and Structures)","Huijsmans, R.H.M. (promotor); Akkerman, I. (copromotor); Delft University of Technology (degree granting institution)","2020","","Wind assist; Hybrid ships; Sailing ships; Naval architecture; EEDI / EEOI; Transport efficiency; RANS CFD","en","doctoral thesis","","978-94-6384-147-4","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:e8068b85-f8f6-4ebd-951f-e9216eb2cc2c","http://resolver.tudelft.nl/uuid:e8068b85-f8f6-4ebd-951f-e9216eb2cc2c","Design, fabrication and characterizations of AlGaN/GaN heterostructure sensors","Sun, J. (TU Delft Electronic Components, Technology and Materials)","Sarro, Pasqualina M (promotor); Zhang, Kouchi (promotor); Delft University of Technology (degree granting institution)","2020","The microelectronics industry, next to the powerful, continuous scaling of integrated circuits, is currently evolving towards the diversification of integrated functions, generally referred to as More than Moore (MtM). MtM concerns all technologies that elevate microsystems to a higher integration level, with smaller package size, lower power consumption and lower cost. Microelectromechanical systems (MEMS) are crucial within this development. While Si has proven to be the primary contestant in the MEMS sensor market, there is a growing need for sensors operating at conditions beyond the limits of Si. Si-based micro-sensors cannot operate in harsh environments such as high temperature, high radiation, high pressure, and chemically corrosive conditions. Wide bandgap semiconductors such as Gallium Nitride (GaN) are potential candidates to replace silicon due to their specific characteristics and proven performance in power or LED applications. The research objective of this thesis is to develop a MEMS sensor platform utilizing GaN-based materials. The design, fabrication, packaging, and measurement of pressure, deep UV photodetector, and gas sensors are presented and discussed.","AlGaN/GaN; HEMT; MEMS; Micro-heater; Pressure sensor; UV sensor; Gas sensor","en","doctoral thesis","","978-94-6402-350-3","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:32355f3a-a62d-4a03-bcc9-a04e904a3ab9","http://resolver.tudelft.nl/uuid:32355f3a-a62d-4a03-bcc9-a04e904a3ab9","Fan-Out SiC MOSFET Power Module in an Organic Substrate","Hou, F. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Ferreira, Jan Abraham (promotor); Delft University of Technology (degree granting institution)","2020","","SiC MOSFET; phase-leg power module; fan-out packaging; organic substrate; material characterization; static characterization; switching characterization; microchannel thermal management; two-phase cooling","en","doctoral thesis","","978-94-6402-349-7","","","","","","2022-06-25","","","Electronic Components, Technology and Materials","","",""
"uuid:4122ce24-5973-4afc-a084-1520dc8c5738","http://resolver.tudelft.nl/uuid:4122ce24-5973-4afc-a084-1520dc8c5738","Development of a nature-based Geo-engineering solution to reduce soil permeability in-situ","Zhou, Jianchao (TU Delft Geo-engineering)","Heimovaara, T.J. (promotor); Jansen, B. (copromotor); Delft University of Technology (degree granting institution)","2020","Stability of dikes is a national security issue for densely populated low-lying countries situated in delta areas, like the Netherlands. One of the dominant dike failure mechanisms in the Netherlands is piping, where high seepage flow rates transport sand particles and subsequently form a ’pipe’ under a dike structure. As such, one manner to reinforce dikes lies in the modification of the seepage flow field. Though many conventional approaches have demonstrated varying degrees of success in creating a flow barrier, which is a subsurface structure that can alter the seepage flow field, they are commonly costly in terms of energy and labour. Facing the ever-growing awareness of climate change as well as the large economic scale of the dike stability issue in the Netherlands, the development of alternative techniques is thus desired. The focus of this research project is to develop a cost-effective, robust and environmentally compatible technology for in-situ permeability reduction of sub-surface systems. We took inspiration from nature, where a natural soil stratification process (namely Podzolization) shows the viability of organo-metallic complex precipitation in reducing soil permeability in-situ. The aim of the research presented in this thesis is to quantitatively study the feasibility of using Podzolization-derived approaches to install flow barriers in dikes. 
Chapter 2 of this thesis presents two approaches for applying organo-metallic complexes to reduce soil permeability in-situ, which are derived from the detailed analysis of Podzolization and the flocculation process between metal salts and organic matter. The first approach is based on the in-situ mixing and reaction of two components (i.e., aluminium (Al) and organic matter (OM) solutions), while the second approach makes use of the direct injection of Al-OM flocs. To understand the feasibility of using these approaches to install a flow barrier on site, a 3D process-oriented model was developed. An important aspect of this model development is incorporating on-site engineering conditions into the simulation of the processes. A series of scenario analyses were therefore performed with the model in order to facilitate the design and evaluation of the full-scale experiments where the two delivery approaches were applied to install a flow barrier in two dikes.","building with nature; in-situ permeability reduction; metal-organic matter complexation; flow barrier; reactive transport modelling; metal-organic matter flocs","en","doctoral thesis","","978-94-6366-288-8","","","","","","","","","Geo-engineering","","",""
"uuid:ef8223e4-5c89-4960-8d6e-0603fd514368","http://resolver.tudelft.nl/uuid:ef8223e4-5c89-4960-8d6e-0603fd514368","Image quality assessment and image fusion for electron tomography","Guo, Y. (TU Delft ImPhys/Computational Imaging)","Rieger, B. (promotor); Delft University of Technology (degree granting institution)","2020","Electron tomography is a powerful tool in materials science to characterize nanostructures in three dimensions (3D). In scanning transmission electron microscopy (STEM), the sample under study is exposed to a focused electron beam and tilted to obtain two-dimensional (2D) projections at different angles; many imaging modes are available, such as high-angle annular dark-field (HAADF). In tomography, the collection of projections is called a tilt-series, from which we can reconstruct a 3D image that represents the sample. While HAADF tomography can clearly reveal the inner structure of the sample, it cannot directly provide compositional information. To better understand nanomaterials with more types of elements, spectral imaging techniques like energy dispersive X-ray spectroscopy (EDS) must be pursued. EDS tomography, however, is currently hampered by slow data acquisition, resulting in a small number of elemental maps with low signal-to-noise ratio (SNR). Electron tomography, especially EDS tomography, is an ill-posed inverse problem whose solution is neither stable nor unique. Although advanced reconstruction techniques may yield a more accurate result by incorporating prior knowledge, they also involve fine-tuning parameters that highly influence the reconstruction quality. 
Furthermore, while great efforts have been dedicated to developing tomography techniques for image enhancement, directly combining the reconstructed volumes at hand has, to the best of our knowledge, not yet been widely considered.","image quality assessment; multimodal image fusion; electron tomography; HAADF-STEM; X-ray spectroscopy; EDS; nanomaterials","en","doctoral thesis","","","","","","","","","","","ImPhys/Computational Imaging","","",""
"uuid:116a487e-c14d-47f8-b1f5-8e9738d263d0","http://resolver.tudelft.nl/uuid:116a487e-c14d-47f8-b1f5-8e9738d263d0","The application perspective of mutation testing","Zhu, Q. (TU Delft Software Engineering)","Zaidman, A.E. (promotor); van Deursen, A. (promotor); Panichella, A. (copromotor); Delft University of Technology (degree granting institution)","2020","The main goal of this thesis is to investigate, improve and extend the applicability of mutation testing. To seek potential directions for improving and extending the applicability of mutation testing, we have started with a systematic literature review on the current state of how mutation testing is applied. The results from the systematic literature review have further guided us towards three directions of research: (1) speeding up mutation testing; (2) deepening our understanding of mutation testing; (3) exploring new application domains of mutation testing. For the first direction, we have leveraged compression techniques and weak mutation information to speed up mutation testing. The results have shown that our proposed mutant compression techniques can effectively speed up strong mutation testing up to 94.3 times with an accuracy > 90%. Given the second direction, we are interested in gaining a better understanding of mutation testing, especially in the situation where engineers cannot kill all the mutants by just adding test cases. We have investigated the relationships between code quality, in terms of testability and observability, and the mutation score. We have observed a correlation between observability metrics and the mutation score. Furthermore, relatively simple refactoring operations and added tests enable an increase in the mutation score. As for the third direction, we have explored two new application domains: one is physical computing, and the other is GPU programming. 
In both application domains, we have designed new mutation operators based on our observations of the common mistakes that could happen during the implementation of the software. We have found promising results in that mutation testing can help in revealing weaknesses of the test suite for both application domains. In summary, we have improved the applicability of mutation testing by proposing a new speed-up approach and investigating the relationship between testability/observability and mutation testing. Also, we have extended the applicability of mutation testing to the physical computing and GPU programming domains.","Software Testing; Software Quality; Mutation Testing","en","doctoral thesis","","","","","","","","","","","Software Engineering","","",""
"uuid:aa3a5641-47b9-44e0-b158-9dd00b0430fe","http://resolver.tudelft.nl/uuid:aa3a5641-47b9-44e0-b158-9dd00b0430fe","The Thing Itself: AA Files and the Fates of Architectural Theory","Weaver, T. (TU Delft Situated Architecture)","Rosbottom, D.J. (promotor); Avermaete, T.L.P. (promotor); Havik, K.M. (copromotor); Delft University of Technology (degree granting institution)","2020","The Thing Itself: AA Files and the Fates of Architectural Theory explores architecture’s relationship with its editing. This is ostensibly discussed through the lens of a particular architectural journal, AA Files (part of a long tradition of journals produced by the Architectural Association in London since its inception in the mid-nineteenth century; and four issues of which form an accompanying second volume to this thesis), which in turn leads to a historical, semantic and polemical examination of what this dissertation contends are the three principal components of any architectural journal: its text (that is, its relationship to words, language and writing); its images (that is, architecture’s visual grammar and iconography) and finally its fundamental subject (that is, the architect themselves). The research is bookended, in its introduction, by an analysis of Victor Hugo’s novel Notre-Dame de Paris (1831/32), whose celebrated protagonist, Quasimodo the hunchback, is overshadowed in architectural terms by its even more celebrated formulation – ‘this will kill that’ (meaning that, with the invention of the printing press in the fifteenth century, architecture’s responsibility to carry thoughts, ideas and images had been destroyed). 
Echoes of this formulation then reappear throughout each of the dissertation’s parts, before the conclusion more explicitly identifies the theoretical consequences of Hugo’s much-quoted mantra, and more generally frames the preceding parts of this dissertation through its advocacy of a specific model of writing, image-making, publishing, teaching and, essentially, considering architecture.","Architecture; editing; writing; image-making; biography; theory","en","doctoral thesis","","","","","","2 volumes","","","","","Situated Architecture","","",""
"uuid:df917670-8d78-4021-b409-5b3961b00f66","http://resolver.tudelft.nl/uuid:df917670-8d78-4021-b409-5b3961b00f66","Through Package Via: A bottom-up approach","Yi, H. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Delft University of Technology (degree granting institution)","2020","The rapid development of the semiconductor industry in the past decades has reshaped the world tremendously and greatly changed people's lives. Two-dimensional (2D) down-sizing of semiconductor devices, following “Moore's law” for many years, is now reaching its boundary conditions of further exponential growth, and three-dimensional (3D) fabrication approaches are getting more attention. Next to IC manufacturing, 3D chip packaging nowadays plays an increasingly important role. Much more cost-effective and scalable technologies are being developed. Vertical interconnect via technologies such as through-silicon via (TSV), through mold via (TMV), tall Cu pillar (TCP), and vertical wire bonds (VWB) are the key processes in 3D chip packaging. However, they have limitations in cost, reliability, via density, scalability, and aspect ratio. In addition, as the function integration density keeps increasing, not only electrical but also optical, mechanical, fluidic, and thermal interconnections will be needed in the near future. Thus, a low-cost, reliable, high-density, scalable, high-aspect-ratio, multi-functional advanced through package via technology is in high demand for the future semiconductor industry. Through-Polymer Via (TPV) is a novel bottom-up vertical through package via technology: a potentially low-cost process, suitable for high-density and high-aspect-ratio via formation, compatible with various packaging processes, and suitable for multi-functional applications. 
The main process of TPV technology consists of the formation of a high-aspect-ratio polymer structure through lithography, in combination with subsequent film assisted molding. The TPV process works with both spin coating and dry film lamination for applying the polymer material, which makes it suitable for low-cost wafer-level and panel-level mass production. The process uses thick photosensitive polymer materials, such as SU-8 and SUEX, and the structures are patterned via lithography, which enables a broad range of high-density and high-aspect-ratio features. Depending on the application, non-coated polymer structures can serve optical and mechanical functions, while an optional functional coating can be applied to the polymer structures to target specific applications, such as metal coating for electrical function.","3D integration; Microelectronic packaging; Vertical interconnection; Through-Polymer Via; Film assisted molding; Polymer; System-in-package (SiP); Radar; Antenna-in-package; Optical encoder; QFN; PCB; Mechanical characterization; Shear test","en","doctoral thesis","","","","","","","","2021-01-01","","","Electronic Components, Technology and Materials","","",""
"uuid:c51a53df-60cd-41ca-9418-364df17eba56","http://resolver.tudelft.nl/uuid:c51a53df-60cd-41ca-9418-364df17eba56","Isothermal Phase Transformations Below the Martensite Start Temperature in a Low-Carbon Steel","Navarro Lopez, A. (TU Delft (OLD) MSE-3)","Santofimia, Maria Jesus (promotor); Sietsma, J. (promotor); Hidalgo Garcia, J. (copromotor); Delft University of Technology (degree granting institution)","2020","Advanced High Strength Steels (AHSS) have been used extensively for the last three decades in the automotive industry as they exhibit an enhanced combination of strength and ductility which has successfully allowed the weight reduction of structural components. This breakthrough has been highly beneficial for the environment, as lighter vehicles have reduced the CO2 emissions during use. In the last decade, the development of AHSS has been focused on the design of complex microstructures containing high strength phases, such as bainite and martensite, as well as a softer phase providing ductility and strain hardening, such as austenite. However, the thermomechanical processing of these multiphase steels requires long, complex, and energy-intensive thermal treatments with a high environmental footprint. New alternative processing routes are being developed for producing these multiphase steels sustainably, without compromise on strength and ductility, thus achieving reduced CO2 emissions throughout the lifecycle of steel. In this framework, a new thermal treatment consisting of a rapid cooling below the martensite start temperature (Ms) followed by an isothermal treatment at the same quenching temperature is proposed as a promising environmentally sustainable alternative for the production of such multiphase steels. This Ph.D. thesis investigates, from a scientific point of view, the phase transformations and the interactions between the phases formed during the above-described novel isothermal treatment below Ms in a low-carbon high-silicon steel. 
The thermal treatment is applied in different combinations of quenching temperature and isothermal holding time in order to stimulate the formation of diverse phase fraction mixtures. The research also elucidates the effects of the formation of each of the phases on the microstructure-property relationships of these multiphase steels.","","en","doctoral thesis","","978-94-6384-144-3","","","","","","","","","(OLD) MSE-3","","",""
"uuid:59bdd480-5fdb-461e-b053-46fc0bc01a9f","http://resolver.tudelft.nl/uuid:59bdd480-5fdb-461e-b053-46fc0bc01a9f","ThinkingSkins: Cyber-physical systems as foundation for intelligent adaptive façades","Böke, J. (TU Delft Design of Construction)","Knaack, U. (promotor); Hemmerling, Marco (copromotor); Delft University of Technology (degree granting institution)","2020","New technologies and automation concepts emerge in the digitalization of our environment. This is, for example, reflected by intelligent production systems in Industry 4.0. A core aspect of such systems is their cyber-physical implementation, which aims to increase productivity and flexibility through embedded computing capacities and the cooperation of decentrally networked production plants. This development stage of automation has not yet been achieved in the current state-of-the-art of façades. Being responsible for the execution of adaptive measures, façade automation is part of hierarchically and centrally organised Building Automation Systems (BAS). The research project ThinkingSkins is guided by the hypothesis that, aiming at an enhanced overall building performance, façades can be implemented as cyber-physical systems. Accordingly, it addresses the research question:
How can cyber-physical systems be applied to façades, in order to enable coordinated adaptations of networked individual façade functions?
The question is approached in four partial investigations. First, a comprehensive understanding of intelligent systems in both application fields, façades and Industry 4.0, is elaborated by a literature review. Subsequently, relevant façade functions are identified by a second literature review in a superposition matrix, which also incorporates characteristics for a detailed assessment of each function’s adaptive capacities. The third investigation focuses on existing conditions in building practice by means of a multiple case study analysis. Finally, the technical feasibility of façades implemented as cyber-physical systems is investigated by developing a prototype. The research project identifies the possibility and promising potential of cyber-physical façades. As a result, the doctoral dissertation provides a conceptual framework for the implementation of such systems in building practice and for further research.","Climate-adaptivity; building automation; internet of things; machine-to-machine communication; embedded façade functions; decentralized control; conceptual framework; system architecture","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-284-0","","","","A+BE | Architecture and the Built Environment No. 8 (2020)","","","","","Design of Constrution","","",""
"uuid:14a225cf-0a70-414c-8897-b288e8113ab0","http://resolver.tudelft.nl/uuid:14a225cf-0a70-414c-8897-b288e8113ab0","Architecture and the Time of Space: The Double Progression of Body and Brain","Hauptmann, D. (TU Delft History, Form & Aesthetics)","Hein, C.M. (promotor); Radman, A. (copromotor); Delft University of Technology (degree granting institution)","2020","In this work Deborah Hauptmann deals with the relationships between mind, body, architecture and the city. Major authors ranging from Henri Bergson and Walter Benjamin to Henri Lefebvre and Gilles Deleuze are discussed in order to open up thinking on the roles of perception and the cognitive sciences in today’s society. Various themes are explored. Matter and mind are considered as kinds of multiplicities that affect our distinctions between subject and object. A theoretical framework is carefully constructed and argued in detail, allowing us to grapple with the existing problems of a rapidly changing field of disciplinary actions. The author looks at how vitalism has been applied to space, offers a view of the city through the question of who is allowed to claim right to the city and addresses the idea of the virtual and emergent. She examines the problem of experience by posing questions pertaining to both voluntary and involuntary memory. She concludes by making concepts surrounding biopolitics and noopolitics explicit and investigates their past discourses, demonstrating that they are still pertinent to both the field of architecture and philosophy. This study should be regarded as an original contribution to the discipline of architecture in its broadest sense.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-286-4","","","","A+BE | Architecture and the Built Environment No. 9 (2020)","","","","","History, Form & Aesthetics","","",""
"uuid:3ae4aaeb-fafd-43fc-93f5-663a1fc2e9c1","http://resolver.tudelft.nl/uuid:3ae4aaeb-fafd-43fc-93f5-663a1fc2e9c1","Value Deliberation: Towards mutual understanding of stakeholder perspectives in policymaking","Pigmans, K.A.M. (TU Delft Information and Communication Technology)","Dignum, M.V. (promotor); Doorn, N. (promotor); Delft University of Technology (degree granting institution)","2020","This research is part of the Values4Water project, which includes TU Delft, Waterschap de Dommel, Deltares, Royal HaskoningDHV and Synmind as consortium partners. Policymaking can involve as many perspectives as there are stakeholders. In the case of complex societal policies, many interpretations of the problem are possible and often there is no optimal solution. Such problems have also been referred to as wicked problems. Stakeholders are increasingly participating in policymaking to ensure that all perspectives are considered. In a wicked problem, stakeholder perspectives can be so different that they are conflicting. So before a solution can be accepted, stakeholders need mutual understanding of each other’s perspectives. This thesis uses a dialogic action research approach to explore the role of values in facilitating mutual understanding by using deliberation, not necessarily to find consensus but to allow for the exploration of stakeholder perspectives.","","en","doctoral thesis","","","","","","","","","","","Information and Communication Technology","","",""
"uuid:9e3a8b94-87b4-4167-a80c-8cddec3ae58e","http://resolver.tudelft.nl/uuid:9e3a8b94-87b4-4167-a80c-8cddec3ae58e","Correlated Spin Phenomena in Molecular Quantum Transport Devices","de Bruijckere, J. (TU Delft QN/van der Zant Lab)","van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2020","In this thesis we study charge transport through individual molecules and mainly focus on the properties of molecular spin. We fabricate nanoscale structures for transport measurements and employ the electromigration break-junction technique to realize three-terminal—transistor-like—single-molecule devices. We investigate the spin-related phenomena that occur in these devices by performing transport experiments.","quantum transport; single-molecule devices; correlated spin phenomena; Kondo effect; hybrid superconducting devices","en","doctoral thesis","","978-90-8593-441-7","","","","","","","","","QN/van der Zant Lab","","",""
"uuid:c2137466-cdb5-42c4-9303-85e64b87acd5","http://resolver.tudelft.nl/uuid:c2137466-cdb5-42c4-9303-85e64b87acd5","Metal sulfides for gas sensing applications: devices and mechanisms","Tang, H. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Delft University of Technology (degree granting institution)","2020","Nanostructured materials have attracted more and more attention in gas sensing applications due to their high specific surface area, numerous surface-active sites, and the effect of crystal facets with high surface reactivity. These kinds of gas sensors are mainly used for monitoring air quality and environmental conditions, and for breath analysis. Among different gas sensors, metal sulfide-based sensors have generated considerable interest in recent years because of their excellent sensitivity, fast response, and good selectivity. At the same time, driven by the increasing demand for environmental and health monitoring, sensors are required to have a low limit of detection (LOD) at the ppb level, higher response and selectivity, and real-time recording. There are several ways to improve the sensing performance, such as functionalizing metal sulfides (defects, dopants), constructing heterojunctions (Schottky junctions, p-n, n-n, and p-p semiconductor junctions), and using field-effect transistor (FET) gas sensors. Herein, my research aims to explore high-performance gas sensors through these techniques and to investigate the fundamental mechanism of the gas sensing process for metal sulfide devices. A comprehensive literature review of the state of the art of metal sulfide-based gas sensors is presented in chapter 2. It includes the basic crystal structures, synthesis methods, device fabrication methods, and the gas sensing performances of various metal sulfide-based gas sensors.
Since metal sulfides have a shallow valence band and a variety of shapes, sizes, crystalline forms, and chemical compositions, they exhibit excellent sensing performance. It is found that devices based on Schottky diodes, metal oxide/metal sulfide heterojunctions, and transistors have enhanced gas-sensing performance. Thus, in this work, I analyzed the sensing behaviour of an SnS-Ti Schottky contact humidity sensor, an SnOx/SnS heterostructure-based NO2 gas sensor with rich oxygen vacancies, and a WS2/IGZO-based thin film transistor for NO2 gas sensing. To improve the humidity sensing performance, an SnS-Ti Schottky-contacted sensor is designed and analyzed in chapter 3. The SnS nanoflakes were mechanically exfoliated and then transferred onto a rigid or flexible substrate. The as-fabricated sensor exhibited a high response of 67600% towards 10% RH and 2491000% towards 99% RH, a wide RH range from 3% RH to 99% RH, and fast response/recovery times of 6 s/4 s. The flexible humidity sensor shows a similar performance. Through density functional theory (DFT) analysis and band alignment analysis, it is found that the excellent sensing performance is attributed to the Schottky nature of the SnS-Ti contact. H2O absorption moves the Fermi level of SnS toward the conduction band, decreasing the Schottky barrier (φB) by ΔφB, resulting in thinning of the barrier and an increase of the device current. Different relative humidity levels induce different ΔφB and sensitivity. The recovery mechanism is also attributed to φB. When air flows out of the chamber, the water molecules leave the adsorption sites, and the conductivity decreases due to the increased φB. To extend the device’s application, a smart home system based on the sensors is proposed to process the signal from breath and finger touch experiments for non-contact control and respiration monitoring.
To further improve the LOD and sensitivity for humidity and NO2 gas, four types of SnS-based gas sensors, including liquid phase exfoliated (LPE) SnS nanosheets, SnO2 nanosheets, SnO2/SnS nanocomposites, and SnOx/SnS heterostructures, are explored and comparatively analyzed in chapter 4. The results show that the sensor based on the SnOx/SnS heterostructure, formed by post-oxidation of LPE-SnS nanosheets in air, has the best humidity sensing response among these four types of sensors. Accordingly, the SnOx/SnS is also used for detecting NO2 gas, and exhibits a high response of 161% towards 1 ppb NO2, a wide detection range (from 1 ppb to 1 ppm), an ultra-low theoretical LOD of 5 ppt, and excellent repeatability. To the best of my knowledge, such an LOD is the lowest among metal sulfide-based and metal oxide-based gas sensors. The sensor also shows excellent gas selectivity to NO2 in comparison with several other gas molecules, such as NO, H2, CO, NH3, and H2O. The gas sensing mechanism analysis based on experiments and DFT calculations reveals that oxygen vacancies provide more adsorption sites, superior band gap modulation, and more active charge transfer in the sensing interface layer. Metal oxide/metal sulfide heterojunctions are promising candidates for gas sensing applications. Thus, we vertically stacked a p-type narrow-bandgap semiconductor (WS2) and an n-type wide-bandgap semiconductor (IGZO) to form a type-I WS2/IGZO heterojunction in chapter 5. The straddling gap results in both electrons and holes accumulating on the same side, making the device sensitive to external stimuli. First of all, the structural, electronic, and optical properties of the WS2/IGZO heterostructure are analyzed by DFT calculations under different E-fields, mechanical strains, and gas molecules.
The results demonstrate that the band gap of the WS2/IGZO heterostructure shows a near-linear decrease with increasing E-field in both the negative and positive directions, resulting in a semiconductor-metal transition and suggesting an application in FETs. The heterostructure exhibits broad spectral responsivity (from visible light to deep UV wavelengths) and enhanced optical properties under mechanical strain. Tensile strain can weaken the photoresponse of the heterostructure to UV light and improve the response to visible light, while under compressive strain the heterostructure shows a sharp absorption peak in UV light. Moreover, the gas adsorption energies of NH3 and NO2 on the WS2/IGZO heterostructure are calculated, showing a high adsorption energy for NO2 and indicating potential application in NO2 gas sensors. The unique and tunable properties revealed by the DFT calculations make the WS2/IGZO heterostructure a good candidate for transistors and gas sensors. Thus, CVD-WS2/IGZO heterojunction-based devices are designed and investigated in two modes: chemiresistor and transistor. The device has a maximum response of 18170% in the chemiresistor mode, and 499400% in the transistor mode under 300 ppm NO2 after applying a -20 V gate bias. The heterojunction device performs much better than devices based on WS2 or IGZO alone. Moreover, the sensor shows excellent gas selectivity toward NO2 in comparison with several other gases and vapours such as CO, NH3, and humidity. The superior gas sensing performance could benefit from the heterojunction of WS2 and IGZO and the external electric field under the back gate voltage. In addition, the transistor notably exhibits typical ambipolar behaviour in dry air, while it becomes p-type as the amount of NO2 increases. The mobility, on/off ratio, and subthreshold slope of the device are modulated by the NO2 gas concentration.
The unique tunable behaviour can be associated with the doping effects of NO2 on the heterojunction and the modulated Schottky barriers at the interfaces of WS2 and IGZO with the metal contacts. This thesis concludes by summarizing the main results and providing suggestions for future research opportunities in the field of devices based on 2D/nano metal sulfide materials. Research on 2D/nanomaterial-based devices is still at an early stage. Challenges remain in exploring high-quality materials suitable for gas sensors that guarantee the reliability and long-term stability of the device, in evaluating/testing samples accurately, and in integrating the sensors with existing systems. These fundamental research challenges need to be resolved in the future.","Metal sulfide; gas sensing; 2D-/nano-materials; Schottky contact; heterostructures; transistor; density functional theory (DFT)","en","doctoral thesis","","978-94-6402-348-0","","","","","","2021-06-15","","","Electronic Components, Technology and Materials","","",""
"uuid:7d395c2e-0cd8-457e-a74a-9e8c3b731ef4","http://resolver.tudelft.nl/uuid:7d395c2e-0cd8-457e-a74a-9e8c3b731ef4","Failure of thin-walled structures under impact loading","Mostofizadeh, S. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Larsson, R. (promotor); Fagerström, M. (copromotor); Delft University of Technology (degree granting institution)","2020","The increase in computational power in recent years has contributed to significant developments in numerical methods in mechanics. Many methods have been developed to address various complex problems, yet the modelling of the initiation and propagation of failure in thin-walled structures requires further development. Among the numerous challenges involved, one main complexity is to capture the behaviour of the material at the failure process zone, where the underlying micro-structure governs the macroscopic process. Accounting for all details in a model will increase the computational cost, which thereby requires finding a balance between the level of detail and the cost incurred. The research in the present thesis aims at developing a framework capable of analysing ductile fracture in terms of initiation and propagation of cracks, which is applicable to thin-walled steel structures subjected to high strain rates. Of particular importance is to address the application to large scale structures, for which capturing the accurate response of the structure calls for an efficient numerical procedure.
First, a method is developed to analyse and predict crack propagation in thin-walled structures subjected to large plastic deformation under high strain rate loading. In order to represent crack propagation independently of the finite element discretisation, the extended finite element method (XFEM), based on a 7-parameter shell formulation with extensible directors, is employed. For the temporal discretisation, as typically used in high-speed events and at high strain rates, an explicit time integration scheme is used, which is observed to be prone to generating unphysical oscillations upon crack propagation. To remedy this problem, two possible solutions are proposed. To verify and validate the proposed model, various numerical examples are presented. It is shown that the results correlate well with the experiments.
Second, to capture the fine-scale nature of the ductile fracture process, a new XFEM-based enrichment of the displacement field is proposed that allows a crack tip and/or kink to be represented within an element. It concerns refining the crack tip element locally while retaining the macroscale node connectivity unchanged. This in turn results in a better representation of the discontinuous kinematics; however, unlike regular mesh refinement, it requires no change to the macroscale solution procedure. To show the accuracy of the proposed method, a number of examples are presented. It is shown that the proposed method enhances the analysis of ductile fracture of thin-walled large-scale structures under high strain rates.
Third, in line with the previous developments, a new Phantom-node-based approach for the analysis of ductile fracture of thin-walled large-scale structures is proposed. It concerns subscale refinement of the elements through which the crack progresses. Compared to the XFEM approach, the Phantom node method is more efficient both in implementation and computation. It allows for a detailed representation of the crack tip and kink, which leads to a smoother progression of the crack. The proposed approach is applicable to both low- and high-order elements of different types. In order to show the accuracy of the new approach, a number of examples are presented and compared to the conventional approach.
Finally, a new approach to analyse ductile failure of thin-walled structures based on continuum damage theory is developed. For this, a Johnson-Cook viscoplasticity formulation coupled to continuum damage is developed, whereby the total response is obtained from a damage-enhanced effective visco-plastic material model. Production of the fracture area is governed by a rate-dependent damage evolution law, where the damage-visco-plasticity coupling is realised via the inelastic damage driving dissipation. In addition, a local damage enhanced model (without damage gradient terms) is used, which contributes to the computational efficiency. A number of examples are presented to investigate the accuracy of the proposed model, and it is shown that the model provides good convergence properties.","XFEM; Phantom node method; shells; Continuum damage; ductile fracture; cohesive zone; rate dependence","en","doctoral thesis","","","","","","","","","","","Applied Mechanics","","",""
"uuid:724f70a9-efd9-4598-9a24-064aa77db509","http://resolver.tudelft.nl/uuid:724f70a9-efd9-4598-9a24-064aa77db509","Biosynthesis of protein filaments for the creation of a minimal cell","Kattan, J.M. (TU Delft BN/Christophe Danelon Lab)","Danelon, C.J.A. (promotor); Dogterom, A.M. (promotor); Delft University of Technology (degree granting institution)","2020","Humanity has managed to decipher the most fundamental mechanics of cellular life. Nevertheless, despite intense efforts, there are still considerable gaps in our understanding of cellular processes. Traditionally, biologists investigate life by observation of existing lifeforms. In order to assign functions to biological components, it is common practice to remove components from the system and then note the effect this has on the organism. Done repeatedly, this top-down approach allows for the creation of lifeforms with a reduced complexity, which makes it easier to fully map and model their cellular processes. In contrast, the scientific community is now devoting increasing effort to recombining biological components in vitro in order to form a cellular lifeform. This bottom-up approach may not only yield, in the end, a minimal cellular system created de novo, but will also challenge us to verify and refine our knowledge about cells. Gene expression through transcription and translation is probably the most fundamental process present in all cells existing today, and any attempt at designing a minimal cellular system that mimics a real cell faithfully will have to involve these processes at its core. Thus, cell-free gene expression constitutes a key tool for the creation of a minimal cell.
In this thesis we applied the bottom-up approach to investigate eukaryotic and prokaryotic microtubules, as well as the yeast ESCRT-III (endosomal sorting complex required for transport) and archaeal Cdv (cell division) systems, regarding their potential for cell-free expression and synthetic cell research. Overall, all proteins utilized in this thesis for cell-free expression (Mal3, BtubA, BtubB, BtubC, Vps20ΔC, Snf7, Vps2, Vps24, Vps4, CdvA, CdvB, and CdvC) were synthesized by the PURE system at full length. Therefore, expression itself was not a problem for any of the applied systems, and the most critical step for each protein system was to evaluate whether the expressed proteins were active regarding their functions and interactions. A key factor for each project was thus to find reliable testing conditions for the respective protein activity. For cell-free protein synthesis, we applied the commercially available PURE system, which is comprised exclusively of reconstituted components. A current drawback of this system is that expression stops after a few hours due to unknown causes. This time interval is too short to reconstitute certain cellular functions, and in the long run the design of a minimal cell will require a translation system that is more stable over time. Therefore, we attempted to enhance the expression lifetime of the PURE system by implementation of a semi-open system. However, no changes in duration of expression or yield were observed (Chapter 2). This result supports the hypothesis that neither the accumulation of toxic waste products nor the depletion of NTPs or amino acids is primarily responsible for the break-down of PURE system activity over time. Another question we investigated was whether it would be possible to regulate eukaryotic microtubules by expression of microtubule-associated proteins (MAPs). We chose to attempt expression of the end-binding MAP Mal3 due to its ability to be expressed functionally in E. coli and its crucial role in organizing protein recruitment at the plus-end of microtubules. To visually confirm the activity of expressed Mal3, we added it to microtubules together with the purified proteins Tea2 and Tip1, which are recruited by Mal3 to the microtubule plus-end (Chapter 3). In this plus-end tracking assay, distinctive proof of the activity of expressed Mal3 was given visually by the formation of comets at microtubule tips. A restriction faced with eukaryotic tubulin is that it cannot be synthesized by any prokaryotic expression system such as the PURE system. However, the tubulin homologues BtubA and BtubB have previously been discovered in bacteria of the genus Prosthecobacter, in which they form filaments similar to microtubules. We synthesized BtubA/B with the PURE system and were able to show that it was expressed at full length and was fully capable of forming dynamic bacterial microtubules (Chapter 4). Assembly mostly took place on top of a supported lipid bilayer (SLB), to which the filaments were binding without the addition of any cofactors. A fraction of labelled bacterial tubulin, which would not result in any filaments on its own, was added for visualization. Further, flotation assays confirmed the capability of synthesized BtubC to recruit bacterial microtubules to lipid membranes, beyond the binding tendency already observed. Moreover, when expressed inside liposomes, BtubA and BtubB formed filaments that deformed the vesicles, similar to what is known of encapsulated tubulin or actin. The encapsulated filaments could be disintegrated by intense laser illumination, upon which vesicles appeared to revert to their former shape. Overall, bacterial microtubules have the potential to become a useful tool for engineering synthetic cells, under the premise that more proteins associated with their regulation and function will be discovered.
One of the challenges for the creation of a minimal cell is to achieve cell division, and we explored the yeast ESCRT-III system with respect to its potential to facilitate division in a minimal cell setup (Chapter 5). However, assessing the activity of the four ESCRT-III proteins turned out to be difficult because of a lack of purified proteins. Nevertheless, we could ascertain the membrane-binding capabilities of the ESCRT proteins by flotation assays and colocalization to SLB membranes. The formation of filament complexes composed of expressed Vps20ΔC and Snf7 was confirmed by transmission electron microscopy. However, it is not certain whether these structures truly resemble ESCRT filaments. Membrane deformation initiated by expressed ESCRT-III proteins could not be achieved, which is in line with more recent literature that demonstrated the dependency of the ESCRT complex on the ATPase Vps4 for this function. Vps4 can be expressed by the PURE system, but its ATPase activity was not analyzed, as consumption of ATP cannot be reliably detected in the PURE system, and depolymerization of filaments would require more efficient visualization of filaments or filament complexes.
Activity of expressed Cdv proteins could not be confirmed or analyzed (Chapter 6). It could only be determined that expressed CdvA, which is responsible for anchoring the Cdv complex to the membrane in archaea, did not bind to the lipid membranes we used in our settings. A possible reason for this could be differences in membrane composition between bacteria and archaea.
As elaborated in Chapters 1 and 7, the terms and definitions that surround synthetic cells and the phenomenon of life are generally not very precise and rather arbitrary. We proposed that in most scientific work the respective definitions should be oriented towards the aim of the research and explicitly restricted to the applied framework. Regarding a more general definition of lifeforms, we suggested that life is characterized by self-reproduction with variations, based on an internal information carrier. This definition excludes no lifeforms in general, but it does exclude certain representatives of living entities which are incapable of reproduction. Therefore, an even more basic and fundamental principle was proposed, according to which any kind of pattern which is capable of evolving could be considered alive. This definition not only includes all organisms generally considered alive but also several other phenomena.","synthetic biology; liposomes; minimal cell; synthetic cell; cell-free gene expression; microtubules; ESCRT-III; bacterial tubulin; Cdv-system","en","doctoral thesis","","978-90-8593-440-0","","","","","","","","","BN/Christophe Danelon Lab","","",""
"uuid:b8275053-49a0-4d77-aaea-27a92758f6e8","http://resolver.tudelft.nl/uuid:b8275053-49a0-4d77-aaea-27a92758f6e8","Probabilistic Risk Analysis for Ship Collision-Theory and Application for Conventional and Autonomous Ships","Chen, P. (TU Delft Safety and Security Science)","van Gelder, P.H.A.J.M. (promotor); Papadimitriou, E. (copromotor); Delft University of Technology (degree granting institution)","2020","Maritime transportation has been one of the major contributors to global trade and the economy. Accidents, however, have continuously posed risks to individuals and societies in terms of loss of human life, economic and environmental consequences, etc. This thesis pays particular attention to the probabilistic risk analysis of ship collision accidents and explores the possible influence of implementing Maritime Autonomous Surface Ships (MASS) on the risk of collision in maritime traffic. A comprehensive literature review is conducted to investigate the stakeholders in maritime traffic safety and related methodologies for risk analysis of ship collision accidents. A series of methods based on Non-linear Velocity Obstacle (NLVO) is proposed to identify encounters between ships that have the potential for a collision accident, from the perspective of the whole encounter process. The causal relationships between accident contributing factors are modelled with a credal network to estimate the causation probability of ship collision accidents with consideration of the encounter situation. Finally, an initial analysis of the potential influence of MASS on the collision risk in maritime traffic is also presented based on the proposed approaches.
The objective of this research is to further develop a quantitative risk analysis model for ship collision accidents in waterways in an integrated manner that can incorporate multiple sources of information into the analysis and provide insights into collision risk for safety management.","Maritime Safety; Probabilistic Risk Analysis; Ship Collision; Velocity Obstacles (VO); AIS; Collision Candidate; Near Miss; Credal Network; Maritime Autonomous Surface Ship","en","doctoral thesis","","9789464022742","","","","","","2021-08-31","","","Safety and Security Science","","",""
"uuid:d3ca1eab-12bf-4efe-82e2-661deb3e0520","http://resolver.tudelft.nl/uuid:d3ca1eab-12bf-4efe-82e2-661deb3e0520","Systemic Flood Risk Management Planning: A decision-support framework for large-scale flood risk management accounting for risk-distribution across flood-protected areas and deeply uncertain hydraulic interactions","Ciullo, A. (TU Delft Policy Analysis)","Klijn, F. (promotor); Kwakkel, J.H. (promotor); de Bruijn, K.M. (copromotor); Delft University of Technology (degree granting institution)","2020","Floods are natural phenomena which have potentially catastrophic effects on societies and their economies. Flood losses have been increasing in recent years and they are expected to increase further in the future due to climatic and socio-economic changes. It is therefore paramount to design measures and plan strategies (i.e. combinations of measures) to limit flood losses. The current practice of designing flood risk management strategies adopts a risk-based approach, which recognizes that losses from floods cannot be reduced to zero but, at best, to a tolerable level against acceptable costs. Typically, a risk-based approach to flood risk management allows choosing measures by comparing them based on investment costs and effectiveness in reducing flood risk. A measure can, for example, be evaluated based on total societal costs, i.e. the sum of investment costs and the residual flood risk, with the most desirable measure being the one which minimizes total costs. In addition to minimizing total costs, objectives related to reducing individual risk or societal risk might also be applied. Although the risk-based approach aims at wisely allocating economic resources while, at times, also guaranteeing basic individual safety as well as avoiding large societal flood losses, it often neglects that measures implemented at one location may affect flood risk elsewhere.
Acknowledging this was a reason for scientists and policy makers to advocate a move towards a comprehensive system approach. Such an approach supports system-wide flood risk management planning and fully accounts for hydraulic interactions, i.e. the effects on hydraulic loading at one area due to events occurring elsewhere, e.g. the response of an embankment to hydraulic loading or the implementation of measures. Two challenges are identified as crucial in adopting such a comprehensive system approach while accounting for hydraulic interactions.","","en","doctoral thesis","","978-94-6384-142-9","","","","","","","","","Policy Analysis","","",""
"uuid:6850d9dd-5e8f-4af8-ae03-828745b4f128","http://resolver.tudelft.nl/uuid:6850d9dd-5e8f-4af8-ae03-828745b4f128","Formic acid-driven biocatalytic oxyfunctionalisation: The alchemy of ants, mushrooms and air","Willot, S.J. (TU Delft BT/Biocatalysis)","Hollmann, F. (promotor); Arends, I.W.C.E. (promotor); Paul, C.E. (copromotor); Delft University of Technology (degree granting institution)","2020","Selective oxyfunctionalisation of inert C-H bonds is an important but challenging transformation in organic synthesis. Enzymes excel in incorporating an oxygen atom into organic molecules selectively. The well described P450 monooxygenases in particular are extremely active thanks to their iron heme cofactor that can selectively oxidise non activated C-H bonds. These enzymes rely on a complex redox system, whereas peroxygenases utilise only peroxide as oxidant and therefore arise as a better alternative for synthetic chemistry. However, in presence of high concentration of peroxide, the heme cofactor is going through a self-destruction. An in situ generation system is consequently needed to mildly provide the peroxide to the enzyme for performing the catalysis efficiently. All systems are based on the reduction of O2. They all have their own advantages and drawbacks, but have in common the use of rather complex molecules as reductant. The aim of this thesis was to apply formate as an atom efficient reductant for H2O2-dependent enzymes, peroxizymes. The overall system is then the oxyfunctionalisation of molecules using only formate and O2 as reactant.","","en","doctoral thesis","","978-94-6366-272-7","","","","","","","","","BT/Biocatalysis","","",""
"uuid:a855e51e-368f-4f0e-afc0-07acfe8da2b0","http://resolver.tudelft.nl/uuid:a855e51e-368f-4f0e-afc0-07acfe8da2b0","Experimental Investigation of Partial Cavitation","Jahangir, S. (TU Delft Fluid Mechanics)","Poelma, C. (promotor); Westerweel, J. (promotor); Delft University of Technology (degree granting institution)","2020","Cavitation is a well-known phenomenon, occurring in a wide range of applications. In most applications, cavitation is undesirable, such as turbines, pumps, ship propellers and diesel injector nozzles. Cavitation can cause material erosion, flow blockage, noise and degradation of equipment over time. The ability to predict the behavior of this type of flow will be beneficial to a wide range of systems. One complex form of cavitation is the periodic shedding of cavitation clouds. This thesis experimentally describes the mechanisms which are responsible for the periodic shedding of vapor clouds. A converging-diverging nozzle (venturi) is selected as a canonical geometry for this project. The venturi has the highest contraction ratio, due to its shape, which results in a broader dynamic cavitation range. The venturi gives us the ability to precisely differentiate between different cavitation mechanisms due to their more intense nature.","hydrodynamic cavitation; venturi; partial cavitation","en","doctoral thesis","","978-94-6366-280-2","","","","","","","","","Fluid Mechanics","","",""
"uuid:7846bdfe-fcb4-432d-bb53-64959a37db36","http://resolver.tudelft.nl/uuid:7846bdfe-fcb4-432d-bb53-64959a37db36","Thermodynamic properties of the actinide oxides solid solutions: A calorimetric study","Valu, S.O. (TU Delft RST/Applied Radiation & Isotopes)","Konings, R. (promotor); Wolterbeek, H.T. (promotor); Delft University of Technology (degree granting institution)","2020","The work presented in this thesis provides new data on the behaviour of the binary mixed actinide oxides of the following systems: (Th,Pu)O2, (U,Pu)O2, (Th,U)O2, (U,Am)O2. The results were obtained by performing measurements in a large temperature interval, for some compounds the thermodynamic properties were studied in the range from about 2 to 2400 K, from cryogenic to sub-melting state. The goal of the thesis was extending the knowledge of the thermophysical behaviour of the oxides containing different mixture of actinides in various ratios by performing analysis and use the results for providing and improving the foundational knowledge of such materials. Enthalpy, heat capacity, thermal diffusivity and thermal conductivity of the investigated mixed oxides were determined. In addition, the effect of dilution and radiation damage on the magnetic transition present in uranium dioxide-based solid solutions was studied. Since most of the studied samples are highly radioactive, they were prepared in limited quantities in glove boxes, and techniques suitable for measuring thermal properties of small samples (< 100 mg) were used. Experiments have been performed in the low- and high-temperature regimes. Heat capacity was either measured directly or indirectly by measuring the enthalpy increments using different calorimeter types. 
Thermal diffusivity was measured in the temperature range from 500 to 1550 K, with a laser flash instrument, and thermal conductivity was determined from this.","actinide oxides; enthalpy; heat capacity; calorimetry; thermal diffusivity; magnetic disorder","en","doctoral thesis","","978-94-6384-140-5","","","","","","","","","RST/Applied Radiation & Isotopes","","",""
"uuid:89466e36-b579-4943-b3da-b251dd52209f","http://resolver.tudelft.nl/uuid:89466e36-b579-4943-b3da-b251dd52209f","Combining LiDAR and Photogrammetry to Generate Up-to-date 3D City Models","Zhou, K. (TU Delft Mathematical Geodesy and Positioning)","Hanssen, R.F. (promotor); Lindenbergh, R.C. (promotor); Delft University of Technology (degree granting institution)","2020","3D city models are increasingly used to maintain and improve urban infrastructure. Keeping 3D city models accurate and up-to-date is essential for municipalities to make decisions in a time of strongly increasing urbanization. 3D information provided by airborne laser scanning (ALS) is widely used for generating 3D city models. However, ALS data is sparse and irregularly spaced, and not frequently acquired due to its high costs. Airborne camera imagery (ACIM) is an alternative to extract denser but less accurate 3D information. Given these limitations in acquisition frequency and quality, using either ALS or ACIM to generate up-to-date large-scale 3D city models is sub-optimal. Therefore, we combine the complementary characteristics of both data sources to achieve two objectives: (i) 3D change detection and updating of buildings in ALS data using ACIM data, and (ii) improving the planimetric accuracy of building extraction from ALS data using ACIM data. ALS data is integrated with a single image or a single stereo pair for the first objective, and with multiple stereo pairs for the second objective. Our methods are validated over three areas: Vaihingen, Germany, and Amersfoort and Assen, the Netherlands. Shadow in a single image is indicative for a 3D object and is represented in the image by RGB color values. However, these color values are not unique, as they depend on the local conditions, such as material and environment. We propose a supervised machine learning approach, random forest, to effectively characterize the color properties. 
To generate training samples, accelerated ray tracing is used to efficiently reconstruct shadow locations in the image using 3D ALS data. Using shadow alone is not sufficient to detect building changes accurately, as shadows only partially represent 3D information. 3D information can be extracted from corresponding pixels in a stereo pair, but this information is not accurate in shadow and low-texture areas. To address this, we propose LEAD-Matching (LiDAR-guided edge-aware dense matching). It starts by using accurate plane information extracted from ALS data to densify sparse ALS points. Three candidate heights are then obtained for each densified point to guide the dense matching in these problematic areas. Subsequently, detailed building information in the stereo pair is integrated to choose the final optimal height. If the optimal height obtained by LEAD-Matching points to corresponding pixels of different color, a likely building change is found. Test results on the Amersfoort and Assen data show a successful verification of unchanged buildings, while changes are detected starting from 2 × 2 × 2 m³, as conventionally required for large-scale 3D mapping, with F1 scores of 0.8 and 0.9, respectively. To achieve the second objective, we extend LEAD-Matching to multiple stereo pairs, to improve the planimetric accuracy of building extraction in ALS data. E-LEAD-Matching integrates building boundaries of high planimetric accuracy from multiple stereo pairs into the ALS data. Using multiple stereo pairs, occlusions in single stereo pairs are compensated for, while the accuracy of building boundaries is improved. Compared to using ALS alone, the planimetric accuracy of extracted buildings improves from 0.40 m to 0.22 m in Vaihingen, and from 0.48 m to 0.21 m in Amersfoort. This improved planimetric accuracy meets conventional requirements of large-scale mapping. 
Our methods enable us to integrate the beneficial aspects of ALS and ACIM to generate accurate and up-to-date large-scale 3D city models. We anticipate that our research will save both money and time in generating future up-to-date large-scale 3D city models.","Airborne laser scanning; Airborne images; 3D city model; Change detection; Dense matching; Supervised learning; Multi-view Photogrammetry; LIDAR; Photogrammetry; building model; large scale mapping","en","doctoral thesis","","978-94-6366-278-9","","","","","","","","","Mathematical Geodesy and Positioning","","",""
"uuid:03641b5f-f8f6-4ff9-be7f-11948f6d3cc7","http://resolver.tudelft.nl/uuid:03641b5f-f8f6-4ff9-be7f-11948f6d3cc7","Design and Application of Gene-pool Optimal Mixing Evolutionary Algorithms for Genetic Programming","Virgolin, M. (TU Delft Algorithmics)","Bosman, P.A.N. (promotor); Witteveen, C. (promotor); Alderliesten, T. (copromotor); Delft University of Technology (degree granting institution)","2020","Machine learning is impacting modern society at large, thanks to its increasing potential to effciently and effectively model complex and heterogeneous phenomena. While machine learning models can achieve very accurate predictions in many applications, they are not infallible. In some cases, machine learning models can deliver unreasonable outcomes. For example, deep neural networks for self-driving cars have been found to provide wrong steering directions based on the lighting conditions of street lanes (e.g., due to cloudy weather). In other cases, models can capture and reflect unwanted biases that
were concealed in the training data. For example, deep neural networks used to predict the likely jobs and social status of people based on their pictures were found to consistently discriminate based on gender and ethnicity; this was later attributed to human bias in the labels of the training data.","evolutionary algorithms; genetic programming; machine learning; pediatric cancer; radiotherapy","en","doctoral thesis","","978-94-6384-138-2","","","","","","","","","Algorithmics","","",""
"uuid:3a1eca29-7eef-401e-8f77-45003536eb70","http://resolver.tudelft.nl/uuid:3a1eca29-7eef-401e-8f77-45003536eb70","The effect of haptic feedback on operator control behaviour in telemanipulation","Wildenbeest, J.G.W. (TU Delft Human-Robot Interaction)","Abbink, D.A. (promotor); van der Helm, F.C.T. (promotor); Steinbuch, M (promotor); Delft University of Technology (degree granting institution)","2020","class=""MsoNormal"">Telemanipulation systems - in 1925 a vision to remotelytreat patients, today widely adopted in a variety of applications - allow humanoperators to perform tasks which otherwise could not be performed, due to, forexample, limitations with respect to distance (e.g., space), scale (e.g.,surgery or micro-assembly) or hostile environments (e.g., subsea, nuclear).Effectively, a telemanipulation system functions as an extension to the humanoperator’s motor apparatus, in which the mapping between motor commands andhuman hand is shifted to a mapping between motor commands and slave robot.Haptic feedback, both proprioceptive and tactile, is often essential for motorcontrol and motor learning (i.e., building the `mappings'), but may bedistorted or even lost when not appropriately re-engineered. There is, however, no consensus on how todesign haptic feedback to best enable humans to perform practicaltelemanipulated tasks, as no theory or integrated view for human-in-the-loopdesign and evaluation of haptic feedback is available. Empirically, we knowdesign guidelines `depend’ on aspects such as operator talent, training, thetype of task or application, quality of the visual feedback, or taskinstruction. As a result, the design and evaluation of a telemanipulationsystem is heuristic: for each case, the required quality of haptic feedback isdetermined by trial-and-error. This lacuna in design guidelines based onhuman-in-the-loop theory makes telemanipulation performance suboptimal, anddevelopment slow and costly. 
The aim of this thesis is to provide an integrated, human-centered view on the design and evaluation of haptic feedback, which can serve as a basis for generalized haptic feedback design. More specifically, this thesis focuses on the one hand on (i) assessment of haptic feedback design requirements for position and rate control within a uniform evaluation framework, and on the other on (ii) the development of a fundamental understanding of the role of haptic feedback in operator (neuromuscular) control mechanisms, and moreover, on generalizing experimental findings by adapting existing motor-control paradigms and control-theoretic models. To do so, four key human-factor experiments were performed. The first experiment focused on the benefit of haptic feedback for position-controlled telemanipulation scenarios and the impact of task instruction and availability of visual feedback for several fundamental subtasks. In a second experiment the efficacy of four different haptic feedback interface designs for rate control was determined in a similar manner; both studies adopted a uniform evaluation framework, providing an integrated view on requirements for the haptic feedback. We found that such a framework should incorporate at least an (abstract) task taxonomy, a baseline to compare against, task instruction, speed-accuracy trade-offs (i.e. what metrics to look at), performance-control effort trade-offs, operator training, and a control on the quality of visual feedback. Furthermore, these studies showed that the best haptic feedback design to perform a given telemanipulation task predominantly depends on the required task workspace and task accuracy, and the need to reflect back contact transitions. Large workspaces are more easily (i.e. with low workload) covered using rate control, whereas accuracy for positions and forces is higher using position control. Also, an increase in device (i.e. haptic feedback) quality does not always correlate with an increase in task performance. 
This implies the design of haptic feedback should be based on human-centered evaluation, both assessing the problem and validating the solution with the human in the loop. Experiments three and four focused on the effects of haptic feedback on the human operator's motor control mechanisms when controlling a telemanipulation system in free space. In study three, well-established cybernetic models were adopted to study trained movements, and the impact of slave dynamics and scaling of haptic feedback. In the final study, a reach-adaptation paradigm was used to study the role of haptic feedback when learning movements, and the impact of slave dynamics and bandwidth of the presented haptic feedback. These latter two experiments show that haptic feedback substantially affects an operator's underlying motor control mechanisms (i.e. feedback and feedforward control) when controlling a slave system. The effects were observed both in instantaneous improvements of task execution due to feedback of environmental forces or device dynamics, and in task execution improvements over longer periods of time due to improved internal models (i.e. learning); haptic feedback enhances the process of building 'mappings' between human input and a system's response. This suggests that improved haptic feedback quality improves learning rates (i.e. efficacy) and control responses (i.e. efficiency). Future studies should uncover the potential quantitative effects and the time scales at which these effects occur. Additionally, study three showed that the amplitude of haptic feedback can be scaled down without harming task performance: human operators are capable of adjusting their (neuromuscular) control parameters independently of the absolute magnitude (i.e. gain) of the haptic feedback controller. However, when scaling, one should account for reasonable lower boundaries, which putatively may be given by Just Noticeable Differences (JNDs) to keep cues distinguishable. 
Upper boundaries may be given by individual constraints on comfort. These findings were confirmed by the second experiment. Studies three and four illustrate that computational models and paradigms from the motor control literature can be adopted to provide generalizable descriptions of human operator behavior in telemanipulation. Here, we targeted free-space motions for systems like cranes and robot arms, and the tasks are representative of activities in domestic, nuclear or subsea environments. The cybernetic models enable an exclusive understanding of the underlying operator control mechanisms (i.e. feedback and feedforward control) by looking in the frequency domain, as such complementing and enhancing the insights gained from the time-domain data. The reach-adaptation paradigm enables determining the extent to which haptic feedback bandwidth affects motor learning and generalization for different slave dynamics. Moreover, these model-based approaches enable extrapolation of findings and prediction of outcomes when task characteristics change, such that informed a priori design considerations of haptic feedback interfaces and, in the future, haptic support systems can be made.","Telemanipulation; Haptic Feedback; Human Factors","en","doctoral thesis","","978-94-6384-133-7","","","","","","2020-06-08","","","Human-Robot Interaction","","",""
"uuid:62d9adb9-783a-45e3-b212-49c79dba7e13","http://resolver.tudelft.nl/uuid:62d9adb9-783a-45e3-b212-49c79dba7e13","In-Situ Determination of Buildings’ Thermo-Physical Characteristics","Rasooli, A. (TU Delft Building Energy Epidemiology)","Itard, L.C.M. (promotor); Visscher, H.J. (promotor); Delft University of Technology (degree granting institution)","2020","Accurate determination of building’s critical thermo-physical characteristics such as the walls’ thermal resistance, thermal conductivity, and volumetric heat capacity is essential to indicate effective and efficient energy conservation strategies at building level. In practice, the values of these parameters, which determine not only possible energy savings, but also related costs, are rarely available because the current determination methods are time-and-effort-expensive, and consequently seldom used.
This thesis combines theories, simulations, computations, and experiments to develop and improve methods and approaches for the determination of a number of buildings’ most important thermophysical characteristics. First, a modification to the existing standard method, the “ISO 9869 Average Method”, is proposed to measure the walls’ thermal resistance. Two current problems are solved: long measurement duration (weeks) and imprecision. To further shorten the measurement period to a few hours, a new transient in-situ method, the Excitation Pulse Method, EPM (Patent No. 2014467), is then developed and tested. This method allows the determination of the walls’ response factors, which can be applied directly in dynamic models. More importantly, it is used to extract critical construction information including the walls’ thermal resistance, thermal conductivity, volumetric heat capacity, and possible layer composition. Finally, in an attempt to reduce the hassle, cost, and intrusion associated with locally-conducted experiments, the use of data from smart meters and home automation systems is explored. Buildings’ global characteristics, including the heat loss coefficient, global heat capacitance, and daily air change rates, are accordingly determined.","In-Situ Measurement; Thermo-Physical Characteristics; Building Heat Transfer; Thermal Resistance","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-276-5","","","","A+BE I Architecture and the Built Environment No 7 (2020)","","","","","Building Energy Epidemiology","","",""
"uuid:d0e0ef9d-caa5-4794-86b8-ab0eaabd0de5","http://resolver.tudelft.nl/uuid:d0e0ef9d-caa5-4794-86b8-ab0eaabd0de5","Operando spectroscopic methods to study electrochemical processes","Firet, N.J. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Smith, W.A. (promotor); Dam, B. (promotor); Delft University of Technology (degree granting institution)","2020","Fossil resources are being depleted, while the concentration of CO2 in the atmosphere rises. There clearly is a strong and immediate need for a radical energy transition. Renewable energy is growing strongly as supplier of fossil-free electricity, but it needs large-scale and long-term storage options in order to be able to make a real impact on our total energy market. Petroleum is not only a feedstock for energy resources such as kerosene and gasoline, but also for many materials and chemicals that are being used in everyday life. A technology called electrochemical CO2 reduction (CO2R) can provide a solution to both these issues: renewably generated electricity is converted in any carbon-based energy containing molecule. The purpose (large-scale energy storage, mobile energy carrier or chemical) determines the molecule that should be produced. Electrochemical CO2 reduction is demonstrated at the lab scale. In order for it to become economically feasible and scalable to use in industry, additional research is needed to find better catalysts, understand the reactions and to improve cell design. The need for research includes a need for operando studies to investigate the exact physical and structural state of the catalyst under operating conditions. Also, relevant reaction intermediates cannot be studied without operando experiments. This thesis therefore focusses on the question how we can use existing operando characterisation techniques to study electrochemical systems. 
Attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy is used to study reaction mechanisms on silver electrodes. A front-irradiation, surface-sensitive cell design for an electrochemical cell enabling operando X-ray absorption spectroscopy (XAS) studies on modified silver electrodes during CO2 reduction is presented. A guide to conducting operando XAS experiments on gas diffusion electrode-based CO2 reduction catalysts is provided, and its results are demonstrated. Oxygen and vanadium are studied with X-ray Raman scattering (XRS) in order to elucidate the electronic effect of photocharging on bismuth vanadate photoelectrodes for solar water splitting.","Electrochemistry; CO2 reduction; operando spectroscopy; operando ATR-FTIR; X-ray absorption spectroscopy","en","doctoral thesis","","978-94-6384-137-5","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:9da13b0e-3a79-4f1b-bd13-0541d0318b15","http://resolver.tudelft.nl/uuid:9da13b0e-3a79-4f1b-bd13-0541d0318b15","Design and evaluation of simulated reflective thoughts in virtual reality exposure training","Ding, D. (TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); Brinkman, W.P. (promotor); Delft University of Technology (degree granting institution)","2020","Social skills are often an integral part of functioning in modern society, and therefore, in the past decades, social skills training has received considerable research attraction. Various physical and digital approaches have been developed to improve people’s performance in social interaction. Among these approaches, social skills training systems that use multiple technologies (e.g. virtual reality) play an important role. Usually, training systems impart knowledge to users or provide them with the opportunities to learn by doing or by observing. In this thesis, we propose and investigate a novel training approach that aims at simulating the thinking process and providing a stream of thoughts (i.e., virtual cognitions) that people experience during social interaction in virtual reality. Through this approach, users, not only, learn what to do and how to act, but also to understand the underlying reasons why they should behave in a specific manner. Furthermore, users’ beliefs about their capabilities of engaging in social interaction, i.e., their self-efficacy, are also targeted. Several empirical studies were conducted that demonstrate the possibility of generating virtual cognitions and investigate their effects on people’s beliefs and behaviour. 
The findings imply that providing virtual cognitions in virtual reality can work as a novel and promising intervention that improves people’s self-efficacy and teaches them theoretical knowledge concepts in a realistic setting.","Virtual reality; Virtual cognitions; Social skills training; Virtual Reality Exposure Therapy (VRET); Eye-tracking; Inner voice; Behaviour change support system; Human-Computer Interaction; Training system","en","doctoral thesis","","978-94-028-2067-6","","","","","","2023-05-01","","","Interactive Intelligence","","",""
"uuid:63713a7e-b196-4bfc-a91e-e65bfad97fac","http://resolver.tudelft.nl/uuid:63713a7e-b196-4bfc-a91e-e65bfad97fac","The Good, the Cheap, and the Privacy-Friendly: Stakeholder Evaluation of the Effectiveness of Surveillance Technology","Cayford, M.R. (TU Delft Safety and Security Science)","van Gelder, P.H.A.J.M. (promotor); Reniers, G.L.L.M.E. (promotor); Pieters, W. (promotor); Delft University of Technology (degree granting institution)","2020","Surveillance of communications data is a contentious topic, typically centering on privacy vs. security questions. Central to this debate, but often overlooked, is the question of the effectiveness of the surveillance technology. This dissertation focuses on intelligence agencies in the U.S. and the U.K. and the evaluation of the effectiveness of the surveillance technology they employ. It examines three stakeholders – intelligence practitioners, oversight bodies, and the public – and how they treat the question of effectiveness, including considerations of cost and proportionality. The final study considers the role of bureaucracy and its impact on effectiveness evaluation. The dissertation concludes with reflections on additional actors in the effectiveness debate and a discussion on the use of frameworks and the issue of trust.","","en","doctoral thesis","","978-94-028-2068-3","","","","","","","","","Safety and Security Science","","",""
"uuid:b83f1ec0-c346-4eed-b65f-d385cfc92ff7","http://resolver.tudelft.nl/uuid:b83f1ec0-c346-4eed-b65f-d385cfc92ff7","Securing Healthy Circular Material Flows In The Built Environment: The Case Of Indoor Partitioning","Geldermans, Bob (TU Delft Climate Design and Sustainability)","van den Dobbelsteen, A.A.J.F. (promotor); Luscuere, P (promotor); Tenpierik, M.J. (copromotor); Delft University of Technology (degree granting institution)","2020","Departing from two problem statements, one concerning circularity in the built environment and one concerning flexibility in the built environment, this dissertation sets out to answer two main research questions: – In an Open Building division of support and infill, to what extent can the infill contribute to sustainable circular material & product flows? – Which qualitative and quantitative criteria and preconditions are central to integrating the notions of user health & well-being, circularity, and flexibility in infill configurations? In view on these research questions, this dissertation revolves around multiple topics and disciplines, addressing material properties, material flows, product design, and user benefits, relating to a specific building component: non-bearing partitioning. The research follows a mixed-method approach, primarily qualitatively driven and supported by quantitative data and tools. Literature studies, workshops and expert consultations are applied throughout the trajectory to derive, test and adjust criteria, guidelines and design concepts. The dissertation is structured around four research chapters (each set-up as a separate academic article), preceded by a general introduction and background sketch, and followed by an overarching evaluation of the findings.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-275-8","","","","A+BE | Architecture and the Built Environment No 6 (2020)","","","","","Climate Design and Sustainability","","",""
"uuid:dfdfb7e1-795b-4939-be5c-96da4efc21dd","http://resolver.tudelft.nl/uuid:dfdfb7e1-795b-4939-be5c-96da4efc21dd","Visualizing response to DNA damage in bacteria","Deb Roy, S. (TU Delft BN/Nynke Dekker Lab)","Dekker, N.H. (promotor); Delft University of Technology (degree granting institution)","2020","The basis of this thesis has been the curiosity, however modest, to understand how DNA replication happens in vivo, particularly during the onset of DNA damage and beyond. DNA damage is a recurring phenomenon, which a (bacterial) cell faces in its lifetime from the environment or even its inherent metabolism. While we understand much about replication in general from decades of research, our understanding is not comprehensive without understanding how replication is affected, when the cell is under DNA damage and/ or under repair. In terms of genome replication, the effects of DNA damage may be at the level of: a. Replisome components b. Accessory components of the replisome In this thesis and with a limited time span of a PhD research, I (along with my colleagues) have reported on one component each of the two categories stated above in the bacterial model Escherichia coli. In the former case, we have investigated the replicative helicase DnaB and in the latter case, the translesion DNA polymerase IV (Pol IV).","DNA damage; DNA repair; DNA replication; bacterial replisome; translesion polymerases; live cell imaging; single-molecule fluorescence microscopy","en","doctoral thesis","","978-90-8593-439-4","","","","","","","","","BN/Nynke Dekker Lab","","",""
"uuid:ace78c36-d0a3-40d7-bb50-505bce956042","http://resolver.tudelft.nl/uuid:ace78c36-d0a3-40d7-bb50-505bce956042","Methods for brain disease genetics using gene expression data of the healthy brain","Huisman, S.M.H. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Lelieveldt, B.P.F. (promotor); Delft University of Technology (degree granting institution)","2020","Medical studies are rarely easy, and it is especially challenging to understand brain disease. Brains are highly complex organs, and it is, for instance, hard to see the relationships between behavioural change in a person and the changes in the connections among the billions of cells in the brain that cause this behavioural change. Many brain related disorders, such as autism, schizophrenia, and Alzheimer's disease, have some genetic basis. They are influenced by small differences in people's genetic code, which are called variants. Genetic variants can cause differences in the activity or effectiveness of genes. And if genes are involved, knowing which genes these are, and what effect they have can help to find treatments for these diseases.","","en","doctoral thesis","","978-94-6380-844-6","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:016ee80a-fba2-4534-a578-94e7b35022a9","http://resolver.tudelft.nl/uuid:016ee80a-fba2-4534-a578-94e7b35022a9","Horizontal shear flows over a streamwise varying bathymetry","Broekema, Y.B. (TU Delft Environmental Fluid Mechanics)","Uijttewaal, W.S.J. (promotor); Labeur, R.J. (copromotor); Delft University of Technology (degree granting institution)","2020","The Eastern Scheldt storm surge barrier is an icon of Dutch hydraulic engineering. Downstream of the barrier, large local erosion pits (scour holes) have formed adjacent to the applied bed protection after its construction. It was expected during the design phase that these would develop, but both the magnitude of the scour hole depth as well as the present continuation of the scour hole development were not foreseen. In this thesis, the fundamental fluid mechanical behaviour of the flow around the scour holes was studied through a combination of field data analysis, mathematical modelling and laboratory experiments.
Field data showed that the flow near the barrier is characterized by large transverse differences in streamwise velocity, and the flow is thus classified as a horizontal shear flow. As such a flow develops over a locally varying bathymetry in streamwise direction, it showcases non-intuitive behaviour, with potentially large consequences for the ongoing scour development. As a horizontal shear flow develops over a streamwise oriented increase in flow depth, its streamlines either converge or diverge in the horizontal plane. Associated with either horizontal convergence or divergence is the absence or presence, respectively, of vertical flow separation. In case of horizontal convergence and suppression of vertical flow separation, the bed shear stress is shown to be significantly higher compared to a flow that horizontally diverges and vertically separates. The rate of horizontal convergence was shown to be dependent on the increase in flow depth; thus, a positive feedback mechanism is revealed where the presence of a local increase in flow depth sustains or even enhances further development of such a feature. It was demonstrated through mathematical modelling that in case of horizontal convergence and vertical attachment, a streamwise oriented increase in flow depth led to an intensification of the turbulence structures.
The findings from this study were used to explain the observed ongoing growth of the scour holes near the Eastern Scheldt storm surge barrier. In addition, it was hypothesized how the phenomena revealed in this thesis would apply to configurations similar to the Eastern Scheldt storm surge barrier. The results from this thesis may form part of the knowledge base from which design guidelines or (numerical) design tools for protection against scour around hydraulic structures are developed.","Shallow flows; Horizontal shear flows; Turbulence; Variable topography; Stability analysis; Laboratory experiments; Mathematical modelling; Advection-dominated flows","en","doctoral thesis","","978-94-6375-824-6","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:56111690-faa8-4d98-9aba-d4a43fd5e160","http://resolver.tudelft.nl/uuid:56111690-faa8-4d98-9aba-d4a43fd5e160","Ducted wind turbines revisited: A computational study","Dighe, V.V. (TU Delft Wind Energy)","van Bussel, G.J.W. (promotor); Avallone, F. (copromotor); Delft University of Technology (degree granting institution)","2020","Ducted Wind Turbines (DWTs) are one of the many concepts that have been proposed to improve the energy extraction from wind in comparison to bare wind turbines. In reviewing the DWT studies, investigations based on the combined use of theoretical, computational, and experimental techniques have been presented. Although indicated in these studies that the power output of wind turbines can be significantly increased by using surrounding ducts, the factors influencing this power increase, like the duct shape, augmentation add-on’s and yawed inflow conditions, need further investigation. These topics have been addressed in this doctoral thesis. The study presents a computational investigation of DWTs, employing two-dimensional and three-dimensional CFD simulations. To this intent, solutions obtained using panel, RANS, URANS and LB-VLES methods are shown. For reliable solution accuracy, verification and validation assessments are performed when possible. Through parametric investigation, it is found that the aerodynamic performance of the DWT can be improved by increasing the duct cross-section camber and a correct choice of turbine thrust force coefficient, whilst maintaining the same duct-exit-area ratio. The aerodynamic performance improvement for a DWT directly corresponds to the dimensionless duct thrust force coefficient. Flow analysis showed that flow separation when detected inside of the duct, reduces the duct thrust force coefficient and ultimately the aerodynamic performance of the DWT model. 
In an effort to further improve the aerodynamic performance of the DWT, the effects of multi-element ducts and a Gurney flap on the existing DWT models are investigated. The aerodynamic performance improvement with multi-element ducts strongly depends on the installation settings of the secondary duct element with respect to the primary DWT geometry. On the other hand, a Gurney flap retrofitted at the trailing edge of the duct improves the aerodynamic performance of the DWT model by delaying inner duct wall flow separation, thus increasing the mass flow rate at the turbine. Finally, the effects of yawed inflow conditions on the aerodynamic and aeroacoustic performance of DWT models are studied in detail. The analysis showed that DWTs can demonstrate yaw insensitivity up to a specific yaw angle. The yaw insensitivity for the DWT model, however, strongly depends on the aerodynamic mutual interaction between the duct and turbine, which changes with the duct geometry, turbine configuration and yaw angle. While assessing the aeroacoustic performance of the DWT models, it is found that the DWT model with a highly cambered duct cross-section generates higher broadband noise levels, which result from the turbulent flow structures convecting along the surface of the duct.","ducted wind turbine; CFD; aerodynamics; aeroacoustics","en","doctoral thesis","","978-94-6380-816-3","","","","","","","","","Wind Energy","","",""
"uuid:4861afac-3701-47fa-afcf-2fe14ec07e1e","http://resolver.tudelft.nl/uuid:4861afac-3701-47fa-afcf-2fe14ec07e1e","Optimizing the operation of a multiple reservoir system in the eastern nile basin considering water and sediment fluxes","Digna, R.F.M.O. (TU Delft Water Resources)","van der Zaag, P. (promotor); Uhlenbrook, S. (promotor); Mohamed, Y.A. (copromotor); Delft University of Technology (degree granting institution)","2020","The Eastern Nile (EN) riparian countries Egypt, Ethiopia and Sudan are currently
developing several reservoir projects to contribute to the needs for energy and food
production in the region. The Nile Basin, particularly the Eastern Nile Sub-basin, is
considered one of the international river systems with potential conflicts between riparian countries. Indeed, the Eastern Nile is characterized by a high dependency of downstream countries on river water generated in upstream countries.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-56441-4","","","","","","","","","Water Resources","","",""
"uuid:98e1ef84-91d0-4ee0-b37b-d7ab794cb367","http://resolver.tudelft.nl/uuid:98e1ef84-91d0-4ee0-b37b-d7ab794cb367","Understanding Levee Failures from Historical and Satellite Observations","Özer, I.E. (TU Delft Hydraulic Structures and Flood Risk)","Jonkman, Sebastiaan N. (promotor); Hanssen, R.F. (promotor); Delft University of Technology (degree granting institution)","2020","Flood defense systems are critical in protecting against catastrophic events which often lead to significant damage, fatalities or substantial socio-economic and environmental impact. Even though levees form a significant part of the existing flood defense systems, there is a limited knowledge of the different levee behavior processes and the critical factors contributing to their failures. This research demonstrates the importance of collecting and analyzing historical failure events to gain new insights into levee failure mechanisms, and shows how satellite technology can both provide useful information on the deformation behavior of levees and be applicable for early warning systems.","Flood defenses; levee failures; levee deformation; Flood risk management; Satellite Radar Interferometry; InSAR time series analysis","en","doctoral thesis","","978-94-6384-135-1","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:75d0ae4c-afbc-4b7a-814d-8bbb3642b95f","http://resolver.tudelft.nl/uuid:75d0ae4c-afbc-4b7a-814d-8bbb3642b95f","Development of A Composite Indicator for Measuring Company Performance from Economic and Environmental Perspectives: A Study on Motor Vehicle Manufacturers","Zeng, Q. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Santema, S.C. (promotor); Beelaerts van Blokland, W.W.A. (copromotor); Delft University of Technology (degree granting institution)","2020","Company performance measurement is fundamental for decision-makers to monitor a company's performance and to solve management problems. The evolution of company performance measurement tools started from a pure financial-biased framework. The first generation of company performance measurement tools was achieved through supplementing the traditional financial measures with non-financial measures. The second generation addressed the dynamic of value creation by investigating transformations of resources. Both the first and the second generation showed appropriateness in how they reflect the realities in companies. The third generation emphasized the business-oriented methodology to real free cash flow activities. This dissertation, that will present a fourth generation company performance measurement tool, has a focus on motor vehicle manufacturers (MVMs) due to its economic significance and its environmental impact during vehicles' production.","multi-criteria decision-making; composite indicators; environmental concerns; motor vehicle manufacturers; time series analysis; benchmarking","en","doctoral thesis","TRAIL Research School","978-90-5584-266-7","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:2332e125-c9c4-443c-9a18-0f8fd1c2f85e","http://resolver.tudelft.nl/uuid:2332e125-c9c4-443c-9a18-0f8fd1c2f85e","Preserving Confidentiality in Data Analytics-as-a-Service","Tillem, G. (TU Delft Cyber Security)","Lagendijk, R.L. (promotor); Erkin, Z. (copromotor); Delft University of Technology (degree granting institution)","2020","The enhancements in computation technologies in the last decades enabled businesses to analyze the data that is collected through their systems which helps to improve their services.
However, performing data analytics remains a challenging task for small- and medium-scale companies due to the lack of in-house experience and computational resources.
The Data Analytics-as-a-Service (DAaaS) paradigm provides such companies with outsourced data analytics: a company specialized in data analytics offers its knowledge and computational resources to other companies that need data analytics for their businesses.
A major challenge in DAaaS is preserving the privacy of the outsourced data, which might contain sensitive customer or employee information or the intellectual property of the outsourcing company. Leakage of sensitive information has several consequences for both the outsourcing and the service provider companies, such as legal obligations, loss of reputation, and financial loss. Therefore, a well-functioning outsourced analytics service should achieve several data protection measures such as confidentiality, integrity, and availability.
In this thesis, we focus on the preservation of confidentiality in data analytics-as-a-service applications. We select three analytics applications that are becoming popular in outsourced data analytics: process analytics, machine learning, and marketing analytics. Although several other techniques are commonly used in outsourced data analytics, we decided to focus on the algorithms of process analytics, machine learning, and marketing analytics, since the privacy concerns in these analytics have not been investigated thoroughly.
In confidential data analytics-as-a-service, our goal is to achieve confidentiality by protecting input/output privacy and maintaining the correctness and efficiency of analytics computations. To protect the privacy of data we use two secure computation techniques, which are homomorphic encryption and secure multiparty computation. To assure correctness, we propose several hybrid protocol designs that minimize the loss of accuracy in computations. For the efficiency of our protocols, we use several optimization techniques that reduce the computation and communication costs of private data analytics. Our protocols show promising results for confidential data analytics in the outsourced setting.","Data Analytics; Secure Computation; Confidentiality","en","doctoral thesis","","978-94-028-2044-7","","","","","","","","","Cyber Security","","",""
"uuid:dbbfe1fc-bf63-45f0-8cf2-28ed7dab90eb","http://resolver.tudelft.nl/uuid:dbbfe1fc-bf63-45f0-8cf2-28ed7dab90eb","Semantic-Enhanced Training Data Augmentation Methods for Long-Tail Entity Recognition Models","Mesbah, S. (TU Delft Human-Centred Artificial Intelligence)","Houben, G.J.P.M. (promotor); Bozzon, A. (promotor); Lofi, C. (copromotor); Delft University of Technology (degree granting institution)","2020","Named Entity Recognition (NER) is an essential information retrieval task. It enables a wide range of natural language processing applications such as semantic search, machine translation, etc. The NER can be formulated as the task of identifying and typing words or phrases in a text that refers to certain classes of interest (e.g., disease, Adverse Drug Reactions). There are different techniques to tackle NER, such as dictionary-based, rulebased, and machine learning-based. Machine learning-based NER techniques have shown to perform the best for entities with large amounts of human-labeled training datasets.
However, their performance is limited when dealing with long-tail entities. Long-tail entities are entities that have a low frequency in the document collections and usually have no reference to existing Knowledge Bases. Obtaining human-labeled datasets is expensive and time-consuming, especially for long-tail entities that are scarcely available in document collections. This dissertation focuses on the problem of the lack of training data, arguably the largest bottleneck in training machine learning-based NER techniques. We investigated efficient and effective ways to augment training data by enhancing their size and quality automatically. Our work aimed at showing how, by enhancing the size and quality of the training data using different techniques, it will be possible to improve the performance of Long-tail Entity Recognition (L-tER).","Long-tail Name Entity Recognition; Semantic Enrichment; Training Data Augmentation","en","doctoral thesis","","978-94-6380-808-8","","","","","","","","","Human-Centred Artificial Intelligence","","",""
"uuid:d78075cb-7614-4248-b3f5-65a5fd4b3586","http://resolver.tudelft.nl/uuid:d78075cb-7614-4248-b3f5-65a5fd4b3586","Multi-pinhole Molecular Breast Tomosynthesis: from Simulation to Prototype","Wang, B. (TU Delft RST/Biomedical Imaging)","Beekman, F.J. (promotor); Goorden, M.C. (promotor); Delft University of Technology (degree granting institution)","2020","Breast cancer, being the most common cancer among females, is nowadays routinely diagnosed using X-ray mammography. Though this technique has proven its effectiveness in many cases, X-ray mammography has some disadvantages like reduced diagnostic sensitivity for dense breasts, need for strong breast compression and inability to assess tissues at the molecular level.
Therefore, there is a need for alternative imaging modalities to improve breast cancer diagnosis. One option is breast scintigraphy, which uses a planar gamma detector to image the distribution of radiolabelled molecules, called tracers, that concentrate in tumours in the breast. Different tracers take part in different physiological processes in tumours; imaging a specific tracer can therefore reveal the pathological process characteristic of a certain kind of breast tumour. Despite the fact that breast scintigraphy has been reported to have improved diagnostic sensitivity in dense breasts compared to X-ray mammography and does not require strong compression, it offers only 2D images and information on the third dimension is thus lost. In this research we proposed a molecular breast tomosynthesis scanner which provides 3D images of the radiotracer distribution in the breast. In the proposed system, the patient would lie prone on a patient bed with a hole in which the breast is inserted. Subsequently, two gamma cameras equipped with multi-pinhole collimators (the technique is therefore called multi-pinhole molecular breast tomosynthesis, MP-MBT) scan the pendant breast from both sides.
To estimate the performance of MP-MBT, the system was modelled in Monte Carlo simulations in a clinically realistic setting. The results assured us that it was worth building a prototype of MP-MBT to further investigate its imaging capability. In addition, voxelized raytracing (VRT) software developed earlier in our group to accelerate simulations and facilitate system optimisations was validated with the Monte Carlo simulation results. Subsequently, VRT was used in further studies in this project.
The promising results of MP-MBT simulations partly relied on a gamma detector with high spatial linearity over the whole detector surface. However, conventional gamma detectors used in clinical practice have large dead edges, i.e. about 4 cm from the detector edges is unusable, and a detector with small dead edges would be very expensive, which may make MP-MBT a less competitive technology. Therefore, in order to have a gamma detector suitable for MP-MBT, we came up with a few different designs with NaI(Tl) scintillators and photomultiplier tube (PMT) array readouts and evaluated their performances with Monte Carlo simulations. From the simulation results, we eventually chose a design with a staggered layout of 15 square PMTs, among which two PMTs detected the optical photons from the scintillator through extra-long additional light-guides. This gamma detector was built in our lab, and it turned out to have only about 15 mm dead edge (mainly due to the 12 mm sealing).
The customised gamma detector was equipped with a lead multi-pinhole collimator design based on previous research. The whole gamma camera was mounted on a robot arm to create a movable scanner. We calibrated the scanner with a point source and scanned a resolution phantom and a breast phantom to evaluate MP-MBT's performance. In the phantom study, the scanner showed the capability of detecting tumours down to 5 mm when a realistic tracer (technetium sestamibi) concentration was administered.
However, the current prototype is still far from a device that can be used in the clinic, and we have found several problems with MP-MBT, especially the noise pattern in the reconstructed images, which should be given special attention in future research.","pinhole collimator; gamma detector; Monte Carlo simulation; breast imaging; SPECT imaging; molecular imaging","en","doctoral thesis","","9789463662734","","","","","","","","","RST/Biomedical Imaging","","",""
"uuid:4aff0adf-4b89-42bd-8fd6-d338d5b0363b","http://resolver.tudelft.nl/uuid:4aff0adf-4b89-42bd-8fd6-d338d5b0363b","Directional Wavefield Decomposition of Snapshots of Acoustic Wavefields","Holicki, M.E. (TU Delft Applied Geophysics and Petrophysics)","Drijkoningen, G.G. (promotor); Wapenaar, C.P.A. (promotor); Delft University of Technology (degree granting institution)","2020","In acoustic exploration and monitoring imaging plays a critical role in uncovering structure and minute changes therein. It is, however, often hampered by unfulfilled assumptions. One such assumption is that estimated incident and reflected wavefields at a reflector travel in opposite directions with respect to the reflector surface-normal vector. In Reverse Time Migration (RTM) this is often not the case due to the common use of scattering modelling operators to estimate incident and reflected wavefields at an interface. This results in artefacts in the output RTM image. A partial remedy to these artefacts is to directionally decompose wavefields during the RTM imaging step. This allows for the decomposition of the incident wavefield at a horizontal interface into a down-going wavefield, which in conjunction with the upgoing reflected wavefield can be used to estimate the reflectivity at the interface. Directional wavefield decomposition classically decomposesmulticomponent wavefields recorded along a horizontal surface into up- and down-going wavefields. However, not all techniques implicitly use wavefields recorded along flat surfaces. RTM, for example, commonly works on snapshots of a wavefield, hence the classical decomposition techniques are hardly applicable. Other techniques have been developed to solve this problem, but still only decompose into up- and down-going wavefields as it is assumed that the media of interest only vary in the vertical direction. None allow for an elegant decomposition of a wavefield according to all possible travel directions. 
In this thesis we develop a snapshot acoustic directional wavefield decomposition technique with an emphasis on the fact that the method works on snapshots of wavefields in time and that the direction with respect to which the decomposition occurs is arbitrary, and not simply the vertical direction as in up-down decomposition. We demonstrate how to directionally decompose an acoustic wavefield according to the directions of propagation of its constituent plane waves by showing how to separate a wavefield into its constituent plane waves. This allows for approximate wavefield decomposition in arbitrary media, even normal to interfaces. This is an obvious boon for imaging complex structures, as imaging should occur normal to surfaces and not only in the vertical direction.","Acoustic; Snapshot; Wavefield Decomposition; Imaging; RTM","en","doctoral thesis","","978-94-6366-277-2","","","","","","2020-05-19","","","Applied Geophysics and Petrophysics","","",""
"uuid:de78dfd6-f63b-4908-b09c-921cb7ea937e","http://resolver.tudelft.nl/uuid:de78dfd6-f63b-4908-b09c-921cb7ea937e","Novel approaches to produce radionuclides using hot atom chemistry principles","Moret, J.L.T.M. (TU Delft RST/Applied Radiation & Isotopes)","Wolterbeek, H.T. (promotor); van Ommen, J.R. (promotor); Denkova, A.G. (copromotor); Delft University of Technology (degree granting institution)","2020","Radionuclides are often used in the field of nuclear medicine. For some deceases the use of radionuclides is the best possible care, or even the only means of diagnosis or treatment. For these medical applications high specific activity (high activity per unit of mass) is required. Commonly, medical radionuclides are man-made. They can either be produced by neutron activation, charged particle or photon activation or by means of radionuclide generator. Furthermore, hospitals prefer an ‘on demand’ supply. A radionuclide generator is ideal. Radionuclide generators can also be used to produce high specific activity. Conventional radionuclide generators work with the principle that the mother and daughter radionuclide have different electrostatic interactions with the column material. This allows for easy elution of the daughter radionuclide. However, when working with chemical identical mother-daughter radionuclide pairs (e.g. 177mLu / 177Lu) another separation principle is required. Utilising ‘hot atom’ chemical principles such a mother-daughter pairs can be separated. ‘Hot atom’ principles describe the chemical effects that occur due to nuclear interactions or due to decay. An example of these effects is bond rupture. The effective range of those principles is rather limited, requiring thin layers. A possible technique to apply these thin layers is atomic layer deposition (ALD). ALD is commonly used in the semi-conductor industry, but can due to its versatility also be used in the field of catalysis or pharmaceutical. 
The advantage of using ALD is that this gas-phase deposition technique allows for thin conformal coating of complex structured materials. Furthermore, the amount of material that is deposited can easily be adapted to need because ALD is a self-limiting process. In this thesis the usefulness of ALD in combination with radionuclide production is explored. Because of the versatility of ALD it can also be used to create target materials for charged particle activation and enrichment experiments (Chapter 2). This versatility is illustrated by three case studies, namely the production of targets for 64Cu production, the production of 177Lu by means of a radionuclide generator, and the production of 99Mo using three different routes. Also described is how ALD can be used to alter the surface chemistry of high surface area materials to increase their adsorption capacity for Mo (Chapter 3). The obtained particles with an alumina coating are then tested for their adsorption capacity and compared to acid-activated alumina, the currently used material in 99Mo/99mTc radionuclide generators. The adsorption capacity of the obtained particles is twice that of acid-activated alumina, with a 99mTc elution efficiency of 55%. Furthermore, the coating of nanoparticles with Lu for the preparation of a radionuclide generator is described (Chapter 4). ALD allows for a deposition of up to 15 wt% Lu. Finally, the gamma dose received during neutron activation has an influence on the specific activity produced (Chapter 5). Using Cu(II)-phthalocyanine as a target it is shown that an increase in gamma dose during neutron activation results in an increase in Cu release and hence a decrease in the specific activity obtained.
Deep-water depositional systems in the subsurface store hundreds of billions of barrels of hydrocarbons in different sedimentary basins worldwide. The sediment density flows that form these systems, termed turbidity currents, are capable of destroying subsea installations, such as telecommunication cables, due to their high volume and velocity at the ocean bottom. Therefore, understanding what controls these systems is important in order to make predictions of their occurrence and behaviour. Because recent deep-water systems are hundreds to thousands of meters below sea level, direct observation of the flow processes and deposits produced by them is rarely possible. A few flow measurements have been obtained by instruments installed on the ocean floor, but they are commonly destroyed by the violence of the flows. Seismic and sonar imaging from the sea bottom are some of the tools that permit the identification of deep-water deposits and their characteristics at a relatively low resolution. Therefore, most data are based on descriptions of deposits from exhumed systems or on interpretations of seismic reflection data, which might be calibrated with well log and core data from subsurface systems. From these datasets, hypotheses are usually proposed to explain the formation processes of the deposits. Physical laboratory experiments and process-based numerical models are supplementary study approaches that simulate these sediment density flows and their deposits, thereby suggesting ties that relate the flow processes to the deposits. These different approaches provide information at different scales of observation. Comparing and integrating them is a useful way to better understand the controlling processes of these deep-water depositional systems. One of the main controls on the depositional architecture of deep-water systems is the relief of the seafloor over which turbidity currents pass, which influences the location of erosion, bypass, and deposition.
Seafloor topography can be complicated, and subtle changes may influence flow behaviour and impact the geometry and distribution of the deposits. Stepped submarine slope profiles (sensu Prather, 2003) are one of the most common types of slope-to-basin profiles observed in modern and ancient deep-water systems. Many studies of stepped-slope systems are based on seismic reflection data and a few on outcrop observations, which focus on the description of the geometries of the deposits. However, no studies in the literature report numerical simulations on this subject. In this thesis, two natural systems deposited on a stepped slope were analysed. The first is from outcrops of the stratigraphic Units D and E of the Laingsburg-Karoo Basin (South Africa), where it is possible to observe depositional patterns across scales from beds to the depositional architecture of each system. The second case is from a subsurface reservoir system offshore in the Campos Basin (Brazil), where the depositional architecture was interpreted through 3D seismic reflection data, integrated with well log and core data. To complement these studies, numerical simulations of multiple flow events using the FanBuilder software (Groenenberg, 2007) were performed using five different bathymetries, varying their slope gradients in one set of simulations, and the degree of confinement in the other. The results of these three data sources were compared with respect to their sand distribution, their geometries, the vertical and lateral stacking patterns, and the connectivity of the sand bodies. Based on these comparisons, conclusions were drawn regarding the geological controls involved in the sediment distribution, from local controls (autogenic processes) to external ones (allogenic processes).
The main results of the present study are as follows:
• Intraslope lobes are more channelized than basin-floor lobes and their stacking patterns are less influenced by changes in slope gradient;
• Their sediment depocentre extends further on the intraslope step with increasing slope gradients;
• Lobe compensation is scale-dependent and the shifts between depositional units increase in distance from bed to lobe complex scale;
• The controls of the patterns in lateral shifts are mostly related to autogenic processes at the smaller scale and tend to be more allogenically controlled at larger scales;
• Slope confinement plays a stronger role than slope gradient in determining whether a system is sand-attached or sand-detached;
• The spatial and temporal change from a sand-attached to a sand-detached system depends on allogenic controls such as tectonics;
• The opposite change is likely to occur only through depositional processes;
• Stepped-slope systems differ from mini-basin fills in several aspects, such as the mud content and the depositional architecture;
• The slope gradients control deposition on stepped-slope systems, while in mini-basins this is primarily the slope confinement.","turbidity currents; stepped slopes; depositional architecture; numerical simulation; Campos Basin; Karoo Basin","en","doctoral thesis","","978-94-6402-288-9","","","","Dr. J. E. A. Storms of Delft University of Technology has contributed greatly to the preparation of this dissertation.","","","","","Applied Geology","","",""
"uuid:6e67ef39-843c-43a5-bd39-02f24ab23c18","http://resolver.tudelft.nl/uuid:6e67ef39-843c-43a5-bd39-02f24ab23c18","Mechanical Snakes: Path-Following Instruments for Minimally Invasive Surgery","Henselmans, P.W.J. (TU Delft Medical Instruments & Bio-Inspired Technology)","Breedveld, P. (promotor); Smit, G. (copromotor); Delft University of Technology (degree granting institution)","2020","Surgical procedures are inherently invasive as they require the surgeon to cut into the body to create a surgical pathway towards the diseased area, resulting in surgical trauma for the patient. The field of Minimally Invasive Surgery (MIS) strives to reduce surgical trauma by minimizing the size and number of incisions. The used instrumentation plays an important role in this pursuit. Instrumentation that is currently in use is either straight and rigid, demanding a straight surgical pathway, or flexible, allowing for multi-curved surgical pathways. The currently existing flexible instruments, such as, for example, a catheter guided by the blood vessel wall, rely on external support and guidance from the anatomical environment. The ability to follow multi-curved surgical pathways without the need for anatomical guidance extends the reach of surgery and is especially useful in less accessible areas such as, for example, the human skull base. The skull base is a dense anatomical area that, next to important structures such as the pituitary gland, supports a network of fragile nerves and blood vessels. In such a delicate anatomical environment, flexible instruments cannot find the necessary external support and guidance. This implies a need for instrumentation that is not only flexible, but also steerable. A logical next step is the development of steerable snake-like instruments that can follow multi-curved pathways through the body without the need for external support or guidance from the anatomical environment. 
This kind of functionality is new in surgery and a topic of research at multiple research institutes around the world. Nevertheless, solutions that are thin, stiff and affordable are not yet available. Similar to a biological snake that continuously adapts the shape of its entire body as it moves forward, the shape of a snake-like instrument also needs to be fully controllable. In practice, this will require multiple elements of the instrument to be controlled simultaneously. Humans are not particularly good at this kind of multi-tasking, while robots may excel at such tasks. Therefore, when trying to solve control problems concerning snake-like motion, a natural tendency exists to search for robotic solutions. Medical instrumentation does, for obvious reasons, have to meet high-quality standards. As a consequence, medical-grade robotics tend to be very expensive. The objective of this thesis is, therefore, to explore the possibilities for mechanically-controlled solutions for path-following cable-driven instruments that are suitable for surgical applications.","","en","doctoral thesis","","9789464022131","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:a33fa2a9-f347-40a3-96be-51e880018974","http://resolver.tudelft.nl/uuid:a33fa2a9-f347-40a3-96be-51e880018974","On the free-surface vortex driven motion of buoyant particles","Duinmeijer, S.P.A. (TU Delft Sanitary Engineering)","Clemens, F.H.L.R. (promotor); Oldenziel, G. (copromotor); Delft University of Technology (degree granting institution)","2020","This dissertation presents experimental and theoretical research on the motion of buoyant particles in the flow of a free-surface vortex at moderate to high particle Reynolds numbers.","Free-surface vortex transport; Buoyant particles; Taylor-column; 2D/3D-PTV; Stereo PIV; Helical motion; Vortex core; Particle motion","en","doctoral thesis","","978-946366-271-0","","","","","","","","","Sanitary Engineering","","",""
"uuid:23c845e1-9546-4e86-ae77-e0f14272517b","http://resolver.tudelft.nl/uuid:23c845e1-9546-4e86-ae77-e0f14272517b","Fourier Optics Field Representations for the Design of Wide Field-of-View Imagers at Sub-millimetre Wavelengths","Dabironezare, Shahab Oddin (TU Delft Tera-Hertz Sensing)","Llombart, Nuria (promotor); Neto, A. (promotor); Delft University of Technology (degree granting institution)","2020","","Fourier Optics; Geometrical Optics; Quasi-Optical Systems; Wide Field of View Imagers; Lens Antennas; Reflector Antennas; Wide Band Imagers; Sub-millimetre Systems","en","doctoral thesis","","978-94-028-2047-8","","","","","","","","","Tera-Hertz Sensing","","",""
"uuid:0dec7916-fa0c-44ef-85ef-0694d3eb25b3","http://resolver.tudelft.nl/uuid:0dec7916-fa0c-44ef-85ef-0694d3eb25b3","Coupling of genotype-phenotype maps to noise-driven adaptation, showcased in yeast polarity","Daalman, W.K. (TU Delft BN/Liedewij Laan Lab)","Dogterom, A.M. (promotor); Laan, L. (copromotor); Delft University of Technology (degree granting institution)","2020","One of the biggest scientific challenges to be tackled this century is how traits of living organisms originate from genes, the so-called genotype-phenotype map, and conversely how traits influence genes through a process called evolution. The solution will yield a large societal impact, with applications in food (e.g., engineering drought-resistant crops), industry (e.g., material production through microorganisms) and health care (e.g., personalized medicine). The complexity of the genotype-phenotype map lies in how it typically spans multiple, interwoven scales (e.g., in size). This dissertation builds on the ambition that ultimately, a solution is found by generalizing from simpler systems. Therefore, we unravel here the map for a tractable example, polarization in budding yeast, and show how evolution can couple to the map.
During polarization, the unicellular organism budding yeast chooses a direction in which it will divide. This involves self-organizing many proteins, in particular Cdc42p, to a single region on its cell membrane. While starting on the molecular scale, the process ultimately affects population traits such as doubling time. To understand the transition in scales in detail, we start bottom-up by experimentally verifying the molecular theory behind polarity success for different genetic backgrounds. The theoretical model treats, amongst others, proteins that activate Cdc42p, which are mechanistically included for the first time. Concretely, we test resulting predictions on sharp lower Cdc42p concentration bounds for viability using, inter alia, growth assays on strains variably producing fluorescent Cdc42p. The experiments confirmed the theory that allows reconstitution of molecular mechanisms underlying polarity establishment.
To advance to population traits, I constructed a tractable growth model, fed by simple rules emerging from the aforementioned theory (only implicitly encompassing the molecular information). Essentially, Cdc42p is stochastically produced, diluted by basic volume expansion, and must exceed a concentration threshold to divide. Despite disregarding many details, quantitative agreement is reached between unintuitive, experimentally validated traits documented in the literature and those from model simulations.
The simplicity of the model assumptions also allows new insights into evolution. I elaborate theoretically how lucky cells that by chance produce above-average amounts of protein proliferate better and thereby bias the observed population. Therefore, protein levels promptly adapt non-genetically, also in response to e.g., environmental changes, in a reversible and almost automatic manner. Based on existing experimental data, I predict this noise-based mechanism to notably expand the ease of evolution for essential genes (in yeast for 25%-60% of these). Due to its simple nature, I conjecture that it should be found in many organisms.
In conclusion, we find a successful strategy to tractably analyze the genotype-phenotype map in yeast polarity. The map can be expanded to functions other than polarity, provided that sufficient bio-functional information is available. The analysis also elucidates a new evolutionary coupling to this map. At a step above genes, noisy protein production can freely be utilized for short-term adaptation. Experimentally confirming the presence of this evolutionary mechanism in other model systems, and applying to these the same strategy to predict traits, will generate a more complete picture of how traits of living systems are formed and shaped by evolution.
First, the project explored individual preferences for information and communication among patients. It was concluded that in addition to the profiles, an individual patient’s mind-set (e.g. insecurity or anxiety regarding the surgery), and their social support needs, in combination with their physical condition and medical history, should guide the provision of tailored information and communication services. Subsequently, several prototypes were developed and evaluated with patients and care providers: storyboards, paper-based prototypes, and a fully functional web application that informs THA patients about their activity levels after surgery. The final study explored the use and evaluation of the web application by different profiles. It was concluded that the profiles are an adequate segmentation that, combined with customized features, can be used to design tailored information tools in THA. However, to increase the relevance of the tailored information, it should align with the course of recovery (e.g. post-surgery complications). Resolving generic technical and usability issues is also essential.
The profile-specific design guidelines that resulted from this thesis can be used by the creative industry and healthcare providers to tailor products and services for THA patients. They are also available online at www.medisigntudelft.nl/research/patientprofiles.","E-Health; Research through Design (RtD); Tailoring; Patient experience; patient education; Personalisation; Personalised healthcare; Orthopaedic surgery","en","doctoral thesis","","978-94-028-2019-5","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:4345c365-efd6-49e1-975d-3e66028a8e53","http://resolver.tudelft.nl/uuid:4345c365-efd6-49e1-975d-3e66028a8e53","Smart optics against smart parasites: Towards point-of-care optical diagnosis of malaria and urogenital schistosomiasis","Agbana, T.E. (TU Delft Team Raf Van de Plas)","Vdovin, Gleb (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2020","Malaria remains an important cause of high morbidity and mortality worldwide. According to the World Health Organisation (WHO) malaria report for 2017, malaria accounted for the death of 435,000 people. It is the leading cause of death among pregnant women and young children. 11% of maternal and 20% of under-five deaths are attributed to malaria every year. Malaria transmission is currently active in 95 countries, putting the lives of 3.2 billion people at risk. 40% of the malaria-related deaths are linked to Nigeria and the Democratic Republic of the Congo. Since malaria symptoms are generally non-specific and usually overlap with the symptoms of other febrile illnesses, clinical diagnoses are typically presumptive and often result in a high number of false positives, which potentially leads to the abuse of antimalarial drugs. The consistent abuse of antimalarial drugs has led to drug resistance, which is a major concern in the current global malaria control and elimination efforts. The WHO therefore recommends that an effective malaria case management plan must be predicated on a standard parasite-based confirmatory diagnostic test. Conventional light microscopy is the recommended reference diagnostic standard prescribed by the World Health Organisation. 
This method is particularly of interest because it allows parasite species differentiation, quantification of the parasite density in a given blood smear, high accuracy (although this depends on the expertise of the microscopist), low direct cost, and visualization of different stages of the parasite's development. While well-equipped laboratories for malaria diagnosis are commonly available in developed urban and peri-urban areas, low-resource settings of malaria endemicity usually have very limited options. The recommended standard microscopy is less accessible in resource-limited settings because of the following: lack of required technical skills, incessant power outages, lack of efficient maintenance capability, delayed diagnosis due to intense workload, and inaccuracies due to manual counting of the parasites detected in the blood film. The inaccuracies of parasite density estimation eventually affect the accuracy and efficiency of the prescribed treatment, which could have fatal consequences. A diagnostic process is termed inconclusive by the WHO unless a minimum of 100 measurements (microscopy examination of 100 high-powered fields) has been performed on a prepared thick blood film. For a thin blood film, which provides more detail about the morphology of the parasite, an average of 800 measurements is required. This is an easy task for laboratory technologists in malaria non-endemic countries where an average of 120 malaria cases occur yearly. But for malaria-endemic countries where several thousand cases are reported daily, this is by no means an easy task, as it demands full concentration, time, high expertise and experience. To realize the current global effort to reduce the heavy malaria burden, the need for a reliable, efficient, accurate and automated point-of-care diagnostic tool cannot be overemphasized. 
The focus of this thesis work, therefore, is to develop smart optical methods that alleviate the burden of manual microscopy by researching methods to optimise existing imaging modalities which can be integrated with smart algorithms for quick malaria parasite detection in infected patients. Aside from malaria, schistosomiasis is the second most common parasitic disease. Although it falls into the category of a Neglected Tropical Disease (NTD), 220.8 million people required preventive treatment in the year 2017 according to the World Health Organisation report. It is a disease of the poor, prevalent in tropical and subtropical areas and particularly common in communities without access to clean drinking water and proper sanitation. 779 million people are at risk of contracting this disease, which results in impaired growth and development, diminished physical fitness, bladder cancer and decreased neurocognitive abilities. Although safe and effective medication is widely available for treatment, accurate diagnostic techniques for schistosomiasis are hugely underdeveloped and remain a critical challenge. Intestinal and urogenital schistosomiasis are the two variants of this Neglected Tropical Disease, but in this research we focus on urogenital schistosomiasis (caused by S. haematobium) because it is the most prevalent among the population we worked with and also because it is easier to detect in urine. The diagnostic protocol for S. haematobium prescribes urine filtration with WHO-recommended standard membrane filters (with 12 μm pore size). Several critical measurements by an expert must be done to detect the targeted foreign bodies (parasite eggs) in the urine samples before a reliable conclusion can be made. Also, for a confirmatory diagnosis, it is standard practice to examine different samples collected from the patient at different specific intervals. 
This is particularly recommended to increase the amount of sample analysis per patient, thereby increasing the sensitivity of the test. Since this process involves the microscopy examination of filtered urine samples, it is also limited by the challenges already described for standard malaria microscopy. Although several antigen- and antibody-based rapid diagnostic test kits have been developed for both malaria and schistosomiasis, the reliability of these diagnostic tests is still a major concern. This thesis is aimed at the development of reliable, robust, accurate, cost-effective and easy-to-use point-of-care optical devices for quick diagnosis of malaria and urogenital schistosomiasis in human samples. This thesis begins by looking at light microscopy with extended depth of field. Wavefront coding with adaptive optics and digital inline holography have been considered in this work. An optimal configuration that guarantees maximum resolution, based on the coherence property of the illuminating source and the specification of the imaging sensor, is prescribed. In this system, interference of a plane wave and an object wave at the detector plane generates a hologram from which the complex amplitude of the field in the object plane can be numerically reconstructed by solving an inverse source problem. This method is of practical interest particularly because, unlike the conventional microscope, details in transparent biological samples can be retrieved since both the amplitude and the phase of the field are reconstructed. It provides a potential solution towards label-free diagnosis of parasitic diseases. Combined with flow cytometry and data-driven algorithms, we applied this methodology to the development of rapid detection of S. haematobium. A working prototype device with the potential to map the disease has been developed and tested in the field. 
The system design takes into consideration practical field conditions such as ease of use, cost, harsh environmental conditions, erratic power outages, and system robustness against dust and other artifacts. Feedback and results from the field are very promising. Leveraging recent advances in cellphone and 3-D printing technologies, we developed an automated cell-phone-based microscope towards the realization of rapid point-of-care diagnosis of malaria. The challenge here is to optimise the optical train of a low-cost, commonly available cell phone to detect the malaria parasite with sufficient resolution. It was found that existing cell-phone-based microscopes could not resolve the 1 µm malaria parasites because of system optical aberrations and the numerical aperture limit of the phone objectives. Although this method demonstrates the capability of the cell-phone-based microscope to image malaria parasites, the achievable field of view is limited to 150 × 150 µm. This implies that over 600 measurements are needed for a conclusive diagnosis. We circumvent this limitation by a novel implementation of computer-assisted dry fluorescent microscopy. Using computational analysis of images containing a large number of blood cells, we establish robust statistics which provide a reliable diagnostic recommendation. The technique was tested with in vitro and in vivo samples and has demonstrated its suitability for highly sensitive, robust and automated diagnostics of malaria. It requires minimal human intervention, uses simple sample preparation, provides a high degree of independence from expert judgement, and has potential for massive community screening in malaria control and elimination programs. 
The design specifications for the development of working prototypes presented in this thesis took into account feedback from diagnostic experts from the following non-governmental organisations: Doctors without Borders, Malaria Consortium, AMREF, Save the Children and Christian Aid (Nigeria). Also, our methodology was thoroughly validated by discussions and interactions with experts in the field (in Nigeria, Ivory Coast, Gabon, Uganda and Ghana) and with parasitologists, researchers and vaccine developers in the Netherlands, Spain, Ireland and Germany, leading to valuable new insights.
It is our goal that the diagnostic methods and prototypes presented in this thesis will be used to compensate for the limitations of the existing diagnostic techniques.
First, to obtain fundamental knowledge about the sintering process, both static and time-dependent characterizations need to be performed, at a scale similar to that in real applications. X-ray diffraction (XRD) is selected due to its large detection volume and the valuable material information it provides, both qualitative and quantitative. To enable dynamic time-resolved X-ray diffraction (TRXRD) studies and in-situ sample monitoring, a MEMS-based TRXRD nanomaterial platform is first designed and fabricated. A gas cell is designed and fabricated to provide a controlled environmental condition without interfering with the XRD measurements. Combined with the gas cell and a power supply, this setup enables TRXRD characterization of nanomaterials, with large flexibility in temperature control and gas environment.
Next, with the developed characterization platform, both static and time-dependent investigations of the sintering process of a commercial Cu NP-based paste are performed under different conditions. A series of XRD patterns and in-situ electrical resistance measurements are collected, followed by detailed XRD analysis and microstructure observation. These results and insights are, on the one hand, a validation of the function of the developed nanomaterial characterization method and platform. On the other hand, they can be transferred to improve and guide process development and material optimization of Cu NP-based paste.
Last but not least, the in-air pressure-assisted sintering behavior of Cu NP-based paste under various process conditions is investigated and analyzed. Based on the paste characterization results, the in-air sintering temperature range is determined and multiple pressure-assisted sintering experiments in air are performed. As temperature and pressure increase, Cu NPs form denser structures with neighboring particles. Both of these parameters can accelerate neck formation and inter-particle connection inside the Cu joints.","Electronics packaging; Sintering copper nanoparticle paste; Time-dependent material study; MEMS-enabled characterization method","en","doctoral thesis","","","","","","","","2022-04-01","","","Electronic Components, Technology and Materials","","",""
"uuid:f9ce79b1-7890-41b4-850b-134dcfbef52e","http://resolver.tudelft.nl/uuid:f9ce79b1-7890-41b4-850b-134dcfbef52e","Joint Parameters Estimation using FMCW UWB Waveform","Xu, S. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Delft University of Technology (degree granting institution)","2020","As one of the main sensors in autonomous driving, radar has great advantages over other sensors, especially its capabilities during adverse weather conditions and its Doppler information extraction. The performance of the radar in terms of accuracy and target resolution strongly depends on the radar waveforms transmitted and the signal processing algorithms applied. To achieve high range resolution, an ultra-wideband (UWB) signal has to be used for sensing, which introduces difficulties in achieving accurate Doppler and direction-of-arrival (DOA) estimation simultaneously due to range migration. To address this problem, new signal processing algorithms are proposed in this thesis, which pave the way to improved performance of the automotive radar sensor. As frequency-modulated continuous-wave (FMCW) radars are widely used in short-range and middle-range applications due to their low cost and simplicity, the FMCW waveform is the main research subject. The FMCW signal model is derived and analysed in Chapter 2, which for the first time takes both the range migration and wideband DOA problems into account at the same time. Point-like moving targets are considered in Chapter 3, where their Doppler velocities are within the maximum unambiguous velocity of the radar. A novel improved multiple signal classification (MUSIC) algorithm with the dynamic noise-subspace method is proposed to address both the range migration and wideband DOA problems. The algorithm unlocks the great potential of the conventional MUSIC algorithm in the presence of range migration. 
Moreover, an efficient algorithm based on the Rayleigh-Ritz step is introduced for the proposed method, resulting in a considerable reduction of computational requirements without any performance degradation. Comparison with the conventional narrow-band MUSIC, Keystone-MUSIC, inversion-MUSIC and the corresponding Cramer-Rao bounds (CRB) using simulations reveals the superiority of the proposed method in terms of accuracy, resolution and efficiency. Problems similar to those considered in Chapter 3, but in the presence of Doppler ambiguity, are considered in Chapter 4. A spectral-norm-based algorithm is proposed to address the coupling terms for a single moving point-like target. The algorithm for the first time abandons the integration-based method for ambiguous velocity estimation. The spectral-norm-based algorithm provides a new tool to resolve the ambiguity problem, outperforming the conventional integration-based algorithm by avoiding the off-grid problem with limited data size. Moreover, combined with modified CLEAN techniques and a greedy algorithm, the proposed algorithm can be extended to multiple moving targets. Furthermore, the power iteration algorithm is adopted for an efficient implementation of the proposed method. After addressing point-like targets, moving extended targets are studied in Chapter 5, especially when multiple extended targets cannot be separated in either range or beam profile. The Doppler difference is used to distinguish them, and the inverse synthetic aperture radar (ISAR) concept is adopted to split and image the targets separately. The conventional entropy minimisation approach is applied to the signal model, for the first time not only for the Fourier spectrum but also for the eigenspectrum. The Fourier spectrum has a relatively high resolution in higher-order motion (e.g. acceleration), while the eigenspectrum has better resolution in Doppler separation. 
The advantages of both spectra are utilised to separate multiple extended targets through a simple but powerful combination. Via numerical simulation, the applicability of the algorithm to automotive applications is demonstrated. Finally, in Chapter 6, by processing experimental data from an automotive radar, we present a novel and fast imaging algorithm for slow-moving targets which provides super-resolution in DOA. The range information is processed via the fast Fourier transform (FFT) for efficiency, while the DOA is estimated by the MUSIC algorithm for super-resolution. Since the MUSIC spectrum is a pseudo-spectrum and cannot represent the correct dynamic range of the imaging results, a novel normalisation method is introduced to indicate the energies of different targets. In comparison with the conventional FFT-BF, a cleaner range-azimuth image with higher angular resolution and without strong side-lobes is obtained with the proposed algorithm. Although the research presented in this thesis serves automotive applications, some of the algorithms and ideas can easily be generalised to a broad spectrum of diverse applications.","","en","doctoral thesis","","978-94-028-2035-5","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:0eee75bb-fe64-4d21-939c-6d1180673cd7","http://resolver.tudelft.nl/uuid:0eee75bb-fe64-4d21-939c-6d1180673cd7","Natural and Mixed Convection in Coarse-grained Porous Media","Ataei Dadavi, I. (TU Delft ChemE/Transport Phenomena)","Kleijn, C.R. (promotor); Tummers, M.J. (copromotor); Delft University of Technology (degree granting institution)","2020","In the chain of steel production, the blast furnace converts iron ore into liquid iron. The hearth of the blast furnace, where the liquid metal is collected and tapped off, is filled with relatively large coke particles (D ~ 20 – 100 mm). The meandering flow of hot liquid metal in the coarse-grained porous medium in the hearth causes erosion of the refractory walls containing the hearth through the formation of hot spots. This has a severe impact on the lifetime of blast furnaces. Therefore, it is crucial to understand the liquid metal flow and heat transfer through the packed bed of relatively large coke particles. With the hot metal flowing in from the top and the refractory walls being cooled, the flow of the liquid metal in the hearth is a natural and mixed convection flow characterized by the dimensionless Rayleigh and Reynolds numbers, and their ratio, viz. the Richardson number. Since the pores between the large coke particles are not small compared to the flow and thermal length scales, there is a strong interaction between the flow and the pore geometry. Therefore, it is important to capture the details of fluid flow and temperature distribution at the pore level and to resolve the strong interaction between the flow and the solid grains. In order to gain a fundamental understanding of the above types of flow, this dissertation reports on experimental investigations of natural and mixed convection in cubical cavities filled with coarse-grained porous media consisting of packed beds of relatively large solid spheres. 
Bottom-heated natural convection, side-heated natural convection, and vented mixed convection configurations have been considered. Accurate global heat transfer measurements have been performed for various sphere packings, sphere sizes, and sphere conductivities in a wide range of Rayleigh numbers (and Reynolds numbers in the mixed convection case). Refractive index matching between water and hydrogel spheres enabled the use of optical measurement techniques, i.e. Particle Image Velocimetry and Liquid Crystal Thermography, to obtain highly-resolved pore-scale velocity and temperature fields. In bottom-heated natural convection, it was observed that at lower Rayleigh numbers, the Nusselt numbers for the porous medium-filled cavity are reduced compared to the pure-fluid cavity (Rayleigh-Bénard convection) and the Nusselt number reduction strongly depends on packing type, size, and conductivity of spheres. However, at high Rayleigh numbers, when the flow and thermal length scales become sufficiently smaller than the pore length scale, the flow penetrates into the pores with much higher velocities and is not obstructed by the presence of coarse-grained porous media. This leads to an asymptotic regime in which the convective heat transfer for all sphere conductivities, sizes and packing types converge into a single curve which is very close to the pure Rayleigh-Bénard convection curve. The results indicate that the ratio between the thermal length scale and the pore length scale is a determining factor in the effect of porous media on flow and heat transfer. In side-heated natural convection, the presence of the porous medium decreases the heat transfer compared to the corresponding pure-fluid cavity. This is due to the fact that the layers of the spheres touching the isothermal side walls hinder the boundary layers along these walls and divert a portion of the boundary layer fluid away from the walls. 
This subsequently alters the temperature distribution and reduces the mean temperature gradient at the walls. The heat transfer measurements demonstrate a transition from the Darcy to the non-Darcy regime with increasing Rayleigh number and sphere size. A new Nusselt number correlation for coarse-grained porous media is presented which takes into account the strong effect of the porous medium conductivity. In vented mixed convection, three flow and heat transfer regimes were observed depending on the Richardson number. For Ri < 10, the porous medium directs a portion of the strong forced inflow downward towards the hot wall, and therefore the flow structure and the Nusselt number scaling are similar to pure forced convection and are independent of Rayleigh number. For Ri > 40, the strong upward directed natural convection flow dominates and the Nusselt number becomes less sensitive to the Reynolds number. For 10 < Ri < 40, the upward directed natural convection flow competes with the downward directed forced flow at the hot wall, leading to a minimum effective Nusselt number. A Nusselt number correlation is presented which covers all three regimes. This dissertation concludes by discussing the contribution of this work in improving the knowledge of the physics of natural and mixed convection in coarse-grained porous media and its relevance for understanding and modelling of the fluid flow and heat transfer processes in the blast furnace hearth, as well as in other application fields such as air ventilation and the food industry.","Natural Convection; Mixed convection; heat transfer; Porous Media; Particle Image Velocimetry; Liquid Crystal Thermography","en","doctoral thesis","","978-94-6375-893-2","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:4741efe6-9a5e-4782-8cef-6d7ed83a3b30","http://resolver.tudelft.nl/uuid:4741efe6-9a5e-4782-8cef-6d7ed83a3b30","Discrete fiber models beyond classical applications: Rigid line inclusions, fiber-based batteries, challenges","Goudarzi, M. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Simone, A. (promotor); Delft University of Technology (degree granting institution)","2020","Reinforced composites are used in many industrial and multi-functional applications. The efficiency of the reinforcements depends mainly on the aspect ratio, material properties, and the adhesion between matrix and reinforcement. In particular, high aspect ratio fillers and inclusions have gained popularity due to their unique material and geometrical features, and a fundamental understanding of the hierarchical structure and behavior of composites is crucial for optimal design and performance. There is, however, a lack of robust numerical modeling frameworks that are able to accurately represent composites with high aspect ratio reinforcements. Ideally, the expensive mesh generation of the standard finite element method or the simplifying assumptions adopted by smeared-type or mean-field approaches should be avoided.
A group of numerical techniques, here referred to as ""embedded methods"", eliminates mesh conformity restrictions and significantly reduces the computational cost of the standard finite element method, while still benefiting from the advantages of a direct numerical analysis. In formulating the embedded models, enrichment techniques and different element technologies are considered, and physical assumptions are investigated. Limitations of the classical embedded models are highlighted through numerical examples, on the basis of which possible enhancements are discussed. We specifically highlight the important roles of the continuity/discontinuity of field gradients, and of the element size, order, and regularity extensions, on the smoothness of the solutions.
A computationally efficient embedded model is then applied to the study of failure and inclusion orientation effects in planar composites. A detailed study is also performed for dense fiber-reinforced composites, where homogenized mechanical properties are extracted and various forms of neutrality of thin fibers are demonstrated. In this context, a part of this thesis is dedicated to one-to-one comparisons between results obtained using the standard finite element method and embedded techniques. This led to a range of model and geometry parameters under which predictions of the embedded technique are reliable. Comparisons are reported in terms of homogenized properties and local field variables, namely the relative displacement between inclusions and matrix (slips).
Finally, as a preliminary step towards multi-functional fiber-based structural batteries, an electro-chemical system characterized by a composite cathode in a half-cell configuration is considered. The main point of difference with common composite batteries is that the active material particles are cast in the form of high aspect ratio fibers, which are efficiently discretized by use of the embedded technique. A discrete definition of fibers, unlike the case of mean-field approaches, makes it possible to define local fields and interfacial conditions between fibers and electrolyte, and is crucial for the accurate modelling of a battery cell with fiber-based electrodes.","fiber-reinforced composites; embedded reinforcement; rigid line inclusions; fiber neutrality; fiber-based batteries","en","doctoral thesis","","","","","","","","2021-04-28","","","Applied Mechanics","","",""
"uuid:88dcb158-5fc3-4222-a402-4e484fa84414","http://resolver.tudelft.nl/uuid:88dcb158-5fc3-4222-a402-4e484fa84414","Human Factors of Transitions in Automated Driving","Lu, Z. (TU Delft Intelligent Vehicles)","de Winter, J.C.F. (promotor); Happee, R. (promotor); Delft University of Technology (degree granting institution)","2020","In the last decades, advanced driver-assistance systems have contributed to improved road safety. With the recent advance of technology, automotive automation is taking more and more tasks away from the driver. Although automation removes human imprecision and variability, it also introduces out-of-the-loop problems such as complacency, skill degradation, mental underload, mental overload, and loss of situation awareness. Additionally, the rising levels of automation have contributed to an increasingly complex interaction between the automation and the driver, where driver and automation may have to change roles while driving. The objective of this PhD thesis is to understand what types of ‘transitions’ occur between the automation and the driver, how drivers process visual information to rebuild situation awareness and make decisions during these transitions, and how to make the transitions from automation to human safer and more acceptable for the driver...","","en","doctoral thesis","","","","","","","","2020-10-22","","","Intelligent Vehicles","","",""
"uuid:f6532019-84a8-40b3-8dfa-1176c97d57c1","http://resolver.tudelft.nl/uuid:f6532019-84a8-40b3-8dfa-1176c97d57c1","Cyclists’ hazard anticipation and performance","Kovacsova, N. (TU Delft Biomechatronics & Human-Machine Control)","de Winter, J.C.F. (promotor); Hagenzieker, Marjan (promotor); Delft University of Technology (degree granting institution)","2020","Two-wheeler vehicles (i.e., bicycles, mopeds, and motorcycles) are becoming increasingly popular in congested cities because of their small dimensions, low cost of use compared to cars, and their contribution to a healthy lifestyle. Even though the use of two-wheelers offers benefits, their low conspicuity, instability, and the vulnerability of their users create safety risks. Due to their small size, two-wheelers tend to be overlooked by other road users, especially at intersections. Furthermore, the stability of two-wheelers is easily affected by disturbances such as an uneven road surface. Moreover, the unprotected state of two-wheeler users contributes to a high risk of serious injuries once an accident happens. A better understanding of how crashes occur in the rider-vehicle-road system is needed....","","en","doctoral thesis","","","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:eb04d860-281a-4c6b-8c5b-263f526d0bd9","http://resolver.tudelft.nl/uuid:eb04d860-281a-4c6b-8c5b-263f526d0bd9","Thermodynamics of Industrially Relevant Systems: Method Development and Applications","Rahbari, A. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Dubbeldam, D. (promotor); Delft University of Technology (degree granting institution)","2020","Improving and developing simulation techniques is key to obtaining higher efficiency and accuracy in molecular simulations of dense liquid systems. The methodology development introduced in this thesis is relevant both for academia and industrial applications. In this thesis, the method developments and improvements for molecular simulations are introduced, followed by applications to realistic systems and systems of industrial relevance....","Statistical thermodynamics; Molecular simulations; Chemical equilibrium; Free energy calculations; (Partial) molar properties; Hydrogen; Formic acid; Ammonia; Water; Methanol","en","doctoral thesis","","978-94-6366-259-8","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:6ed4204e-36b8-43d4-8541-3375df9312b8","http://resolver.tudelft.nl/uuid:6ed4204e-36b8-43d4-8541-3375df9312b8","A multilevel optimization framework for aircraft operations on near-airport communities: Minimizing noise impact and fuel consumption","Ho-Huu, V. (TU Delft Air Transport & Operations)","Curran, R. (promotor); Santos, Bruno F. (copromotor); Delft University of Technology (degree granting institution)","2020","With a significant impact on the development of the global economy and society, air transport is predicted to rapidly grow in the coming years. Unfortunately, while delivering positive global economic and social benefits, air transport has also generated an adverse influence on the environment, especially on the quality of life of communities around airports. Air transport is the main cause of noise nuisance in the vicinity of airports, which has been linked to various human health effects, such as cardiovascular diseases, sleep disturbance, hearing loss, and communication interference. This research aims to develop an optimization framework that can effectively address the problems of route design and flight allocation in a linked manner to reduce noise impact and fuel consumption.","Aircraft noise; Airport noise; Community noise; Trajectory optimization; Route design; Aircraft allocation; Optimization framework","en","doctoral thesis","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:c32e495d-5dd9-42f8-9203-b540e1f9f175","http://resolver.tudelft.nl/uuid:c32e495d-5dd9-42f8-9203-b540e1f9f175","Transport Properties of Fluids: Methodology and Force Field Improvement using Molecular Dynamics Simulations","Jamali, S.H. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Moultos, O. (copromotor); Delft University of Technology (degree granting institution)","2020","Knowledge on transport properties of fluids is of great interest for process
and product design development in the chemical, food, pharmaceutical, and
biotechnological industry. In the past few decades, molecular simulation has
become a powerful tool to calculate these properties. In this context, Molecular
Dynamics (MD) is important for the calculation of transport properties of
complex systems, where currently available models are not valid, or at extreme
conditions at which performing an experiment is dangerous or not feasible. MD
simulations provide detailed information about the dynamics of the system at
the molecular level. In this thesis, the aim is to investigate the computation of
transport properties using force field-based MD simulations.","molecular simulation; molecular dynamics; transport properties; finite-size effects; force field","en","doctoral thesis","","978-94-6366-256-7","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:c7610730-b9af-484b-971c-05fa80605e75","http://resolver.tudelft.nl/uuid:c7610730-b9af-484b-971c-05fa80605e75","Optimization of blending and spatial sampling in seismic acquisition design","Nakayama, S. (TU Delft Applied Geophysics and Petrophysics)","Blacquière, G. (promotor); Wapenaar, C.P.A. (promotor); Delft University of Technology (degree granting institution)","2020","Quality and business aspects are both of particular importance in determining the type of seismic acquisition. Usually, a strong emphasis on cost reduction is inevitable. On the other hand, there is an increasing demand for the acquisition of high-quality seismic data that can contribute to the various stages in the field development profile. These conflicting desires eventually make conventional seismic surveys an inadequate option. The application of blended acquisition along with efficient detector and source geometries is capable of providing high-quality seismic data in a cost-effective and productive manner. This way of data acquisition also contributes to minimizing health, safety and environment exposure in the field. Blended acquisition allows multiple source-wavefields to be overlapped in time, space, and temporal and spatial frequency, causing blending interference. The acquisition of less data via sparse detector and source geometries likely violates the Nyquist sampling criterion. Therefore, to make the aforementioned approach technically justifiable, deficiencies in recorded data have to be dealt with through the course of subsequent processing steps. One way to support this approach is to minimize imperfections in the processing algorithms. In addition, one may derive survey parameters that enable a further improvement in these processes, which is the primary focus of this thesis.","Acquisition design; Source blending; Optimization; Spatial sampling","en","doctoral thesis","","978-94-6366-265-9","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:aed70731-e7e0-44b4-849c-e910f71edcb1","http://resolver.tudelft.nl/uuid:aed70731-e7e0-44b4-849c-e910f71edcb1","Measuring and controlling radio-frequency quanta with superconducting circuits","Gely, M.F. (TU Delft QN/Steele Lab)","Steele, G.A. (promotor); van der Sar, T. (copromotor); Delft University of Technology (degree granting institution)","2020","In this thesis, we will present the theoretical and experimental work that led to the realization of Radio-Frequency Circuit Quantum Electrodynamics (RFcQED). In chapter 1, I will introduce the field of circuit quantum electrodynamics (QED), and the motivations for extending this field to radio frequencies. In chapter 2, we provide a detailed derivation of the Hamiltonian of circuit QED formulated in the context of the Rabi model, and extract expressions for the cross-Kerr interaction. The resulting requirements for the coupling rate in RFcQED are discussed, one of them being the need to dramatically increase the coupling rate compared to typical circuit QED devices. In chapter 3 we cover two experimental approaches to increasing the coupling in a circuit QED system, one making use of a high impedance resonator, the second utilizing a large coupling capacitor. In chapter 4, we combine these two approaches to implement RFcQED. Through strong dispersive coupling, we could measure individual photons in a megahertz resonator, demonstrate quantum control by cooling the resonator to the ground state or preparing Fock states, and finally observe with nanosecond resolution the rethermalization of these states. In chapter 5 we present QuCAT (Quantum Circuit Analyzer Tool), a Python software package that can be used for the design of circuit QED systems such as the one presented in this thesis. In chapter 6 we discuss how certain interplays between general relativity and quantum mechanics cannot be described using our current laws of physics.
In particular, we show how radio-frequency mechanical oscillators are perfect candidates to perform experiments in this regime. In chapter 7 we present the prospects for coupling such a mechanical oscillator to weakly anharmonic superconducting circuits such as transmon qubits or RFcQED systems. In chapter 8, we provide an outlook.","","en","doctoral thesis","","978-90-8593-436-3","","","","Casimir PhD Series, Delft-Leiden 2020-09","","","","","QN/Steele Lab","","",""
"uuid:f9bbff72-b9b4-4694-a188-b2f1451449af","http://resolver.tudelft.nl/uuid:f9bbff72-b9b4-4694-a188-b2f1451449af","Capturing Agents in Security Models: Agent-based Security Risk Management using Causal Discovery","Janssen, S.A.M. (TU Delft Air Transport & Operations)","Langendoen, K.G. (promotor); Curran, R. (promotor); Sharpanskykh, Alexei (copromotor); Delft University of Technology (degree granting institution)","2020","Airports are important transportation hubs that reside in the heart of modern civilizations. They are of major economic and symbolic value for countries, but are therefore also attractive targets for adversaries. Over the years we have observed successful and unsuccessful terrorist attacks at airports, of which the recent Brussels Airport attack and Istanbul Atatürk Airport attack are two examples. A widely-used method to defend airports against these types of events is that of security risk management. Following this approach, security risks are quantified based on threats, vulnerabilities, and consequences. These risks are then used as a basis to implement security measures that can reduce the risks to acceptable levels. Several security risk management approaches were proposed before, such as attack trees and security games, but they struggle to include diverse human factors in their analysis. These factors are inherently present in modern airports, as passengers, employees, and visitors are all humans. Furthermore, existing methods struggle to take other performance metrics, such as efficiency, into account. This thesis addresses these limitations by proposing a novel security risk management approach that relies on agent-based models and Monte Carlo simulations. This approach builds on the existing security risk management framework but exploits the advantages of the agent-based modelling paradigm. Agent-based models allow for the inclusion of rich cognitive, social and organizational models that enable the modelling of human behaviour.
Furthermore, agent-based modelling is a suitable paradigm to estimate a variety of performance indicators, including airport efficiency. Two case studies were performed to assess the performance of our agent-based security risk management approach. In these case studies we apply our approach to manage security risks at a regional airport, as well as an international airport.","Security Risk Management; Agent-based Modelling; Causal Discovery; Airport Terminal","en","doctoral thesis","","","","","","","","","","","Air Transport & Operations","","",""
"uuid:5b8f20e4-9f83-4b3b-8877-57faa276f247","http://resolver.tudelft.nl/uuid:5b8f20e4-9f83-4b3b-8877-57faa276f247","User Acceptance of Automated Vehicles in Public Transport","Nordhoff, S. (TU Delft Transport and Planning)","van Arem, B. (promotor); Happee, R. (promotor); de Winter, J.C.F. (promotor); Delft University of Technology (degree granting institution)","2020","The acceptance of automated vehicles is a necessary condition for realizing the benefits of road vehicle automation. The objective of the thesis was to examine the acceptance of automated vehicles feeding public transport. The thesis investigated the factors contributing to user acceptance, as well as the interrelations of those factors. It was also investigated how acceptance differs across socio-demographic groups and countries. Online questionnaires, semi-structured interviews, a systematic literature review, and accompanied test rides were performed. Participants were asked to imagine the use of automated vehicles (Chapter 2) or physically experienced them in mixed traffic environments in Berlin (Germany) and Trikala (Greece) (Chapters 3–5, 7–8)....","Automated driving; driverless; public transport; acceptance; UTAUT; DIT; CTAM","en","doctoral thesis","TRAIL Research School","978-90-5584-267-4","","","","TRAIL Thesis Series no. T2020/8, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:9a8d3973-f5b5-4812-97ed-27e5c14afc34","http://resolver.tudelft.nl/uuid:9a8d3973-f5b5-4812-97ed-27e5c14afc34","Personalized gamification to enhance implementation of eHealth therapy in youth mental healthcare","van Dooren, M.M.M. (TU Delft Design Aesthetics)","Goossens, R.H.M. (promotor); Hendriks, V.M. (promotor); Visch, V.T. (copromotor); Delft University of Technology (degree granting institution)","2020","This dissertation focused on the added value of personalized gamification as a factor to enhance the implementation potential of eHealth interventions in youth mental healthcare. Mental health disorders are the leading cause of disability in adolescents. It is important for these adolescents to go into therapy, as adolescence is a period in life in which essential developments occur on which mental health disorders have a negative impact. Although psychosocial therapies are effective in reducing psychiatric symptoms in adolescents with mental disorders, there is still room for improvement. For example, because of premature termination of treatment, poor attendance of treatment sessions, and low or non-adherence to homework assignments...","","en","doctoral thesis","","","","","","","","2020-04-03","","","Design Aesthetics","","",""
"uuid:5d4e0db3-c50c-4f33-b261-c3ec7514139e","http://resolver.tudelft.nl/uuid:5d4e0db3-c50c-4f33-b261-c3ec7514139e","On the modelling of the unstable breaching process","Weij, D. (TU Delft Offshore and Dredging Engineering)","van Rhee, C. (promotor); Keetels, G.H. (copromotor); Delft University of Technology (degree granting institution)","2020","Breaching is an important production mechanism for stationary suction dredgers. It is a process in submerged sandy slopes, occurring mostly in dense sandy soils with a low permeability. The process is initiated by the formation of an underwater slope with an angle steeper than the internal friction angle, called the breach face. For dredging-related breaching, this steep slope is created by a suction dredger, but it can also be formed after initial shear failure, caused by over-steepening due to erosion, an earthquake, or an outwardly directed water flow. During the breaching process, this steep slope is semi-stable due to negative pore pressure. Instead of a shear failure, particles are released one by one from the breach face, making it seem like the breach face is slowly moving backwards. The released particles form a density current that flows away from the breach face, and can be collected by a stationary suction dredger. When the size of the breach face increases over time, we have an unstable breach.","Dredging; numerical modelling; turbidity currents; breaching","en","doctoral thesis","","","","","","","","","","","Offshore and Dredging Engineering","","",""
"uuid:9f5b51e1-4877-4e60-a9d0-69fa37fa834f","http://resolver.tudelft.nl/uuid:9f5b51e1-4877-4e60-a9d0-69fa37fa834f","Providing Public Transport by Self-Driving Vehicles: User Preferences, Fleet Operation, and Parking Management","Winter, M.K.E. (TU Delft Transport and Planning)","Cats, O. (promotor); Martens, Karel (promotor); van Arem, B. (promotor); Delft University of Technology (degree granting institution)","2020","Self-driving vehicles could make the operation of public transport services in a more flexible manner more affordable. Introducing shared automated vehicles would allow operating a fleet of smaller vehicles in a demand responsive manner. This could potentially impact the way we use and operate public transport services, which could eventually trigger changes in car ownership and land use. The main objective of this dissertation is to understand better what it means to deploy shared automated vehicles for on-demand public transport services. This is analysed from the perspective of three main stakeholders: (1) the user preferences of potential users, (2) the fleet operation supervised by the fleet manager, (3) and potential parking management strategies issued by a transport authority concerned with the introduction of shared automated vehicles.","","en","doctoral thesis","TRAIL Research School","978-90-5584-262-9","","","","TRAIL Thesis Series no. T2020/07, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:75c2b62b-d8c0-481e-b5d4-63cdbfcf1a80","http://resolver.tudelft.nl/uuid:75c2b62b-d8c0-481e-b5d4-63cdbfcf1a80","Reconfigurable Range-Doppler Processing and Interference Mitigation for FMCW Radars","Neemat, S.A.M. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Krasnov, O.A. (copromotor); Delft University of Technology (degree granting institution)","2020","","","en","doctoral thesis","","978-94-028-1977-9","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:431ffda5-275c-4670-bf32-43d863261ec2","http://resolver.tudelft.nl/uuid:431ffda5-275c-4670-bf32-43d863261ec2","En Route to Better Performance: Tackling the Complexities of Public Transport Governance","Hirschhorn, Fabio (TU Delft Organisation & Governance)","Veeneman, Wijnand (promotor); van de Velde, Didier (copromotor); Delft University of Technology (degree granting institution)","2020","Travelling in most cities today is time-consuming, uncomfortable, and unsafe. Excessive traffic congestion significantly restricts people’s access to basic services and opportunities, and ultimately impacts individuals’ fundamental right to freedom of movement. Moreover, increased emissions from vehicles are at the root of the global climate emergency, affecting not only the entire urban population, but also jeopardising future generations. It is imperative to change this trajectory and drive cities towards more sustainable mobility with increased use of collective public transport. Whilst it is recognised that the governance of public transport systems plays a central enabling role in improving public transport to make it more attractive to users and financially sustainable, little is known about how to do this, i.e. about the complex causal relation between governance and performance. Drawing on a mixed-method research design, with qualitative and quantitative analyses of multiple cities worldwide, this thesis analyses how the governance of public transport (including the introduction of innovations such as mobility as a service) influences the performance of these systems, eventually contributing to the achievement of broader goals such as sustainability, efficiency, and accessibility.","","en","doctoral thesis","","978-94-6384-122-1","","","","","","","","","Organisation & Governance","","",""
"uuid:a2c1a54a-ab89-4a6a-b9f9-c63241d2c4b8","http://resolver.tudelft.nl/uuid:a2c1a54a-ab89-4a6a-b9f9-c63241d2c4b8","Graph Partitioning Algorithms for Control of AC Transmission Networks: Generator Slow Coherency, Intentional Controlled Islanding, and Secondary Voltage Control","Tyuryukanov, I. (TU Delft Intelligent Electrical Power Grids)","Popov, M. (promotor); van der Meijden, M.A.M.M. (promotor); Delft University of Technology (degree granting institution)","2020","The vast size of a modern interconnected power grid precludes controlling and operating it as a single object. Subdividing a power grid into a number of internally coherent areas makes it possible to cope with its inherent complexity and to enable more efficient control structures. This thesis focuses on discovering the power system structure to facilitate the definition of control areas for wide-area monitoring, protection and control (WAMPAC) applications. Graph partitioning is a well-developed discipline whose potential is not fully recognized in the power system domain. In particular, spectral graph partitioning methods are shown to be very promising. Their efficiency is first demonstrated by accurately selecting the number and extent of control zones for secondary voltage control (SVC). Next, it is shown that grouping generators with similar slow rotor angle dynamics can also be efficiently tackled through spectral graph partitioning. The final topic is constrained graph partitioning subject to node grouping constraints, which is related to intentional controlled islanding (ICI).
As both solution time and accuracy are critical for ICI, a new polynomial-time heuristic algorithm is proposed that is more accurate than comparable state-of-the-art methods.","Dynamic model reduction; generator aggregation; intentional controlled islanding; number of clusters; power network partitioning; secondary voltage control; slow coherency","en","doctoral thesis","","978-94-6384-116-0","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:b61aa64e-cba4-44c0-8d16-93440e028611","http://resolver.tudelft.nl/uuid:b61aa64e-cba4-44c0-8d16-93440e028611","Tunable Optics: Spectral Imaging and Surface Manipulation on Liquid Lenses","Strauch, M. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); Bociort, F. (copromotor); Delft University of Technology (degree granting institution)","2020","This thesis focuses on two aspects of tunable optics: Fabry-Pérot interferometers with a variable distance between their mirrors and electrowetting liquid lenses. The need for a device to detect child abuse has motivated us to design and build a camera that can detect the chemical composition of the upper skin layers of a bruise using a self-made Fabry-Pérot interferometer. The research described in the first part of this thesis has shown that wide-angle spectral imaging can be achieved with compact and cost-effective cameras using Fabry-Pérot interferometers. Designs with a full field of 90° in which the Fabry-Pérot interferometer is mounted either in front of an imaging system or behind a telecentric lens system are presented and analysed. The dependency of the spectral resolution on the numerical aperture of the lens system is derived and its value as a design criterion is shown. It is shown that the telecentric camera design is preferable over the collimated design for bruise imaging with a Fabry-Pérot interferometer.
The idea to use a liquid lens for spectral imaging has directed the research towards a new concept of controlling surface waves on the surface of a liquid lens. We investigate and model surface waves because they decrease the imaging quality during fast focal switching. We propose a model that describes the surface modes appearing on a liquid lens and that predicts the resonance frequencies. The effects of those surface modes on a laser beam are simulated using geometrical optics and Fresnel propagation, and the model is verified experimentally. The model of the surface oscillations is used to develop a technique to create aspheric surface shapes on commercially available electrowetting liquid lenses. The surface waves on the liquid lens are described by Bessel functions of which a linear combination can be used to create any circularly symmetrical aspheric lens shape at an instant of time. With these surface profiles, one can realise a large set of circularly symmetrical wavefronts and hence intensity distributions of beams transmitted by the lens. The necessary liquid lens actuation to achieve a desired shape is calculated via a Hankel transform and confirmed experimentally. The voltage signal can be repeated at video rate. Measurements taken with a Mach-Zehnder interferometer confirm the model of the surface waves. The capabilities and limitations of the proposed method are demonstrated using the examples of a Bessel surface, spherical aberration, an axicon, and a top hat structure.","","en","doctoral thesis","","978-94-028-1994-6","","","","","","","","","ImPhys/Optics","","",""
"uuid:ea63ca9c-5db2-4e9e-ab63-db5b068ee327","http://resolver.tudelft.nl/uuid:ea63ca9c-5db2-4e9e-ab63-db5b068ee327","Innovative low-melting glass compositions containing fly ash and blast furnace slag","Justino de Lima, C.L. (TU Delft Applied Mechanics)","Veer, F.A. (promotor); Nijsse, R. (promotor); Copuroglu, Oguzhan (copromotor); Delft University of Technology (degree granting institution)","2020","The investigation of new glass compositions is crucial to expand the possible applications of glass, from the typical applications for building engineering, in the form of cast blocks or floated glass, to more advanced technologies, such as 3D-printed glass or glass to metal connections. Despite the intense research activity and new glass compositions being investigated every day, there has been little innovation or evolution in the composition of architectural glass. This is partially explained by the fact that a substantial part of glass research is not relevant to practical large-scale applications. This thesis is more concerned with the development of compositions with optimized properties than with the study of the short- and intermediate-range structure of a theoretical glass that would hardly find a practical application. Thus, these compositions are inexpensive and appropriate for mass production, utilizing conventional melting techniques. Since the high melting temperatures and the brittleness are two important drawbacks of glass, this work aims to improve both properties. The modification of the properties is achieved via changes in the composition of the glass, using compounds such as phosphorus pentoxide, aluminium oxide and boron oxide. Then, the choice of different glass formers and modifiers contributes to the development of compositions with lower melting and glass transition temperatures. The reduction of the melting temperature allows a saving of energy during the manufacturing and recycling processes.
The structures of the glasses differ from the standard soda-lime and borosilicate glasses, leading to a different mechanical behaviour, for instance an anisotropic structure, which could exhibit better mechanical performance than standard glasses. Furthermore, these new compositions incorporate up to 35% of slag and fly ash in their formulas. The valorization of these by-products, which would otherwise be discarded, reduces costs and gas emissions. The developed compositions have high water resistance, an amorphous structure confirmed by X-ray diffraction, and an indentation toughness comparable to that of a standard soda-lime glass. The coloration of the samples varies depending on the composition and, for the samples containing slag, depending on the melting temperature. In this case, melting at higher temperatures allows the production of colorless glass. The color of the glasses is mainly influenced by the presence of sulfur and iron oxide. In conclusion, this thesis describes the development of new glass compositions containing fly ash and slag. The focus of the work is on the improvement of the properties and a comparison of the performance of these new compositions with the glasses currently used in building engineering. The promising results point to the possibility of expanding the current applications of glass.","Glasses; Phosphate; Fly Ash; Blast Furnace Slag; Nanoindentation; Crystallization","en","doctoral thesis","","978-94-6366-269-7","","","","","","","","","Applied Mechanics","","",""
"uuid:33a9138f-7224-4734-a326-d90a9d5980c1","http://resolver.tudelft.nl/uuid:33a9138f-7224-4734-a326-d90a9d5980c1","On power system automation: Synchronised measurement technology supported power system situational awareness","Naglic, M. (TU Delft Intelligent Electrical Power Grids)","van der Meijden, M.A.M.M. (promotor); Popov, M. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis aims to provide insight into the necessary power system operation and control developments to facilitate a sustainable, safe and reliable electric power supply now and in the future. The primary objective is to enhance the interconnected power system situational awareness with the aim of reinforcing the reliability of power systems. First, the thesis elaborates on the existing and emerging operational challenges of modern power systems and identifies the required power system developments to overcome them. Next, it focuses on state-of-the-art Synchronised Measurement Technology (SMT) supported Wide-area Monitoring Protection and Control (WAMPAC) of power systems. In this context, a cyber-physical experimental testbed for online evaluation of the emerging WAMPAC applications under realistic conditions is developed. Following, to fill the scientific gap between the IEEE Std. C37.118-2005 (communication part) and IEEE Std. C37.118.2-2011 specifications and their implementation, the MATLAB supported Synchro-measurement Application Development Framework is developed. Next, to improve situational awareness of power systems, two SMT-supported algorithms are proposed. The first algorithm is suitable for online detection of disturbances, observed as excursions in SMT measurements, in AC and HVDC power grids. Whereas the second algorithm is suitable for online identification of grouping changes of slow coherent generators in an interconnected power system during quasi-steady-state and the electromechanical transient period following a disturbance. 
Finally, further research directions towards the Control Room of the Future are presented.","energy transition; power system automation; synchronised measurement technology; situational awareness; phasor measurement unit; disturbance detection; control room of future; slow coherency","en","doctoral thesis","","978-94-6384-118-4","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:afcbf229-f6ba-4b81-a561-6a0a61dc74de","http://resolver.tudelft.nl/uuid:afcbf229-f6ba-4b81-a561-6a0a61dc74de","Designing biocatalytic redox reactions with oxidoreductases for organic chemistry","Rauch, M.C.R. (TU Delft BT/Biocatalysis)","Hollmann, F. (promotor); Arends, I.W.C.E. (promotor); Paul, C.E. (copromotor); Delft University of Technology (degree granting institution)","2020","The focus of this thesis is to identify improvements of oxidoreductase reactions to bring them to preparative scale. Three approaches were studied: (1) photoregeneration of oxidised nicotinamide cofactors with LEDs as light source and flavins, (2) direct regeneration of Old Yellow Enzymes with light or metals, (3) heme-thiolate enzyme peroxygenases in neat substrate conditions.
Flavins are the main actors of the first two approaches. These redox cofactors have been chosen for their photocatalytic properties as a photosensitiser and as a cofactor in the so-called flavoproteins.","","en","doctoral thesis","","978-94-6384-112-2","","","","","","","","","BT/Biocatalysis","","",""
"uuid:990fdc2d-bae1-4de2-9ac1-3f649bb980cf","http://resolver.tudelft.nl/uuid:990fdc2d-bae1-4de2-9ac1-3f649bb980cf","The importance of the time-effect in electrochemical studies of corrosion inhibitors","Meeusen, M. (TU Delft (OLD) MSE-6)","Mol, J.M.C. (promotor); Terryn, H.A. (promotor); Delft University of Technology (degree granting institution)","2020","The corrosion protection of metallic substrates with corrosion inhibitors, either in solution or dispersed in a coating formulation, has been the focus of many research topics for many decades and has intensified in recent years even more with industry moving away from hexavalent chromium (Cr(VI))- based corrosion inhibitors. While mainly concentrating on the electrochemical behaviour and the underlying corrosion protective mechanism, the study of the time-effect, i.e. the study of how the electrochemical system behaves and the stabilization of the electrochemical system is altered over time, is often not taken into account when studying corrosion inhibitor-containing electrochemical systems. To gain a better understanding of the kinetic aspect of corrosion inhibitors changing the overall electrochemistry, this study focusses on the quantification of the time-effect of corrosion inhibitors’ electrochemical behaviour. Therefore odd random phase electrochemical impedance spectroscopy (ORP-EIS) is selected, a multisine alternative to the classical electrochemical impedance spectroscopy (EIS) technique, capable to measure and quantify the stability of electrochemical systems over time. Two different electrochemical systems are considered: lithium-based corrosion inhibitor technology on aluminium alloy AA2024-T3 and silica- and phosphate-based corrosion inhibitors for hot-dip galvanized steel. 
The former, already well-understood system, served as the proof of concept to design a well-defined methodology to study corrosion inhibitor-containing electrochemical systems, and gain deeper knowledge of the latter system.","corrosion inhibitors; aluminium alloy AA2024-T3; galvanized steel; ORPEIS; (in)stability; time-effect","en","doctoral thesis","","978-94-6332-615-5","","","","","","","","","(OLD) MSE-6","","",""
"uuid:9c0b2f03-040b-4ec8-b669-951c5acf1f3b","http://resolver.tudelft.nl/uuid:9c0b2f03-040b-4ec8-b669-951c5acf1f3b","Coda-Wave Monitoring of Continuously Evolving Material Properties","Zotz-Wilson, R.D. (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Barnhoorn, A. (copromotor); Delft University of Technology (degree granting institution)","2020","We all monitor the world around us through waves. After about sixteen weeks in the womb, the ears and eyes of an infant child begin to deliver the first signals of light and sound waves to the brain. In fact, one could argue that consciousness itself is the feeling one receives when processing large amounts of wavefield data. Despite the integral presence of wavefield monitoring in biology, it is only since the dawn of the information age, that society has been able to monitor the world around us with similar fidelity. Where a recorded wavefield can be assumed to have travelled through a simple medium, it is often possible to resolve an image, though where a disordered medium is encountered, and the wavefield is multiply scattered, this becomes more difficult. It is this disordered portion of a wavefield, often referred to as the coda-wave, which this thesis is primarily concerned with. By considering the coda-wave over the coherent arrivals, one loses the ability to resolve the structure of a medium, though in turn gains improved sensitivity to changes within. This makes coda-wave monitoring particularly well suited to problems in which sensitivity to change is a more important quality than the ability to image the medium. On face value, one might consider coda-wave derived monitoring within Early Warning Systems (EWSs), towards hazards such as earthquakes, landslides, or the failure of critical infrastructure. 
However, operational deployments of such systems must work from simple, robust, and automated alert criteria, and therefore often rely on coherent wavefield observations, typically through passive measurements at the boundary of a region of interest. It is this reliance on clear, automated alert criteria derived from passive observation that limits the lead time provided by EWSs: from only a few tens of seconds for earthquakes to one day of warning for landslides.","Coda waves; monitoring; Interferometry","en","doctoral thesis","","978-94-6366-262-8","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:8f5106ab-9059-4fd6-9448-e4c642362739","http://resolver.tudelft.nl/uuid:8f5106ab-9059-4fd6-9448-e4c642362739","Theoretical advances in practical quantum cryptography","Ribeiro, J.D. (TU Delft QID/Wehner Group)","Wehner, S.D.C. (promotor); Hanson, R. (copromotor); Delft University of Technology (degree granting institution)","2020","Most of themainstream cryptographic protocols that are used today rely on the assumption that the adversary has limited computational power, and that a given set of mathematical problems is hard to solve (on average), i.e. that there is no polynomial time algorithm that solves these problems. While these assumptions are reasonable for now they might not be as relevant for long termsecurity. Indeed, all the communication that happens today can be recorded by an adversary who can later – when the technology allows it – break security. There are good reasons to think that technological progress may lead to break the assumptions made today. For example the rapidly increasing computational power of our computer already allows one to break anything that has been encrypted using DES in the 70s and 80s in few days using regular desktop type devices. There is also the constant improvement of the efficiency of the known algorithms that solve a class of problems. Note that, even though the discovery of a polynomial algorithm for a problem we believe to be hard is still possible, much weaker improvements on current algorithms that solve these hard problems, can already be a threat for security.","quantum; cryptography; two-party cryptography; quantum key distribution; device independence","en","doctoral thesis","","978-94-6402-160-8","","","","","","","","","QID/Wehner Group","","",""
"uuid:aaf9a28c-0926-41ab-a125-72f59d7937fe","http://resolver.tudelft.nl/uuid:aaf9a28c-0926-41ab-a125-72f59d7937fe","Readout circuits for hot-wire carbon dioxide sensors in CMOS technology","Cai, Z. (TU Delft Electronic Instrumentation)","Pertijs, M.A.P. (promotor); Makinwa, K.A.A. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis describes the design and realization of CMOS-compatible CO2 sensors based on thermal conductivity (TC) measurement for indoor air-quality sensing. The goal of this work is to investigate the advantages and limitations of sensing CO2 based on TC measurement, and to exploit its potential to achieve the best possible performance in terms of both CO2 resolution and energy efficiency. Both system-level and circuit-level techniques have been explored, resulting in three prototypes that demonstrate the effectiveness of the proposed techniques. The final prototype, using a time-domain readout approach, achieves a CO2 resolution better than 100 ppm while consuming only 12 mJ per measurement, representing the best-reported performance for a CMOS CO2 sensor in terms of both resolution and energy consumption.","","en","doctoral thesis","","978-94-6380-676-3","","","","","","","","","Electronic Instrumentation","","",""
"uuid:2bcb33bf-5b73-4873-9168-08b1e7a2836f","http://resolver.tudelft.nl/uuid:2bcb33bf-5b73-4873-9168-08b1e7a2836f","Coastal and seasonal hydrodynamics and morphodynamics of the Mekong Delta","Phan, M.H. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Reniers, A.J.H.M. (promotor); Ye, Qinghua (copromotor); Delft University of Technology (degree granting institution)","2020","Coastal retreat problems occur in many deltas over the world. Coastal features are not constant over time and are affected by sea level rise, river runoff, sediment supply, wave and tidal energy, underlying geology and climate. In addition, human activities profoundly influence the coastal processes as a result of changing natural patterns of runoff, littoral sediment supply and construction and reconstruction of engineering works.","Mekong Delta; hydrodynamics; morphodynamics; monsoon climate","en","doctoral thesis","","978-94-028-1991-5","","","","","","","","","Coastal Engineering","","",""
"uuid:00358590-f320-4db8-b9f0-282255eaebb9","http://resolver.tudelft.nl/uuid:00358590-f320-4db8-b9f0-282255eaebb9","Novel approaches of flocculant application in sewage treatment","Kooijman, G. (TU Delft Sanitary Engineering)","van Lier, J.B. (promotor); de Kreuk, M.K. (copromotor); Delft University of Technology (degree granting institution)","2020","Organic flocculants are typically only applied in the sludge line and sometimes in quaternary treatment of conventional sewage treatment plants (STPs) that aim for enhanced nutrient removal. However, with the ongoing societal changes directing towards a higher degree of circularity of resources and a higher degree of wastewater treatment demands, there is a need to re-asses the potentials that flocculants may offer in new wastewater treatment concepts. Our work investigated new possible applications of flocculants in an STP. Applying flocculants for chemically enhanced primary treatment (CEPT) increases the primary sludge production for biogas production, which may lead to a more positive energy balance of an STP. Results showed that 66% more influent COD could be used for biogas production via anaerobic digestion (AD), meanwhile the aerobic oxidation of this COD in the aeration tanks was prevented. However, removing the COD in CEPT with cationic flocculants led to a COD/N ratio of 3.75 g COD/g N in the water line, which is lower than the minimum ratio that is required for a conventional biological nutrient removal STP (BNR-STP). However, recently, novel N removal technologies have been introduced that function well at low COD/N ratios, such as N removal over nitrite, or that do not need any COD at all, such as the Anammox process in the waterline of an STP. With the application of these novel N removal techniques, CEPT with flocculants could be advantageous for the overall energy balance and space requirements of the future STP. 
Besides the STP energy balance, the application of cationic flocculants for CEPT also impacted the AD: the additional COD that was removed by CEPT was more readily biodegradable, leading to a 9% higher biomethane potential of the primary sludge. Also, in separate batch tests, it was found that flocculants decreased the viscosity of the sludge and, concomitantly, an increase in the hydrolysis rate up to 27% was observed. However, in contrast to the rate of digestion, the results showed that refractory polyacrylamide (PAM) flocculants irreversibly bound the particles, and thus partially reduced the biomethane potential. Besides the energy aspects of an STP, there are also increasing challenges in the treatment of micropollutants. STP effluents are one of the main sources of pharmaceuticals in the environment. Literature reports that a large part of the pharmaceuticals in an aquatic matrix, such as present in an STP, are sorbed to colloids. Since flocculation can remove colloids, flocculants in principle could be used to concentrate pharmaceuticals into a smaller sludge flow that subsequently could be treated more efficiently. The possibility of concentrating pharmaceuticals by flocculation in the primary settler was investigated. A jar test showed that pharmaceuticals were hardly removed from sewage with coagulation/flocculation. To investigate the discrepancy between reported colloidal sorption and the lack of removal when removing colloids, we tested a commonly applied experimental setup for determining the colloidal sorption of pharmaceuticals. Colloids were removed from a solution containing pharmaceuticals in two ways: by ultra-filtration (UF) and by flocculation. Both methods showed similar removal of colloids. However, during UF the observed retention of pharmaceuticals was 93±4%. In contrast, when removing the colloids with flocculation, no pharmaceutical removal was observed. 
These results strongly indicate that an analysis bias is introduced when using UF membranes in the determination of colloidal sorption of pharmaceuticals. Very likely, a direct retention of pharmaceuticals on the UF membrane occurred. The overall results of the current work showed that pharmaceuticals hardly sorb to colloids, which explains the absence of pharmaceutical removal during coagulation/flocculation. Therefore, flocculation does not seem to be a viable option for concentrating pharmaceuticals from sewage streams. As cities grow and land becomes scarcer, there is an increasing requirement for compact STPs. To achieve this reduction in footprint, the digester volume may be decreased by uncoupling the solids retention time (SRT) from the hydraulic retention time (HRT). Separation of liquid and solids retention is a typical feature of an anaerobic membrane bioreactor (AnMBR) where a membrane keeps the solids inside the reactor, while the liquid can permeate. Membrane filtration of sludge will immediately result in the formation of a cake layer on top of the membrane’s surface. This cake layer or fouling layer forms an excellent barrier for solids and acts as a secondary membrane during the filtration process. Therefore, a simple woven cloth can also act as a support for this cake layer, avoiding the need for purchasing actual membranes, which would decrease the investment costs significantly. An AnMBR equipped with a woven cloth as filter medium is referred to as an anaerobic dynamic membrane bioreactor (AnDMBR).
Challenges in operating an An(D)MBR are the filterability and viscosity of the sludge, which limit the maximum SRT that can be achieved. Our results showed that flocculants reduced the viscosity and increased the filterability of the emulsion. Therefore, flocculants may play a positive role in the optimization of an AnDMBR. Our results showed that increased filterability was only obtained after adding a high concentration of flocculants. However, these high concentrations caused a significant decrease in biomethane potential of the sludge, as the VS destruction was lowered from 32% to 24% after adding the flocculants. In addition, a decrease in the mean particle size (d50) was observed from 58 µm to 32 µm. This was likely caused by refractory flocculants that shielded small particles, turning them refractory as well, a phenomenon that is described in the literature as well. Likely, the accumulation of these small refractory particles affected the filterability of the sludge, which led to a doubling of the transmembrane pressure (TMP) from about 150 mbar to 300 mbar. Therefore, adding flocculants to an AnDMBR did not yield the benefits that were initially expected. A potential solution to prevent irreversible binding, and thus a decreased filterability, is to use biodegradable flocculants. Further research is needed to evaluate this possibility. As the disposal of sludge forms a large part of the operational costs of an STP, waste sludge reduction in an STP may have a big impact on the operational costs. A proven concept for waste sludge reduction in an STP, although not yet applied in practice, is predation of sludge by aquatic worms. In literature, several reactor configurations were studied in which secondary sludge is successfully predated by aquatic worms in lab-scale bioreactors. 
However, in contrast to AD, secondary sludge reduction by aquatic worms will cost energy for aeration, while the converted organic matter is no longer available for energy recovery. Therefore, the ideal configuration would be to have AD followed by worm predation (WP) of the digested sludge, allowing for both energy recovery and a larger extent of sludge reduction. So far, this had not been considered a viable option, due to the low ammonia tolerance of aquatic worms and the high ammonium concentrations present in the anaerobic digester. However, by flocculating anaerobically digested sludge, the sludge solids can be easily separated from the ammonia-rich liquid creating the possibility of WP of digested sludge, reducing the amounts of solids that need to be disposed of. Our results revealed an additional removal of 40% VS of the digested sludge in 12 days when applying worm predation. The solids remained well separated from the liquid, which facilitates further treatment. However, the cationic flocculants caused mortality of the used worms, due to toxicity. The assessed 4-day LD50 value was between 50 and 100 mg/L. For the possible full-scale application of WP for AD sludge degradation, a process could be designed with continuous worm addition. However, a non-toxic flocculant could also be a solution to prevent mortality of the worms. Further research is needed to elucidate the best option and to validate the full-scale possibilities. The main barriers identified in this thesis for the application of flocculants in the current and future BNR-STP are i) the economic viability of flocculant applications other than the conventional applications, ii) the refractory characteristics of flocculants, and iii) the toxicity to aquatic worms. A solution for the above-mentioned challenges may lie in the production of polysaccharide bio-based flocculants such as alginate, chitosan, cellulose and starch. 
The performance of bio-based flocculants has been investigated for specific cases in numerous successful laboratory and full-scale studies. However, full-scale application is still marginal due to the high costs of production. A possible solution to make bio-based flocculants cost-effective is to find an organic waste source from which these polymers could be synthesized or extracted. However, thus far, there is only limited research in waste-based flocculants in STPs. Therefore, more research is needed in this field that could lead to new bio- and waste-based flocculants to be applied in sewage treatment.","flocculants; colloid; gas flow meter; worms; Anaerobic digestion","en","doctoral thesis","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:8f3090a6-39c6-4ddf-9ee8-9afb73021605","http://resolver.tudelft.nl/uuid:8f3090a6-39c6-4ddf-9ee8-9afb73021605","Playscapes: Creating Space for Young Children's Physical Activity and Play","Boon, Boudewijn (TU Delft Human Information Communication Design; TU Delft Design Aesthetics)","Stappers, P.J. (promotor); Van den Heuvel-Eibrink, Marry M. (promotor); Rozendaal, M.C. (copromotor); Delft University of Technology (degree granting institution)","2020","Young children often lack opportunities to play in a physically active way. This is particularly the case for children with cancer and other chronic diseases, who regularly undergo periods of hospitalization. Promoting their physical activity and play can contribute to their health, wellbeing, and development. This thesis develops ‘Playscapes’ – a design perspective that emphasizes the unstructured and spontaneous nature of young children’s physical activity. Playscapes encourages designers to enable such physical activity through the design of open-ended and ambiguous playthings. By designing such playthings for children with cancer, this thesis contributes to turning hospital environments, such as patient rooms and waiting areas, into potential ‘landscapes for physical activity and play’.","Design for behavior change; physical activity; open-ended play; young children; pediatric healthcare; research through design","en","doctoral thesis","","978-94-6384-117-7","","","","","","","","","Human Information Communication Design","","",""
"uuid:8a05bae4-657c-4838-8586-0a7cfc932a3c","http://resolver.tudelft.nl/uuid:8a05bae4-657c-4838-8586-0a7cfc932a3c","Additively manufactured biodegradable porous metals","Li, Y. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Zhou, J. (promotor); Delft University of Technology (degree granting institution)","2020","For the treatment of large bony defects, no perfect solution has been yet found, partially due to the unavailability of ideal bone implants. Additively manufactured (AM) biodegradable porous metals provide unprecedented opportunities to fulfil the requirements for ideal bone implants to be used in such treatments. Firstly, the multi-scale geometry of these implants can be customized to mimic the human bone in terms of both micro-architecture and mechanical properties. Secondly, a porous structure with interconnected pores possesses a larger surface area and is favorable for the adhesion and proliferation of bone cells. Finally, the biodegradation property could be exploited to maintain the structural integrity of the implant during the healing process while ensuring that the biomaterial disappears afterwards, paving the way for full bone regeneration.","additive manufacturing; scaffold; biodegradation; mechanical property; biocompatibility; fatigue; functionally graded material","en","doctoral thesis","","978-94-6402-133-2","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:c04b4baf-c8b0-4615-abe4-10a834c641f4","http://resolver.tudelft.nl/uuid:c04b4baf-c8b0-4615-abe4-10a834c641f4","Mathematical formulations and algorithms for fast and robust power system simulations","Sereeter, B. (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); Witteveen, C. (promotor); Delft University of Technology (degree granting institution)","2020","During the normal operation, control and planning of the power system, grid operators employ numerous tools including the Power Flow (PF) and the Optimal Power Flow (OPF) computations to keep the balance in the power system. The solution of the PF computation is used to assess whether the power system can function properly for the given generation and consumption, whereas the OPF problem provides the optimal operational state of the electrical power system, while satisfying system constraints and control limits.
In this thesis, we study advanced models of the power system that transform the physical properties of the network into mathematical equations. Furthermore, we develop new mathematical formulations and algorithms for fast and robust power system simulations, such as PF and OPF computations, that can be applied to any balanced single-phase or unbalanced three-phase network.","Power flow analysis; Nonlinear power flow problem; Newton-Raphson method; Power mismatch formulation; Current mismatch formulation; Optimal Power Flow problem; Interior Point Method; Linear power flow problem; Unbalanced distribution networks; Numerical analysis; Krylov subspace methods","en","doctoral thesis","","978-94-6384-119-1","","","","","","","","","Numerical Analysis","","",""
"uuid:45114889-f29f-4964-9a98-56e37a736170","http://resolver.tudelft.nl/uuid:45114889-f29f-4964-9a98-56e37a736170","Information Technology and Urbanization Economies","de Vos, D.W. (TU Delft Urban Studies)","Meijers, E.J. (promotor); van Ham, M. (promotor); Delft University of Technology (degree granting institution)","2020","It is increasingly recognized that urbanization economies – the benefits of living in cities – can be generated by proximity to large cities (OECD 2015). Several scholars have put forward that places near other large cities are increasingly able to ‘borrow size’ of their neighbours to generate these economies, and that this may explain recent patterns of (economic) growth across European cities, whereby the largest cities have not necessarily had the highest growth rates (Dijkstra et al. 2013, Burger and Meijers 2016). These studies suggest that Europe’s unique polycentric urban structure increasingly allows urbanization benefits to be generated by proximity to large agglomerations, due to improvements in physical and digital infrastructure.
Indeed, there is plenty of evidence that increasing the effective density of regions by improving physical transportation infrastructure leads to higher levels of urbanization economies (Graham 2019). For improvements in digital infrastructure however, such evidence is missing. In this thesis I attempt to fill this gap, and contribute to the discussion of whether information technology enables places in proximity to large cities to ‘borrow’ urbanization economies.
To understand the relation between IT and borrowed size it is important to have a plausible theoretical mechanism. In the introduction of this thesis I have put forward such a theoretical link, that is based on the relation between ubiquitous online information and travel behaviour. In short, I expect that in some cases IT may complement longer distance travel for jobs and local products, which means that in these markets urban scale economies (including better matching and wider product variety) are generated and enjoyed across a greater geographical scale. Based on this theoretical link, I devised two research questions.
1. To what extent does information technology increase the geographical extent of local labour and product markets?
2. To what extent has the advent of information technology led to better local (labour or product) market outcomes in places in proximity to large cities?","","en","doctoral thesis","","","","","","","","","","Urban Studies","","",""
"uuid:2b5626ca-1a12-44e9-88da-6d898b06b751","http://resolver.tudelft.nl/uuid:2b5626ca-1a12-44e9-88da-6d898b06b751","What Leonardo could mean to us now: Systematic variation 21st century style, applied to large-scale societal issues","Kersten, W.C. (TU Delft Design for Sustainability)","van Engelen, J.M.L. (promotor); Diehl, J.C. (promotor); Delft University of Technology (degree granting institution)","2020","The problem: Design challenges are becoming increasingly complex, amongst others because real life is getting more complex. Society is more interconnected than before and most problems occur in a variety of -quickly changing- shapes and forms, i.e. in different contexts. These contexts pose different requirements and often have interdependencies as well. How can design engineers respond to this rise in diversity of requirements and the likely interdependencies?
To reduce the complexity and increased diversity, the common response is simplification, e.g., choosing one context as the scope of the design task. In a highly interconnected society this no longer suffices. The initially optimal solution creates a path dependency and lock-in that delays or hinders achieving impact on a large scale beyond the initial context. Research focus: The thesis focuses on the question of what evolution in design engineering might be possible to address this problem. As a starting point, the oldest design characteristic, i.e. systematic variation, as pioneered by Leonardo da Vinci, is given a contemporary twist. It is suggested to be used before the design task is set in order to ensure multi-contextual perspectives of a large-scale issue. To provide further focus for this research, it revolves around an actual approach that does just that, called Context Variation by Design (CVD), and is mostly applied to basic quality-of-life issues. The research primarily has a design engineering angle, and additionally includes considerations and consequences for management and education. Evolution of design engineering alone, even with management considerations, cannot address the entire problem but might offer a contribution. Research approach: This thesis represents exploratory, therefore inductive, research. The extensive literature research resulted in ten theoretically backed propositions as a key component of the thesis. Out of 23 available real-life situations to choose from, mostly MSc-level graduation, course and group assignments, seven were selected based on direct access to rich, high-quality information. These cases were analysed and the main results were expressed as empirical findings, in relation to the ten propositions, 41 in total. Furthermore, three key defined constructs had been identified to explore more in depth: context, richness in the design space and adaptive architectures. 
Main results and conclusions: The analysis of the patterns of the empirical research reveals various signs that a design engineering approach that uses systematic variation before the design task is set can deliver high-quality, potentially superior results when dealing with large-scale (quality-of-life) issues. This was true in particular for cases where students executed full assignments, as opposed to short ones. Because the design result, i.e. an informed adaptive architecture, incorporated requirements from a variety of contexts, the additional effort to scale to these contexts is much smaller from a design engineering perspective. Such signs cannot be considered (conclusive) evidence, and it was not the intention of this inductive thesis to deliver such results. More light has been shed on particular framings that might be conducive, and on the specific interpretation of the key constructs, all resulting in a version 2.0 of CVD. The results can be elaborated upon in next steps. Next steps: The main suggested next steps, with ‘bite-size’ titles, are: “Revelling in richness” (further explore richness as a defined construct in the design space), “Going for Gold” (engage in long term commitments and broader partnerships to investigate actual multi-contextual implementation), “C’est le ton qui fait la musique” (explicitly verify framings that resonate with managers and others) and “Leave no Leonardo behind” (explore how a multi-contextual approach can be used in education to boost the aptitude of design engineers-to-be).","Context variation; Complexity; design approach; scalability; richness","en","doctoral thesis","","978-94-6366-260-4","","","","","","","","","Design for Sustainability","","",""
"uuid:a9ae2b62-0cf8-4cf3-86af-abd33bc99080","http://resolver.tudelft.nl/uuid:a9ae2b62-0cf8-4cf3-86af-abd33bc99080","Performance of complex networks: Efficiency and Robustness","He, Z. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Delft University of Technology (degree granting institution)","2020","Network performance is determined by the interplay of underlying structures and overlying dynamic processes on networks. This thesis mainly considers two types of collective dynamics on networks, spread and transport, which are ubiquitous in our daily lives, ranging from information propagation, disease spreading, to molecular motors on cytoskeleton and urban traffic. Exploring the approaches on optimizing the network performance is the fundamental motivation of this work, which helps to control processes on networks and to upgrade network-based services.
Although the properties of phase transition in Susceptible-Infected-Susceptible (SIS) processes have been investigated intensively, the time-dependent behavior of epidemics is still an open question. This thesis starts with the investigation of the spreading time (Chapter 2), which is the time at which the number of infected nodes in the metastable state is first reached, starting from the outbreak of the epidemic. We observe that the spreading time follows a lognormal-like distribution for both the Markovian and the non-Markovian infection processes.
As a follow-up to Chapter 2, we identify the fastest initial spreaders with the shortest average spreading time in epidemics on a network, which helps to ensure efficient spreading (Chapter 3). We show that the fastest spreader changes with the effective infection rate of an SIS epidemic process, which means that the time-dependent influence of a node is usually strongly coupled to the dynamic process and the underlying network. We propose the spreading efficiency as a metric to quantify the efficiency of a spreader and identify the fastest spreader, which is adaptive to different infection rates in general networks.
To maximize the utility of spreading, we introduce induced spreading, which aims to maximize the infection probabilities of some target nodes by adjusting the nodal infection rates (Chapter 4). We assume that the adjustment of the nodal infection rates has an associated cost and formulate the induced spreading for SIS epidemics in networks as an optimization problem under a constraint on the total cost. We address both a static model and a dynamic model for the optimization of the induced SIS spreading. We show that the infection rate increment on each node is coupled to both the degree and the average number of hops to the target nodes in the static optimization method. In the dynamic method, the effective resistance is a good metric to indicate the minimum total cost for targeting a single node.
The average fraction of infected nodes in the NIMFA steady state, also called the steady-state prevalence, in terms of the effective infection rate can be expanded into a power series around the NIMFA epidemic threshold. Practically, we can compute the nodal infection probability of the NIMFA steady state faster by means of the truncated expansion, given enough terms and an effective infection rate within the radius of convergence. Thus, we investigate the radius of convergence that validates the Taylor expansion of the steady-state prevalence in Chapter 5. We show that the radius of convergence of the steady-state prevalence expansion strongly depends upon the spectral gap of the adjacency matrix.
The research on the robustness of transport on networks mainly encompasses two robustness assessment approaches, along with their applications in communication networks and freight transport networks, respectively. Network recoverability refers to the ability of a network to return to a desired performance level after suffering malicious attacks or random failures (Chapter 6). We propose a general topological approach and recoverability indicators to measure the network recoverability in two scenarios: 1) recovery of damaged connections, and 2) reconnection of any disconnected pair of nodes. By applying the effective graph resistance and the network efficiency as robustness metrics, we employ the proposed approach to assess 10 real-world communication networks. For vehicle transport systems, Chapter 7 proposes a robustness assessment for multimodal transport networks. The representation of interdependent networks is an excellent proxy for the structure of multimodal transportation systems. We apply our robustness assessment model to Dutch freight transport, taking into account three modalities: waterway, road and railway. The node criticality, defined as the impact of a node removal on the total travel cost, follows a power-law distribution, which implies a scale-free property of the robustness against infrastructure disruptions.
Many transport processes share a similar objective: all nodes reach an agreement regarding a certain quantity of interest by exchanging nodal states with their neighboring nodes, as described by the consensus model in networks (Chapter 8). The robustness of consensus processes is related to the speed of convergence to stability under external perturbations. The (generalized) algebraic connectivity of a network characterizes the lower bound of the exponential convergence rate of consensus processes. We investigate the problem of accelerating the convergence of consensus processes by adding links to the network. We propose a greedy strategy for undirected networks and further extend our approach to directed networks. Numerical tests verify that our methods perform better than other metric-based approaches.
This thesis considers two dynamic processes on networks and covers performance analysis and optimization, by means of problem formulation, theoretical analysis, case studies and algorithm design. The developed concepts related to network efficiency and robustness provide a better understanding of collective dynamics on complex networks. The applicability of our methodologies bridges theoretical network models and realistic applications, and demonstrates the promising efficacy of network science.","Complex Networks; Network Robustness; Epidemic Spreading; Transport; Network Optimization","en","doctoral thesis","","978-94-6384-1","","","","","","","","","Network Architectures and Services","","",""
"uuid:acdebbdd-009e-4a31-a79b-72fab820b85f","http://resolver.tudelft.nl/uuid:acdebbdd-009e-4a31-a79b-72fab820b85f","Particle fusion in localization microscopy","Heydarian, H. (TU Delft ImPhys/Computational Imaging)","Rieger, B. (promotor); Stallinga, S. (promotor); Delft University of Technology (degree granting institution)","2020","Single molecule localization microscopy (SMLM) shows promise for quantitative structural analysis of subcellular complexes and organelles with a resolution well below the diffraction limit. This superresolution microscopy technique relies on the blinking events of fluorescent molecules that labeled the structure of interest and are spatiotemporally spread over the entire field of view and time. Once hundred thousands frames of these sparse events are recorded, single molecule positions are localized with nanometer precision to form a 2D/3D point set of coordinates. Therefore, SMLM images are not conventional pixelated images but rather spatial point patterns. Photon scarcity and incomplete labeling of the imaged structure, however, limit the resolution that can possibly be achieved by means of SMLM. Moreover, due to experimental limitations the axial resolution is typically ~2-3 times worse than the lateral resolution in conventional setups. Inspired by single particle analysis (SPA) in cryo-electron microscopy (cryo-EM), proper alignment of repeated structures (""particle fusion"") in a 2D/3D SMLM measurement can overcome these limiting factors and so push for isotropic resolution. The existing approaches for particle fusion in SMLM can be classified into customized routines that are borrowed from SPA in EM or methods that use strong prior knowledge about the structure to be reconstructed. While the first approaches are completely ignoring the differences in image formation model between EM and SMLM, the second ones are highly prone to the template-bias problem. 
In this thesis, a dedicated particle fusion pipeline for 2D/3D SMLM data is proposed. The approach properly considers the pointillistic nature of the SMLM modality and takes into account the localization uncertainties. Furthermore, while it does not require any prior knowledge about the underlying structure of the particles, it can incorporate certain features such as symmetry into the fusion process. Owing to the novel all-to-all registration scheme, the application of the devised pipeline to experimental data with very poor labeling density has been successfully demonstrated. The requirements for successful particle fusion for different SMLM modalities, namely PAINT and STORM, have been characterized through extensive studies on 2D and 3D experimental and simulation data. In 2D, an FRC resolution of 3.3 nm on DNA-origami nanostructures has been achieved, and, in 3D, it was demonstrated how the combination of SMLM as a light microscopy technique and a computational approach enables structural analysis of the Nuclear Pore Complex. Future advances of SMLM rely highly on computational routines after data acquisition. Advanced data analysis techniques such as particle fusion can help push the boundaries of structural biology using light microscopy.","single molecule localization microscopy; particle averaging; particle fusion; SMLM","en","doctoral thesis","","","","","","","","2020-03-17","","","ImPhys/Computational Imaging","","",""
"uuid:ee1aa368-86de-44dd-8e14-547709f289e4","http://resolver.tudelft.nl/uuid:ee1aa368-86de-44dd-8e14-547709f289e4","Ceramic material under ballistic loading: A numerical approach to sphere impact on ceramic armour material","Simons, E.C. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Weerheijm, J. (copromotor); Delft University of Technology (degree granting institution)","2020","Armour systems for ballistic protection can be made from many materials. One type of material used in armour systems is ceramic. Ceramic materials, such as alumina and silicon carbide, can be beneficial in an armour system because of their high hardness and relatively low weight. The high hardness of the ceramic potentially causes a projectile to deform heavily and fracture upon impact with the armour, thereby reducing or even eliminating the threat. The ceramic itself may also damage during the interaction. Although ceramics can damage under impact, they contribute to the protective capability of the armour system as long as they exert a force on the projectile to deformand deceleration it. In order to improve an armour system one does not only need to know when the ceramic component fails, but also how it fails. Once the failure mechanisms of the ceramic are known the armour design may be modified to delay or in an ideal scenario even prevent catastrophic failure of the ceramic. This will eventually result in stronger and lighter armour systems.","ceramic; failure; cone crack; finite element method; constitutive modelling","en","doctoral thesis","","978-94-6384-107-8","","","","","","","","","Applied Mechanics","","",""
"uuid:2122e7cc-0f07-40af-88cd-b5b789974562","http://resolver.tudelft.nl/uuid:2122e7cc-0f07-40af-88cd-b5b789974562","Elastodynamic Marchenko inverse scattering: A multiple-elimination strategy for imaging of elastodynamic seismic reflection data","Reinicke Urruticoechea, C. (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Delft University of Technology (degree granting institution)","2020","The Marchenko method offers a new perspective on eliminating internal multiples. Instead of predicting internal multiples based on events, the Marchenko method formulates an inverse problem that is solved for an inverse transmission response. This approach is particularly advantageous when internal multiples generate complicated interference patterns, such that individual events cannot be identified. Moreover, the retrieved inverse transmissions can be used for a wide range of applications. For instance, we present a numerical example of the single-sided homogeneous Green's function representation in elastic media. These applications require a generalization of the Marchenko method beyond the acoustic case. Formally these extensions are nearly straightforward, as can be seen in the chapter on plane-wave Marchenko redatuming in elastic media. Despite the formal ease of these generalizations, solving the aforementioned inverse problem becomes significantly more difficult in the elastodynamic case. We analyze fundamental challenges of the elastodynamic Marchenko method. Elastic media support coupled wave-modes with different propagation velocities. These velocity differences lead to fundamental limitations, which are due to differences between the temporal ordering of reflection events and the ordering of reflectors in depth. Other multiple-elimination methods such as the inverse scattering series encounter similar limitations, due to violating a so-called monotonicity assumption. 
Nevertheless, we show that the Marchenko method imposes a slightly weaker form of the monotonicity assumption because it does not rely on event-based multiple prediction. Another challenge arises from the initial estimate that is required by the Marchenko method. In the acoustic case, this initial estimate can be as simple as a direct transmission from the recording surface to the redatuming level. In the presence of several wave-modes, an acoustic direct transmission generalizes to a so-called forward-scattered transmission, which is not a single event but a wavefield with a finite temporal duration. Former formulations of the elastodynamic Marchenko method require this forward-scattered transmission as an initial estimate. However, in practice, this initial estimate is often unknown. We present an alternative formulation of the elastodynamic Marchenko method that simplifies the initial estimate to a trivial one. This approach replaces the inverse transmission, which is often referred to as a focusing function, with a so-called backpropagated focusing function. This strategy allows us to remove internal multiples; however, unwanted forward-scattered waves persist in the data. This insight suggests that forward-scattered waves cannot be predicted by the Marchenko method: either they are provided as prior knowledge, or they remain unaddressed. The remaining forward-scattered waves may be eliminated by exploiting minimum-phase behavior as an additional constraint. This approach is inspired by recent developments of the acoustic Marchenko method that use a minimum-phase constraint to handle short-period multiples. Generalizing this strategy to the elastodynamic case is challenging because wavefields are no longer described by scalars but by matrices. Hence, we start by analyzing the meaning of minimum-phase in a multi-dimensional sense. This investigation illustrates that the aforementioned backpropagation turns the focusing function into a minimum-phase object.
This insight suggests that, from a mathematical viewpoint, the backpropagated focusing function can be seen as a more fundamental version of the focusing function. Moreover, we present attempts at using this property as an additional constraint to remove unwanted forward-scattered waves. Given the remaining theoretical challenges of the elastodynamic Marchenko method, we analyze the performance of an acoustic approximation. We evaluate the effect of applying the acoustic Marchenko method to elastodynamic reflection data. For this analysis, we look for geological settings where an acoustic approximation could be impactful. The Middle East is a promising candidate because, due to its nearly horizontally-layered geology, elastic scattering effects are weaker for short offsets, which are the main contributors to structural images. Therefore, we construct a synthetic Middle East model based on regional well-log data as well as knowledge about the regional geology. In contrast to field data examples, the synthetic study allows us to include or exclude elastic effects. Hence, we can inspect the artifacts caused by an acoustic approximation. The results indicate that the acoustic Marchenko method can be sufficient for multiple-free structural imaging in geological settings akin to the Middle East.","Marchenko; de-multiple; inverse scattering; elastodynamic waves; multiples","en","doctoral thesis","","978-94-6384-111-5","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:c9193ec9-619b-4d47-b341-62a0e2b7b9b6","http://resolver.tudelft.nl/uuid:c9193ec9-619b-4d47-b341-62a0e2b7b9b6","The elastic anisotropy and mechanical behaviour of the Whitby Mudstone","Douma, L.A.N.R. (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Barnhoorn, A. (copromotor); Delft University of Technology (degree granting institution)","2020","Mudstones play an important role in hydrocarbon exploration and production, carbon capture and storage, and nuclear waste disposal. The high concentration of clay minerals contribute to the high intrinsic anisotropy (e.g., velocity, strength, permeability, and resistivity changes with direction) of mudstones. This high anisotropy complicates, among other things, seismic interpretation for hydrocarbon exploration and production, as well as predictions on the mechanical behaviour of these clayrich rocks. Mudstones are also characterized by a low-permeability matrix, which makes it difficult for fluids to flow through the rock. This impermeable character of mudstones makes them a potential natural seal for long-term CO2 storage and a potential host rock for nuclear waste disposal. For hydrocarbon production, open fractures are needed to enhance the productivity of oil and gas reservoirs, whereas the presence of such fractures can result in unwanted leakage of CO2 or nuclear waste in the subsurface. Fracture formation depends on, among other things, the mechanical properties of the mudstone. It is thus important to understand the elastic anisotropy and mechanical properties of mudstones for successful hydrocarbon exploration and production, and to safely store CO2 and radioactive waste in the subsurface. 
Although mudstones are important in the energy sector, the understanding of their elastic anisotropy and deformation behaviour under various physical conditions is limited, due to their complex character and the lack of laboratory experiments performed on well-preserved samples.","mudstones; elastic anisotropy; mechanical behaviour; ultrasonic velocities; triaxial deformation tests","en","doctoral thesis","","978-94-6366-249-9","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:1e0e9c0b-06f0-4b11-ab5f-40fcfcacbea4","http://resolver.tudelft.nl/uuid:1e0e9c0b-06f0-4b11-ab5f-40fcfcacbea4","Rotating heat pipe assisted annealing","Çelik, M. (TU Delft Large Scale Energy Storage)","de Jong, W. (copromotor); Boersma, B.J. (copromotor); Delft University of Technology (degree granting institution)","2020","Steel is an indispensable material for the sustainable maintenance and progress of modern civilization. Its versatility in terms of mechanical and thermal characteristics, corrosion resistance, raw material availability, energy consumption and recyclability provides a clear advantage in a fast-changing technological landscape. In order to adapt to the changing needs, steel production methods have been evolving and improving over time. One such improvement opportunity in terms of energy efficient production is the ”heat pipe assisted annealing” concept. The cold rolling of steel is a process where the steel strip is cold-worked by means of rolls to achieve thickness reduction and better uniformity. This results in the strain hardening of steel. To reduce the hardness of steel and to render it more workable, it is thermally treated by heating it to a target soaking temperature and then cooling it down. This process is called annealing and it is an energy intensive process. Conventionally, heating is achieved with natural gas fired furnaces, whereas cooling is done using convective gas cooling. With this setting, the thermal energy extracted from the steel strip during the cooling stage is not used in any way. Moreover, none of the energy that is introduced during the heating stage is retained in the final product.An alternative technology for the annealing of steel was developed at Tata Steel IJmuiden R&D with the objective of recovering and using some of the heat removed during the cooling stage and thus, achieving more energy efficient annealing. 
With this technology, called heat pipe assisted annealing, the cooling strip is thermally linked to the heating strip with multiple rotating heat pipes. In this way, each heat pipe transfers a certain amount of heat from the cooling strip to the heating strip. Only the final heating and cooling of the steel strip are carried out in a conventional way. This concept is applicable to relatively low temperature (sub-critical) annealing where the cooling rate is not crucial. Therefore, packaging steel is a good candidate for the application of this technology. A rotating heat pipe is a highly efficient heat transfer device: a wickless hollow cylindrical vessel rotating around its symmetry axis and containing a fixed amount of working fluid. The working fluid acts as a thermal energy carrier, transporting heat from one end of the heat pipe to the other. This basically occurs in four steps: (i) heat added to the evaporator part of the heat pipe causes the evaporation of the liquid, (ii) vapor travels to the condenser end of the heat pipe due to the pressure difference, (iii) vapor condenses in the condenser section where heat is removed from the heat pipe, (iv) liquid returns to the evaporator with the help of the static pressure head and the centrifugal force induced by rotation. The heat pipe assisted annealing concept has been patented and subsequently further studied by Tata Steel Europe R&D. A water-filled rotating heat pipe test rig integrated with steel strips provided the bulk of the prior work. This test rig served as the proof-of-principle installation and it showed that heat can be transported from a hot strip to a cold one with a rotating heat pipe.
In this context, several gaps have been identified to further acquire knowledge on the system components, the concept performance and feasibility. This thesis focuses on four main aspects of the fundamentals and the feasibility of the heat pipe assisted annealing concept: (i) contact heat transfer between the steel strip and the rotating heat pipe, (ii) computationally efficient modelling of the interior dynamics of a rotating heat pipe, (iii) applicable working fluids for the high temperature range, (iv) behavior of the heat pipe assisted annealing system as a whole. These aspects are studied from a thermal engineering perspective. The heat pipe assisted annealing concept relies on the effective transfer of heat from the strip to the rotating heat pipe and vice versa. Therefore, it is important to understand the underlying physics governing this heat transfer and to be able to predict the heat transfer rate for possible configurations. In this context, in Chapter 2 of this thesis, the contact heat transfer between a steel strip and a rotating heat pipe is investigated both experimentally and numerically. The numerical model is based on first principles. It finds the thickness and the pressure of the gas layer between the strip and the heat pipe and subsequently considers different heat transfer mechanisms. The experimental work was carried out on the proof-of-principle test rig. The model is validated with the experimental results. The contact heat transfer coefficient in the uniform region varied between 4,000 and 20,000 W/(m2.K). The results showed an increase in the contact heat transfer with decreasing strip velocity and increasing radial stress. For the considered cases, conduction through the gas layer was the dominant heat transfer mechanism. Additionally, a simplified expression has been developed for the calculation of contact heat transfer through multiple regression analysis.
The modelling of a rotating heat pipe is a crucial step for the detailed study of the heat pipe assisted annealing technology. Although modelling of rotating heat pipes has been the subject of many studies in the literature, these models are not computationally efficient enough to allow for the simultaneous modelling of multiple heat pipes linked to each other with strips. On this ground, in Chapter 3, a novel computationally efficient engineering model describing the transient behavior of the heat pipe is developed. In this model, the liquid and the vapor cells are allowed to change size radially in order to allow for the tracking of the liquid/vapor interface without the need for fine meshing or re-meshing. The model is also adapted to capillary-driven heat pipes. The model is validated with experimental and numerical studies from the literature. The deviation is computed to be around 2% with the numerical and analytical studies and around 6% with the experimental study. The heat pipe assisted annealing concept requires the operation of heat pipes within a temperature range of 25 °C to 700 °C. In order to operate within this range, different working fluids need to be used for different temperature ranges due to constraints of vapor pressure, lifetime, performance and safety. These working fluids are studied in Chapter 4. First, a selection of the working fluids is made based on a literature review. This selection yielded water, Dowtherm A, phenanthrene and cesium. Then, a lifetime test has been carried out with thermosyphons to test the stability of phenanthrene. At the end of a 3-month-long test at 460 °C, thermal decomposition of phenanthrene was observed. However, these tests should be repeated with better initial vacuum and at multiple temperatures. Finally, Dowtherm A has been used in a rotating heat pipe setup to test its applicability and performance.
It has been shown that Dowtherm A is suitable for use in a rotating heat pipe in the designated temperature range in terms of performance, provided that annular flow is avoided. With the knowledge gathered from the previous chapters of this thesis, a model of the heat pipe assisted annealing line has been developed in Chapter 5. The aim of this model is to quantify the energy efficiency advantage brought by the concept for different numbers of heat pipes and to understand the behavior of the system as a whole. The simulations were run for a fixed plant layout with a varying number of heat pipes and an average wrap angle of 104°. The energy recoveries for the simulations run for a strip of 0.25 mm and a line speed of 6.133 m/s were 76.5%, 73.4%, 69.4% and 63.9% for a total number of 90, 75, 60 and 45 heat pipes, respectively. From the simulation results it follows that cesium heat pipes are more efficient than organic heat pipes. Finally, the simulation results showed that the thermal cycle requirements can be satisfied with this new technology.","rotating heat pipe; annealing; energy efficiency; heat transfer; fluid dynamics","en","doctoral thesis","","978-94-6402-114-1","","","","","","","","","Large Scale Energy Storage","","",""
"uuid:4b4f9f96-237e-421b-82f4-97b1393ae507","http://resolver.tudelft.nl/uuid:4b4f9f96-237e-421b-82f4-97b1393ae507","Towards Cyber-secure Intelligent Electrical Power Grids: Vulnerability Analysis and Attack Detection","Pan, K. (TU Delft Intelligent Electrical Power Grids)","Palensky, P. (promotor); Mohajerin Esfahani, P. (copromotor); Delft University of Technology (degree granting institution)","2020","The digital transformation of power systems has introduced a new challenge for robustness: cyber security threats. Motivated by the feasibility of a potent attack (e.g., the Stuxnet worm attack and the one in the hacker-caused Ukraine blackout) that it can be equipped with extensive system knowledge, vast attack resources to manipulate multiple measurements (multivariate attacks) and also strong capability to keep stealthy from possible detectors, the thesis work has built a framework capable of both vulnerability analysis and attack detection. Security index quantifying attack resources was proposed and the attack scenario was extended to subsume the combined data integrity and availability attacks. Realistic aspects of limited adversarial knowledge or resources were considered in the overall cyber risk assessment. Co-simulation tool specially for cyber security analysis has been developed, capturing the character of a cyber-physical system of intelligent power grids. A diagnosis filter was designed with a scalable and robust feature to detect all the plausible multivariate attacks in an admissible set by exploiting the attack impact on the system dynamics, with non-zero transient or non-zero steady-state residual output. The yielding Nash equilibrium implies that the proposed diagnosis filter is not based on a conservative design in the sense of its long-term behavior. In the end, this thesis also tried to implement the diagnosis filter in a real or simulated power system. 
A further robustification method was proposed to mitigate the effects of possible model mismatches on the residual output by using simulation data to extract the model mismatch signatures, which has contributed to a novel data-assisted model-based attack detection approach.","combined attacks; disruptive multivariate intrusions; vulnerability assessment; cyber risk analysis; robust attack detection","en","doctoral thesis","","978-94-028-1975-5","","","","","","2020-12-31","","","Intelligent Electrical Power Grids","","",""
"uuid:915e8fd9-b729-4920-a938-785ecd84e5ee","http://resolver.tudelft.nl/uuid:915e8fd9-b729-4920-a938-785ecd84e5ee","Compact Thermal Diffusivity Sensors for On-Chip Thermal Management","Sonmez, U. (TU Delft Electronic Instrumentation)","Makinwa, K.A.A. (promotor); Sebastiano, F. (copromotor); Delft University of Technology (degree granting institution)","2020","Today’s systems-on-chip (SOCs) and microprocessors are complex systems that require multiple temperature sensors to monitor temperature variations in multiple spots on a single silicon die. For such thermal management applications, specialized compact and fast temperature sensors are required. This is necessary because executing an intensive process on an SoC can cause local hotspots in a short amount of time, which can compromise reliability. Such temperature sensors should also be compatible with advanced nanometer CMOS technologies, since complex SoCs and microprocessors are typically implemented in aggressively scaled CMOS processes.","thermal diffusivity (TD); phase domain sigma delta ADC; Temperature sensor; VCO-based sigma-delta modulator; smart sensors","en","doctoral thesis","","978-94-028-1971-7","","","","","","","","","Electronic Instrumentation","","",""
"uuid:dc5f29f8-7d53-4a3c-83ee-a4d562fbe843","http://resolver.tudelft.nl/uuid:dc5f29f8-7d53-4a3c-83ee-a4d562fbe843","Design and characterization of catalysts with isolated metal sites","Osadchii, D. (TU Delft ChemE/Catalysis Engineering)","Kapteijn, F. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2020","This dissertation is devoted to the attractive and rapidly developing field of heterogeneous catalysts with isolated metal sites. The following research questions served as the source of inspiration for it: • How to design a catalyst with isolated metal sites? • How to synthesize and develop a catalyst with isolated metal sites? • How to characterize a catalyst with isolated metal sites? In the first part of this dissertation (Chapters 2-3) the route for design, synthesis, characterization and further modification of heterogeneous catalysts with isolated sites is described, using the development of a catalyst for direct conversion of methane to methanol as an example. The second part (Chapters 4-5) investigates the applicability of X-ray based analysis techniques (primarily X-ray photoelectron spectroscopy (XPS) and X-ray absorption spectroscopy (XAS)) for the characterization of such catalysts.","","en","doctoral thesis","","","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:1192cfa9-b9a8-4439-a2d7-b4dd64910ccb","http://resolver.tudelft.nl/uuid:1192cfa9-b9a8-4439-a2d7-b4dd64910ccb","Automated seismic survey design and dispersed source array acquisition","Caporal, M. (TU Delft Applied Geophysics and Petrophysics)","Blacquière, G. (promotor); Wapenaar, C.P.A. (promotor); Delft University of Technology (degree granting institution)","2020","Reflection seismology is nowadays the preferred technique in the oil and gas industry to estimate the properties of the Earth's subsurface. The method typically includes a series of procedures that fit in three broad categories: • seismic data acquisition; • data processing and imaging; • interpretation and reservoir characterization.
This thesis mainly focuses on the first category and aims at improving both the operational productivity of seismic surveys, in terms of costs, and the quality of the data, in terms of signal-to-noise ratio and frequency content. Hereafter, we present a novel approach to seismic data collection named Dispersed Source Array (DSA) acquisition. It is proposed to replace traditional broadband sources with a set of devices dedicated to different and complementary frequency bands. Modern multiple-driver loudspeaker systems are based on the same key concept, and their improved performance demonstrates its potential.
During field operations, it is often impossible to accurately implement nominal survey geometries in practice. Frequently, acquisition geophysicists are required to cope with unforeseen circumstances such as obstacles in the field and inaccessible or restricted areas. These complications may compromise the quality of the data or lead to delays, and thus extra expenses, during acquisition. In this thesis, we propose two automated approaches to survey design focused on avoiding spatial discontinuities in the recorded
data and on guaranteeing adequate data quality. The two methods are based on the reorganization of regular (centralized) and irregular (decentralized) source acquisition grids, respectively, and provide a practical acquisition plan for seismic crews. In this thesis, based on theoretical considerations and numerical data inversion and imaging examples, the feasibility of Dispersed Source Array acquisitions is demonstrated. Additionally, we show that it is possible to reliably recover subsurface information based on irregularly sampled datasets. We show how, despite the significant mismatch between baseline and monitor survey geometries, decentralized DSA surveys are also suitable for time-lapse studies.
Step 1: Measure current disruption impacts.
Step 2: Predict future disruption frequencies and impacts.
Step 3: Develop and evaluate measures aimed at controlling these disruption impacts.","","en","doctoral thesis","TRAIL Research School","978-90-5584-261-2","","","","TRAIL Thesis Series no. T2020/3, the Netherlands Research School TRAIL","","2020-02-19","","","Transport and Planning","","",""
"uuid:f22c0da3-9d85-4c3a-9c07-949e242869d6","http://resolver.tudelft.nl/uuid:f22c0da3-9d85-4c3a-9c07-949e242869d6","Simultaneous joint migration inversion as a high-resolution time-lapse imaging method for reservoir monitoring","Qu, S. (TU Delft ImPhys/Acoustical Wavefield Imaging; TU Delft ImPhys/Medical Imaging)","Verschuur, D.J. (promotor); de Jong, N. (promotor); Delft University of Technology (degree granting institution)","2020","During the past decade, time-lapse seismic technology has been widely applied in hydrocarbon reservoir management. It is a very powerful method to obtain information on reservoir changes in the inter-well regions. This information helps to identify bypassed hydrocarbons and extend the economic life of a field. In a typical scenario, one baseline survey and subsequent monitoring surveys are acquired over time. The survey geometry is usually exactly repeated and well-sampled to mitigate acquisition effects on the next steps in the process. By processing and comparing all the datasets, some physical changes, e.g. reflection amplitude and travel-time changes, can be estimated. These time-lapse changes are then used to calculate interpretable parameter changes in dynamic reservoir rock and fluid properties, e.g. pore pressure and fluid saturation.
In a conventional time-lapse processing workflow, all multiples are first removed from the data, and then an independent imaging process is applied to each dataset, given the same propagation velocity model. Later on, to compensate for the ignored velocity variations between different surveys, a time-shift map (travel-time differences) is estimated from the calculated images and then applied back to them, yielding the final reflection amplitude differences. However, this conventional processing strategy is usually sensitive to the success of multiple removal and to survey repeatability, and also requires well-sampled surveys providing proper illumination. Moreover, artifacts are often generated in addition to the actual time-lapse changes due to the non-repeatable uncertainties during the independent processing steps. Regarding the time-shift-map tool, the relative velocity changes derived from the time-shift map are not the actual velocity changes, due to the embedded local 1D subsurface assumption.
In order to relax these rigid requirements and have a better velocity change indicator, we propose Simultaneous Joint Migration Inversion (S-JMI) as an effective time-lapse tool for reservoir monitoring, which combines a simultaneous time-lapse data processing strategy with the Joint Migration Inversion (JMI) method. JMI is a full wavefield inversion method that explains the measured reflection data using a parameterization in terms of reflectivities and propagation velocities. JMI is able to make use of multiples and at the same time take velocity variations between surveys into account. The simultaneous strategy, which means fitting all the datasets simultaneously, allows the baseline and monitor parameters to communicate and compensate for each other dynamically during inversion via L2-norm constraints, thus reducing the non-repeatable uncertainties during the time-lapse processing workflow. As a result, more accurate time-lapse differences can be achieved by S-JMI, compared to inverting each dataset independently. Moreover, in order to get more localized time-lapse velocity differences, we further extend the regular S-JMI to a robust high-resolution S-JMI (HR-S-JMI) process by making a link between the reflectivity/reflectivity-difference and velocity/velocity-difference during inversion. With a complex synthetic example based on the Marmousi model, we demonstrate that the performance of the time-shift-map-based method, sequential JMI, the regular S-JMI and HR-S-JMI improves in this particular order.
Next, we further demonstrate the effectiveness of the proposed method in more real-life cases with a highly realistic synthetic model based on the Grane field, offshore Norway, and a time-lapse field dataset from the Troll Field. Moreover, in order to investigate the feasibility of HR-S-JMI in practice, several numerical experiments based on the realistic Grane model are conducted, regarding the following aspects: noise, including random noise and coherent noise caused by the acoustic assumption; the quality of time-lapse surveys, including sparse surveys, non-repeated surveys, and Ocean Bottom Node (OBN) vs streamer (different types of monitoring surveys); non-repeated sources, including source positioning errors and non-repeated source wavelets; spatial weighting operators in the L2-norm constraints; and sensitivity to weak time-lapse effects. These experiments show that HR-S-JMI is very robust to random noise, coherent noise, survey sparsity, survey non-repeatability, source positioning errors and source wavelet discrepancies. Furthermore, HR-S-JMI remains effective when the spatial weighting operators in the L2-norm constraints are largely relaxed, and HR-S-JMI is capable of detecting weak time-lapse changes (e.g. velocity changes down to +/- 35 m/s). These features make it a suitable time-lapse processing solution for cost-effective (semi-)continuous monitoring, termed i4D survey technology, in which inexpensive localized and sparse surveys are employed between the conventional full-field surveys. The simultaneous strategy of S-JMI allows the full-field survey information to compensate for the poor illumination of the in-between sparse surveys during processing. Furthermore, calendar-time constraints are proposed and applied to the parameter differences between the baseline and monitors along the calendar-time axis, by taking advantage of the feature that time-lapse effects usually develop gradually over time.
With a complex synthetic example based on the Marmousi model, we demonstrate that S-JMI is a promising tool to process datasets acquired from (semi-)continuous monitoring, like an i4D survey.
In conclusion, we propose high-resolution simultaneous JMI (HR-S-JMI) as an effective time-lapse processing tool for the following main reasons:
• HR-S-JMI is able to make use of multiples to extend the illumination of the subsurface, instead of removing them;
• HR-S-JMI is an extended imaging process, including automatic velocity updating. Therefore, it takes velocity variations between surveys directly into account;
• HR-S-JMI is a good indicator of velocity changes; it can invert for accurate, high-resolution time-lapse velocity changes;
• HR-S-JMI is robust to the uncertainties existing in the monitoring surveys, e.g. noise, sparsity, non-repeatability, source positioning errors, source wavelet discrepancy, etc;
• HR-S-JMI has the ability to detect weak time-lapse changes (velocity changes down to +/- 35 m/s).
Chapter 3 evaluates the continuous production of enzymes using different carbon sources under carbon-limited conditions. It was found that glucose has a positive influence on the production of enzymes that can catalyse the hydrolysis of p-nitrophenyl-β-D-glucopyranoside (PNPGase). Sucrose and fructose seem to inhibit PNPGase synthesis; however, these substrates could also have a positive influence on the synthesis of other enzymes not evaluated in this project. Cells can take up glucose without the need to synthesize extracellular enzymes like PNPGase. The increase in the production of PNPGase during the continuous culture using glucose as the carbon source indicates the presence of inducers. It was also discovered in this project that polysaccharides were present in the supernatant of all conditions using glucose, fructose/glucose and sucrose (Chapter 4 and Chapter 5). This suggests that the possible inducers could have come from fragments of the extracellular polysaccharides.
Sugar analysis showed the presence of sugar with the same retention time as gentiobiose in the supernatant of the conditions using glucose as the carbon source, which could be a fragment from polymers released from the cell wall. Gentiobiose could be acting as an inducer of enzymes. In addition, a mechanism was also proposed for continuous PNPGase production under glucose-limited conditions assuming that PNPGase includes beta-glucosidase (Chapter 4).
The carbon sources used under carbon-limited conditions influenced the PNPGase productivity and possibly the whole enzymatic cocktail secreted by the fungus. For this reason, shotgun proteomics and SDS-PAGE analyses were performed on the proteins present in the supernatant of the conditions using glucose, fructose/glucose and sucrose (Chapter 4). The shotgun proteomics analysis suggested that the different carbon sources used led to the production of different extracellular proteins, including several uncharacterized proteins, which can also include different enzymes. This supports the hypothesis that different carbon sources easily assimilated by the cells could lead to the synthesis of different inducers (fragments of extracellular polysaccharides), which could in turn induce the synthesis of different enzymes under carbon-limited conditions.
Extracellular polysaccharides were the by-products discovered in this project during the production of enzymes under carbon-limited conditions. The behaviour of intracellular metabolites (glycolysis, citric acid cycle, pentose phosphate pathway and nucleotides) was evaluated under four different conditions in duplicate during the production of extracellular polysaccharides by Trichoderma harzianum under carbon-limited conditions (Chapter 5). This chapter has provided the first step for the optimization of the production of extracellular polysaccharides and the information about the behaviour of intracellular metabolites using this wild type strain is essential to the development of optimal strains.","","en","doctoral thesis","","978-94-028-1947-2","","","","","","","","","OLD BT/Cell Systems Engineering","","",""
"uuid:77bc6ed7-ad54-4d10-8765-eb089d174ccd","http://resolver.tudelft.nl/uuid:77bc6ed7-ad54-4d10-8765-eb089d174ccd","Exploring the use of AOx for organic synthesis and biofuel cells","Pedroso de Almeida, T. (TU Delft BT/Biocatalysis)","Hollmann, F. (promotor); Riul, Antonio (promotor); Delft University of Technology (degree granting institution)","2020","Chapter 1 gives a general introduction to catalysts. Chapter 2 presents a cascade system combining an alcohol oxidase with the enzyme benzaldehyde lyase. The project succeeded in producing the compound of interest in a single vessel via two steps: first, alcohol was converted into aldehyde by alcohol oxidase and, subsequently, the aldehyde was converted into the compounds of interest. Crystals form in the final product, making the removal of the material of interest simpler and more effective.
In Chapter 3, alcohol was again converted into aldehyde, now in a flow system using the enzyme aryl alcohol oxidase. As the oxidase requires oxygen as an electron acceptor, the low solubility of O2 in the buffer solution is a limiting factor. A flow system becomes more interesting as it can overcome this limitation by improving oxygen mass transport in the buffer solution due to the increased contact area between the two fluids. In a standard system, vigorous agitation is required to achieve similar results, which normally compromises the enzyme structure due to the mechanical stress generated by the aeration strength. The expected impairment of the enzyme's tertiary structure was not observed for the aryl alcohol oxidase enzyme.
The findings in Chapter 3 about the mechanical resistance of aryl alcohol oxidase under vigorous agitation motivated us to scale up the system from 50 mL to 1 L, presented in Chapter 4. It was possible to obtain high catalytic frequency values in this new configuration, but another challenge appeared: the low solubility of both substrate and product in an aqueous medium. This challenge was overcome by a biphasic liquid system composed of an organic phase in the upper portion and the buffer in the lower part of the solution. In this system, the aqueous phase was fed with the substrate by the organic phase at all times, and the product was removed. The encouraging results showed that aryl alcohol oxidase is a strong candidate for industrial applications.
In Chapter 5, we evaluated carbon compounds deposited on electrodes to produce hydrogen peroxide locally for a peroxidase; the idea is to achieve halogenation of a model compound by combining catalytic and electrochemical techniques. Here, fine-tuning the voltage or current on a gas diffusion electrode covered with carbon nanotubes enabled the formation of hydrogen peroxide. Since the CiVCPO (vanadium chloroperoxidase from Curvularia inaequalis) enzyme used in this project needs hydrogen peroxide, the system is interesting for keeping the enzymatic reaction always working at its maximum efficiency. Finally, Chapter 6 shows the synthesis of carbon-based materials to support enzyme immobilization on electrodes, bringing a new possibility of using some of the enzymes studied here in biofuel cells and third-generation biosensors. The main idea is to work with enzymes without mediators, using these carbon-based materials to transfer electrons from the enzymes to the electrodes. Higher currents were achieved with the newly synthesized carbon-based materials and glucose oxidase, paving the way to design devices with faster electrochemical responses.","","en","doctoral thesis","","978-94-6402-134-9","","","","","","","","","BT/Biocatalysis","","",""
"uuid:ebc2f602-a65b-491c-805f-9fb4f37cb104","http://resolver.tudelft.nl/uuid:ebc2f602-a65b-491c-805f-9fb4f37cb104","Data-driven Analysis and Modeling of Passenger Flows and Service Networks for Public Transport Systems","Luo, D. (TU Delft Transport and Planning)","van Lint, J.W.C. (promotor); Cats, O. (promotor); Delft University of Technology (degree granting institution)","2020","Public transport (PT) plays an increasingly important role in solving mobility challenges, especially in densely populated metropolitan areas. Further improving PT systems requires more advanced planning and operations. Fortunately, the considerable amount of data that has become increasingly available for PT systems offers an opportunity to address this challenge. However, how these data can be effectively used to achieve this goal remains an unresolved question in the scientific literature. More research is therefore needed to bridge this gap in order to advance PT systems for addressing mobility challenges. To this end, this dissertation is focused on developing methods and models for translating high-volume data from various sources into novel knowledge and insights that can be used to improve PT planning and operations. This dissertation first examines how to obtain the onboard occupancy of PT vehicles by integrating all three of the data sources mentioned above. Second, this dissertation deals with the issue of high dimensionality in large-scale passenger flows. Third, we propose a k-means-based method to cluster PT stops for constructing zone-to-zone OD matrices. Fourth, this dissertation presents a new method for analyzing the accessibility of PT service networks based on a novel network science approach. Last, we investigate whether passenger flow distribution can be estimated solely based on network properties in PT systems.","","en","doctoral thesis","TRAIL Research School","978-90-5584-258-2","","","","TRAIL Thesis Series no. T2020/2, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:d8b89616-9cb6-4c0d-8f6b-a58c93614a25","http://resolver.tudelft.nl/uuid:d8b89616-9cb6-4c0d-8f6b-a58c93614a25","On-chip reconstitution of an FtsZ-based divisome for synthetic cells","Fanalista, F.","Dekker, C. (promotor); Delft University of Technology (degree granting institution)","2020","Cell replication is a fascinating biological process that ensures the proliferation of a species via division of a mother cell into two daughter cells. In bacteria, this process is performed by a complex protein machinery named the Z-ring, which assembles at the cell mid-plane and promotes progressive membrane constriction down to division. A central element of this machinery is FtsZ, a protein that polymerizes into filaments that constitute the main scaffold of the ring. Even though this protein was found to be essential, many aspects concerning the Z-ring assembly, the role of FtsZ-associated proteins, and the contribution of FtsZ to the constricting force in the division process are not yet clearly determined. To address these questions, in this thesis we aimed to reconstitute a minimal divisome in vitro, in order to isolate single components from the complex cellular environment and study their functionality in a bottom-up way. Following this approach, we aimed to better understand the dynamics and the parameters involved in FtsZ filament association into bundles, and to verify whether such structures are capable of generating a force on the lipid membrane.","FtsZ; synthetic cell; microfluidics; bottom-up biology","en","doctoral thesis","","978-90-8593-433-2","","","","","","","","","BN/Cees Dekker Lab","","",""
"uuid:94c8973e-bf3d-4a12-86e2-24801ed008c9","http://resolver.tudelft.nl/uuid:94c8973e-bf3d-4a12-86e2-24801ed008c9","Optimal Design of Structures and Variable Stiffness Laminates with Strength and Manufacturing Constraints","Hong, Z. (TU Delft Aerospace Structures & Computational Mechanics)","Bisagni, C. (promotor); Turteltaub, S.R. (promotor); Delft University of Technology (degree granting institution)","2020","Reducing weight and improving strength of structures have always been major design goals in the aerospace industry since its inception. In particular, strength directly affects the safety and serviceability of an airplane and is therefore of great importance in structural design. Improving the strength of an airframe can effectively increase its damage tolerance in different failure modes, such as fracture, fatigue and impact damage. To pursue these goals, optimization techniques, which aim to seek for the “best solution” in mathematical models, can be applied in the structural design process. In addition, usage of lightweight carbon fiber reinforced composite laminates further serves this purpose....","Efficient optimization; Stress constraint; Curvature constraint; Variable stiffness laminate; Strength optimization","en","doctoral thesis","","978-94-028-1934-2","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:ad392a9e-5490-49df-a8db-1e64715c8130","http://resolver.tudelft.nl/uuid:ad392a9e-5490-49df-a8db-1e64715c8130","Nanofabricated tips for device-based and double-tip scanning tunneling microscopy","Leeuwenhoek, M. (TU Delft QN/Groeblacher Lab)","Groeblacher, S. (promotor); Allan, Milan P. (copromotor); Delft University of Technology (degree granting institution)","2020","","double tip; two probe; scanning tunneling microscope; local gate; nanofabrication; quantum materials","en","doctoral thesis","","978-90-8593-432-5","","","","","","2020-07-20","","","QN/Groeblacher Lab","","",""
"uuid:db6f4a31-efb9-4917-8f14-60837739a14b","http://resolver.tudelft.nl/uuid:db6f4a31-efb9-4917-8f14-60837739a14b","Energy in Dwellings: A comparison between Theory and Practice","van den Brom, P.I. (TU Delft Housing Quality and Process Innovation)","Visscher, H.J. (promotor); Meijer, A. (copromotor); Delft University of Technology (degree granting institution)","2020","Reduction of energy consumption is currently high on the political agenda of many countries. Because buildings account for a significant share of total energy consumption, they represent a large energy-saving potential. For this reason the Energy Performance of Buildings Directive (EPBD) was introduced. This directive introduced a mandatory energy performance certificate for all buildings in Europe (implemented in the Netherlands as the energy label). The initial aim of this directive was to make people aware of the energy efficiency of the building that they buy or rent.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-253-6","","","","A+BE | Architecture and the Built Environment No 3 (2020)","","","","","Housing Quality and Process Innovation","","",""
"uuid:cebc3c19-40b1-4b6d-8acd-e30cbd6f0fda","http://resolver.tudelft.nl/uuid:cebc3c19-40b1-4b6d-8acd-e30cbd6f0fda","On the Design and Analysis of Micro-metric Resolution Arrays in Integrated Technology for Near-Field Dielectric Spectroscopy","Thippur Shivamurthy, H. (TU Delft Tera-Hertz Sensing)","Neto, A. (promotor); Spirito, M. (promotor); Delft University of Technology (degree granting institution)","2020","Medical procedures and treatments have a great impact on the quality of life as well as on health care costs. An increasing number of cases of skin cancer is documented by the International Agency for Research on Cancer (IARC) [80, 81] every year. The most commonly used surgical technique for skin cancer treatment is Mohs surgery, whereby thin layers of skin containing cancer tissue are removed until only cancer-free tissue remains. Reducing the number of iterations, and in turn the surgery time, during a Mohs surgery [82] would reduce the patient's discomfort and medical costs. Having a fast, accurate and non-invasive diagnostic tool for the detection of anomalies would provide additional assistance during Mohs surgery to assess the depth of tissue removal, resulting in fewer iterations. On the other hand, the horticulture sector represents a very large market in several parts of the world. Enabling techniques to better assess the quality of the product during the entire supply chain would result in high-quality deliverables. Moreover, characterization of materials/objects with high accuracy is applicable to other scenarios that can be addressed by dielectric spectroscopy (e.g., surface quality inspection), which plays a role in many fabrication processes. To address these needs, this work aims at developing models based on the spectral method of moments to characterize multilayered samples for two near-field systems: first, an open-ended coaxial probe and, second, a matrix of near-field permittivity sensors in integrated technology.","Capacitive Sensors; Coaxial Probes; Sensitivity; Pin-Patch elements; Method of Moments; Green's function; CMOS Technology","en","doctoral thesis","","978-94-6384-114-6","","","","","","","","","Tera-Hertz Sensing","","",""
"uuid:7e916f03-09cc-4510-9914-03a44b339462","http://resolver.tudelft.nl/uuid:7e916f03-09cc-4510-9914-03a44b339462","High Performance Seed-and-Extend Algorithms for Genomics","Ahmed, N. (TU Delft Quantum & Computer Engineering)","Bertels, K.L.M. (promotor); Al-Ars, Z. (promotor); Delft University of Technology (degree granting institution)","2020","Recent advances in DNA sequencing technology have opened new doors for scientists to use genomic data analysis in a variety of applications that directly affect human lives. However, the analysis of unprecedented volumes of sequencing data being produced represents a formidable computational challenge. The conventional CPU-only computing paradigm is not sufficient to analyze exponentially growing sequencing data in a cost-effective and timely manner. Heterogeneous computing systems with GPU and FPGA based accelerators have become easily accessible and are increasingly being used to process massive amounts of data due to their better performance-to-cost ratio than CPU-only platforms. Furthermore, highly optimized analysis algorithms are required to extract the maximum computational power of these computing systems.","","en","doctoral thesis","","978-94-6384-108-5","","","","","","","","Quantum & Computer Engineering","","","",""
"uuid:dd1f7899-38ee-4c78-a5b0-a6fa92c90f56","http://resolver.tudelft.nl/uuid:dd1f7899-38ee-4c78-a5b0-a6fa92c90f56","Solid oxide fuel cells for ships: System integration concepts with reforming and thermal cycles","van Biert, L. (TU Delft Ship Design, Production and Operations)","Aravind, P.V. (promotor); Visser, K. (copromotor); Delft University of Technology (degree granting institution)","2020","For decades, ships have been propelled by diesel engines. However, there are increasing concerns about their environmental impact. Fuel cells can provide an alternative to convert fuels directly into electricity, with high efficiencies and without hazardous emissions.
Solid oxide fuel cells have a ceramic membrane, which functions at high temperatures. This makes them less prone to contamination, allows internal conversion of various fuels and enables integration with thermal cycles to achieve high combined efficiencies.
So are ships and solid oxide fuel cells a match made in heaven? This dissertation breaks ground on the challenges and opportunities regarding the application of solid oxide fuel cells in ships, internal fuel reforming and integration with thermal cycles.
How do solid oxide fuel cells compare to other power plants? How can we compare different system integration options? Does internal fuel reforming affect the efficiency and lifetime? Can cell experiments provide useful information on overall system performance? These are among the questions addressed in this dissertation.
The reader will learn how solid oxide fuel cell integration with reforming and thermal cycles can provide power on ships with high efficiency and reliability, no pollutant emissions and low noise, but also about the challenges and opportunities of this potentially budding love.","solid oxide fuel cells; alternative fuels; maritime application; combined cycles; dynamic modelling; direct internal reforming; kinetics","en","doctoral thesis","","978-94-6366-248-2","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:c50eb76c-b2ea-49cf-9873-5c929ff496dc","http://resolver.tudelft.nl/uuid:c50eb76c-b2ea-49cf-9873-5c929ff496dc","Deposition and erosion of silt-rich sediment-water mixtures","te Slaa, S. (TU Delft Environmental Fluid Mechanics)","Winterwerp, J.C. (promotor); He, Qing (promotor); Delft University of Technology (degree granting institution)","2020","From a granulometric point of view, sediment can be classified as sand, silt and clay. Silt is thereby defined as sediment with particle sizes equal to, or larger than, 2 μm and smaller than 63 μm, with quartz or feldspar as the base mineral. Note that quartz and feldspar particles can be smaller than 2 μm. To date, our knowledge of the deposition and erosion processes of silt and silt-rich sediment-water mixtures is small compared to that of their sandy or clayey counterparts, hampering our understanding of the large-scale morphological behaviour of silt-dominated systems. The most important difference between clay and silt is that clay consists of clay minerals which have cohesive properties, and as a result, the erosion and deposition of clay beds is influenced by cohesion. The behaviour of cohesive sediment in suspension and in the bed is influenced by flocculation, permeability, effective stress and rheological properties, which are related to the electro-chemical properties of the base minerals. Silt particles do not have cohesive properties, but there are indications that their erosion behaviour can be apparently cohesive (Roberts et al., 1998; Van Maren et al., 2009a). Permeability effects are likely to play a role in the behaviour of silt-water mixtures due to the small particle sizes of silt. Such effects result from a difference in timescales between the forcing and the response of the bed and result in apparently cohesive behaviour. Examples are the development of the bed strength with increasing hydrodynamic forcing or the dissipation of overpressures within a compacting silt bed.
Such behaviour is characteristic of cohesive material rather than of granular material such as silt, and is therefore referred to as apparently cohesive behaviour. In the existing literature, the physical processes that control the behaviour of silt are mostly described qualitatively, and silt-specific formulations for hindered settling, compaction and erosion do not exist. Through this thesis, quantitative insights into the physical processes controlling the behaviour of silt-water mixtures have been derived. The overall objectives of this thesis were to i) determine the hindered settling behaviour of silt-water mixtures, ii) determine the deposition and compaction behaviour of silt beds, and iii) determine the erosion behaviour of silt beds.","","en","doctoral thesis","","978-94-6375-784-3","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:8d2b0432-7ebe-42c0-b231-34f1a08bd779","http://resolver.tudelft.nl/uuid:8d2b0432-7ebe-42c0-b231-34f1a08bd779","Evaluating Hosting Provider Security Through Abuse Data and the Creation of Metrics","Noroozian, A. (TU Delft Organisation & Governance)","van Eeten, M.J.G. (promotor); Delft University of Technology (degree granting institution)","2020","Hosting providers are theoretically in a key position to combat cybercrime, as they are often the entities renting out the resources that end up being abused by miscreants. Yet, notwithstanding hosting providers' current security measures to combat abuse, their responses vary widely. In many cases the response is ineffective, as empirical evidence suggests. To incentivize hosting providers to combat cybercrime and abuse more effectively, however, we first require tools by which we can tell more and less secure hosting providers apart. These may then be used to guide technical and policy questions surrounding the security of online hosting, and to provide empirical grounding to discussions about which potential solutions may move the hosting market towards more desirable security outcomes. Therefore, this book explores ways by which the security of hosting providers may be measured through empirical data on cybercrime and the creation of metrics. The book explores questions of how such metrics may be constructed, to what extent they may be useful, and what the wider consequences of provider security negligence may be.","Hosting Provider; Abuse; Metrics; Cybercrime; Incentives; Security; Bullet-Proof Hosting; DDoS; Booter; Victim; Patching; Incident Remediation","en","doctoral thesis","","978-90-6562-445-1","","","","","","","","","Organisation & Governance","","",""
"uuid:388a09c0-1619-4947-9d82-f27eedf88155","http://resolver.tudelft.nl/uuid:388a09c0-1619-4947-9d82-f27eedf88155","Towards an Optomechanical Quantum Memory: Preparation and Storage of Non-Classical States in High-Frequency Mechanical Resonators","Wallucks, A. (TU Delft QN/Groeblacher Lab)","Groeblacher, S. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2020","Cavity Optomechanics is a field employing optical control over mechanical oscillators with a variety of possible applications for sensing, optical signal processing as well as quantum technologies. Such systems are attractive because they are often fully engineerable and can be tailored to specific applications by allowing a large range of frequencies and the usage of a number of host materials. The following work explores one such application of a high frequency, ultra-long lived mechanical mode as a quantum memory which can be read out on-demand via an optical interface.","","en","doctoral thesis","","978-90-8593-431-8","","","","","","","","","QN/Groeblacher Lab","","",""
"uuid:1a21c6a6-0412-4ea1-b8b4-c1c84c19d68e","http://resolver.tudelft.nl/uuid:1a21c6a6-0412-4ea1-b8b4-c1c84c19d68e","Process stratigraphy: from numerical simulation to lithology prediction","Karamitopoulos, P. (TU Delft Hydraulic Structures and Flood Risk; TU Delft Applied Geology)","Martinius, A.W. (promotor); Weltje, G.J. (promotor); Donselaar, M.E. (copromotor); Delft University of Technology (degree granting institution)","2020","Process-based stratigraphic models provide attractive tools to simulate sedimentary system dynamics spanning a wide range of spatial and temporal scales and segments of the sediment routing system while allowing full access to the model responses, i.e. the spatial distribution of lithologies as a function of the intervening processes and environmental conditions at the time of deposition. Apart from improving our understanding regarding the evolution of sedimentary systems under pre-specified allogenic forcing mechanisms and intrinsic dynamics, process-based stratigraphic models can be used to improve basin-fill history reconstructions and increase the geological credibility of static reservoir models by integrating regional information to local-scale heterogeneities. The realism and predictive power of the model responses and geological model realizations may be quantitatively assessed by comparison with the geophysical/geological data available.","chronosome; avulsions; bifurcation intensity; depositional connectivity; process-based geological modelling","en","doctoral thesis","","978-94-6402-097-7","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:0ad9f986-8a4a-4f31-a486-d6c44b30605b","http://resolver.tudelft.nl/uuid:0ad9f986-8a4a-4f31-a486-d6c44b30605b","Continuous Sensing on Intermittent Power","Majid, A.Y. (TU Delft Embedded Systems)","Pawełczak, Przemysław (promotor); Langendoen, K.G. (promotor); Delft University of Technology (degree granting institution)","2020","Battery-free energy-harvesting devices have the potential to operate for decades, since they draw power from virtually unlimited energy sources, such as sunlight. However, ambient energy sources are volatile, and tiny harvesters can extract only weak power from them. Thus, small energy-harvesting devices operate intermittently: first they charge their buffers and then start operating, which depletes the buffered energy and causes the devices to power down, letting the harvesters refill the energy buffers for the next operational round. Classical programming architectures assume continuous power. Therefore, frequent power failures render them useless; power failures reset the computational progress and delete volatile data. Thus, the intermittent programming and execution paradigm has emerged. Generally, two strategies are employed to support intermittent execution: checkpoint-based and task-based. Prior checkpoint- and task-based systems tackled mainly challenges related to enabling efficient computing on intermittent power. However, they have ignored the challenges associated with sensing, which is the primary application for intermittent systems. Therefore, from a sensing standpoint, these systems have several drawbacks.","batteryless; energy-harvesting; intermittent computing; backscattering; intermittent sensing","en","doctoral thesis","","978-94-6384-105-4","","","","","","","","","Embedded Systems","","",""
"uuid:eb3e337a-972c-498c-9bb9-e9d8bed7f006","http://resolver.tudelft.nl/uuid:eb3e337a-972c-498c-9bb9-e9d8bed7f006","Aerobic Granular Sludge in Seawater","de Graaff, D.R. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Pronk, M. (copromotor); Delft University of Technology (degree granting institution)","2020","Increase in sea level will lead to an increase in salinity in domestic wastewater systems. In order to anticipate its effects on biological wastewater treatment, the impact has to be assessed with lab-scale experiments. Aerobic granular sludge (AGS) is a successful technology for simultaneous removal of organic carbon (COD), nitrogen, and phosphorus in a single process step. The impact
of seawater on AGS has not yet been reported. The effect of salinity on AGS can roughly be divided into three aspects: biological activity, physical stability, and change in extracellular polymeric substances (EPS). The research in this PhD thesis has been structured around these aspects. The overall question is: “How does long-term exposure to seawater affect the AGS process?”. This PhD thesis shows the feasibility of the aerobic granular sludge process with seawater-based wastewater.","Wastewater treatment; Aerobic granular sludge; EBPR; Nereda; Seawater","en","doctoral thesis","","978-94-6384-104-7","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:c6d1d9fe-00e6-4b2c-a643-911a9166aeec","http://resolver.tudelft.nl/uuid:c6d1d9fe-00e6-4b2c-a643-911a9166aeec","On the ecology and applications of glucose and xylose fermentations","Rombouts, J.L. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Weissbrodt, D.G. (copromotor); Delft University of Technology (degree granting institution)","2020","Microbial fermentations are a key process in natural and man-made ecosystems. They play a key role in creating and digesting our food, and they are useful in designing bioprocesses that can produce biogas, biofuels, bioplastics, and many other functional molecules (Chapter 1). Furthermore, studying the competition and cooperation in microbial fermentative ecosystems can help answer the question of how microbial diversity is shaped. Glucose is a molecule central to most forms of life; therefore, glucose was chosen as a model substrate to perform fermentative enrichment studies. Xylose is an important monomer in many types of hemicellulose and was therefore chosen as the second model substrate. Glucose and xylose can be fermented to volatile fatty acids, alcohols or lactic acid. The biomass-specific uptake and production rates at which microbial fermentations are performed are high compared to other biological anaerobic carbon conversions. This rate difference is useful when studying fermentation using an enrichment culture approach. Such fermentative enrichment cultures can be used to develop mixed culture fermentation technologies, which offer alternative technological possibilities for processing feedstocks and residual streams containing carbohydrates (Chapter 1). Biogas production is a relatively well-established industry, but remains economically outcompeted by natural gas. The market for (bio)hydrogen production is relatively large, as the hydrogen economy was worth 130 billion USD in 2017. 
Actual large-scale hydrogen production and capture using biological systems has yet to prove itself. Lactate and ethanol can both be produced using mixed culture fermentation, though ethanol production remains a challenging business case due to small profit margins. Medium chain fatty acids are also a potential product. These molecules are expected to have many applications, with a likely higher value than biogas or biofuel, thus promising a healthy business case. Producing polyhydroxyalkanoates from volatile fatty acids produced by mixed culture fermentation also appears industrially feasible. When assuming that competition occurs solely on substrates, limiting a single substrate in a microbial ecosystem is expected to result in one dominant species. The results of Chapter 2 confirm this hypothesis, to the extent of >85% of the observed cell surface belonging to a single species for three out of the four enrichment cultures. Populations of Enterobacter cloacae and Citrobacter freundii dominated the glucose- and xylose-limited sequencing batch cultures, respectively. Continuous glucose limitation showed the dominance of Clostridium intestinale. A xylose-limited continuous enrichment culture resulted in the coexistence of Citrobacter freundii, and a Lachnospiraceae and Muricomes population. Chapter 3 aims to answer the question of how dual substrate limitation influences a fermentative microbial community. Dual xylose and glucose limitation led to a generalist population of Clostridium intestinale under continuous feeding, and a generalist population of Citrobacter freundii under sequencing batch culturing. No apparent carbon catabolite repression was observed when analysing a batch cycle or when performing a batch experiment in the continuous dual-limited enrichment culture. 
This response is of value when designing large-scale fermentative bioprocesses, as industry typically uses microorganisms which show carbon catabolite repression in mixtures of glucose and xylose. The kinetic, stoichiometric and bioenergetic analysis of enrichment cultures in continuously limited or sequencing batch environments showed that sequencing batch enrichments select for rate, while continuously limited enrichments select for efficiency (Chapter 2). Rate is considered as the biomass-specific substrate uptake rate (qsmax) and efficiency is considered as the yield of biomass on ATP harvested in catabolism (Yx,ATP). These findings fit within the r- and K-selection theory. Furthermore, it was found that butyrate production is linked to a lower uptake rate than combined acetate and ethanol production. Potentially, more energy is harvested in butyrate production than in combined acetate and ethanol production, through electron bifurcation. More microbial diversity (i.e. more than one species) was observed than what was expected from a competitive point of view in all six enrichments performed in Chapters 2 and 3. Therefore, in Chapter 5 complementary metabolomics, metagenomics and isolation studies were performed to generate an evidence-based hypothesis on how the Enterobacteriaceae and Clostridiales populations in the continuous xylose-limited enrichment culture interacted. The metagenomic evaluation resulted in three dominant bins: one for Citrobacter freundii, one for “Ca. Galacturonibacter soehngenii” and one for a Ruminococcus sp. The interaction between Citrobacter freundii and “Ca. Galacturonibacter soehngenii” is proposed to be a sharing of biotin, pyridoxine and alanine by Citrobacter freundii with “Ca. Galacturonibacter soehngenii”. A differential enrichment study showed that indeed the fraction of “Ca. 
Galacturonibacter soehngenii” increased and that of Enterobacteriaceae decreased when these three metabolites were directly supplemented to the enrichment culture. Thus, commensalism and competition were likely driving microbial diversity in this culture. Chapter 4 aimed to study the ecology of lactic acid bacteria. Bacteria can produce lactic acid from glucose, which is a different metabolism from producing acetate and butyrate. Sequencing batch reactors were used for enrichment, comparing a mineral and a complex medium. The media were identical, except for the addition of peptides and 9 B vitamins in the complex medium. Glucose was fermented to a mixture of lactic acid and ethanol when using the complex medium, i.e. a heterofermentation. Using the mineral medium, glucose was fermented to a mixture of acetate, butyrate and hydrogen, with smaller amounts of lactic acid and ethanol. A population of Lactobacillus, Lactococcus and Megasphaera was enriched on the complex medium. On the mineral medium, a population of Ethanoligenens dominated the enrichment, with a small fraction of Clostridium. Lactic acid producing bacteria are hypothesised to have taken over the fermentation due to a 94% increase in biomass-specific substrate uptake rate, leading to a higher growth rate. The increase in growth rate is argued to be caused by resource allocation, whereby lactic acid bacteria optimise their enzyme levels in anabolism and catabolism, attaining a higher growth rate than mineral-type fermenters such as Ethanoligenens. Chapter 6 aims to direct further research, which lies in studying the effect of different parameters on fermentative ecosystems. These parameters are concentrations of: gaseous compounds (I), cations used to neutralise (II), and nutrients, such as B vitamins (III). Also, very low pH environments (pH<3.5) are considered an opportunity (IV). Finally, analysing the composition of “real” fermentable streams and their effect on the arising product spectra is of interest (V). 
Kinetics and bioenergetics are discussed using enzymatic Michaelis-Menten kinetics and the concept of resource allocation. In this way, efforts can be directed towards the ability to predict product formation a priori in fermentative ecosystems. Future experimentation is guided to take place on four distinct levels, and useful experiments to verify concepts in this thesis are outlined. Finally, commensalism and/or mutualism might both be relevant in open microbial ecosystems, which remains to be settled by future work.","microbial selection; Fermentation; metabolic interactions; chemostat; sequencing batch reactor","en","doctoral thesis","","978-94-6380-685-5","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:eb54d12f-079a-41a4-8d75-1a0fdf2af412","http://resolver.tudelft.nl/uuid:eb54d12f-079a-41a4-8d75-1a0fdf2af412","Photochromic Properties of Rare-Earth Oxyhydrides","Nafezarefi, F. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Dam, B. (promotor); Delft University of Technology (degree granting institution)","2020","The experiments presented in this thesis have provided new insights regarding the photochromic properties of rare-earth oxyhydrides (REOxHy). We discovered that thin films of rare-earth (Y, Dy, Er, Gd, Nd) oxyhydrides show unique photochromic properties under ambient conditions. We showed that by direct-current reactive magnetron sputtering of rare-earth metal targets in an Ar and H2 atmosphere, metallic dihydride thin films can be made. However, above a certain critical deposition pressure, a semiconducting REOxHy thin film is formed upon exposing the films to air. These films show photochromic properties. Compared to YOxHy, the optical bandgaps of the lanthanide-based oxyhydrides are smaller, while photochromic contrast and kinetics show large variation among different cations. The photon energy required to obtain a photochromic effect is given by the optical band gap of the material, as shown in the energy threshold measurements. Photon energies larger than the band gap are required to photo-darken the rare-earth oxyhydrides. The photochromic process is reversible and bleaching occurs thermally and possibly through interaction with light of longer wavelengths (optical bleaching). However, the latter needs to be confirmed with further experiments. We have also shown for the first time that semiconducting YH3 does not exhibit photochromic properties, revealing the importance of the presence of both oxide and hydride ions for the photochromic effect in rare-earth oxyhydrides.","","en","doctoral thesis","","978-94-6332-602-5","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:0aef0047-2020-443a-b27a-86aa2bc9b35e","http://resolver.tudelft.nl/uuid:0aef0047-2020-443a-b27a-86aa2bc9b35e","Field Observations of Atmospheric Aerosol Properties and the Impacts of New Particle Formation on the Radiative Properties of Clouds","Mamali, D. (TU Delft Atmospheric Remote Sensing)","Russchenberg, H.W.J. (promotor); Delft University of Technology (degree granting institution)","2020","Atmospheric aerosol particles are solid or liquid particles suspended in the atmosphere. They are directly emitted into the atmosphere or they are formed via the oxidation of gaseous precursors. Understanding the behavior of particles in the atmosphere is particularly important because they can affect the Earth’s climate, visibility, air quality, human health, and the ecosystem. As a result, they are a topic of high interest for the scientific community...","aerosol climatology; hygroscopicity; dust mass concentration; LIDAR; Unmanned Aerial Vehicle; aerosol-cloud interaction; atmospheric nucleation","en","doctoral thesis","","","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:45fd3f70-2ba6-43fa-a2c4-018967bfdc88","http://resolver.tudelft.nl/uuid:45fd3f70-2ba6-43fa-a2c4-018967bfdc88","Measuring, modelling and minimizing perceived motion incongruence: for vehicle motion simulation","Cleij, D. (TU Delft Control & Simulation)","Mulder, Max (promotor); Buelthoff, Heinrich H. (promotor); Pool, D.M. (copromotor); Delft University of Technology (degree granting institution)","2020","Humans have always wanted to go faster and higher than their own legs could carry them, leading them to invent numerous types of vehicles to move quickly over land, through water and through the air. As training to handle such vehicles and testing new developments can be dangerous and costly, vehicle motion simulators were invented.
Motion-based simulators, in particular, combine visual and physical motion cues to provide occupants with a feeling of being in the real vehicle. While visual cues are generally not limited in amplitude, physical cues certainly are, due to the limited simulator motion space. A motion cueing algorithm (MCA) is used to map the vehicle motions onto the simulator motion space. This mapping inherently creates mismatches between the visual and physical motion cues.
Due to imperfections in the human perceptual system, not all visual/physical cueing mismatches are perceived. However, if a mismatch is perceived, it can impair the simulation realism and even cause simulator sickness. For MCA design, a good understanding of when mismatches are perceived, and ways to prevent these from occurring, are therefore essential.
In this thesis a data-driven approach, using continuous subjective measures of the time-varying Perceived Motion Incongruence (PMI), is adopted. PMI in this case refers to the effect that perceived mismatches between visual and physical motion cues have on the resulting simulator realism. The main goal of this thesis was to develop an MCA-independent off-line prediction method for time-varying PMI during vehicle motion simulation, with the aim of improving motion cueing quality.
To this end, a complete roadmap, describing how to measure and model PMI and how to apply such models to predict and minimize PMI in motion simulations is presented. Results from several human-in-the-loop experiments are used to demonstrate the potential of this novel approach.
The study includes the following research.
First of all, results are presented that are based on a wide range of biophysical measurements collected during a year-long ground measurement campaign in several sugarcane fields. These results are accompanied by detailed quality assessments, illustrating the reliability of measurements collected through such campaigns. In addition, the methodology for setting up and carrying out the ground campaign is explained; it was designed to minimize biomass alterations in the field, in light of the use of the measurements for validation of space-based SAR and optical remote sensing signals.
Secondly, remote sensing signals from various satellites are compared to the ground reference measurements in order to develop space-based sugarcane productivity monitoring techniques. This includes an analysis of the sensitivity of C-band and L-band SAR and optical observations to sugarcane biomass growth, to precipitation events and to SAR sensor configurations. In addition, the spatial features in satellite imagery from the various sensors are analyzed for their temporal consistency in order to deduce time windows during which the satellite observations are most effective for productivity monitoring. It was found that saturation, precipitation and sensor configuration in particular dictate the effectiveness, especially for SAR. Furthermore, the highest-spatial-resolution optical imagery proved to perform best for mapping intra-field productivity differences that were measured in the field. In addition to this study, two related but smaller studies are presented. The first focuses on a specific remote sensing technique to identify patterns in a sugarcane field that occur persistently in time. The second demonstrates how plant gaps in a densely ground-measured sugarcane field affect signals from various SAR sensors, and how this effect is influenced by spatial averaging windows, precipitation events, sugarcane height and SAR sensor type.
Thirdly, the performance of a specific Bayesian land cover monitoring model that combines SAR and optical observations is demonstrated. The model is an adaptation of the Hidden Markov Model, which allows for the temporally-consistent tracking of vegetation states regardless of gaps in satellite observations. Attention is paid to the effect of precipitation during SAR observations on the model's performance and to certain vegetation conditions that cause classification confusion between land cover types. The research finally provides detailed insights into when SAR-only observations outperform optical-only observations and vice versa, in addition to the advantages when combining them.
Finally, a technique is introduced that exploits SAR signal fluctuations caused by varying (ground and plant) surface wetness conditions in order to improve the characterization of vegetation. Three scenarios that define the selection of SAR observations were investigated for their effect on the classification performance: (i) no distinction between wetness conditions, (ii) distinction between wetness conditions at the time of the SAR acquisitions and (iii) distinction between wetness conditions between consecutive SAR acquisitions. It was found that performance improves particularly when the wetness conditions differ under the last scenario. When combining this information with a-priori knowledge on soil types, the accuracy of the classification increases further. For this, maps are used that result from applying the previously introduced Hidden Markov Model over the entire state of São Paulo.
The datasets that are used in these studies were mainly acquired by the SAR satellites Sentinel-1, Radarsat-2 and ALOS-2, and by the optical satellites Landsat-8 and Worldview-2. For the studies that are related to land cover monitoring and vegetation characterization, high performance computing was required due to the vast amount of observation data and the complexity of the applied techniques. These facilities were mainly provided by the Dutch national supercomputer of SURF and by Google Earth Engine.","SAR and optical remote sensing; Sugarcane; Growth monitoring; Land cover mapping; Ground reference measurements; Data Assimiliation; Saturation effects","en","doctoral thesis","Delft University Publishers","978-94-6366-252-9","","","","","","","","Geoscience and Remote Sensing","","","",""
"uuid:0ed85902-051f-49a9-a99d-dad082fea758","http://resolver.tudelft.nl/uuid:0ed85902-051f-49a9-a99d-dad082fea758","Quadrature Methods for Wind Turbine Load Calculations","van den Bos, L.M.M. (TU Delft Wind Energy; Centrum Wiskunde & Informatica (CWI))","van Bussel, G.J.W. (promotor); Bierbooms, W.A.A.M. (copromotor); Sanderse, Benjamin (copromotor); Delft University of Technology (degree granting institution)","2020","Two sources of uncertainty can be distinguished in models for wind turbine calculations. Firstly, the environment the wind turbine has to withstand is uncertain and has a direct impact on the lifetime of the turbine. Secondly, the models used to predict the forces acting on the turbine contain an unknown error, which can also be modeled as a random variable. This thesis discusses numerical methods based on polynomial approximation to study these two types of uncertainty. In essence, the computationally costly model is replaced by a polynomial, which is cheap to evaluate using a computer. The first part of the thesis is mainly focused on computing the loads acting on a wind turbine. The key uncertainties in this case originate from the variability in the environmental conditions (such as the weather). For load cases, the main interest is in integral quantities of the computationally expensive model. For the purpose of computing integral quantities, polynomial approximation is equivalent to smartly constructing interpolatory quadrature rules. Various algorithms are proposed to construct such quadrature rules. Their efficiency is demonstrated by computing loads acting on a turbine using measurement data obtained in the Dutch North Sea. Modeling the uncertainty arising from model error is significantly less trivial. Two different approaches, based either on interpolation using Leja nodes or on integration using quadrature rules, are discussed. Which approach is best in a certain computational test case depends on the specific quantity of interest. 
Examples of the applicability of all proposed methods are discussed throughout the thesis. A common theme in all results is that high convergence rates are obtained for models that can be approximated well using polynomials, which is usually the case for models arising in the field of wind energy.","","en","doctoral thesis","Delft University of Technology","978-94-6384-101-6","","","","","","","","","Wind Energy","","",""
"uuid:8bbf71f1-48bd-4844-84f5-fe8c7ca5398c","http://resolver.tudelft.nl/uuid:8bbf71f1-48bd-4844-84f5-fe8c7ca5398c","A Systems Design Approach to Sustainable Development: Embracing the Complexity of Energy Challenges in Low-income Markets","Costa Junior, Jairo da (TU Delft Design for Sustainability)","Diehl, J.C. (promotor); Snelders, H.M.J.J. (promotor); Secomandi, Fernando (copromotor); Delft University of Technology (degree granting institution)","2020","The societal and technical problems faced by low-income markets are increasingly seen as more complex due to environmental, social, and economic concerns. The enormous negative impacts of complex societal problems and the inability of designers to deal with complexity cannot be overcome without a paradigm shift in how we understand, engage with, and teach about such issues. In light of this challenge, one can pose the question, “What is the best approach to deal with a complex societal problem?”. A traditional approach to deal with a complex problem is to simplify it. Alternatively, as here, research may aim to provide a novel approach to handle complex societal problems, thereby embracing complexity. Thus, this thesis contends that embracing complexity represents a significant shift from the traditional design approach to a systems design approach for sustainable development. 
To help designers to bring about such a transition, the four main contributions provided in this doctoral research are: (I) Exploring the integration of systems thinking into design, particularly by adopting a systems design approach to sustainable energy solutions for low-income markets; (II) Extending the scope of product-service system design through the introduction of four major systems thinking tenets: a holistic perspective; a multilevel perspective; a pluralistic perspective; and complexity-handling capacity; (III) Proposing heuristic tools for the integration of systems thinking into design, which allows for developing new and strengthening existing systems design approaches; and, (IV) Increasing capacity building for a systems design approach to address complex societal problems through design education.","Complex societal problems; Systems thinking; Systems design approach; Systems-oriented design; Product-Service Systems (PSS); Low-income market; Energy sector","en","doctoral thesis","","978-94-6384-102-3","","","","","","","","","Design for Sustainability","","",""
"uuid:4fdea178-a5b1-4620-987e-b1f2f9c23d32","http://resolver.tudelft.nl/uuid:4fdea178-a5b1-4620-987e-b1f2f9c23d32","Motion Cueing Fidelity in Rotorcraft Flight Simulation: A New Perspective using Modal Analysis","Miletović, I. (TU Delft Control & Simulation)","Mulder, Max (promotor); Pavel, M.D. (promotor); Delft University of Technology (degree granting institution)","2020","Flight simulators, or Flight Simulation Training Devices (FSTDs), offer great benefits in terms of the safety and cost associated with pilot training and certification. To warrant uniform certification standards and to prevent adverse pilot training, (sub)system fidelity requirements are imposed by the Federal Aviation Administration (FAA) and the European Aviation Safety Agency (EASA). While comprehensive, a notable area in which these requirements remain somewhat limited is the Motion Cueing System (MCS) of full flight simulators. The MCS comprises hardware, typically a set of actuators to enable physical motion of the platform, and software, often termed the Motion Cueing Algorithm (MCA), to process the simulated vehicle motion so as to prevent violation of (physical) simulator constraints. Naturally, the MCA introduces a significant mismatch between the actual (i.e., in-flight) and simulated vehicle motion perceived by the pilot. Furthermore, this mismatch often comes on top of inaccuracies in the mathematical model used to compute the simulated vehicle motion. Because of this complex interaction, the formulation of quantitative requirements on the allowed mismatch between real vehicle and simulator motion has proven cumbersome. To date, certification of flight simulator motion is therefore based predominantly on subjective evaluation by experienced pilots. 
To address this problem, the aim of this dissertation is to develop a unifying tool to quantify motion cueing fidelity in helicopter flight simulation and to evaluate its suitability in realistic applications.","helicopter dynamics; flight simulation; motion cueing; simulation fidelity","en","doctoral thesis","TU Delft, Faculteit LR","978-94-6384-103-0","","","","","","","","","Control & Simulation","","",""
"uuid:4fa16188-7431-4a6e-8097-5bde9cabc466","http://resolver.tudelft.nl/uuid:4fa16188-7431-4a6e-8097-5bde9cabc466","Relative Flow Data: New Opportunities for Traffic State Estimation","van Erp, P.B.C. (TU Delft Transport and Planning)","Knoop, V.L. (copromotor); Hoogendoorn, S.P. (copromotor); Delft University of Technology (degree granting institution)","2020","Traffic state information is crucial for different applications, e.g., in design and operation of road traffic networks, and in navigation services. Traffic sensing data, e.g., loop-detector data, may directly provide the desired information. Alternatively, the traffic state information may be estimated with data that only provides partial and noisy information. To apply this process, i.e., traffic state estimation, we have to make choices related to which data are collected and how these are processed. The macroscopic traffic state can be described using the variables flow, density and mean speed, where flow is equal to the product of density and mean speed. Edie’s generalized definitions of traffic flow define these three variables for spatial-temporal areas. Alternatively, traffic flow can be described using the three dimensions space, time and cumulative flow. The cumulative flow is defined as the cumulative number of vehicles that have passed a position at a specific time, which means that it is a discrete variable. However, the discrete function can be smoothed over space and time. In this case, the macroscopic variables flow and (negative) density can be determined for points in space-time by taking the derivatives to time and space of the smoothed cumulative flow function. In this thesis, a distinction is made between microscopic and macroscopic traffic sensing data. Examples of microscopic traffic sensing data are probe individual speed data and spacing data. 
Macroscopic data can describe Edie’s generalized definitions of traffic flow for spatial-temporal areas, e.g., probe mean speed data or aggregated double loop-detector data. Alternatively, macroscopic sensing data can describe the change in cumulative flow between points in space-time, e.g., detector count data or relative flow data. The scientific gaps addressed in this thesis are subdivided into four parts that relate to each other. First, we evaluate the errors that are induced when estimating the mean speed for spatial-temporal areas based on error-free data. This provides insight into the errors that arise due to incomplete information and incorrect assumptions when estimating the mean speed. Second, the option to use probe data to mitigate the cumulative count error problem is considered. This problem occurs when estimating the cumulative flow curves based on (stationary) detector data. For this purpose, both probe mean speed and probe trajectory data are used. The probe mean speed data relate to the first part as they describe the mean speed for spatial-temporal areas. If relative flow observations are added to the probe trajectory data, relative flow data from moving observers that are part of the traffic flow are obtained. In the third part, these relative flow data are used to estimate the traffic state. In this part, different combinations of observers are used, which includes stationary observers, moving observers that are part of the traffic flow and moving observers that travel in the opposing direction. To estimate the traffic state with relative flow data, streaming-data-driven and model-driven estimation approaches are considered. In a model-driven estimation approach, historical data are used to expose traffic flow models. Therefore, we address the possibility to use historical relative flow data to expose these models.
The fourth and final part relates to the option that road authorities collect personal traffic sensing data (e.g., probe trajectory and/or relative flow data) directly from road-users. In other parts of this thesis, we designed methodologies to use these data, which may be valuable for road authorities. Therefore, it is interesting to investigate how road authorities can gain access to these personal data.","Relative flow data; Traffic state estimation","en","doctoral thesis","TRAIL Research School","978-90-5584-260-5","","","","TRAIL Thesis Series T2020/1","","","","","Transport and Planning","","",""
"uuid:3b3856c6-9eac-4599-be8b-b64fe73e5a5a","http://resolver.tudelft.nl/uuid:3b3856c6-9eac-4599-be8b-b64fe73e5a5a","Experiments and modelling for by-pass pigging of pipelines","Hendrix, M.H.W. (TU Delft Fluid Mechanics)","Henkes, R.A.W.M. (promotor); Breugem, W.P. (promotor); Delft University of Technology (degree granting institution)","2020","(Pipeline Inspection Gauge), which is a cylindrical device that just fits the pipe and propagates through the pipe along with the transport of fluids. While a conventional pig completely seals the pipeline and travels with the same velocity as the production fluids, a by-pass pig has an opening which allows the fluids to partially by-pass the pig. The purpose of the present study is to get a better understanding of the physics of the pigging of a pipeline with multiphase flow transport. The focus is on pigs with by-pass. An important factor in determining the ultimate travel velocity of a by-pass pig is the pressure drop over the by-pass pig, which is characterized by a pressure loss coefficient. We investigate the pressure loss coefficient of three frequently used by-pass pig geometries in a single phase pipeline with Computational Fluid Dynamics (CFD). We present a building block approach for systematic modelling of the pressure loss through the by-pass pigs, which takes the geometry and size of the by-pass opening into account. The CFD results are used to validate the simple building block approach for systematic modelling of the pressure loss through a by-pass pig. It is shown that the models for the pressure loss closely resemble the CFD results for each of the three pig geometries.","","en","doctoral thesis","","978-94-64020-56-4","","","","","","","","","Fluid Mechanics","","",""
"uuid:39f85c8e-042f-4db4-978e-4077daeda2dd","http://resolver.tudelft.nl/uuid:39f85c8e-042f-4db4-978e-4077daeda2dd","On the measurement of VIV lift force coefficients at high Reynolds numbers","de Wilde, J.J. (TU Delft Ship Hydromechanics and Structures)","Huijsmans, R.H.M. (promotor); den Besten, J.H. (copromotor); Delft University of Technology (degree granting institution)","2020","The objective of the research was the measurement of the VIV lift force coefficient in-phase with velocity Clv and in-phase with acceleration Cla for a pipe section with large length over diameter ratio of L/D ~18 at high Reynolds numbers of Re > 1E4. The coefficients are measured for a forced oscillation pipe in a steady flow and can be directly used as input parameter for pragmatic riser VIV prediction models. Risers are vertical pipelines that transport fluids from the oil well on the seabed to the production facility in the free water surface. The risers in deep water are extremely slender structures, having length over diameter ratio of more than L/D = 1E3. The risers in deep water behave as a flexible string-like structure with low structural damping, which makes them susceptible for resonant vibrations. The vibrations caused by the vortex shedding in the downstream wake of the riser are known as Vortex Induced vibration (VIV) and occur when the frequency of the vortex shedding coincides with one or more of the natural frequencies of the riser. The VIV of the riser poses large challenges for the design of the risers, in particular related to metal fatigue.","","en","doctoral thesis","","978-90-9032696-2","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:0dce4050-8c60-4735-aa30-12f8c179f1dd","http://resolver.tudelft.nl/uuid:0dce4050-8c60-4735-aa30-12f8c179f1dd","Peeking inside the black-box: A model-based interpretation of multi-elemental isotope date of chlorinated ethenes in heterogeneous aquifer systems","Thouement, H.A.A. (TU Delft Sanitary Engineering)","Heimovaara, T.J. (promotor); van Breukelen, B.M. (copromotor); Delft University of Technology (degree granting institution)","2020","","","en","doctoral thesis","","978-94-93184-29-9","","","","","","","","","Sanitary Engineering","","",""
"uuid:72e0c6e1-9c17-4b8c-b5ce-fb6a8e2abf20","http://resolver.tudelft.nl/uuid:72e0c6e1-9c17-4b8c-b5ce-fb6a8e2abf20","Biorefinery Design in Context: Integrating Stakeholder Considerations in the Design of Biorefineries","Palmeros Parada, M.D.M. (TU Delft BT/Biotechnology and Society)","Osseweijer, P. (promotor); Posada Duque, J.A. (copromotor); Delft University of Technology (degree granting institution)","2020","Biobased production has been presented as a sustainable alternative to the use of fossil resources. However, emerging controversies over the impacts of biofuels (on, e.g., land use, food, and energy security), made it clear that this production approach cannot be assumed to be inherently sustainable or unsustainable. Behind these controversies are unexplored uncertainties and assumptions made during the development of biofuel production, as well as limited considerations of the local context and the values of stakeholders upon its implementation. While these concerns do not necessarily relate to all biobased products, they do indicate that there are many aspects of sustainability besides those driving biobased production (i.e. the use of renewable resources, climate change mitigation), and that the relevance of some of these aspects depends on the local contexts and the values of stakeholders.
This thesis presents an approach to the development of a more sustainable biobased production. Particularly, this thesis answers the question: “how can considerations of stakeholders and the local context be investigated and integrated into the early-stage design of biorefineries?” To answer this main question, the research in this thesis is structured around the design process. First, the motivation of this work and a review of the literature on biorefinery design is presented in Chapters 1 and 2. Then, by focusing on specific stages of the design process, the research is structured from the definition of the design space (Chapter 3), to the design decision making (Chapter 4) and the evaluation of design concepts (Chapter 5). In Chapter 6 the overall findings of this work are presented and integrated into a novel design approach for more sustainable biorefinery design. The presented approach not only allows considerations of stakeholders’ values and the project context to be brought in, it also opens the space to identify tensions between stakeholders’ values and sustainability aspects. By promoting the discussion of these tensions in the context of the project, the presented approach opens opportunities for responding to these tensions in the decision making for the development of biobased production...","","en","doctoral thesis","","","","","","","","","","","BT/Biotechnology and Society","","",""
"uuid:7f5ff343-4de1-4ebd-b451-5676bb7a5d44","http://resolver.tudelft.nl/uuid:7f5ff343-4de1-4ebd-b451-5676bb7a5d44","Combining green-blue-grey infrastructure for flood mitigation and enhancement of co-benefits","Alves, Alida (TU Delft BT/Environmental Biotechnology)","Brdjanovic, Damir (promotor); Vojinovic, Zoran (copromotor); Delft University of Technology (degree granting institution)","2020","An increment of urban flood risk in many areas around the globe is expected. Thus appropriate flood risk management is crucial. Although strong evidence exists demonstrating that green-blue infrastructure is a sustainable solution to reduce flooding, its adoption is still slow. The objective of this research is to help decision-makers to adopt adaptation strategies to cope with flood risk while achieving other benefits. This work contributes to enhance planning processes for flood mitigation combining green-blue-grey measures. It provides tools and knowledge to facilitate holistic decision-making, in order to ensure safe and liveable urban spaces for current and future conditions.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-48597-9","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of IHE Delft Institute for Water Education.","","","","","BT/Environmental Biotechnology","","",""
"uuid:5137711f-d5e6-41b6-ac7a-34c6c5e29caf","http://resolver.tudelft.nl/uuid:5137711f-d5e6-41b6-ac7a-34c6c5e29caf","Triplet Dynamics in Crystalline Perylene Diimides","Felter, K.M. (TU Delft ChemE/Opto-electronic Materials)","Grozema, F.C. (promotor); Savenije, T.J. (promotor); Delft University of Technology (degree granting institution)","2020","Conjugated organic materials are interesting for application in opto-electronic devices where they can act as a light absorbing layer, for instance in solar cells or as a light emitting layer in light-emitting diodes. In addition, they can also be used as the semiconducting materials in for instance field-effect transistors. Conjugated organic materials have certain desirable properties that are generally found in organic materials such as light weight, flexibility and cheap processing from solution. A particularly attractive aspects of such materials is that their solid-state properties can be tuned by making changes in the molecular structure by organic synthesis techniques. This also makes it possible to modify the materials so that they exhibit more uncommon processes that may be beneficial for solar cells. Two of such processes, singlet exciton fission (SF) and triplettriplet annihilation upconversion (TTA-UC) are the main subjects of this thesis. The first (SF) is a process in which a singlet excited state, formed by absorption of light, is transformed into a combination of two triplet excited states each with half of the energy. This can, in principle, double the number of electrons that are injected in a solar cell device, and hence double the current from the device. The second (TTA-UC) is the reverse process of SF in which two triplet states with low energy can be combined into a single higher-energy singlet excited state from which an electron can be injected in a solar cells. 
Exploiting these two processes can in principle lead to considerable improvements in the efficiency of solar cells based on these materials. SF can be exploited to use the excess energy in photons with more than twice the bandgap energy to excite an additional electron. In this way, the excess energy that is otherwise lost as heat is used to increase the current and therefore the overall energy efficiency of the device. TTA-UC addresses another energy loss in solar cells, that of photons in the solar spectrum that have a lower energy than the bandgap of the active material of the solar cell. TTA-UC can be used to combine the energy of two of these photons, which are normally not absorbed by the solar cell, to generate a single higher-energy excited state that has sufficient energy to charge separate at an interface. Together, these two processes can address two of the factors that cause major energy losses in solar cells. In order to efficiently exploit these processes, a detailed fundamental understanding of them is required, with particular emphasis on the effect of molecular packing in the solid state, as this is the state in which they are to be used in devices.","singlet fission; Microwave conductivity; Transient absorption spectroscopy; upconversion; Perylene-diimide molecules; solid state packing","en","doctoral thesis","","978-94-6332-591-2","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:4ce3b4ec-0a6a-4886-9a82-5945a1f9ea50","http://resolver.tudelft.nl/uuid:4ce3b4ec-0a6a-4886-9a82-5945a1f9ea50","Dykes and Embankments: a Geostatistical Analysis of Soft Terrain","de Gast, T. (TU Delft Geo-engineering)","Hicks, M.A. (promotor); Vardon, P.J. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis presents an investigation of the use and applicability of statistical methods in site investigation and subsequent analyses of dykes and embankments. This comprises a comprehensive site investigation via Cone Penetration Tests (CPTs) and laboratory experiments on sampled material, a large scale field test, and statistical analysis of both the site investigation data and the failure test.
This work offers the potential to better design site investigations in order to provide reliable estimates of heterogeneity and to demonstrate how these can be used in practical analyses. Such analyses are computationally expensive, but can offer significant benefits in reducing the requirements of dyke upgrades.","Field experiment; Heterogeneity; RFEM; Site investigation; Slope failure; Statistical analysis","en","doctoral thesis","","978-94-028-1915-1","","","","","","","","","Geo-engineering","","",""
"uuid:96eda9cd-3163-4c6b-9b9f-e9fa329df071","http://resolver.tudelft.nl/uuid:96eda9cd-3163-4c6b-9b9f-e9fa329df071","Aerodynamics of wind-assisted ships: Interaction effects on the aerodynamic performance of multiple wind-propulsion systems","Bordogna, G. (TU Delft Ship Hydromechanics and Structures)","Huijsmans, R.H.M. (promotor); Akkerman, I. (copromotor); Delft University of Technology (degree granting institution)","2020","The shipping industry is currently under pressure to reduce its pollutant emissions and, thus, to alleviate its negative impact on the environment and human health. With this aim, in the last years, the International Maritime Organization issued significant measures to improve the energy efficiency of ships (see the Energy Efficiency Design Index) as well as restrictive regulations to limit the amount of contaminating emissions from seaborne transport. Arguably one of the most impactful measures will be enforced at the commence of 2020 when the world fleet will have to use cleaner fuels or otherwise will need to install expensive exhaust gas cleaning systems (known as scrubbers) to comply with the new regulations. The use of cleaner and more expensive fuels will increase the ship operating costs and this, in turn, will make investments into fuel-saving devices more attractive....","Wind-assisted ship propulsion; Aerodynamic interaction; Dynarig; Wingsail; Flettner rotor","en","doctoral thesis","","978-94-6323-977-6","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:337bec74-ac1d-4142-b8f2-a2281175c2d4","http://resolver.tudelft.nl/uuid:337bec74-ac1d-4142-b8f2-a2281175c2d4","On Coherent Structures, Flow-Induced Vibrations, and Migratory Flow: In Liquid Metal Nuclear Reactors","Bertocchi, F. (TU Delft RST/Reactor Physics and Nuclear Materials)","Kloosterman, J.L. (promotor); Rohde, M. (promotor); Delft University of Technology (degree granting institution)","2020","Flows in rod bundles are common to many industrial applications such as heat exchangers or some types of nuclear reactors. The core of many classes of nuclear reactors can be easily sketched as a bundle of rods, the fuel pins, inmmersed in an axial ow of coolant that removes the heat produced by the fission reaction. Coupling this geometry to an axial ow can trigger periodical vortices, known as large coherent structures or gap vortex streets, that move on both sides of the gaps between the rods. By crossing the gap (cross-ow), these vortices may enhance the heat removal mechanism, thus improving the performance of the reactor. However, coherent structures cause velocity oscillations in the ow that may induce vibrations of the fuel rods, leading to their long term damage. The length (or wavelength) of coherent structures is a key parameter for understanding the interplay between these vortices and the vibrations that may be triggered on the rods. Their wavelength determines the frequency of the velocity oscillations in the uid, hence of the external force imposed on the rods. One of the reactor designs belonging to the next generation (Gen-IV) of nuclear reactors is the Liquid Metal Fast Breeder Reactor (LMFBR). This reactor has the fuel rods in the core arranged in a hexagonal matrix. In this design, a wire is helicoidally wrapped around each fuel rod to keep them separated from each other. The presence of the wire diverts part of the more turbulent ow from the bulk towards the gap between the rods, where the ow would be otherwise less turbulent. 
This enhances the heat exchange and avoids hot spots on the fuel cladding. A phenomenon known as migratory flow has been observed in rod bundles with wire spacers. In the presence of migratory flow, the fluid is diverted from the gap towards the main subchannel and it bends against the helicoid path of the wire, thus leading to a very complex flow, where part of the fluid follows the wire direction and part moves against it, away from the gap. Although this behaviour was first observed years ago, the governing mechanism is not clear yet. Explaining migratory flow is thus a fundamental step towards a general understanding of the mixing and mass transfer phenomena in rod bundles in the presence of helicoid wires.","Cross-flow; Coherent structures; Fluid-Structure Interactions; Hexagonal bundle; Wire wrapped; Laser Doppler Anemometry; Particle Image Velocimetry","en","doctoral thesis","","","","","","","","","","","RST/Reactor Physics and Nuclear Materials","","",""
"uuid:90ed83af-4eca-45bc-af1a-816db85814d9","http://resolver.tudelft.nl/uuid:90ed83af-4eca-45bc-af1a-816db85814d9","Structure and dynamics of fibrous calcium caseinate gels studied by neutron scattering","Tian, B. (TU Delft RST/Neutron and Positron Methods in Materials)","Pappas, C. (promotor); Bouwman, W.G. (promotor); Delft University of Technology (degree granting institution)","2020","In the past decades, neutron scattering techniques have been applied to many soft matter systems and have yielded fruitful results that contribute to a better understanding of their properties. Yet, the techniques were rarely used for studying food materials, probably because they are less well known and less accessible to food scientists. This work fills a small part of the research gap by applying several neutron scattering techniques to the characterisation of a food-relevant protein sample: calcium caseinate. Calcium caseinate can be transformed into pronounced fibres to form the basis for a next generation meat analogue. Given the prospect of the material, the emphasis in this work is on quantifying the size, structure and dynamics of the air bubbles and fibres present in calcium caseinate gels. The main findings are summarised into four chapters.","Calcium caseinate; air bubbles; fibrous structure; solvent isotope effect; neutron scattering","en","doctoral thesis","","978-94-028-1885-7","","","","","","","","","RST/Neutron and Positron Methods in Materials","","",""
"uuid:a99df432-6f4d-4da9-90fd-c756c67e71a8","http://resolver.tudelft.nl/uuid:a99df432-6f4d-4da9-90fd-c756c67e71a8","Selective Water Addition: Investigations of hydratases from the genus Rhodococcus","Busch, H. (TU Delft BT/Biocatalysis)","Hanefeld, U. (promotor); Hagedoorn, P.L. (copromotor); Delft University of Technology (degree granting institution)","2020","Water addition reactions to (un)-activated double bonds are very rewarding reactions as they elegantly introduce a hydroxyl-group thereby often adding value to the generated product by establishing a novel stereocentre in tertiary, chiral alcohols. However, performing selective water addition reactions is an extremely challenging task using classical, chemical approaches. Next to overall unfavourable reaction equilibria, the unreactive water molecule is a poor nucleophile and therefore requires activation. Furthermore, due to its small size, a controlled, stereo- and regioselective addition is difficult to achieve. Consequently, establishing straightforward processes with a preferably high selectivity under reaction conditions as environmentally benign as possible is of high interest to both industry and academia...","Rhodococcus; hydratases; omics; water addition; Michael addition; fatty acids","en","doctoral thesis","","9789464020373","","","","","","","","","BT/Biocatalysis","","",""
"uuid:e0b37188-c8b6-4c96-9d04-93ac1f6899e3","http://resolver.tudelft.nl/uuid:e0b37188-c8b6-4c96-9d04-93ac1f6899e3","Gaps, Frequencies and Spacial Limits of Continued Fraction Expansions","de Jonge, C.J. (TU Delft Applied Probability)","Kraaikamp, C. (promotor); Redig, F.H.J. (promotor); Delft University of Technology (degree granting institution)","2020","In this thesis continued fractions are studied in three directions: semi-regular continued fractions, Nakada’s α-expansions and N-expansions. In Chapter 1 the general concept of a continued fraction is given, involving an operator that yields the partial quotients or digits of a continued fraction expansion. The approximation coefficients θ_n(x) := q²|x-p_n/q_n| are introduced, where p_n/q_n, n ∈ 0, 1, 2, . . ., are the convergents of the continued fraction. Some well-known results on semi-regular continued fractions are given. Finally, the concept of ‘natural extension’ is explained. Chapter 2 is about orders (called patterns) of triplets of three consecutive approximation coefficients θ_(n-1)(x), θ_n(x) and θ_(n+1)(x). The asymptotic frequency of pattern Χ(n) is defined by AF(X(n)) := lim_(N→∞) 1/N #{n ∈ N | 2 ≤ n ≤ N, X(n)}. Starting with the regular continued fraction (RCF), it is shown that, for instance, the asymptotic frequency as n → ∞of the pattern θ_(n-1)(x) < θ_n(x) < θ_(n+1)(x) is smaller than the asymptotic frequency of the pattern θ_n(x) < θ_(n+1)(x) < θ_(n-1)(x). The asymptotic frequencies in the case of the RCF are explicitly given: two of them are 0.1210..., the others are 0.1894... . After this, these patterns are studied of two other semi-regular continued fractions: the optimal continued fraction (OCF) and the nearest integer continued fraction (NICF). The asymptotic frequencies of the OCF prove to be more equally distributed: the two less frequent patterns of the RCF now have the asymptotic frequency 0.1603... , where this is 0.1698... for the other patterns. 
The asymptotic frequencies of the NICF prove to be different for all six patterns. However, summation of specific pairs yields once 2 · 0.1603... and two times 2 · 0.1698... , thus showing a great correspondence with the OCF. Chapter 3 is dedicated to the natural extension of Nakada’s α-expansions. By means of singularisations and insertions in these continued fraction expansions, involving the removal or addition of partial quotients 1 in exchange with partial quotients with a minus sign, the interval on which the natural extension of Nakada’s continued fraction map T_α is given is extended from [√2-1,1) to [(√10-3)/2,1). From our construction it follows that Ω_α, the domain of the natural extension of T_α, is metrically isomorphic to g for α ∈ [g², g), where g is the small golden mean. Finally, although Ω_α proves to be very intricate and unmanageable for α ∈ [g², (√10-3)/2), the α-Legendre constant L(α) on this interval is explicitly given. In Chapter 4 N-expansions are introduced for natural numbers N larger than 1. These expansions, like semi-regular continued fraction expansions, are also sequences of partial quotients, called orbits, existing in the interval I_α = [α,α+1] for some α ∈ (0,√N-1]. Depending on N and α, there is a finite number of consecutive digits that occur as partial quotients. It appears that there are conditions (that is, combinations of N and α) such that these orbits eventually do not land in certain parts of the interval I_α, called gaps. It is proved that if the number of digits is at least five, no gaps exist. If the number of digits is four, there do not exist gaps for most N, but in the cases that there are α such that I_α contains a gap, there is only one and it covers the lion’s share of I_α.
When the number of digits is two or three, the number of gaps varies, but it is possible to give very clear conditions under which there are no gaps.","Continued fractions","en","doctoral thesis","","978-94-6384-087-3","","","","","","","","","Applied Probability","","",""
"uuid:dad7f8c1-c798-44a4-a987-e40eec5195d3","http://resolver.tudelft.nl/uuid:dad7f8c1-c798-44a4-a987-e40eec5195d3","Scaling Aspects of Silicon Spin Qubits","Boter, J.M. (TU Delft QCD/Vandersypen Lab)","Vandersypen, L.M.K. (promotor); Delft University of Technology (degree granting institution)","2020","To harness the potential of quantum mechanics for quantum computation applications, one of the main challenges is to scale up the number of qubits. The work presented in this dissertation is concerned with several aspects that are relevant in the quest of scaling up quantum computing systems based on spin qubits in silicon. Few-qubit experiments are maturing quickly, but simultaneously the lacuna between them and large-scale quantum computers is filled with a combination of science and engineering challenges. The challenges that are addressed in this dissertation are reliable and reproducible sample fabrication, qubit resilience to temperature, spatial correlations in the noise affecting the qubits, and co-integration of qubits with classical control electronics.
I start by describing the development of an integration scheme for silicon spin qubits in an academic cleanroom environment, similar to what several research groups have demonstrated over recent years. This has allowed them to successfully fabricate and operate silicon spin qubit devices. The development of such a scheme is crucial for the fabrication of proof-of-principle devices, and the testing of several design variations for more and more complex qubit devices, before transferring the optimal designs to industrial foundries that are generally less flexible. Moreover, it is essential for performing paramount few-qubit experiments in the near term. The developed scheme has been successfully implemented, as described in the next chapter of this thesis.
In the first experiment, we investigate the effect of temperature on the spin lifetime, as a first step towards higher temperature operation of silicon spin qubits. Spin qubit operation at elevated temperatures will be required to allow for co-integration of qubits with classical control electronics on a single chip, since the heat load associated with this electronics will be too large to handle at the current qubit operation temperature of ∼10 mK. At a temperature of ∼1-4 K, significantly more cooling power is available (see for example CERN's Large Hadron Collider). Such co-integration would alleviate the interconnect bottleneck and facilitate the implementation of local control in large-scale devices. We find only a modest temperature dependence and measure a spin relaxation time of 2.8 ms at 1.1 K (still much longer than the record spin dephasing time measured in such a system). In addition, we present a theoretical model and use it in combination with our experimentally obtained parameters to demonstrate that the spin relaxation time can be enhanced by low magnetic field operation and by employing high-valley-splitting devices. Together with more recent work, this experiment demonstrates that there are no fundamental limitations preventing high-temperature operation of silicon spin qubits. Simultaneously, bringing classical control electronics to lower temperatures is also an active research area.
The second experiment uses maximally entangled Bell states of two qubits to study spatial correlations in the noise acting on those two qubits. Spatial correlations in qubit errors hinder the quantum error correction schemes that will be required for fault-tolerant large-scale quantum computers, as these schemes are commonly derived under the assumption of negligible correlations in qubit errors. Therefore, it is important to know to what extent the noise causing these errors is correlated. We find only modest spatial correlations in the noise and gain insight into their origin. The data are in accordance with decoherence being dominated by a combination of nuclear spins and multiple distant charge fluctuators coupling asymmetrically to the two qubits. We recommend performing similar experiments in isotopically purified silicon, to eliminate the effect of nuclear spins and study spatial correlations in charge noise in isolation. Furthermore, our insights show how correlations can be either maximized or minimized through qubit device design. For these reasons, the prospects for the development and implementation of quantum error correction schemes in fault-tolerant large-scale quantum computers are promising.
Finally, after having studied several aspects that are relevant for determining the suitability of silicon spin qubits for large-scale quantum computation in the preceding experiments, we propose a concrete physical implementation of spin qubits co-integrated with classical control electronics in a sparse spin qubit array. While the community usually claims compatibility of silicon spin qubits with conventional CMOS fabrication, existing proposals make assumptions that remain to be validated. Implementing quantum error correction protocols in a sparse array has been studied, but the description of a physical implementation was largely missing. The sparseness of the array allows for integration of local control electronics, shown to be promising earlier in this thesis. Specifically, we propose to implement sample-and-hold circuits alongside the qubit circuitry that would allow offsetting inhomogeneity in the qubit array. This enables individual local control and shared global control, resulting in efficient line scaling. The scalable unit cell design fits 2²⁰ (≈10⁶) qubits in ∼150 mm².
We assess the feasibility of the proposed scheme, as well as its physical implementation and the associated footprint, line scaling and interconnect density.","Quantum computing; Quantum dots; Spin qubits; Silicon","en","doctoral thesis","","978-90-8593-426-4","","","","Casimir PhD Series, Delft-Leiden 2019-44","","","","","QCD/Vandersypen Lab","","",""
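The line-scaling claim in the abstract above can be illustrated with a back-of-the-envelope comparison. This is a hypothetical crossbar-style shared-addressing model for illustration only, not the circuit proposed in the thesis:

```python
import math

# Back-of-the-envelope line counts for addressing N qubits (assumed
# crossbar-style model, not the thesis's actual circuit): brute-force
# wiring needs one control line per qubit, while shared row/column
# addressing with local sample-and-hold storage needs ~2*sqrt(N) lines.
n_qubits = 2 ** 20                          # ~10^6 qubits, as in the unit cell
brute_force_lines = n_qubits                # one dedicated line per qubit
shared_lines = 2 * math.isqrt(n_qubits)     # 1024 rows + 1024 columns

print(brute_force_lines, shared_lines)      # 1048576 2048
```

The roughly 500-fold reduction in line count is what makes local sample-and-hold storage attractive at this scale.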
"uuid:5ecb27b4-746c-49db-8b4d-491f2c6f8155","http://resolver.tudelft.nl/uuid:5ecb27b4-746c-49db-8b4d-491f2c6f8155","Towards control of the optoelectronic properties of organic-inorganic perovskites","Gelvez Rueda, M.C. (TU Delft ChemE/Opto-electronic Materials)","Grozema, F.C. (promotor); Savenije, T.J. (promotor); Delft University of Technology (degree granting institution)","2020","In this thesis we have aimed to tune and control the optoelectronic properties of organic-inorganic metal halide perovskites by systematically changing components in the structure and studying the charge carrier dynamic mechanisms....","","en","doctoral thesis","","","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:42e7f02f-0844-4367-8678-e2d5a3fa959f","http://resolver.tudelft.nl/uuid:42e7f02f-0844-4367-8678-e2d5a3fa959f","Agent-based mathematical modeling of pancreatic cancer growth and several therapies","Chen, J. (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); Vermolen, F.J. (promotor); Delft University of Technology (degree granting institution)","2020","Cancer is known as one of the leading causes of death in the world, with difficult diagnosis at early stages, poor prognosis and high mortality. Animal-based experiments and clinical trials have always been the main approach for cancer research, although they may have limitations and ethical issues. Mathematical modeling, as an efficient method, is used to predict results, optimize experimental design and reduce animal use. Our work focuses on the phenomenological simulation of cancer progression and therapies at the cell scale level.","Reaction-diffusion equation; Biomechanics; Cell-based modeling; Cellular automata; Monte Carlo simulations; Pancreatic cancer; Cancer therapy","en","doctoral thesis","","978-94-6366-245-1","","","","","","","","","Numerical Analysis","","",""
"uuid:afd39f40-7569-4cc6-ac1a-659342b45f9a","http://resolver.tudelft.nl/uuid:afd39f40-7569-4cc6-ac1a-659342b45f9a","On the assessment of multiaxial fatigue resistance of welded steel joints in marine structures when exposed to non-proportional constant amplitude loading","van Lieshout, P.S. (TU Delft Ship Hydromechanics and Structures)","Kaminski, M.L. (promotor); den Besten, J.H. (copromotor); Delft University of Technology (degree granting institution)","2020","Structural geometry and stochastic loads such as swell and wind seas can typically induce multiaxial stress states in welded details of marine structures. It is known that such complex time-varying stress states determine the fatigue resistance of welded steel joints. Therefore, it is important to account for them in fatigue lifetime assessment. Over the past few decades, a wide variety of design codes and guidelines have been developed for performing fatigue assessments in engineering practice. In particular for multiaxial fatigue lifetime assessment, additional methods have been developed. These multiaxial fatigue methods are typically developed within academia. A consensus on the most suitable approach for the assessment of multiaxial fatigue in marine structures is lacking. This requires thorough investigation of all different approaches, and equitable comparison and validation with experimental data. Establishing a test setup that enables testing of multiaxial fatigue of welded marine structures is, however, time- and cost-intensive. Therefore, experimental multiaxial fatigue data is scarce.","marine structures; multiaxial fatigue; steel; welded joints; non-proportional; constant amplitude loading","en","doctoral thesis","","","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:58e77969-38ad-47d5-b41a-81c80e68c684","http://resolver.tudelft.nl/uuid:58e77969-38ad-47d5-b41a-81c80e68c684","Territories-in-between: A Cross-case Comparison of Dispersed Urban Development in Europe","Wandl, Alex (TU Delft Environmental Technology and Design)","Nadin, V. (promotor); Zonneveld, W.A.M. (promotor); Rooij, R.M. (copromotor); Delft University of Technology (degree granting institution)","2020","An increasing body of literature suggests that the conventional idea of a gradual transition in spatial structure from urban to rural does not reflect contemporary patterns of urban development and their potential for sustainable development. The research introduces the concept of territories-in-between (TiB) to address the issues surrounding the sustainability of dispersed urban development. A cross-case comparison research design was chosen to develop methods and principles that can be transferred to other geographical contexts. Ten cases in five countries were studied with the aim to answer the following questions:
What spatial structures characterise dispersed urban areas in Europe? Which morphological and functional structures of dispersed urban areas offer the potential for more sustainable development, and how can this potential be mapped and measured to inform regional planning and design? Are there similarities and dissimilarities concerning the potentials of dispersed urban areas in different locations, planning cultures, topographies and histories? Do dispersed urban areas have distinct characteristics? In sum, the findings show that dispersed urban areas in Europe are quite distinct from urban and rural areas and that they share characteristics from one place to another. The research investigated three aspects of sustainable spatial development: the potential for multi-functionality, the provision of ecosystem services, and the presence of and potential for mixed-use.","dispersed urban development; Spatial planning; regional spatial analysis; sprawl","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-244-4","","","","A+BE | Architecture and the Built Environment No 2 (2020)","","2020-06-30","","","Environmental Technology and Design","","",""
"uuid:ad5d9147-b3ef-4708-b954-142b00820499","http://resolver.tudelft.nl/uuid:ad5d9147-b3ef-4708-b954-142b00820499","Increasing the Impact of Voluntary Action Against Cybercrime","Çetin, F.O. (TU Delft Organisation & Governance)","van Eeten, M.J.G. (promotor); Hernandez Ganan, C. (copromotor); Delft University of Technology (degree granting institution)","2020","Resources on the Internet allow constant communication and data sharing between Internet users. While these resources keep vital information flowing, cybercriminals can easily compromise and abuse them, using them as a platform for fraud and misuse. Every day, we observe millions of Internet-connected resources being abused in criminal activities, ranging from poorly configured Internet of Things (IoT) devices recruited into flooding legitimate services’ networks with unwanted Internet traffic, to compromised legitimate websites distributing malicious software designed to prevent access to a victim’s data or device until a ransom is paid to the attacker.
The Internet's decentralized architecture necessitates that defenders voluntarily collaborate to combat cybercrime. While mandatory efforts may be necessary in some circumstances, the bulk of incident response will remain based on voluntary actions among thousands of Internet intermediaries, researchers and resource owners. These voluntary actions typically take the form of one party sending security notifications to another about potential security issues and asking them to act on them. Security notifications are intended to support and promote a wide range of feasible efforts, which aim to detect and mitigate millions of daily incidents and remediate underlying conditions. Despite its importance, voluntary action remains a poorly understood and significantly less investigated component of the fight against cybercrime. All of this puts a premium on understanding which voluntary cyber-defense efforts prove most effective in remediating security issues.","Cyber Security; Cybercrime; Abuse reporting; Vulnerability notifications; Walled garden notifications; Hosting providers; ISPs; Intermediaries","en","doctoral thesis","","","","","","","","","","","Organisation & Governance","","",""
"uuid:d26842a4-8f72-44d4-8952-cd17988d18d8","http://resolver.tudelft.nl/uuid:d26842a4-8f72-44d4-8952-cd17988d18d8","Straws That Tell the Wind: Top-Manager Perception of Distant Signals of the Future","van Veen, B.L. (TU Delft Economics of Technology and Innovation)","Ortt, J.R. (promotor); Badke-Schaub, P.G. (promotor); Schoormans, J.P.L. (promotor); Delft University of Technology (degree granting institution)","2020","This dissertation was prompted by its author’s amazement that only a handful of financial experts had read the arrival of the 2009 recession in the subprime mortgage problems in the American housing market. Despite hefty confrontations in the media between investment experts during the years leading up to the recession, it took the fall of Lehman Brothers for the world to become aware of the effects of the subprime crisis. Such myopia is exemplary of weak signals: strategic phenomena, detected in the environment or created during interpretation, that are distant to the perceiving top-manager’s frame of reference. If top-managers perceive weak signals early enough and interpret them accurately, they can increase the resilience of their company. If they don’t, their companies run high risks. In the case of the great recession, the top-managers who perceived correctly bet against mortgage-backed securities, while the rest had to take drastic measures to survive a double-dip recession. Whether or not having insights into the effective perception of weak signals can make or break companies...","Top-Managers; Weak Signals; perception; Foresight","en","doctoral thesis","","978-90-9032-440-0","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:5dded994-0323-4508-aeba-e47a66a6d5ba","http://resolver.tudelft.nl/uuid:5dded994-0323-4508-aeba-e47a66a6d5ba","Advanced and Accurate Discretization Schemes for Relevant PDEs in Finance","Le Floch, F.L.Y. (TU Delft Numerical Analysis)","Oosterlee, C.W. (promotor); Grzelak, L.A. (copromotor); Delft University of Technology (degree granting institution)","2020","This thesis studies advanced and accurate discretization schemes for relevant partial differential equations (PDEs) in finance. We start with techniques which may be particularly useful for the pricing of so-called vanilla financial options, European or American, and then move on to more complex models for the pricing of exotic options.
This research is targeted at architects and facility managers who are interested in user-focused office design, energy efficiency, or office renovation. The results contribute to developing design principles for office renovations with integrated user perspectives that improve users’ satisfaction and comfort, as well as energy efficiency. I expect that the design principles resulting from this research will not only contribute to an increase in the value of a building but also serve as a stepping stone for user-focused office designs or user-related aspects of the built environment.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-240-6","","","","A+BE | Architecture and the Built Environment No 1 (2020)","","","","","Climate Design and Sustainability","","",""
"uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee","http://resolver.tudelft.nl/uuid:f8faacb0-9a55-453d-97fd-0388a3c848ee","Sample efficient deep reinforcement learning for control","de Bruin, T.D. (TU Delft Learning & Autonomous Control)","Babuska, R. (promotor); Tuyls, Karl (promotor); Kober, J. (copromotor); Delft University of Technology (degree granting institution)","2020","The arrival of intelligent, general-purpose robots that can learn to perform new tasks autonomously has been promised for a long time now. Deep reinforcement learning, which combines reinforcement learning with deep neural network function approximation, has the potential to enable robots to learn to perform a wide range of new tasks while requiring very little prior knowledge or human help. This framework might therefore help to finally make general-purpose robots a reality. However, the biggest successes of deep reinforcement learning have so far been in simulated game settings. To translate these successes to the real world, significant improvements are needed in the ability of these methods to learn quickly and safely. This thesis investigates what is needed to make this possible and makes contributions towards this goal.
Before deep reinforcement learning methods can be successfully applied in the robotics domain, an understanding is needed of how, when, and why deep learning and reinforcement learning work well together. This thesis therefore starts with a literature review, which is presented in Chapter 2. While the field is still in some regards in its infancy, it can already be noted that there are important components that are shared by successful algorithms. These components help to reconcile the differences between classical reinforcement learning methods and the training procedures used to successfully train deep neural networks. The main challenges in combining deep learning with reinforcement learning center around the interdependencies of the policy, the training data, and the training targets. Commonly used tools for managing the detrimental effects caused by these interdependencies include target networks, trust region updates, and experience replay buffers. Besides reviewing these components, a number of the more popular and historically relevant deep reinforcement learning methods are discussed.
Reinforcement learning involves learning through trial and error. However, robots (and their surroundings) are fragile, which makes these trials---and especially errors---very costly. Therefore, the amount of exploration that is performed will often need to be drastically reduced over time, especially once a reasonable behavior has already been found. We demonstrate how, using common experience replay techniques, this can quickly lead to forgetting previously learned successful behaviors. This problem is investigated in Chapter 3. Experiments are conducted to investigate what distribution of the experiences over the state-action space leads to desirable learning behavior and what distributions can cause problems. It is shown how actor-critic algorithms are especially sensitive to the lack of diversity in the action space that can result from reducing the amount of exploration over time. Further relations between the properties of the control problem at hand and the required data distributions are also shown. These include a larger need for diversity in the action space when control frequencies are high and a reduced importance of data diversity for problems where generalizing the control strategy across the state-space is more difficult.
While Chapter 3 investigates what data distributions are most beneficial, Chapter 4 instead proposes practical algorithms to select useful experiences from a stream of experiences. We do not assume to have any control over the stream of experiences, which makes it possible to learn from additional sources of experience like other robots, experiences obtained while learning different tasks, and experiences obtained using predefined controllers. We make two separate judgments on the utility of individual experiences. The first judgment is on the long-term utility of experiences, which is used to determine which experiences to keep in memory once the experience buffer is full. The second judgment is on the instantaneous utility of the experience to the learning agent. This judgment is used to determine which experiences should be sampled from the buffer to be learned from. To estimate the short- and long-term utility of the experiences we propose proxies based on the age, surprise, and the exploration intensity associated with the experiences. It is shown how prior knowledge of the control problem at hand can be used to decide which proxies to use. We additionally show how the knowledge of the control problem can be used to estimate the optimal size of the experience buffer and whether or not to use importance sampling to compensate for the bias introduced by the selection procedure. Together, these choices can lead to a more stable learning procedure and better performing controllers.
In Chapter 5 we look at what to learn from the collected data. The high price of data in the robotics domain makes it crucial to extract as much knowledge as possible from each and every datum. Reinforcement learning, by default, does not do so. We therefore supplement reinforcement learning with explicit state representation learning objectives. These objectives are based on the assumption that the neural network controller that is to be learned can be seen as consisting of two consecutive parts. The first part (referred to as the state encoder) maps the observed sensor data to a compact and concise representation of the state of the robot and its environment. The second part determines which actions to take based on this state representation. As the representation of the state of the world is useful for more than just completing the task at hand, it can also be trained with more general (state representation learning) objectives than just the reinforcement learning objective associated with the current task. We show how including these additional training objectives allows for learning a much more general state representation, which in turn makes it possible to learn broadly applicable control strategies more quickly. We also introduce a training method that ensures that the added learning objectives further the goal of reinforcement learning, without destabilizing the learning process through their changes to the state encoder.
The final contribution of this thesis, presented in Chapter 6, focuses on the optimization procedure used to train the second part of the policy: the mapping from the state representation to the actions. While we show that the state encoder can be efficiently trained with standard gradient-based optimization techniques, perfecting this second mapping is more difficult. Obtaining high quality estimates of the gradients of the policy performance with respect to the parameters of this part of the neural network is usually not feasible. This means that while a reasonable policy can be obtained relatively quickly using gradient-based optimization approaches, this speed comes at the cost of the stability of the learning process as well as the final performance of the controller. Additionally, the unstable nature of this learning process brings with it an extreme sensitivity to the values of the hyper-parameters of the training method. This places an unfortunate emphasis on hyper-parameter tuning for getting deep reinforcement learning algorithms to work well. Gradient-free optimization algorithms can be simpler and more stable, but tend to be much less sample efficient. We show how the desirable aspects of both methods can be combined by first training the entire network through gradient-based optimization and subsequently fine-tuning the final part of the network in a gradient-free manner. We demonstrate how this enables the policy to improve in a stable manner to a performance level not obtained by gradient-based optimization alone, using many fewer trials than methods using only gradient-free optimization.
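The two-stage scheme summarized above can be sketched on a toy problem. This is a minimal illustrative sketch in plain NumPy; the quadratic objective, step sizes, and simple random-search fine-tuning are assumptions for illustration, not the thesis's actual implementation:

```python
import numpy as np

# Toy stand-in for the final policy layer: a 2-parameter head whose
# performance is the negative squared distance to an unknown optimum.
optimum = np.array([0.5, -0.3])

def performance(w):
    return -float(np.sum((w - optimum) ** 2))

# Stage 1: gradient-based optimization, stopped early to mimic the noisy,
# imperfect policy-gradient estimates discussed in the text.
w = np.zeros(2)
for _ in range(5):
    w += 0.2 * (-2.0 * (w - optimum))   # exact gradient ascent step

# Stage 2: gradient-free fine-tuning by simple random search around the
# gradient-based solution; only candidates that improve are accepted.
rng = np.random.default_rng(0)
best, best_perf = w.copy(), performance(w)
for _ in range(300):
    candidate = best + 0.01 * rng.standard_normal(2)
    cand_perf = performance(candidate)
    if cand_perf > best_perf:
        best, best_perf = candidate, cand_perf

# The accept-only-improvements rule guarantees the fine-tuned policy is
# at least as good as the gradient-based starting point.
print(best_perf >= performance(w))  # True
```

The accept-only-improvements rule is what gives the second stage its stability: unlike a noisy gradient step, it can never make the policy worse.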
error bounds and conduct numerical experiments. The main goal is to analyze these approximations. We will also touch upon the financial applications of BSDEs.","","en","doctoral thesis","","978-94-028-1886-4","","","","","","","","","Numerical Analysis","","",""
"uuid:e4dc2dfc-6c9c-4849-8aa9-befa3001e2a3","http://resolver.tudelft.nl/uuid:e4dc2dfc-6c9c-4849-8aa9-befa3001e2a3","Quantifying the quality of coastal morphological predictions","Bosboom, J. (TU Delft Coastal Engineering)","Reniers, A.J.H.M. (promotor); Stive, M.J.F. (promotor); Delft University of Technology (degree granting institution)","2020","This thesis investigates the behaviour of the often used point-wise skill score, the MSESSini a.k.a. BSS, and develops new error metrics that, as opposed to point-wise metrics, take the spatial structure of morphological patterns into account. The MSESSini measures the relative accuracy of a morphological prediction over a prediction of zero morphological change, using the mean-squared error (MSE) as the accuracy measure. The main findings about the MSESSini are: 1) a generic ranking, based on values for MSESSini, has limited validity, since the zero change reference model fails to make model performance comparable across different prediction situations; 2) the combination of larger, persistent and smaller, intermittent scales of cumulative change may lead to an increase of skill with time, without the prediction on either of these scales becoming more skilful with time; 3) in the presence of inevitable location errors, the MSESSini favours predictions that underestimate the variance of cumulative bed changes and 4) existing methods to correct for measurement error are inconsistent in either their skill formulation or their suggested classification scheme. In order to overcome the inherent limitations of point-wise metrics, three novel diagnostic tools for the spatial validation of 2D morphological predictions are developed. First, a field deformation or warping method deforms the predictions towards the observations, minimizing the squared point-wise error.
Error measures are formulated based on both the smooth displacement field between predictions and observations and the residual point-wise error field after the deformation. In contrast with the RMSE, the method captures the visual closeness of morphological patterns. Second, an optimal transport method defines the distance between predicted and observed morphological fields in terms of an optimal sediment transport field. The optimal corrective transport field moves the misplaced sediment from the predicted to the observed morphology at the lowest quadratic transportation cost. The root-mean-squared value of the optimal transport field, the RMSTE, is proposed as a new error metric. As opposed to the field deformation method, the optimal transport method is mass-conserving, parameter-free and symmetric. The RMSTE, unlike the RMSE, is able to discriminate between predictions that differ in the misplacement distance of predicted morphological features. It also avoids the consistent reward of the underestimation of morphological variability that the RMSE is prone to. Third, a scale-selective validation approach allows any metric to selectively address multiple spatial scales. It employs a smoothing filter in such a way that, in addition to the domain-averaged statistics, localized validation statistics and maps of prediction quality are obtained per scale (geographic extent or areal size of focus). The employed skill score weights how well the morphological structure and variability are simulated, while avoiding rewarding the underestimation of variability. To fully describe prediction quality, multiple metrics are required, with a weighting determined by the goal of the simulation. Point-wise metrics should be supplemented with an error decomposition, so as to avoid undesired underestimation of variability. Further, a set of performance metrics must include a metric, e.g.
the RMSTE, that accounts for the spatial structure of the observed and predicted morphological fields.","(root)-mean-squared error; model accuracy; morphodynamic modelling; model validation; optimal transport; Monge–Kantorovich; root-mean-squared transport error; effective transport difference; image warping; image matching; scale-selective validation; optical flow; Brier skill score; model skill; zero change model; measurement error; location error; pattern skill","en","doctoral thesis","","978-94-6384-091-0","","","","","","","","","Coastal Engineering","","",""
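The point-wise skill score discussed in the abstract above has a simple closed form: skill relative to the zero-change reference prediction. A minimal sketch follows, with made-up 1D bed-level arrays for illustration only, not data or code from the thesis:

```python
import numpy as np

def msess_ini(predicted, observed, initial):
    """MSE skill score against the zero-morphological-change reference:
    skill = 1 - MSE(predicted, observed) / MSE(initial, observed)."""
    mse_model = np.mean((predicted - observed) ** 2)
    mse_zero_change = np.mean((initial - observed) ** 2)
    return 1.0 - mse_model / mse_zero_change

# Hypothetical 1D bed levels: a prediction closer to the observation than
# "no change" scores between 0 and 1; a perfect prediction scores 1.
initial = np.zeros(4)
observed = np.array([1.0, 1.0, 1.0, 1.0])
predicted = np.array([0.9, 1.1, 1.0, 0.8])
print(round(msess_ini(predicted, observed, initial), 3))  # 0.985
```

Because the denominator depends on how much change actually occurred, the same model can score very differently across prediction situations, which is one of the limitations the thesis identifies.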
"uuid:3b5c88bf-8174-44a3-a7ba-c82dac4bbae6","http://resolver.tudelft.nl/uuid:3b5c88bf-8174-44a3-a7ba-c82dac4bbae6","Novel concepts, systems and technology for sludge management in emergency and slum settings","Mawioo, P.M. (TU Delft BT/Environmental Biotechnology)","Brdjanovic, Damir (promotor); Hooijmans, Christine Maria (copromotor); Delft University of Technology (degree granting institution)","2020","Management of sludge is one of the most pressing issues in sanitation provision. The situation is especially complex when large quantities of fresh sludge containing various contaminants are generated in onsite sanitation systems in urban slums, emergency settlements and wastewater treatment facilities that require proper disposal of the sludge. The application of fast and efficient sludge management methods is important under these conditions. This study focuses on addressing the existing challenges and gaps in sludge management, particularly the management of faecal sludge that is generated in densely populated areas, through innovative concepts and technological development. To assess the current status of decentralized management of faecal sludge, a review of the existing (emergency) sanitation practices and technologies was conducted. In this study, the gaps and opportunities in technological developments for sanitation management in complex situations were identified. The need for an innovative sludge management system led to the development of the “emergency sanitation operation system, eSOS”. This concept proposed and demonstrated the application of modern innovative sanitation solutions and existing information technologies for sludge management. In addition, as a component of the eSOS concept, a sludge treatment system based on microwave irradiation technology, which forms the core of this research, was developed and tested. The microwave technology study was carried out in two stages.
The first stage involved preliminary and validation tests at laboratory scale using a domestic microwave unit to assess the applicability of the microwave technology for sludge treatment. Two sludge types were tested: blackwater sludge, extracted from a highly concentrated raw blackwater stream, and faecal sludge, obtained from urine-diverting dry toilets. The results demonstrated the capability of the microwave technology to rapidly and efficiently reduce the sludge volume by over 70% and decrease the concentration of the bacterial pathogenic indicator E. coli and Ascaris lumbricoides eggs to below the analytical detection levels. Based on these results, a pilot-scale microwave reactor unit was designed, produced and evaluated using waste activated sludge, faecal sludge, and septic sludge, which formed the second stage of the study. The results demonstrated that microwave treatment successfully achieved complete bacterial inactivation, as in the laboratory tests (i.e. E. coli, coliforms, Staphylococcus aureus, and Enterococcus faecalis), and a sludge weight/volume reduction above 60%. Furthermore, the dried sludge and condensate had high energy (≥ 16 MJ/kg) and nutrient contents (solids: TN ≥ 28 mg/g TS and TP ≥ 15 mg/g TS; condensate: TN ≥ 49 mg/L TS and TP ≥ 0.2 mg/L), giving them the potential to be used as biofuel, soil conditioner, fertilizer, etc. Overall, this study revealed the existence of a wide range of regular onsite and offsite sanitation options that have the potential to be applied for sludge management in emergencies. Situations with characteristics more or less similar to emergencies, such as urban slums, can also benefit from these technologies.
In addition, the shortfalls experienced in many current emergency sanitation responses were associated with the often-used conventional fragmented approach that does not capture the entire sanitation chain, but rather looks at the individual components separately, with emphasis on the containment facilities. An innovative emergency Sanitation Operation System (eSOS) concept was thus introduced in this study, which uses and promotes a systems approach integrating all components of an emergency sanitation chain. Furthermore, the study demonstrated that a microwave-technology-based reactor can be applied for the rapid treatment of sludge in areas where large volumes of sludge are generated, such as slums and emergency settlements.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-90221-6","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of IHE Delft Institute for Water Education.","","","","","BT/Environmental Biotechnology","","",""
"uuid:7cb715f4-eaf0-4526-8552-9f97cc864383","http://resolver.tudelft.nl/uuid:7cb715f4-eaf0-4526-8552-9f97cc864383","Fault-tolerant quantum computation: Theory and practice","Vuillot, C. (TU Delft QCD/Terhal Group)","Terhal, B.M. (promotor); Delft University of Technology (degree granting institution)","2020","Quantum computation is the modern version of Schrödinger’s cat experiment. It is backed up in principle by the theory, and thinking about it can make people equally uncomfortable and excited. Besides, its practical realization seems so extremely challenging that some people even doubt it is possible. On the other hand, we are nowadays much closer to realizing quantum computation and, in addition, it has far more implications than Schrödinger’s original cat experiment. One of the major difficulties in realizing quantum computation is the inevitable presence of noise in realistic quantum devices, which makes the direct realization of quantum computers impossible. In order to protect quantum information and quantum processes against noise, quantum error correction and fault-tolerance have been devised. Although the gap between experiments and the requirements of fault-tolerance is still daunting, the field of quantum error correction and fault-tolerance extends and influences architectural decisions from the hardware to the ideal quantum programs that we want to run. That is why it has the potential to make or break the practicality of quantum computation, and a lot of research effort goes into this field. In this thesis we investigate and improve several aspects of fault-tolerant schemes and quantum error correction. We implement an experiment which validates on a small device the usefulness of fault-tolerance for quantum computation. We investigate the advantages of harnessing quantum continuous degrees of freedom present in the lab to protect discrete quantum information in a scalable way.
We establish a framework to analyze the fault-tolerant properties of code deformation techniques, which are versatile techniques to process quantum information protected by an error correcting code. We also present some novel code deformation techniques with the potential to increase reliability. Finally, we define a new class of quantum error correcting codes, quantum pin codes, with built-in capabilities for fault-tolerant quantum gates. We give some practical constructions and show some protocols with interesting parameters. The roads towards universal and fault-tolerant quantum computation are still steep, but research efforts are pushing in the right directions.","quantum computing; quantum error correction; fault-tolerance","en","doctoral thesis","","978-94-6384-097-2","","","","","","","","","QCD/Terhal Group","","",""
"uuid:1cbc3ec2-7297-4a6a-8fdf-dce9e271c76f","http://resolver.tudelft.nl/uuid:1cbc3ec2-7297-4a6a-8fdf-dce9e271c76f","Advancing Robust Multi-Objective Optimisation Applied to Complex Model-Based Water-Related Problems","Marquez Calvo, O.O. (TU Delft Water Resources)","Solomatine, D.P. (promotor); Alfonso, Leonardo (copromotor); Delft University of Technology (degree granting institution)","2020","The exercise of solving engineering problems that require optimisation procedures can be seriously affected by uncertain variables, resulting in potential underperforming solutions. Although this is a well-known problem, important knowledge gaps are still to be addressed. For example, concepts of robustness largely differ from study to study, robust solutions are generally provided with limited information about their uncertainty, and robust optimisation is difficult to apply as it is a computationally demanding task.
The proposed research aims to address the mentioned challenges and focuses on robust optimisation of multiple objectives and multiple sources of probabilistically described uncertainty. This is done by the development of the Robust Optimisation and Probabilistic Analysis of Robustness algorithm (ROPAR), which integrates widely accepted robustness metrics into a single flexible framework. In this thesis, ROPAR is not only tested in benchmark functions, but also in engineering problems related to the water sector, in particular the design of urban drainage and water distribution systems.
ROPAR allows for employing practically any existing multi-objective optimisation algorithm as its internal optimisation engine, which enables its applicability to other problems as well. Additionally, ROPAR can be straightforwardly parallelized, allowing for fast availability of results.","Robust optimisation; Robust optimization; Optimisation; Optimization; Drainage system; Water distribution system","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-46043-3","","","","","","","","","Water Resources","","",""
"uuid:7461ee1c-1f76-43aa-b8bb-8da6f57c3528","http://resolver.tudelft.nl/uuid:7461ee1c-1f76-43aa-b8bb-8da6f57c3528","Energy-aware noise reduction for wireless acoustic sensor networks","Zhang, J. (TU Delft Signal Processing Systems)","Heusdens, R. (promotor); Hendriks, R.C. (promotor); Delft University of Technology (degree granting institution)","2020","In speech processing applications, e.g., speech recognition, hearing aids (HAs), video conferencing, and human-computer interaction, speech enhancement or noise reduction is an essential front-end task, as the recorded speech signals are inevitably corrupted by interference, including coherent/incoherent noise and reverberation. Traditional noise reduction algorithms are mostly based on spatial filtering techniques using a microphone array. The performance of the noise reduction algorithms scales with the number of microphones that are involved in filtering, but a large-sized microphone array cannot be mounted in many realistic systems, e.g., HAs. In the last few decades, with great developments in micro-electro-mechanical systems, wireless devices such as smartphones, laptops, wireless HAs, and iPads have become more and more common in our daily life. These devices are equipped with acoustic sensors and are capable of wireless communication, leading to a wireless acoustic sensor network (WASN). The WASN can be organized in a centralized fashion, where all the devices are only allowed to connect with a fusion center (FC), or in a decentralized way, where the devices are connected with nearby counterparts via wireless links. This WASN can resolve the disadvantages of traditional microphone array systems, since the wireless devices can be placed anywhere in the vicinity and one device is able to make use of measurements from other external devices.
More importantly, the acoustic scene can be sampled more comprehensively, resulting in a potential improvement in noise reduction performance.","Microphone subset selection; rate distribution; noise reduction; binaural cue preservation; distributed algorithms; relative acoustic transfer function; quantization; bit-rate; power consumption; energy efficiency; wireless acoustic sensor networks","en","doctoral thesis","","978-94-6366-239-0","","","","","","","","","Signal Processing Systems","","",""
"uuid:5b6e71fd-150a-4b0f-8a53-8cb226d24117","http://resolver.tudelft.nl/uuid:5b6e71fd-150a-4b0f-8a53-8cb226d24117","Inhabitable Voids: Housing Design in Iran’s Period of High-Modernisation","Sedighi, S.M.A. (TU Delft Urban Development Management; TU Delft Space & Type)","van Gameren, D.E. (promotor); Mota, Nelson (copromotor); Delft University of Technology (degree granting institution)","2020","Focusing on the design of large-scale housing schemes, this doctoral dissertation examines the extent to which the architecture of dwelling was affected by the oil-led geopolitics of the Cold War, and influenced by the modernist logic of architectural design and urban planning, in Iran’s period of high-modernisation (1945-1979), as Eskandar Mokhtari termed it. This study questions the country’s geopolitical position during the Cold War and its impact on the architecture of dwelling. It shows how a series of experimental housing solutions, initiated by leading Iranian architects mostly educated in the West (Europe and North America), became a physical expression of both the state’s modernisation demands and people’s everyday needs. Finally, it illustrates how the design mechanisms employed by these architects enabled continual change and transformation in their proposed housing schemes, and empowered the users of space to designate and establish a set of relations with their living environment. While this study could be seen as a contribution to the discourse of urban modernisation in Iran, mainly by focusing on the agency of the dweller in the transformation through time of public housing districts in the country, it also aims to address some current issues related to the design and production of public housing. It seems that in the process of housing development, architects and decision-makers with different backgrounds employ two distinct approaches that mostly do not result in a convergent solution.
While this gap between the visions and realities of public housing policies and designs might be seen as a universal and common phenomenon, it has resulted in many critiques of the development of public housing and built environments in Iran. Iranian architects criticised the government for its top-down housing policies, which overemphasised the application of industrialised methods for the production of houses while neglecting the importance of people’s socio-cultural practices as well as vernacular patterns of inhabitation for housing design. Conversely, decision-makers describe the housing solutions provided by architects mostly as visionary and inefficient. Accordingly, investigating overlaps between these divergent approaches is of vital importance, as it would foster a dialogue among the multiple stakeholders involved in the process of housing development, and illustrate the roles that architects can perform therein. In Iran’s period of high-modernisation, the architecture of dwelling was widely seen as a place to fulfil the state’s ambitious goals of modernisation projects, and simultaneously to resist universalising tendencies. The Iranian Finance and Planning Organisation prepared five distinct Development Plans, in which housing for middle and low-income families held a prominent place. Indeed, these Plans reflected the national and international socio-political and economic conditions of the time, which resulted from rural-urban migration and the demographic changes under way in Iran. Accordingly, each Plan led to the construction of a series of large-scale housing projects in urban areas. Among these projects, Kuy-e Chaharsad-Dastgah (1946-50), Kuy-e Narmak (1951-58), Kuy-e Kan (1958-64), Kuy-e Ecbatana (1972-92), and Shushtar-Nou (1975-85) were designed and developed as experimental models by leading Iranian architects, to promote a synthesis of Western living standards and Iran’s vernacular patterns of inhabitation.
By investigating these models, initiated in three different stages of modernisation in Iran, this dissertation first unfolds the processes that led to collaboration and negotiation among the visions and realities of the stakeholders (particularly architects and policy/decision-makers) involved in the production of houses and the provision of housing solutions. Then, it shows how the mechanisms employed for the design of these housing schemes enabled continual change and transformation. Finally, this study argues that each of these housing models is embedded with a form of inhabitable voids that could be read as heterotopia, as defined by Michel Foucault. In other words, incorporating some local archetypes into the design process of these projects creates a certain type of place in which the ‘other’ space can flourish. Accordingly, these heterotopic voids might hold a creative potential to be perceived as a tool from the disciplinary toolbox of architecture, one that provides certain forms of relationships among people and with their living environment.","Iran; Public Housing Design; The Habitat Bill of Rights; User Participation; Growth and Change; Inhabitable Voids; High-Modernisation","en","doctoral thesis","Delft University of Technology","978-94-6384-032-3","","","","","","2025-01-01","","","Urban Development Management","","",""
"uuid:d2a3bafb-f39d-49ba-a9c0-bb266a9f9ba5","http://resolver.tudelft.nl/uuid:d2a3bafb-f39d-49ba-a9c0-bb266a9f9ba5","Asset Health Index and Risk Assessment Models for High Voltage Gas-Insulated Switchgear Operating in Tropical Environment","Purnomoadi, A.P. (TU Delft DC systems, Energy conversion & Storage)","Smit, J.J. (promotor); Mor, A. R. (promotor); Delft University of Technology (degree granting institution)","2020","Following deregulation in the energy sector during the 1990s, which was also triggered by the ageing of infrastructure and increasing demands from regulators and customers, many network utilities adopted Asset Management (AM) in the hope of earning more, obtaining better credit ratings and gaining from stock prices. In line with this, international AM standards, such as the ISO 55000 series published in 2014, gained rapid acceptance among network utilities around the globe.
AM has its core in the asset decision-making process. This activity lies simultaneously at the strategic, tactical and operational level of AM, over the lifecycle of the asset. In such an environment, the asset managing department should not only focus on the reliability of the asset but also on balancing costs, risks and asset performance. Regarding maintenance, the money spent on every maintenance task should benefit the company’s business values.
This thesis focuses on the development of decision-making tools for the maintenance of high voltage AC (HVAC) gas-insulated switchgear (GIS) operating under tropical conditions. GIS has been chosen because of its critical role in the transmission network. Any GIS breakdown is usually expensive and requires an extensive outage. Moreover, under tropical conditions, this study observed GIS failure rates over twice the value reported by CIGRE’s survey of 2007. The study was conducted on the Java Bali (JABA) case study, which consists of 631 CB-bays of 150 kV and 500 kV GISs located in Java and Bali, Indonesia.
Today’s AM decision-making tools for electrical power grids are generally based on Asset Health Index (AHI) and risk assessment (RA) models. These models assist the asset manager in answering the following questions:
1. What is the condition of each GIS in the network?
2. Which one is more likely to fail compared to the others?
3. Which one is more critical compared to the others in terms of making a possible impact on the company’s business such that the mitigating action is prioritised?
4. What optimal action(s) is/are needed to be taken?
Developing the above-mentioned models requires sufficient knowledge of the characteristics of GIS operating under tropical conditions. To that purpose, both statistical analysis and forensic investigations in the JABA case study have been undertaken to find the critical condition indicators for the AHI model. The results are as follows:
1. The tropical conditions have influenced the performance of GIS both directly and indirectly. Corrosion of exposed GIS parts was seen as a common direct influence of tropical conditions. It can trigger leakages and, as a secondary effect, lead to failures of the driving-mechanism subsystems, which reduce the GIS’s performance. The intensive and frequent lightning in tropical conditions is a so-called Failure Susceptibility Indicator (FSI), indicating that a failure mode is more likely to initiate than for the same GIS in other environments, especially if the surge arrester fails to protect. Moreover, outdoor and older-generation GISs are more susceptible to breakdown under tropical conditions.
2. A high humidity content was found in the non-CB enclosures of GIS from the lower voltage class (i.e. Class 2 GIS with a voltage level of 150 kV). This humidity mainly originates from the desorption of moisture from the spacer or internal GIS surfaces during operation.
3. The critical failure modes in GIS operating under tropical conditions are as follows: dielectric insulation breakdown, loss of mechanical integrity in the primary conductor and failing to perform the requested operation due to driving mechanism failure.
Following this study’s findings, laboratory tests in the HV Laboratory of TU Delft were conducted to investigate the influence of high humidity content on spacer flashover in GIS. The results confirmed that, without condensation, humidity has no impact on the withstand strength of the insulation system under AC, LI+/- and SI. Our model also showed that the breakdown voltage under LI+ due to condensation at the surface of a solid insulator is lower than that due to a 2 mm metallic particle attached to the identical solid insulator at 3000 ppmV.
We applied the findings from both field investigation and laboratory tests into our models in the following ways:
1. In the AHI model:
a. Statistical and laboratory case studies from JABA were performed to assess the system’s vulnerabilities and normative levels, in particular the humidity content in the non-CB enclosure of the GIS, as long as the value was far from the possibility of condensation.
b. The likelihood of failure is determined by so-called condition scale codes reflecting the deterioration of the subsystems.
c. The failure susceptibility indicators (FSI) flag deviating circumstances, such as heavy environmental conditions, operation and maintenance records, and the inherent/design factors of GIS. The FSI represent an expectation and are not based on evidence, as a condition indicator is. Therefore, the FSI serve as warning flags for the decision-maker.
2. In the RA model:
a. Risk is defined as the likelihood of failure times the consequences. The result of the AHI defines the likelihood of failure in the RA model.
b. On the other hand, the consequences consist of seven business values of a transmission utility from the JABA case study, namely, safety, extra fuel cost, energy not served, equipment cost, customer satisfaction, leadership and environment.
We have successfully implemented these models on a GIS example from the JABA case study. Evaluation of possible risk treatments was also done using multi-criteria analysis (MCA) to optimise three parameters: cost, time-to-finish treatment and residual risk.
In practice, transmission utilities face more complex situations with more types of equipment in the network. The methodology discussed in this thesis, however, can be the cornerstone for the development of decision-making tools for other assets at the tactical level of AM as well.","Asset management; Asset Health Index; Risk assessment; Gas insulated switchgear; Tropical environment; Decision-Making","en","doctoral thesis","","978-94-6384-098-9","","","","","","2020-01-13","","","DC systems, Energy conversion & Storage","","",""
"uuid:7de36fae-025d-499a-a726-21657cffce6c","http://resolver.tudelft.nl/uuid:7de36fae-025d-499a-a726-21657cffce6c","Metal-organic Framework Mediated Electrode Engineering for Electrochemical CO2 Reduction","Wang, R. (TU Delft ChemE/Catalysis Engineering)","Kapteijn, F. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2020","The electrochemical conversion of CO2 constitutes an interesting pathway to close the anthropogenic carbon cycles. The ability to reach stable operation in short time makes this method a perfect candidate to buffer the intermittency of renewable power sources, such as solar cells and wind power. As is the case of heterogeneous catalysis, the key to commercialize a process lies in the optimization of the catalytic phase. In this thesis, we take advantage of the unique properties of metal-organic frameworks (MOFs) to synthesize efficient catalysts for electrochemical CO2 reduction (CO2ER). Specifically, the two properties we utilize are the atomic dispersion of the elements and the highly designable building blocks (Chapter 1)....","","en","doctoral thesis","","978-94-028-1858-1","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:345937cb-fd16-46b3-9c01-c098e06f0743","http://resolver.tudelft.nl/uuid:345937cb-fd16-46b3-9c01-c098e06f0743","Manipulation of supercurrent by the magnetic field","Irfan, M. (TU Delft QN/Akhmerov Group)","Akhmerov, A.R. (promotor); Wimmer, M.T. (copromotor); Delft University of Technology (degree granting institution)","2020","Superconductor–semiconductor hybrid devices are interesting not only for their known and potential applications but also for the associated novel physical processes. One such example is the proposal for the realization of Majorana zero-modes, which are robust against noise and have applications in quantum information processing. Although the Josephson effect has been known for decades, advances in experimental technology have only recently made it possible to fabricate highly tunable hybrid devices. In this thesis, we study superconductor–normal-metal–superconductor Josephson junctions and propose new effects or analyze experimental findings. In a Josephson junction, it is difficult to determine whether the flow of supercurrent is ballistic or diffusive. We propose an hourglass-shaped Josephson junction geometry to probe the nature of transport. In this device, the measurement of the critical current as a function of an external magnetic field produces a clear signature of ballistic supercurrent. In metal-based Josephson junctions, the supercurrent flows uniformly through the scattering region. In contrast, semiconductor-based Josephson junctions allow a tunable supercurrent due to the tunable carrier density of the semiconductors. We model a bilayer graphene Josephson junction with a split-top and back gate in the presence of an applied magnetic field to analyze the experimental measurements. The opening of a bandgap in the gated area of bilayer graphene by applying a tunable electrostatic potential allows spatial manipulation of the supercurrent. The magnetic field is then used to probe the supercurrent flow in the device.
In general, an applied magnetic field strongly suppresses the supercurrent in Josephson junctions because it randomizes the contributions of the individual states. However, we show that graphene Josephson junctions are special and avoid the suppression of the critical current under an applied in-plane magnetic field. The critical current as a function of the Zeeman field has a plateau whose size depends on the junction details. Finally, we study a Josephson junction coupled with a microwave transmission line resonator, in collaboration with an experimental group. We model this system to analyze and explain an unexpected experimental result. We show that the unexpected outcome of the experiment is due to the coupling of the higher modes of the transmission line resonator.","","en","doctoral thesis","","978-90-8593-429-5","","","","","","","","","QN/Akhmerov Group","","",""
"uuid:a20ccf52-0b32-4f9c-924a-79b87b22505e","http://resolver.tudelft.nl/uuid:a20ccf52-0b32-4f9c-924a-79b87b22505e","Restructuring medium voltage distribution grids: Parallel AC-DC reconfigurable links","Shekhar, A. (TU Delft DC systems, Energy conversion & Storage)","Bauer, P. (promotor); Delft University of Technology (degree granting institution)","2020","The energy transition will inevitably lead to greater electrification. For example, it is anticipated that electrical energy demand will rise by at least 2-3 times by 2050 with the increasing share of electric vehicles and heat pumps. This will translate to a significant increase in power demand on the existing medium voltage distribution grid, resulting in structural challenges on such predominantly radial ac networks. Dispersed and variable renewable energy resources further introduce power mismatches, with local regions of excess generation and consumption. Under such a scenario, Distribution Network Operators (DNOs) must explore solutions to restructure the grid infrastructure with the goal of capacity reinforcement, improved controllability and efficient power redirection.
In this thesis, dc based technologies are proposed to realize the grid transition from purely ac to hybrid ac-dc networks to address the anticipated challenges posed by the energy transition. Refurbishing the existing ac links to operate under dc conditions is shown to enhance the power transfer capacity by approximately 50% within the studied constraints, at higher energy efficiency. Reconfigurability between such parallel operating ac and dc links can further increase the achievable capacity gains during (n-1) contingencies, which relates to the capacity maintained with a single component failure in the system. Further, dc interlinks are introduced to weakly mesh the radial ac distribution networks for efficiently redirecting power to minimize local demand mismatches, prevent branch overloads and increase the availability of the grid.
The relevant component engineering aspects, such as converter design as well as cable insulation performance under ac and dc conditions, are developed to support the underlying assumptions. Control challenges, such as the mitigation of common mode currents specific to parallel ac-dc link systems, are explored. The concept of optimal active power steering by the dc link while supporting the full reactive power demand is developed mathematically and demonstrated using an experimental set-up of the proposed system.
It is therefore suggested that, in the future, dc technologies will play an important role in restructuring medium voltage ac distribution grids to achieve higher flexibility, controllability and inter-connectivity with enhanced capacity and efficiency. The proposed concepts of this thesis can be extended to integrate renewable energy resources directly into the embedded dc links, making the system multi-terminal and thus transitioning towards a universal dc grid.","capacity enhancement; dc links; distribution network; efficiency; expansion; flexible; medium voltage; mmc; (n-1) contingency; parallel; reconfiguration; reinforcement; redundancy","en","doctoral thesis","","978-94-6366-235-2","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:f98bca0e-5ee8-4fd3-8a60-d0e624e525d6","http://resolver.tudelft.nl/uuid:f98bca0e-5ee8-4fd3-8a60-d0e624e525d6","Investigation on foam-assisted chemical flooding for enhanced oil recovery: An experimental and mechanistic simulation study","Janssen, M.T.G. (TU Delft Applied Geophysics and Petrophysics)","Zitha, P.L.J. (promotor); Delft University of Technology (degree granting institution)","2020","Foam-assisted chemical flooding (FACF) is a novel enhanced oil recovery (EOR) methodology that combines the injection of a surfactant slug, to mobilize previously trapped residual oil, with foam generation for drive mobility control, thus displacing the mobilized oil bank. The main goal of this study is to understand the oil mobilization and displacement mechanisms that take place in a FACF process. First, in order to understand the incremental benefits that FACF can provide, we familiarize ourselves with immiscible gas flooding and water-alternating-gas (WAG) injection. Subsequently, we study the effect of aqueous phase salinity, drive foam quality, and the method of drive foam injection on the oil mobilization and displacement processes in FACF, both at model-like conditions and in a reservoir setting. We present novel insights into the dynamic physical processes that take place within the porous media during FACF, which could only be obtained with the assistance of a medical CT scanner. Moreover, in order to identify the main controlling parameters that determine incremental oil recovery in WAG and FACF, we develop several mechanistic models to aid in history-matching laboratory observations.","alkaline; surfactant; foam; oil; immiscible gas injection; water-alternating-gas; enhanced oil recovery; core-flood; computed tomography; mechanistic simulation","en","doctoral thesis","","978-94-6384-099-6","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:40454512-e67c-41c5-963b-5862a1b94ac3","http://resolver.tudelft.nl/uuid:40454512-e67c-41c5-963b-5862a1b94ac3","Time-series analysis to estimate aquifer parameters, recharge, and changes in the groundwater regime","Obergfell, C.C.A. (TU Delft Water Resources)","Bakker, M. (promotor); Delft University of Technology (degree granting institution)","2020","The objective of this thesis is twofold: to develop time series analysis methods for the estimation of aquifer parameters and recharge to be used in groundwater models and to develop time series analysis methods for the identification and quantification of a regime change.
In Chapter 2, a pumping test is replaced by time series analysis of heads measured in the vicinity of a well field with a strongly varying pumping regime. The step response function obtained with time series analysis provides an estimate of the steady response to pumping that would be achieved if the pumping rate was constant. The resulting virtual steady state cone of depression of the well field allows for a straightforward calibration of a regular groundwater model to estimate aquifer parameters. In addition, time series analysis can be used to determine the type of reaction, phreatic or semi-confined, in the different monitoring wells.
In Chapter 3, stream-aquifer interaction is analyzed with a time series model using a response function that is a solution to the groundwater flow equation. Head fluctuations in the vicinity of a river are analyzed, which result directly in estimates of aquifer parameters, including the resistance to flow at the interface between the stream and the aquifer. For the study site, the resistance to flow between the stream and the aquifer can be explained by stream line contraction rather than by the presence of a semi-pervious layer at the bottom of the river.
In Chapter 4, time-averaged groundwater recharge is estimated from time series models of groundwater heads that are fitted under an additional constraint that aims at better identifying the influence of evaporation. The constraint is that the seasonal harmonic of the observed head is reproduced as the response of the seasonal harmonics of precipitation, evaporation, and pumping. Better identification of the influence of evaporation results in more reliable recharge estimates to be used in regular groundwater flow models.
In Chapter 5, time series analysis is applied to identify and analyze a transition in the groundwater regime of an aquifer. The groundwater regime is defined as the range of head variations of a time series throughout the seasons. A new time series modeling approach is proposed to simulate the transition from an initial regime to an altered regime. In the case study, the estimated timing and magnitude of the transition provides strong evidence that the transition is the result of dredging works in the main river draining the aquifer. The existence of the transition of the groundwater regime had gone unnoticed, despite intensive groundwater monitoring.
This thesis showed how time series analysis can be applied to estimate the magnitude of groundwater model parameters or recharge and be applied as a tool to gain insight in the functioning of groundwater systems.
A crucial issue when estimating aquifer parameters or recharge from time series models is the uncertainty of the estimates. A modified Gauss-Newton approach was used in this thesis. This approach converges quickly and provides an estimate of the confidence intervals of the estimated parameters. The systematic comparison of different estimation procedures, including Markov Chain Monte Carlo, is recommended for future study.
Groundwater modeling is based on a conceptual model of a groundwater system to simulate groundwater flow, while time series analysis can be used to estimate groundwater model parameters and identify possible changes in regimes for use in groundwater models. Both modeling approaches are complementary and it is recommended that they be applied together in a systematic fashion.
Over the years, MPM has been successfully applied to many complex problems from engineering and computer graphics. Despite its impressive performance in these applications, the method still suffers from several numerical shortcomings, such as stability issues, inaccurate mapping of the material-point data, and unphysical oscillations that arise when material points travel from one element to another, the so-called grid-crossing errors. This dissertation provides an overview of the existing literature that addresses these drawbacks, and introduces new mathematical techniques that improve the performance of MPM.
Previous studies have indicated that the use of higher-order B-spline basis functions within MPM mitigates the grid-crossing errors, thereby improving the accuracy of the method. This thesis combines the B-spline approach, known as BSMPM, with an alternative technique to project the information from material points to the background grid. The mapping technique is based on cubic-spline interpolation and Gauss quadrature. The numerical results show that the proposed approach further increases the accuracy of the method and leads to higher-order convergence. Moreover, the extension of BSMPM to unstructured grids using Powell-Sabin splines is discussed.
After that, this dissertation compares MPM to the optimal transportation meshfree (OTM) method. Both MPM and the OTM method have been developed to efficiently solve partial differential equations that arise from the conservation laws in continuum mechanics. However, the methods are derived in a different fashion and have been studied independently of one another. This thesis provides a direct step-by-step comparison of the MPM and OTM algorithms. Based on this comparison, the conditions under which the two approaches can be related to each other are derived, thereby bridging the gap between the MPM and OTM communities. In addition, the thesis introduces a novel unified approach that combines the design principles from BSMPM and the OTM method. The proposed approach is significantly cheaper and more robust than the standard OTM method and allows for the use of a consistent mass matrix without stability issues that are typically encountered in MPM computations.
Finally, this thesis introduces a novel function reconstruction technique that combines the well-known least-squares method with local Taylor basis functions, called Taylor least squares (TLS). The technique reconstructs functions from scattered data, while preserving their integral values. In conjunction with MPM or a related method, the TLS technique locally approximates quantities of interest, such as stress and density, and when used with a suitable quadrature rule, conserves the total mass and linear momentum after transferring the material-point information to the grid. The integration of the technique into MPM, the dual domain material-point method (DDMPM), and BSMPM significantly improves the results of these methods. For the considered one-dimensional examples, the TLS function reconstruction technique resembles the approximation properties of the highly accurate cubic-spline reconstruction, while preserving the physical aspects of the standard algorithm.","material-point method; function reconstruction; Taylor least squares; optimal transportation meshfree method; B-spline; grid-crossing error; spatial accuracy","en","doctoral thesis","","978-94-6384-089-7","","","","","","","","","Numerical Analysis","","",""
"uuid:42fb72e0-c47d-4026-9d28-2a2d82c408b4","http://resolver.tudelft.nl/uuid:42fb72e0-c47d-4026-9d28-2a2d82c408b4","The Adaptive Robust Design Approach: Improving Analytical Support under Deep Uncertainty","Hamarat, C. (TU Delft Policy Analysis)","Thissen, W.A.H. (promotor); Pruyt, E. (copromotor); Delft University of Technology (degree granting institution)","2019","Policymaking often involves different parties such as policymakers, stakeholders, and analysts each with distinct roles in the process. To assist policymakers, policy analysts help in structuring the problem, designing, and evaluating policy alternatives. Analysts face many challenges, like complexity and uncertainty in a system of interest, while supporting the policymaking process. Frequently, analysts rely on mathematical models that represent the key features of the system. Assumptions made during modelling introduce a significant level of uncertainty in the models, and forecasting based on models is therefore always bound by this uncertainty. Instead of focusing on limited best-estimate predictions under uncertainty, exploring a plethora of plausible futures by using mathematical models can help supporting decision-making. In current practice, uncertainty analysis for decision-making is mostly limited to technical and shallow uncertainties but not focused on deep uncertainty. This thesis contributes to a solution for enhanced handling of deep uncertainty to support policymaking. We have developed a new methodological approach for improving analytical support for policymaking under deep uncertainty, and demonstrated each analytical advancement stage with case studies. This thesis proposes to improve analytical support for policymaking to better handle deep uncertainty. Building upon the existing pragmatic practice, a systematic approach for designing adaptive policies under uncertainty is developed. 
The Adaptive Robust Design (ARD) approach in combination with multi-objective robust optimization will improve the support for policymaking under deep uncertainty. The effectiveness of ARD for developing adaptive robust policies under deep uncertainty is shown by illustrative case studies.","","en","doctoral thesis","","978-90-79787-70-8","","","","","","","","","Policy Analysis","","",""
"uuid:1946c091-170e-4fd4-b98a-b2e46902684e","http://resolver.tudelft.nl/uuid:1946c091-170e-4fd4-b98a-b2e46902684e","Home Occupant Archetypes: Profiling home occupants’ comfortand energy-related behaviours with mixed methods","Ortiz, Marco A. (TU Delft Indoor Environment)","Bluyssen, P.M. (promotor); Delft University of Technology (degree granting institution)","2019","This research is aimed at better understanding how occupants use energy in
their homes from a comfort-driven perspective, in order to propose customized
environmental characteristics that could improve the occupants’ comfort while
reducing energy consumption.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-234-5","","","","A+BE | Architecture and the Built Environment No 5 (2019)","","","","","Indoor Environment","","",""
"uuid:3b41b8a1-3bc7-41bd-8841-5936a3b78f86","http://resolver.tudelft.nl/uuid:3b41b8a1-3bc7-41bd-8841-5936a3b78f86","Making water security: A morphological account of Nile River development","Smit, H. (TU Delft Water Resources)","van der Zaag, P. (promotor); Ahlers, R. (copromotor); Delft University of Technology (degree granting institution)","2019","This dissertation examines Nile water security through the morphology of the river. Water projects are often legitimized by arguing that they will increase the reliability of water or increase its availability for abstract populations. Such analyses often leave unexplained who specifically benefit from these projects and, more so, who do not. Examining the morphology of the river – its form and structure – allows for a historical and material understanding of how hydraulic infrastructure and discourses of water security develop and what this means to whom. My aim is to better understand how scientists, engineers and water users engage in rearranging the morphology of the Nile and in so doing shape their relative positions vis-à-vis each other and the river. In this way the dissertation seeks to support more equitable and sustainable forms of Nile development.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-46004-4","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of IHE Delft Institute for Water Education","","","","","Water Resources","","",""
"uuid:0161174a-4915-480f-970d-77c70a992da9","http://resolver.tudelft.nl/uuid:0161174a-4915-480f-970d-77c70a992da9","An FtsZ-centric approach to divide gene-expressing liposomes","Noguera López, J. (TU Delft BN/Christophe Danelon Lab)","Danelon, C.J.A. (promotor); Dogterom, A.M. (promotor); Delft University of Technology (degree granting institution)","2019","The creation of artificial cells with the minimal set of components to exhibit self-maintenance, self-reproducibility and evolvability (in other words, to be considered alive) is one of the most exciting areas within the field of synthetic biology. Such entities, here called minimal cells, are constructed by either the top-down or bottom-up approach. The top-down approach attempts to realize a minimal cell starting from an already existing unicellular organism and stripping down non-essential genes. In the bottom-up approach, separate biochemicals, such as phospholipids, DNA and proteins are assembled from scratch to reconstitute cell-like functions. On the way to tackle this curiosity-driven building challenge, we also expect to learn more about the most fundamental processes that define a living cell.","synthetic biology; artificial cell; minimal cell; cell division; FtsZ; FtsA; ZipA; ZapA; Min system; liposome; PURE system; supported lipid bilayer; fluorescence microscopy","en","doctoral thesis","","978-90-8593-427-1","","","","","","","","","BN/Christophe Danelon Lab","","",""
"uuid:10f84fa1-f5a2-4c3b-b40b-914d8858f536","http://resolver.tudelft.nl/uuid:10f84fa1-f5a2-4c3b-b40b-914d8858f536","Passenger-Oriented Timetable Rescheduling in Railway Disruption Management","Zhu, Y. (TU Delft Transport and Planning)","Goverde, R.M.P. (promotor); Delft University of Technology (degree granting institution)","2019","Railway systems are vulnerable to unexpected disruptions, which usually result in track blockages for a few hours. In practice, disruptions are handled manually and the resulting impact to passengers is rarely considered. To enable disruption management more efficiently, operator-friendly and passenger-friendly, this thesis develops mathematical models and solution methods for dynamic passenger assignment, timetable rescheduling, and the integrated passenger assignment with timetable rescheduling during disruptions.","Railways; Disruption management; Timetable rescheduling; Passenger assignment","en","doctoral thesis","TRAIL Research School","978-90-5584-259-9","","","","TRAIL Thesis Series no. T2019/16, The Netherlands TRAIL Research School","","2020-08-31","","","Transport and Planning","","",""
"uuid:33bdc816-d18f-4f33-89fe-a9cd462efd32","http://resolver.tudelft.nl/uuid:33bdc816-d18f-4f33-89fe-a9cd462efd32","Optical field sampling for imaging and optical testing","Gong, H. (TU Delft Team Raf Van de Plas)","Vdovin, Gleb (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2019","This dissertation has mainly aimed at developing novel techniques, methodologies for measuring the optical field, specifically both the amplitude and phase distribution. Furthermore, we have attempted to extend their applications in the scope of optical imaging, including lensless/holographic imaging, quantitative phase imaging and the calibration for light-sheet microscopes.","Optics; Adaptive optics; Holography; Light sheet microscopy","en","doctoral thesis","","","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:d522cf97-f0b0-4506-8aa5-4b80ec5de723","http://resolver.tudelft.nl/uuid:d522cf97-f0b0-4506-8aa5-4b80ec5de723","Molecular simulation of tunable materials: Metal-organic frameworks & ionic liquids theory & application","Becker, T. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Dubbeldam, D. (copromotor); Delft University of Technology (degree granting institution)","2019","Undoubtedly, materials that can be tuned on a molecular level offer tremendous opportunities. However, to understand and customize such materials is challenging. In this context, molecular simulation can be helpful. The work presented in this thesis deals with two types of materials, Metal-Organic Frameworks and Ionic Liquids, and the study with molecular simulation to determine their potential for specific gas separations. For the prediction of their behavior and relevant materials properties with molecular simulation, force fields of sufficient quality are required..","Metal-Organic Framework; Ionic Liquid; Molecular Simulation; Monte Carlo; Force Field; Polarization; Adsorption; Absorption; Refrigeration","en","doctoral thesis","","978-94-6366-215-4","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:a27816fb-b042-4532-9304-4c3a675fd78f","http://resolver.tudelft.nl/uuid:a27816fb-b042-4532-9304-4c3a675fd78f","Right place, Right Time: Modeling the search time and specificity of Cas9 and Argonaute","Klein, M. (TU Delft BN/Chirlmin Joo Lab)","Joo, C. (promotor); Depken, S.M. (copromotor); Delft University of Technology (degree granting institution)","2019","The past decade has witnessed a revolution in genome-engineering. Using CRISPR-Cas9 DNA sequences can be marked, detected and cleaved. Rewriting life’s instructions in such a fashion paves the way towards numerous scientific, agricultural and medical applications. Without proper quantfication of the associated risks we face the danger of applying treatments without knowing its consequences. Most notable concern lies in Cas9’s specificity. Although Cas9 targets DNA complementary to any designed 20nt guide RNA, it notoriously also acts on non-fully matching sequences. This thesis describes work towards a physical understanding of how Cas9 and similar RNA/DNA guided systems locate and recognize their target. Chapter 1 introduces the reader to life’s most important molecules (DNA, RNA and protein) as well as to the RNA guided CRISPR and Argonaute (Ago) systems. The chapter also provides an introduction to the main modeling techniques used in subsequent chapters.","","en","doctoral thesis","","978-90-8593-420-2","","","","","","","","","BN/Chirlmin Joo Lab","","",""
"uuid:fbf363b5-d541-45dd-9127-a11443f60d33","http://resolver.tudelft.nl/uuid:fbf363b5-d541-45dd-9127-a11443f60d33","Acoustic multiple reflection elimination in the image domain and in the data domain","Zhang, L. (TU Delft Applied Geophysics and Petrophysics)","Slob, E.C. (promotor); Delft University of Technology (degree granting institution)","2019","One of the most crucial estimates retrieved from measured seismic reflection data is the subsurface image. The image provides detailed information of the subsurface of the Earth. Seismic reflection data consists of so-called primary and multiple reflections. Primary reflections are events that have been reflected a single time, while multiple reflections have been reflected multiple times before they are recorded by the receivers. Most current migration algorithms assume all reflections in the data are primary reflections. Hence, in order to retrieve an accurate image of the subsurface, multiple reflections need to be eliminated before migration. Keeping the multiple reflections in the measured seismic reflection data will lead to a sub-optimal image of the subsurface, because the multiple reflections will be imaged as if they were primary reflections. Such artefacts in the image can cause erroneous interpretation...","","en","doctoral thesis","","978-94-6384-094-1","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:84562df9-e26f-4746-8cf4-d5d8935149ef","http://resolver.tudelft.nl/uuid:84562df9-e26f-4746-8cf4-d5d8935149ef","A Random Walk Towards the Golden Fleece: Single-molecule Investigations of Argonaute Target Search","Cui, T.J. (TU Delft BN/Chirlmin Joo Lab)","Joo, C. (promotor); Delft University of Technology (degree granting institution)","2019","In this thesis we used single-molecule FRET to investigate the kinetic properties of a protein called Argonaute. While traditionally one uses bulk methods to investigate the molecular properties of proteins, bulk methods do not confer information that is transient, since that is inherently averaged out. Single-molecule methods, as their name imply, allow one to observe interactions between individual molecules and their substrates. This allows one to see fast kinetics which may otherwise be missed. Furthermore, while the bulk methods give one only the average kinetics, single molecule methods also give the distribution of probabilities to access certain bound or conformational states, which in turn can give the observer information of the nature of the stochastic process. We rely in this thesis on single-molecule FRET, an abbreviation for Förster Resonance Energy Transfer: It’s a process where energy is transferred through dipole-dipole interaction from a donor fluorophore to an acceptor fluorophore. The distance between fluorophores determines the efficiency of transfer on a length scale of » 10 nm. Since many biological processes take place on this length scale, this technique is exquisitely suitable to study biological processes real-time on the smallest scale, whether it is conformational changes, protein-protein interactions, or in this case, protein target search studies.","Single molecule; target search; Förster resonance energy transfer; proteins; DNA; RNA; RNA interference","en","doctoral thesis","","978-90-8593-421-9","","","","","","","","","BN/Chirlmin Joo Lab","","",""
"uuid:330eec0a-7a67-4184-8716-9a6b456ddae9","http://resolver.tudelft.nl/uuid:330eec0a-7a67-4184-8716-9a6b456ddae9","Towards increased global availability of surgical equipment","Oosting, R.M. (TU Delft Medical Instruments & Bio-Inspired Technology)","Dankelman, J. (promotor); Wauben, L.S.G.L. (copromotor); Madete, J.K. (copromotor); Delft University of Technology (degree granting institution)","2019","The need for surgery in low- and middle-income countries (LMICs) is tremendous; more people die from treatable surgical conditions than from tuberculosis, malaria and HIV put together. A crucial barrier to surgical care in LMICs is the limited availability of surgical equipment, which results in delays and cancellations of surgeries on a daily basis. The overall aim of this thesis is to study the use of surgical equipment in LMICs, in order to understand how to increase global availability of surgical equipment in the future. One of the strategies that is researched more thoroughly, is the design of context-specific surgical equipment. As many areas in Africa feel the burden of limited access to surgery, we have used hospitals in Africa as a case study, with a main focus on Kenya.
Quantitative risk management deals with the estimation of the uncertainty that is
embedded in the activities of banks and other financial players due, for example, to
market fluctuations. Since the well-being of such financial players is fundamental for the correct functioning of the economic system, an accurate description and estimation of such uncertainty is crucial.","","en","doctoral thesis","","978-94-6380-657-2","","","","","","","","","Applied Probability","","",""
"uuid:84dfc577-ca6f-43ea-9b24-4dc160c103f5","http://resolver.tudelft.nl/uuid:84dfc577-ca6f-43ea-9b24-4dc160c103f5","Database Acceleration on FPGAs","Fang, J. (TU Delft Computer Engineering)","Hofstee, H.P. (promotor); Al-Ars, Z. (promotor); Delft University of Technology (degree granting institution)","2019","Though field-programmable gate arrays (FPGAs) have been used to accelerate database systems, they have not been widely adopted for the following reasons. As databases have transitioned to higher bandwidth technology such as in-memory and NVMe, the communication overhead associated with accelerators has become more of a burden. Also, FPGAs are more difficult to program, and GPUs have emerged as an alternative technology with better programming support. However, with the development of new interconnect technology, memory technology, and improved FPGA design tool chains, FPGAs again provide significant opportunities. Therefore, we believe that FPGAs can be attractive again in the database field. This thesis focuses on FPGAs as a high-performance compute platform, and explores using FPGAs to accelerate database systems. It investigates the current challenges that have held FPGAs back in the database field as well as the opportunities resulting from recent technology developments. The investigation illustrates that FPGAs can provide significant advantages for integration in database systems. However, to make further progress, studies in a number of areas, including new database architectures, new types of accelerators, deep performance analysis, and the development of the tool chains are required. Our contributions focus on accelerators for databases implemented in reconfigurable logic. 
We provide an overview of prior work and make contributions to two specific types of accelerators: both a compute-intensive (decompression) and a memory-intensive (hash join) accelerator.","Database; FPGA; Acceleration; Decompression; Join","en","doctoral thesis","","978-94-028-1868-0","","","","SIKS Dissertation Series No. 2019-37","","","","","Computer Engineering","","",""
"uuid:d4e7d2a8-aed1-48c8-98c3-eb61f18dde0b","http://resolver.tudelft.nl/uuid:d4e7d2a8-aed1-48c8-98c3-eb61f18dde0b","High-silica Zeolites as Novel Adsorbents for the Removal of Organic Micro-pollutants in Water Treatment","Jiang, N. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); Heijman, Sebastiaan (promotor); Delft University of Technology (degree granting institution)","2019","A broad range of organic micropollutants (OMPs), including pesticides, pharmaceuticals and personal care products, are present in drinking water sources and effluent of wastewater treatment plants (Kolpin et al., 2002; Stackelberg et al., 2004). The presence of OMPs in water significantly threatens public health and thus calls for effective treatment technologies (Alan et al., 2008; Pal et al., 2010). Zeolites are highly structured minerals with uniform micropores (pore diameters < 2nm) (McCusker and Baerlocher 2001). The pores of zeolites allow for the adsorption of OMPs and potentially avoid the negative influence of natural organic matter (NOM) (de Ridder et al., 2012; Hung and Lin 2006; Knappe and Campos 2005). High-silica zeolites have hydrophobic surfaces, which could prevent water competition with OMP adsorption (Maesen 2007; Rakic et al., 2010; Tsitsishvili 1973). High-silica zeolites are thus expected to be potential alternative adsorbents for activated carbon in water treatment.","","en","doctoral thesis","","978-94-6323-961-5","","","","","","","","","Sanitary Engineering","","",""
"uuid:856522dc-528d-4ac6-95f7-d44fb8791ddc","http://resolver.tudelft.nl/uuid:856522dc-528d-4ac6-95f7-d44fb8791ddc","AlGaN/GaN high electron mobility transistor (HEMT) based sensors for gas sensing applications","Sokolovskij, R. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); Delft University of Technology (degree granting institution)","2019","The rapid development and market growth of microelectronics technology continues to provide expanding connectivity, productivity, entertainment and well-being to billions of users globally. Moreover, continuous demand for more on-chip functionally presents an exciting opportunity for integration of various chemical sensors for monitoring pollution of our surrounding environment and exposure to toxic, corrosive or flammable gases.","AlGaN/GaN; HEMT; gas sensor; gate recess; 2DEG; H2S; H2","en","doctoral thesis","","978-94-028-1851-2","","","","","","2020-06-01","","","Electronic Components, Technology and Materials","","",""
"uuid:e98f7bbf-4639-49fc-96a4-b3543da637fc","http://resolver.tudelft.nl/uuid:e98f7bbf-4639-49fc-96a4-b3543da637fc","Unravelling Turbulent Emulsions with lattice-Boltzmann simulations","Mukherjee, S. (TU Delft OLD SnC Culture; TU Delft ChemE/Transport Phenomena)","van den Akker, H.E.A. (promotor); Kenjeres, S. (promotor); Delft University of Technology (degree granting institution)","2019","The mixing of two immiscible fluids, often under turbulent conditions, can lead to the formation of an emulsion, where droplets of one fluid are embedded in another fluid. The occurrence of emulsions is commonplace across industries, ranging from the oil industry to food processing and biotechnology. Why emulsions serve diverse applications, in grossly simple terms, is due to their structural organization, as the two fluids in an emulsion form exhibit very different physical properties than they do when separated. The stability of the emulsion structure, hence, is key for its utility. The presence of impurities, or surfactants, in the constituent fluids, greatly enhances emulsion stability, by preventing the coalescence of droplets (which would lead to phase segregation). Emulsion research, over the past century, has developed into a thriving field, driven by the force of detailed experimentation that has significantly informed modeling, control and design of processes dealing with emulsification.
Despite being predictable to a degree, the true nature of the droplet dynamics at the heart of emulsification remains unknown. It is experimentally exceedingly difficult to illuminate the evolution of interfaces undergoing coalescence and breakup while simultaneously reporting the three-dimensional, turbulent flow features. Such problems can, however, be tackled with numerical simulations. These simulations involve considerable modeling complexity and pose heavy computational demands, and have hence remained an exception; only now is it becoming feasible to simulate such complex flows, allowing us to augment experiments with numerical insights. In this thesis, we attempt to unravel emulsification (to a small extent) by using simulations resolving both flow and interfaces, while considering fluids with impurities.","Turbulence; emulsions; droplet dynamics; simulations","en","doctoral thesis","","978-94-6384-093-4","","","","","","","","","OLD SnC Culture","","",""
"uuid:e889f189-dbd1-4d5c-8184-8456418d1886","http://resolver.tudelft.nl/uuid:e889f189-dbd1-4d5c-8184-8456418d1886","Controlling disruptive and radical innovations in large-scale services firms","Das, P.A.C. (TU Delft Policy Analysis)","Verbraeck, A. (promotor); Verburg, R.M. (promotor); Delft University of Technology (degree granting institution)","2019","How can large-scale services firms, such as banks, best undertake disruptive and radical innovations to enter new areas of growth, without interfering with current operations? This thesis shows that by embedding tailored controls for disruptive and radical innovations firms can explore these types of innovations more effective and overcome innovation barriers. Moreover, exploration cannot happen without controls that drive discipline and creativity. Yet, comparable to a pendulum swing, a firm has to install controls that provide room to spark creativity, but also ensure behaviour of management and employees is steered towards organisational goals.","","en","doctoral thesis","","978-94-6366-231-4","","","","","","2019-12-10","","","Policy Analysis","","",""
"uuid:93e702d1-92b2-4025-ab57-6d2c141ed14d","http://resolver.tudelft.nl/uuid:93e702d1-92b2-4025-ab57-6d2c141ed14d","Structural Extracellular Polymeric Substances from Aerobic Granular Sludge","Felz, S. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Lin, Y. (copromotor); Delft University of Technology (degree granting institution)","2019","Biofilms are pervasive in hydrated environments including wastewater and drinking water systems. A novel promising biological wastewater treatment process offering several advantages towards wastewater treatment with the conventional activated sludge process is the aerobic granular sludge process. Aerobic granular sludge is a special kind of biofilm of spherical shape formed by microorganisms without the addition of carrier material. Biofilms are microbial aggregates composed of microorganisms and extracellular polymeric substances (EPS). EPS are a complex mixture of proteins, polysaccharides, uronic acids, nucleic acids lipids and humic substances. EPS have multiple important functions within a biofilm. They contribute to the initial aggregation of microbial cells and form a highly hydrated matrix being responsible for the structural integrity of a biofilm. By this EPS also provide protection, can serve as a nutrient source and bind extracellular enzymes. Being a complex mixture of multiple compounds makes EPS analysis challenging and therefore the actual composition and structure of the matrix of biofilms is still largely unknown. Aerobic granular sludge and part of its EPS, structural EPS, has hydrogel properties. These structural EPS can be extracted from the granules and were shown to be strongly linked to the structural integrity of the sludge. Characterization of the structural EPS will help to understand the stability of granular sludge and in general of biofilms. 
The focus of this thesis was to analyze the composition of structural EPS from aerobic granular sludge and to analyze its hydrogel characteristics. Additionally, challenges and shortcomings concerning EPS extraction and characterization are illustrated and discussed. Chapter 1 gives a general introduction into biofilms and their EPS, as well as EPS extraction. Issues with current EPS characterization are provided and the outline of this thesis is presented. In Chapter 2 the impact of the extraction method on aerobic granular sludge and the obtained EPS was demonstrated with six different EPS extraction methods, including mechanical and chemical treatments. Results showed that to obtain structural EPS it is necessary to dissolve the granular matrix. To dissolve the granular matrix harsh extraction methods are required, and there is no ”one-size-fits-all” method to dissolve the granular matrix for structural EPS extraction. Chapter 3 illustrates and discusses shortcomings of current EPS analysis with colorimetric methods for the quantification of proteins, sugars, uronic acids and humic substances. Drawbacks of these colorimetric methods include: a high dependency on the standard compound selection, a lack of suitable standards with a composition similar to the analyzed sample, and cross-interference among EPS compounds in the measurements. Results showed that these methods are not suitable to accurately analyze complex samples. The complexity of structural EPS was illustrated by the overall composition of granular sludge structural EPS: besides a protein fraction, the carbohydrate part itself contained a sugar alcohol, seven neutral sugars, two amino sugars and two uronic acids. Simply depending on colorimetric methods for EPS analysis is not recommended. Novel analytic methods need to be developed and implemented for in-depth biofilm EPS analysis.
In Chapter 4 structural EPS hydrogels formed with metal ions were characterized in terms of gel stiffness and structural homogeneity. Additionally, the influence of the metal ion chelating reagent EDTA on the structural integrity of ionic structural EPS hydrogels was investigated. For comparison, alginate, polygalacturonic acid and κ-carrageenan were used as reference materials. The structural EPS hydrogels were less stiff than alginate hydrogels. The structure of lyophilized ionic structural EPS hydrogels was visualized with environmental SEM. Different metal ions had a different impact on the structure of the lyophilized gels. In comparison to alginate, polygalacturonic acid and κ-carrageenan, the integrity of structural EPS hydrogels was less sensitive to EDTA. After one month of incubation in an EDTA solution, structural EPS gel beads were still present as a gel while the reference polysaccharide hydrogels failed to keep the gel structure. Apparently, structural EPS have a different ionic hydrogel formation mechanism. Multiple functional groups are suggested to be involved in the gel formation of structural EPS. Chapter 5 focused on the analysis of strongly anionic macromolecules in the EPS of aerobic granular sludge. The presence of glycosaminoglycans was evaluated by SDS-PAGE analysis, hyaluronic acid and sulfated glycosaminoglycan quantification kits for the mammalian extracellular matrix, and glycosaminoglycan-specific enzymatic digestion. The linking between sulfated glycosaminoglycans and proteins was analyzed by proteolytic enzymatic digestion. Furthermore, Heparin Red staining was used to visualize the distribution of the anionic macromolecules in the granular matrix. Macromolecules similar to hyaluronic acid and sulfated glycosaminoglycans were discovered in the EPS, hence named hyaluronic acid-like and sulfated glycosaminoglycan-like compounds. Sulfated glycosaminoglycan-like compounds were bound to proteins.
In aerobic granular sludge the strongly anionic molecules were distributed in the microcolonies, at the outer part of the microcolonies and within the extracellular matrix between the colonies. Glycosaminoglycan-like compounds were shown to be comparable to those of vertebrates. Structural EPS were therefore much more complicated than expected. Chapter 6 presents the outlook of this thesis. Results from the previous chapters are summarized and suggestions for future EPS research are given. Suggestions include extraction of EPS, chemical analysis of EPS and general approaches.","","en","doctoral thesis","","978-94-028-1775-1","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:d4b385c9-6d47-44f6-a959-4125825e7f06","http://resolver.tudelft.nl/uuid:d4b385c9-6d47-44f6-a959-4125825e7f06","A Tactile Correct (Biofidelic) Teaching Model for Training Medical Staff to Diagnose Breast Cancer: Detecting Breast Disease using Palpation","Veitch, D.E. (TU Delft Applied Ergonomics and Design)","Goossens, R.H.M. (promotor); Molenbroek, J.F.M. (copromotor); Delft University of Technology (degree granting institution)","2019","When breast cancer is detected early, and is in the localized stage, the 5-year relative survival rate is 100% (Australian Institute of Health and Welfare, 2019). There is a survival advantage in detecting breast cancer early and treating it quickly (Australian Institute of Health and Welfare, 2019; Cancer Australia, 2004, updated 2009; McDonald, Saslow, and Alciati, 2004; National Breast Cancer Foundation, 2019). Clinical Breast Examination (CBE) is a method which can fast-track symptomatic women with a breast lump to scarce medical specialist resources, to speed the investigation into their putative cancer and facilitate early treatment if needed. Yet too many medical students and doctors feel they could improve their skills in clinical breast examination. Realistic breast models will help the necessary training (Saslow et al., 2004). But knowing what skills need to be transferred and how to design physical breast models are very different things. What are the important skills in identifying and discriminating breast masses by touch, and how do simulation models and a validated testing tool assist skill acquisition? This thesis presents the creation and development of a successful design from the following brief: to make physical breast models realistic enough to be integral to training and a subsequent testing package, through which medical personnel acquire, maintain and improve the skills required to detect possible breast cancer by palpation.
is the alignment of an organization’s real estate to its corporate strategy. In the last thirty years, fourteen Corporate Real Estate (CRE) alignment models have been developed. Some of these CRE alignment models indicate that they strive for maximum or optimum added value. Even though extensive research into these existing CRE alignment models has provided us with valuable insights into the steps, components, relationships and variables that are needed in the alignment process","Corporate real estate alignment; design and decision approach; adding value; preference measurement","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-226-0","","","","A+BE | Architecture and the Built Environment No 12 (2019)","","","","","Real Estate Management","","",""
"uuid:3c0e4277-1aff-48ad-b72b-6926d2c876c2","http://resolver.tudelft.nl/uuid:3c0e4277-1aff-48ad-b72b-6926d2c876c2","Data-based Dynamic Condition Assessment of Railway Catenaries","Wang, H. (TU Delft Railway Engineering)","Dollevoet, R.P.B.J. (promotor); Nunez, Alfredo (copromotor); Delft University of Technology (degree granting institution)","2019","Railway catenary is the main infrastructure that delivers electric power for train operation. It is a structure commonly constructed along the railway line with contact wires suspended above the track. One or multiple pantographs mounted on the roof of a moving train collect electric current from the catenary through sliding contact with a contact wire. With the increase of train speed and traffic density in recent years, the catenary is subject to higher impacts from pantographs, leading to critical failures such as the breakage of the contact wire. This results not only in increasing costs for reactive maintenance, but also in disruptions of train service that affect many passengers.
To reduce the life cycle cost and failure rate of catenary in practice, planned and predictive maintenance based on the condition monitoring of catenary is desired. However, the monitoring data are underutilized for effectively assessing the catenary condition and facilitating maintenance decision-making. This dissertation contributes to improving the dynamic condition assessment of catenary using the data from condition monitoring. New performance indicators (PIs) of catenary are defined in a way that is adaptive to the variations of monitoring data measured under different circumstances, such as changes in catenary structure, pantograph type and train speed. The relationship between the monitoring data and the contact wire irregularities is studied using historical data and simulations. Data-based approaches are developed for the quantitative assessment of the dynamic catenary condition.
First, an intrinsic wavelength contained in the pantograph-catenary contact force is identified and defined as the catenary structure wavelength (CSW). It is caused by the periodic variation of contact wire stiffness attributed to the cyclic structure of the catenary, which must regulate the height of the contact wire in every span and interdropper distance. An approach that adaptively extracts the CSWs from the pantograph-catenary contact force is proposed based on the empirical mode decomposition algorithm. It extracts the CSW signals corresponding to the span lengths and interdropper distances, respectively, which are summed to form a characteristic signal of the CSWs. The residual signal of the contact force excluding the CSWs is regarded as the non-CSW signal. The mean and standard deviation of the CSW signal are used as PIs to indicate the condition of the main catenary geometric parameters. A PI based on the quadratic time-frequency representation of the non-CSW signal is proposed for detecting and localizing local irregularities of the contact wire. The proposed PIs are tested on simulation and measurement data and proven effective and adaptive owing to the use of the CSWs and the non-CSW signal.
Second, the concept of the CSW is extended to the pantograph head acceleration, from which the CSWs and non-CSW signal can also be extracted using the same approach developed for the contact force. Considering the characteristics of the pantograph head acceleration, the wavelet packet entropies of the CSWs and the non-CSW signal are proposed as PIs for detecting contact wire irregularities of different lengths. The entropy of the CSWs is used for detecting irregularities longer than 5 m, while the entropy of the non-CSW signal is used for short local irregularities. An approach to detect and verify contact wire irregularities using measurement data of the pantograph head vertical acceleration from frequent inspections is proposed. The approach is tested using historical inspection data, from which irregularities of all lengths are detected and verified. Maintenance resources can thus be specifically allocated to verified detection results to save cost and time.
Third, through analyzing historical inspection data and data-based simulation results, it is found that while the contact wire irregularity deteriorates the pantograph-catenary interaction, the formation of irregularity is also associated with the effects of the interaction, such as variations of contact and friction forces. Concretely, a contact wire height irregularity with an amplitude of 8 mm can cause a considerable increase in the standard deviation of the pantograph-catenary contact force. In addition, an irregularity with a certain wavelength can induce a dynamic response with the same wavelength in the contact force. This in turn makes the irregular part deteriorate faster than the other parts of the catenary. At a smaller scale, when the wear irregularity of the contact wire has an average wire thickness loss of about 1.5 mm, it can also increase the standard deviation of the contact force by more than 5%. Due to the fixing effect at the registration arms and droppers, the wear irregularity commonly contains structural wavelengths of the catenary, including span lengths and interdropper distances. It is also found that the wear irregularity tends to grow and spread in the common or dominant running direction of trains on the specific line. Nevertheless, an existing defect may not affect every pantograph passage and every type of data measured. It is thus advised to measure multiple types of data and perform more frequent inspections to avoid undetected defects.
Last, a data-driven approach using the Bayesian network (BN) to fuse the available inspection data of catenary into an integrated PI is proposed. The BN topology is first structured based on the physical relations between five data types: the train speed, dynamic stagger and height of the contact wire, pantograph head acceleration, and pantograph-catenary contact force. Then, tailored PIs are individually defined and extracted from the five types of data as the BN input. As the output of the BN, an integrated PI is defined as the overall condition level of catenary considering all defects that can be reflected by the five types of data. Finally, using historical inspection data and maintenance records from a section of high-speed line, the BN parameters are estimated to establish a probabilistic relationship between the input and the output PI. By testing the BN-based approach using new inspection data from the same railway line, it is shown that the integrated PI can adequately represent the catenary condition, leading to a considerable reduction in the false alarm rate of catenary defect detection compared with the current practice. The approach can also work acceptably with noisy or partly missing data.
In summary, this dissertation answers how to adequately transform the condition monitoring data of catenary into quantitative assessments of the dynamic catenary condition. The proposed approaches are intended for generic implementations in railway catenaries worldwide.","railway catenary; condition assessment; pantograph-catenary interaction; performance indicator; adaptive data processing; data-driven approach; catenary structure wavelength","en","doctoral thesis","","978-94-6323-962-2","","","","","","2019-11-18","","","Railway Engineering","","",""
"uuid:75ac8d9e-293f-4bff-90b3-467010352032","http://resolver.tudelft.nl/uuid:75ac8d9e-293f-4bff-90b3-467010352032","Crossing borders in coastal morphodynamic modelling","Luijendijk, Arjen (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Aarninkhof, S.G.J. (promotor); de Schipper, M.A. (copromotor); Delft University of Technology (degree granting institution)","2019","Sand is the second-most used natural resource behind water and will be under increasingly high demand in coming decades. One of the reasons for this is that, worldwide, sand is more and more applied to counteract beach erosion.
This thesis presents new techniques in remote sensing and numerical modelling to better understand beach erosion and predict the dynamics of our sandy coastlines. To this end, it explores the crossing of three types of borders.
First, international borders are crossed in a global assessment of historic beach dynamics using satellite imagery. Second, the boundaries between model time scales - from storms to decades - are dissolved by means of a new morphodynamic acceleration technique. Finally, the developed seamless modelling approach makes it possible to cross the ever-changing boundary between water and land, where sand moves from the wet to the dry domain and vice versa.
This work results in a landscaping model that can better forecast the future behavior of sandy beaches in a changing climate.
The core of this thesis is three case studies of architectural designs that use landscape strategies. The analytical model for landscape architectural composition that Steenbergen and Reh (2003) developed for the European gardens is applied in a drawing analysis of these buildings' inner space composition. By distinguishing the landscape composition into a four-layer model - ground form, spatial form, metaphorical form and programmatic form - the analysis alters the reading of the three architectural projects.
Rem Koolhaas and OMA's unbuilt 1992 Jussieu design for two university libraries in Paris is visualised for the first time as it could have looked if built. The Rolex Learning Centre at EPF Lausanne was declared 'landscape' as architecture by its designers, the Japanese architects Kazuyo Sejima and Ryue Nishizawa (SANAA), at its opening in 2010. The City of Culture of Galicia in Santiago de Compostela by the American architect Peter Eisenman was designed in 1999 in a process of layering - similar to the layer-model analysis of this thesis.
This thesis interprets and compares the three architectural designs. It distinguishes design strategies, methods and landscape attitudes that are specific to or commonly applied in the projects. Original drawing analysis and critique reveal unexplored potentials for landscape strategies in the architectural discipline.","Architecture; Design Strategies; Landscape Architecture; OMA; SANAA; Peter Eisenman","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-236-9","","","","A+BE | Architecture and the Built Environment No 13 (2019)","","","","","Teachers of Practice","","",""
"uuid:93e4b064-c2fe-413f-b70b-9442a5a379ba","http://resolver.tudelft.nl/uuid:93e4b064-c2fe-413f-b70b-9442a5a379ba","High Resolution Imaging of Noise from Novel Integrated Propeller Systems","Malgoezar, A.M.N. (TU Delft Aircraft Noise and Climate Effects)","Simons, D.G. (promotor); Veldhuis, L.L.M. (promotor); Snellen, M. (copromotor); Delft University of Technology (degree granting institution)","2019","Humans localize sound incessantly using the ears, and this proves to be important for our awareness. Sound can also be a cause of annoyance to the community when it takes the form of noise. Due to the increase of wealth, industry, technology and corresponding globalization, there is a large increase of noise. Especially with the growth of air traffic, noise has increased in the vicinity of airports. While environmental pollution is important and needs to be reduced, the accompanying noise is also of great importance. For example, while propeller-driven aircraft can be more energy-efficient, they also produce more noise compared to a traditional turbofan. For potential noise reduction it is first necessary to obtain correct information about both the origin and the strength of these noise sources.","acoustic beamforming; numerical optimization; aeroacoustics; phased microphone array; array design","en","doctoral thesis","","978-94-028-1843-7","","","","","","","","","Aircraft Noise and Climate Effects","","",""
"uuid:8c60ac7e-ee2f-4090-abeb-5b5daab6ea36","http://resolver.tudelft.nl/uuid:8c60ac7e-ee2f-4090-abeb-5b5daab6ea36","Quantum computing in practice: fault-tolerant protocols and circuit-mapping techniques","Lao, L. (TU Delft FTQC/Bertels Lab)","Bertels, K.L.M. (promotor); Almudever, Carmen G. (copromotor); Delft University of Technology (degree granting institution)","2019","Quantum computing promises to solve some problems that are intractable by classical computers. Several quantum processors based on different technologies and consisting of a few tens of noisy qubits have already been developed. However, qubits are fragile as they tend to decohere extremely quickly and quantum operations are faulty, making reliable computation very difficult. Moreover, quantum processors have hardware constraints such as limited qubit connectivity and shared classical control, making quantum algorithms not directly executable. This thesis focuses on some of the challenges of the implementation of quantum algorithms on near-term intermediate-scale and future large-scale quantum processors. More precisely, it investigates how to perform reliable quantum computation using fault-tolerant protocols and how to execute quantum algorithms on hardware-constrained processors using circuit-mapping techniques.","Fault-tolerant quantum computing; Quantum error correction; Quantum circuit mapping; Quantum computer architecture; Surface code","en","doctoral thesis","","978-94-028-1838-3","","","","","","","","","FTQC/Bertels Lab","","",""
"uuid:a400512d-966d-402a-a40a-fedf60acf22c","http://resolver.tudelft.nl/uuid:a400512d-966d-402a-a40a-fedf60acf22c","When Euler meets Lagrange: Particle-Mesh Modeling of Advection Dominated Flows","Maljaars, J.M. (TU Delft Rivers, Ports, Waterways and Dredging Engineering)","Uijttewaal, W.S.J. (promotor); Labeur, R.J. (copromotor); Delft University of Technology (degree granting institution)","2019","This thesis presents a numerical framework for simulating advection-dominated flows which reconciles the advantages of Eulerian mesh-based schemes with those of a Lagrangian particle-based discretization strategy. Particularly, the strategy proposed in this thesis inherits the diffusion-free properties as in Lagrangian particle-based advection, while simultaneously possessing high-order accuracy and local conservation properties as in state-of-the-art Eulerian mesh-based discretization strategies. These properties render the scheme particularly apt for simulating flow- and transport processes in which the physical diffusion is low, such as turbulent flows, or simulating flow problems with sharp and complex-shaped interfaces, such as the air-water interface in breaking ocean waves.","Lagrangian-Eulerian; finite element method; Hybridized discontinuous Galerkin; particle-in-cell; PDE-constrained optimization; conservation; Advection-dominated flows; Advection-diffusion; incompressible Navier-Stokes; Multiphase flows","en","doctoral thesis","","978-94-6375-581-8","","","","","","","","","Rivers, Ports, Waterways and Dredging Engineering","","",""
"uuid:bd0b3a57-2f03-4bac-bf5e-1d5fac03df88","http://resolver.tudelft.nl/uuid:bd0b3a57-2f03-4bac-bf5e-1d5fac03df88","Blow-up Dynamics and Orbital Stability for Inhomogeneous Dispersive Equations","Csobo, E. (TU Delft Analysis)","van Neerven, J.M.A.M. (promotor); le Coz, S. (promotor); Delft University of Technology (degree granting institution)","2019","This dissertation addresses the local well-posedness, singularity formation, and orbital stability of standing waves to inhomogeneous nonlinear dispersive equations. Inhomogeneous equations are equations with space-dependent coefficients, which account for the impurities of the propagating media or the presence of an outer potential. Despite playing a crucial role in various domains in physics, the mathematical investigation of inhomogeneous dispersive equations has only started recently, and it is still in its early stages. In this dissertation, we investigate various properties of Schrödinger and Klein-Gordon equations.
(i) What are the main institutional frameworks that have arisen in the European public transport sector since the pressure for a wider usage of ‘competition’ appeared in the 1980s?
(ii) How have these institutional frameworks fared since? In particular, what developments can be observed and what can be said about them?
(iii) What are the main resulting policy challenges and options?","Competition; Competitive tendering; Deregulation; Public transport; Institutions","en","doctoral thesis","","978-94-6384-084-2","","","","","","2019-11-15","","","Organisation & Governance","","",""
"uuid:bfc648bb-5aa7-4c49-82a8-4faf2d8a6f37","http://resolver.tudelft.nl/uuid:bfc648bb-5aa7-4c49-82a8-4faf2d8a6f37","Chaperone-mediated protein rescue: A single-molecule study","Avellaneda Sarrio, M.J. (TU Delft BN/Sander Tans Lab)","Tans, S.J. (promotor); Delft University of Technology (degree granting institution)","2019","The interaction between proteins is central not only to this thesis, but to most processes in the cell. After millions of years of evolution, the variety, complexity and beauty of the resulting proteomic network is astonishing. When one realizes that these interplays rely on the delicate process of protein folding, a very special sort of protein interaction comes into play: that between molecular chaperones and their clients. Chaperones are specialized proteins crucial to protein folding. They are thought to guide polypeptides through their conformational search from synthesis onward, preventing alternative hazardous pathways, and to rescue proteins from misfolded and aggregated states....","","en","doctoral thesis","","978-94-92323-31-6","","","","","","2020-03-28","","","BN/Sander Tans Lab","","",""
"uuid:17c5457a-5fc7-420b-92a7-ad121d4b9fa9","http://resolver.tudelft.nl/uuid:17c5457a-5fc7-420b-92a7-ad121d4b9fa9","Efficient cryptographic building blocks for processing private measurements in e-healthcare","Nateghizad, M. (TU Delft Cyber Security)","Lagendijk, R.L. (promotor); Delft University of Technology (degree granting institution)","2019","In order to achieve practical e-healthcare systems, five requirements should be addressed, namely 1) availability, 2) integrity, 3) accuracy, 4) confidentiality, and 5) efficiency. Using remote computer storage and processing services satisfies availability, integrity, and efficiency. However, it introduces privacy concerns regarding the leakage of private medical data to unauthorized parties, which violates the GDPR. Data encryption is one of the widely used techniques to address those privacy concerns in e-healthcare systems. Although data encryption provides data confidentiality while preserving the accuracy and integrity of the data, it introduces computation and communication overheads that degrade the efficiency of e-healthcare systems.
To precisely find the bottlenecks in achieving privacy-preserving e-healthcare systems, we design three real-life e-healthcare scenarios. The scenarios differ in the number of parties in the system, the way the data are stored (centralized or distributed), and the encryption key setting (single-key or multiple-key). Then, we identify the challenges and required cryptographic protocols for each scenario. Afterward, we investigate the performance of several applications that use the identified cryptographic protocols. We show that the existing cryptographic protocols required for our scenarios dominate the computation and communication costs of the applications.
To address the challenges in the single-key setting, we improve the existing core building blocks, comparison and equality testing, and develop new protocols to mitigate the overall costs of e-healthcare systems. We show that data filtering and retrieval protocols are still highly resource-demanding, even when efficient building blocks are used. Thus, we develop a new secure indexing protocol that reduces the data filtering cost significantly. Moreover, we develop a novel data packing technique to achieve an efficient data retrieval protocol by using our indexing protocol. For the multiple-key setting, we introduce a homomorphic proxy re-encryption scheme. Our encryption scheme has several properties, such as an unlimited number of re-encryptions, support for homomorphism after each re-encryption, one-directional re-encryption, and non-interactive re-encryption key generation. Afterward, we use our encryption scheme for data filtering in the multiple-key setting and evaluate its performance.
The results of the performance analysis of our protocols show that improving core building blocks can significantly decrease both computation and communication costs of the cryptographic applications. Moreover, we show that developing techniques such as data packing and indexing can limit the number of homomorphic operations considerably, and consequently, mitigate the overall computation and communication costs of the cryptographic applications.","e-healthcare; privacy; multi-party protocol; building block; efficiency","en","doctoral thesis","","978-94-6366-224-6","","","","","","","","","Cyber Security","","",""
"uuid:011f686f-5f5c-4fc5-9ba5-b613f95abfe2","http://resolver.tudelft.nl/uuid:011f686f-5f5c-4fc5-9ba5-b613f95abfe2","Mechanical design of dynamic hand orthoses: Expanding technology with comprehensive overviews and alternative pathways","Bos, R.A. (TU Delft Biomechatronics & Human-Machine Control)","Herder, J.L. (promotor); Plettenburg, D.H. (copromotor); Delft University of Technology (degree granting institution)","2019","Orthoses have evolved from simply maintaining bone fractures and correcting spinal deformities to full mechatronic systems that can read a person’s intention and translate that into a desired motion or force path. A vast variety of pathologies (e.g., stroke, muscular dystrophies), applications (e.g., daily assistance, research) and environments (e.g., home, clinic) are possible. The term dynamic hand orthosis is able to cover this full range of applications and is therefore used as an umbrella term. In order to map the research field of dynamic hand orthoses and improve on the state of the art, this thesis proposes a design methodology that categorizes mechatronic components and collects rationale to make specific design choices through scoping & optimization. Finally, a proof-of-principle dynamic hand orthosis was made and tested on a single participant in a case study experiment.","Mechanical design; dynamic hand orthosis; grasp modeling; miniature hydraulics","en","doctoral thesis","","","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:d418a98b-f2aa-4af3-b0e0-864875fcad2b","http://resolver.tudelft.nl/uuid:d418a98b-f2aa-4af3-b0e0-864875fcad2b","Telecom-wavelength quantum memories in rare earth ion-doped materials for quantum repeaters","Falamarzi Askarani, M. (TU Delft QID/Tittel Lab)","Tittel, W. (promotor); Hanson, R. (promotor); Delft University of Technology (degree granting institution)","2019","The quantum internet, when finally deployed, will enable a plethora of new applications such as theoretically proven secure communication and networked quantum computing, much the same as its classical counterpart, whose development began back in the 1990s. The task of creating a globe-spanning quantum network, however, is proving rather difficult due to the detrimental effect of loss on photon transmission. This problem can in theory be solved using so-called quantum repeaters. In one of the most well-known configurations, the long-distance span is divided into smaller segments—so-called elementary links—at whose ends pairs of entangled photons are generated. One photon per pair is stored in a quantum memory, and the second member is transmitted via optical fiber to a remote measurement station positioned at the centre of the segment. There, a joint measurement of the two photons, one from each end, heralds the distribution of entanglement between the two quantum memories, i.e. heralds entanglement over the elementary link. In the final step, all neighbouring links are combined via a second joint measurement, and end-to-end entanglement is created...","","en","doctoral thesis","","","","","","","","","","","QID/Tittel Lab","","",""
"uuid:36063bcb-0a9b-4a14-bc36-cb18b43eb413","http://resolver.tudelft.nl/uuid:36063bcb-0a9b-4a14-bc36-cb18b43eb413","Cross-wind from linear and angular satellite dynamics: The GOCE perspective on horizontal and vertical wind in the thermosphere","Visser, T. (TU Delft Astrodynamics & Space Missions)","Visser, P.N.A.M. (promotor); Doornbos, E.N. (promotor); de Visser, C.C. (promotor); Delft University of Technology (degree granting institution)","2019","The decay of satellite orbits has been used extensively to obtain thermospheric density measurements. With the introduction of accelerometers in spacecraft, the spatial resolution of these data could be increased. At the same time, the direction of the measured acceleration provides a measure for the direction of the incoming flow, and therefore of the local cross-wind. In this thesis, the angular acceleration of the Gravity field and steady-state Ocean Circulation Explorer (GOCE) satellite, an Earth explorer by the European Space Agency (ESA), is used as a source for such thermospheric wind data for the first time. The goal is to improve aerodynamic parameter estimates and assess the quality of accelerometer-derived wind data by comparing this new data set to that derived from linear accelerations...","GOCE; Thermospheric wind; Vertical wind; Satellite angular dynamics","en","doctoral thesis","","978-94-028-1787-4","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:5afcb3ea-813c-4f7b-ae98-df19ed50f5c2","http://resolver.tudelft.nl/uuid:5afcb3ea-813c-4f7b-ae98-df19ed50f5c2","Radionuclide generator based production of therapeutic lutetium-177","Bhardwaj, R. (TU Delft RST/Applied Radiation & Isotopes)","Wolterbeek, H.T. (promotor); Delft University of Technology (degree granting institution)","2019","Lutetium-177 (177Lu) is a radionuclide with well-established potential in targeted radionuclide therapy (TRNT). 177Lu emits β- particles with a tissue penetration depth of 2 mm, which makes it effective in treating small tumors and causes lower toxicity to nearby healthy cells. The β- emission is also accompanied by gamma ray emission that allows simultaneous imaging of the tumor treatment. The last decade has witnessed a threefold increase in 177Lu-related publications, and its demand is expected to grow significantly in the coming years. Currently, the availability of 177Lu is completely dependent on the availability of nuclear reactors. Reactors are prone to shutdowns due to maintenance and for social, economic, political and other unexpected reasons. The exclusive dependency of radionuclide production on nuclear reactors is known to lead to major supply shortages. In general, there is a consensus among nuclear medicine scientists that new production pathways should be developed that can provide some independence from nuclear reactor availability...","","en","doctoral thesis","","","","","","","","","","","RST/Applied Radiation & Isotopes","","",""
"uuid:245192d2-34c4-44a8-81da-61f12fda5c33","http://resolver.tudelft.nl/uuid:245192d2-34c4-44a8-81da-61f12fda5c33","Advanced calibration and measurement techniques for (sub)millimeter wave devices characterization","Galatro, L. (TU Delft Electronics)","de Vreede, L.C.N. (promotor); Spirito, M. (promotor); Delft University of Technology (degree granting institution)","2019","As the number of wireless applications increases every year, overcrowding the RF/microwave spectrum, the research community and industry are gradually starting to dedicate more attention to the less exploited (sub)millimeter wave spectrum, spanning from 30 GHz to 1 THz. While the high frequency and large available bandwidth of the latter promise very fast communication and room for countless new applications, the development of new devices working at high frequency is hampered by a series of challenges affecting both technology development and implementation. One of the bottlenecks in new technology development is the availability of accurate and reliable measurement techniques to support the design and the model validation of both passive and active devices working at (sub)millimeter wave frequencies. As a matter of fact, the test and measurement market dedicated to sub-THz applications has seen little development in recent decades, with the core instrumentation and measurement techniques still based on the same principles used for lower frequency applications. This thesis is dedicated to the development of calibration and measurement techniques for the characterization of (sub)millimeter wave devices, bridging the gap between the currently available measurement instrumentation and the new needs in the sub-THz range.
This is done by mainly addressing two aspects: the development of advanced techniques and artifacts for the characterization of electronic devices in their native environment (i.e., on-wafer), and the implementation of measurement techniques that allow characterizing the small- and large-signal behavior of devices and circuits at (sub)millimeter wave frequencies, while overcoming the instrument-related challenges present at those frequencies.","millimeter wave; sub-THz; on-wafer; calibration; VNA; small-signal; large-signal; characterization; wafer probes; transmission lines; CPW; EM simulation; de-embedding; load-pull; power control; instrumentation","en","doctoral thesis","","978-94-028-1813-0","","","","","","","","","Electronics","","",""
"uuid:d60ce0ee-1651-4a46-acf2-f0e3f04ad01b","http://resolver.tudelft.nl/uuid:d60ce0ee-1651-4a46-acf2-f0e3f04ad01b","Space Design for Thermal Comfort and Energy Efficiency in Summer: Passive cooling strategies for hot humid climates, inspired by Chinese vernacular architecture","Du, X. (TU Delft Building Physics)","van den Dobbelsteen, A.A.J.F. (promotor); Bokel, R.M.J. (copromotor); Delft University of Technology (degree granting institution)","2019","Passive cooling for thermal comfort in summer is a major challenge in low-energy building design. An important reason is global warming, which increases the number of cooling degree days. In addition, the energy demand of buildings has increased rapidly due to both the improvement of living standards and the globalisation of modern architecture. Finally, cooling a building is particularly challenging in countries where few resources are available. Passive cooling techniques, where solar and heating control systems are applied, largely depend on the design of the urban morphology and the building shape. The first research question is therefore: What is the relationship between spatial configuration, thermal environment and thermal summer comfort of occupants, and how can spatial configuration be applied as a passive cooling strategy in architectural design? Space is the empty part of a building, but its volume is important for the activities of occupants. Architects define the general spatial structure of a building mainly in the early design stages. There they define the spatial properties of a building, i.e. how the spaces are connected and what the boundary conditions between the spaces are. The final research question of this research is therefore: What is the relationship between spatial configuration, thermal environment and thermal summer comfort, and how can spatial configuration be applied as a passive cooling strategy in the early stages of architectural design?
In order to answer this research question, this dissertation is divided into two main parts.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-218-5","","","","A+BE | Architecture and the Built Environment No 10 (2019)","","","","","Building Physics","","",""
"uuid:16f1560f-1739-492c-bd95-3f47bf096182","http://resolver.tudelft.nl/uuid:16f1560f-1739-492c-bd95-3f47bf096182","Unveiling the third dimension of glass: Solid cast glass components and assemblies for structural applications","Oikonomopoulou, F. (TU Delft Structural Design & Mechanics)","Nijsse, R. (promotor); Veer, F.A. (promotor); Delft University of Technology (degree granting institution)","2019","Over the last decades, the perception of glass in the engineering world has changed from that of a brittle, fragile material to a reliable structural component of high compressive load-bearing capacity. Although the structural applications of glass in architecture are continuously increasing, they are dominated by a considerable geometrical limitation: the 2-dimensionality imposed by the prevailing float glass industry. Cast glass can overcome this limitation: solid 3-dimensional glass components of virtually any shape and cross-section can be made. Owing to their monolithic nature, such components can form robust repetitive units for the construction of free-form, all-glass structures that take full advantage of the compressive strength of glass; a solution little explored so far. Consequently, there is a lack of design guidelines for the use of cast glass as a structural material. The scope of this research is, therefore, to investigate both the potential and the limitations of employing solid cast glass components for the engineering of transparent, 3-dimensional, glass structures in architecture. Accordingly, the design, development, prototyping and experimental validation of two distinct cast glass building systems for self-supporting envelopes, from unit level to the entire structure, are presented. First, an adhesively-bonded solid glass block system, using a colourless adhesive as an intermediary, is developed and applied in the Crystal Houses façade.
Subsequently, a dry-assembly, interlocking cast glass block system, employing a colourless dry interlayer, is explored as a reversible, circular solution. The results of this dissertation can serve as design guidelines for future structural applications of cast glass in architecture.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-220-8","","","","A+BE | Architecture and the Built Environment No 9 (2019)","","2019-11-25","","","Structural Design & Mechanics","","",""
"uuid:acd7102b-339b-45b5-972e-fe3a2ad9c52e","http://resolver.tudelft.nl/uuid:acd7102b-339b-45b5-972e-fe3a2ad9c52e","To new frontiers, Microbiology for nanotechnology and space exploration","Lehner, B. (TU Delft BN/Stan Brouns Lab)","van der Zant, H.S.J. (promotor); Brouns, S.J.J. (promotor); Delft University of Technology (degree granting institution)","2019","Bacteria and other microorganisms are known and studied as an essential part of daily life and they are utilized in a variety of fields. This work identifies applications in nanotechnology and space research, using the same bacterium for both: Shewanella oneidensis. The extracellular electron transfer (EET) mechanism situated mainly in the cell membrane of Shewanella oneidensis transports electrons, which are produced during its regular metabolic activity, to the outside of the cells. In the presence of certain metal oxides, the bacteria can reduce them while releasing carbon dioxide....","graphene oxide reduction; in situ resource utilization (ISRU); Shewanella oneidensis; space exploration","en","doctoral thesis","","978-90-8593-422-6","","","","Casimir PhD Series, Delft-Leiden 2019-39","","","","","BN/Stan Brouns Lab","","",""
"uuid:7e5a2017-e3ef-4224-bdd2-e8a970c4fed9","http://resolver.tudelft.nl/uuid:7e5a2017-e3ef-4224-bdd2-e8a970c4fed9","Towards enhanced second-generation n-butanol production from sugarcane","Zetty Arenas, A.M. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Maciel Filho, Rubens (promotor); van Gulik, W.M. (copromotor); Freitas Azzoni, S. (copromotor); Delft University of Technology (degree granting institution)","2019","Nowadays, the biotechnology industry is facing the challenge of producing suitable equivalents for petroleum-based products from renewable resources in a sustainable and economically feasible way. Finding cleaner alternatives for gasoline, fuels, and chemicals has been the subject of research worldwide, whether for economic, geopolitical, or environmental reasons. Among these alternatives, liquid fuels derived from biomass stand out for their eco-friendly production....","Second-generation butanol; Clostridia biofilms; Extracellular polymeric substances; Sugarcane industry by-products; In‐situ product recovery; ABE fermentation","en","doctoral thesis","","978-94-028-1811-6","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:315bdd4b-84bb-4a9b-8f2d-831a6a13459e","http://resolver.tudelft.nl/uuid:315bdd4b-84bb-4a9b-8f2d-831a6a13459e","Needles and liver phantoms in interventional radiology: Design considerations","de Jong, T.L. (TU Delft Medical Instruments & Bio-Inspired Technology)","Dankelman, J. (promotor); van den Dobbelsteen, J.J. (promotor); Delft University of Technology (degree granting institution)","2019","Liver carcinoma is among the top five leading causes of cancer death worldwide. Patients often require radiologic interventions in which needles are inserted, for example when taking biopsies, accessing blood vessels or bile ducts, and ablating tumors. Accurate and precise needle placement in interventional radiology is important, but also challenging. Challenges include several factors, such as anatomical obstructions along the insertion path, patient motion, and unwanted needle bending upon insertion. Incorrect needle placement may prolong procedure time, increase radiation dose for the patient and may cause complications. Proposed approaches to improve needle placement in interventional radiology include, but are not limited to, steerable needles and liver phantoms. Although steerable needles are technically feasible to produce, these prototypes are often general-purpose. Currently, there is a lack of (analyzed) clinical and experimental data that provide insight into needle placement, and that would clarify the right design requirements for novel needles in interventional radiology. Another gap exists in the development of liver phantoms, which can be used in a validation set-up for novel needles and/or a training model for medical doctors. Current phantom development focuses mostly on medical imaging properties. However, matching needle-tissue interaction forces and simulating breathing motion are also crucial for a phantom to be of use in a realistic validation set-up.
Therefore, the objectives of this thesis are to define relevant design considerations for novel needles, and to develop a high-fidelity liver phantom that features respiratory motion and that mimics needle-tissue interaction forces upon insertion. Accomplishing this will improve and advance needle placement in interventional radiology.","","en","doctoral thesis","","978-94-6375-600-6","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:aa29b04f-4cd7-41fa-b48b-5edc75fef104","http://resolver.tudelft.nl/uuid:aa29b04f-4cd7-41fa-b48b-5edc75fef104","Solar home systems for improving electricity access: An off-grid solar perspective towards achieving universal electrification","Narayan, N.S. (TU Delft DC systems, Energy conversion & Storage)","Bauer, P. (promotor); Zeman, M. (promotor); Qin, Z. (copromotor); Delft University of Technology (degree granting institution)","2019","Almost a billion people globally lack access to electricity. For various reasons, grid extension is not an immediately viable solution for the un(der-) electrified communities. As most of these electricity-starved regions lie in tropical latitudes, the use of off-grid solar-based solutions like solar home systems (SHS) is a logical approach. However, state-of-the-art SHS is limited in its power levels and availability. Moreover, sub-optimal system sizing leads to either over-utilization --- and therefore, faster degradation --- of the SHS battery, or under-utilization of the SHS battery, leading to higher system costs. Additionally, off-grid SHS designs suffer from a lack of reliable load profile data needed as the first step for an off-grid photovoltaic (PV) system (e.g., SHS) design. The work undertaken in this dissertation aims to analyze the technological limits and opportunities of using SHS in terms of power level, availability, battery size, and battery lifetime for achieving universal electrification. Firstly, the three main electrification pathways, viz., grid extension, centralized microgrids, and standalone solar-based solutions like pico-solar and SHS, are analyzed for their relative merits and demerits. Then, a methodology is presented to quantify the electricity demand of the households in the form of load profiles for the various tiers of electricity access as outlined by the multi-tier framework (MTF) for measuring household electricity access.
Secondly, for the SHS application, a non-empirical battery lifetime estimation methodology is presented that can be used at the design phase of SHS for comparing the performance of candidate battery choices at hand in the form of battery lifetime. Thirdly, an optimal standalone system size is evaluated for each tier of electrification, taking into account the battery lifetime, temperature impact on SHS performance, power supply availability in terms of the loss of load probability (LLP), and excess PV energy. A genetic algorithm-based multi-objective optimization is performed, giving insights on the delicate interdependencies of the various system metrics like LLP, excess energy, and battery lifetime on the SHS sizing. This exercise concludes that meeting the electricity requirements of tiers 4 and 5 level of electrification is untenable through SHS alone. Consequently, a bottom-up DC microgrid born out of the interconnection of SHS is explored. A modular and scalable architecture for such a bottom-up, interconnected SHS-based architecture is introduced, and the benefits of the microgrid over standalone SHS are quantified in terms of lower battery sizes and the defined system metrics. On modeling the energy sharing between the SHS, it is shown that battery sizing gains of more than 40% could be achieved with inter-connectivity at tier 5 level as compared to standalone SHS to meet the same power availability threshold. Finally, a Geo-Information System (GIS)-based methodology is presented that takes into account the spatial spread of the households while utilizing graph theory-based approaches to arrive at the optimal microgrid topology in terms of network length. 
The research carried out in this dissertation underlines the technological limitations of SHS in aiming towards universal electrification, while highlighting the benefits of moving towards a bottom-up approach in building (rural) DC microgrids through SHS, which can enable the climb up the so-called electrification ladder.","energy access; SDG 7; solar home systems; solar energy; batteries; rural electrification; Multi-tier framework; GIS; microgrids","en","doctoral thesis","","978-94-6366-217-8","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:82f63791-c9a8-43b2-8ae9-6bc252315178","http://resolver.tudelft.nl/uuid:82f63791-c9a8-43b2-8ae9-6bc252315178","In Situ Foam Generation: In Flow Across a Sharp Permeability Transition","Shah, S.Y. (TU Delft Reservoir Engineering)","Rossen, W.R. (promotor); Wolf, K.H.A.A. (promotor); Delft University of Technology (degree granting institution)","2019","","foam generation; snap-off; capillary heterogeneity; enhanced oil recovery; synthetic porous media","en","doctoral thesis","","978-94-6366-221-5","","","","","","","","","Reservoir Engineering","","",""
"uuid:fa8e9619-33aa-480e-ae2b-5e43595e9916","http://resolver.tudelft.nl/uuid:fa8e9619-33aa-480e-ae2b-5e43595e9916","Capacitively-Coupled Bridge Readout Circuits","Jiang, H. (TU Delft Electronic Instrumentation)","Makinwa, K.A.A. (promotor); Nihtianova, S. (copromotor); Delft University of Technology (degree granting institution)","2019","This Ph.D. dissertation describes the design and realization of energy-efficient readout integrated circuits (ROICs) that have an input-referred noise density < 5 nV/√Hz and a linearity of < 30 ppm, as required by Wheatstone bridge sensors used in precision mechatronic systems. Novel techniques were developed, at both the system level and the circuit level, to improve the ROIC’s energy efficiency, while preserving its stability and precision. Two prototypes are presented, each with best-in-class energy efficiency, to demonstrate the effectiveness of the proposed techniques.","","en","doctoral thesis","","","","","","","","","","","Electronic Instrumentation","","",""
"uuid:b169dbd7-7856-458c-9bce-095328ffbc30","http://resolver.tudelft.nl/uuid:b169dbd7-7856-458c-9bce-095328ffbc30","Simulation of the Inhomogeneous Deadman of an Ironmaking Blast Furnace","Post, J.R. (TU Delft (OLD) MSE-3)","Yang, Y. (promotor); Sietsma, J. (promotor); Delft University of Technology (degree granting institution)","2019","The ironmaking blast furnace is the most efficient reactor for extracting iron from iron ore. Iron ore consists of iron oxides, which are reduced to iron and melted in the blast furnace before being transformed into steel in the steel plant. The lowest part, or hearth, of an ironmaking blast furnace undoubtedly plays the most important role in the making of hot metal, the primary product of a blast furnace: molten, unrefined iron.
The hearth is the location where the final reactions take place, which determine the hot metal quality, but it is also where wear of the protective refractory lining is the main reason for a major, lengthy and costly maintenance stoppage once every 10 to 20 years. In the hearth, hot metal flows through the bed of packed coke particles, the deadman, toward a tap hole in the wall of the hearth. The particles of the coke bed exhibit a particle size distribution, which determines the permeability for liquid flow in the packed bed. The permeability is not uniformly distributed, and this determines the flow distribution in the hearth. It is the flow of hot metal that dissolves the carbon of the coke particles, changing the packing of the coke bed and degrading the refractory lining of the hearth. The process conditions in the hearth make it nearly impossible to monitor and predict these processes. Using mathematical modelling as a tool, insight is gained into the process of dissolution of coke bed particles and its impact on the packing of the coke bed as well as on the hot metal flow inside the hearth.
In this thesis, population balance modelling and particle-packing modelling are used to model the effect of dissolving particles on the packing of a packed bed. The model is linked with a CFD model of a simplified blast furnace hearth, and the resulting packed bed porosity is used to calculate the effects on hot metal flow in the hearth. The ensuing effects on coke dissolution rate and flow in the CFD model are demonstrated. This approach of simultaneously calculating the distribution of the deadman porosity, caused by dissolution, and the ensuing hot metal flow is a unique and new aspect of blast furnace hearth modelling.
Results of dissolution experiments in water, with spherical benzoic acid particles in a packed bed, validate the results of the dissolution and packing model. It is also shown that the packing model is suited for modelling packed coke beds.
This thesis shows that population balance modelling is well suited to model systems with a large number of particles, especially in the case of calculating the evolution of the particle size distribution of dissolving particles.
CFD results in this thesis demonstrate that large coke dissolution rates and high flow rates occur at places where large refractory degradation usually occurs in a blast furnace. It is shown that the distribution of hot metal entering the hearth has a large impact on flow in the hearth. Although not all aspects of the modelling worked well, it is shown that these methods provide a computationally workable route for modelling large particle systems.","ironmaking; blast furnace; deadman; particle size distribution; dissolution; hot metal flow; mathematical model; packed bed; porosity; iron ore","en","doctoral thesis","","978-94-028-1783-6","","","","","","","","","(OLD) MSE-3","","",""
"uuid:1f889837-0d94-415c-8137-6065c0a44245","http://resolver.tudelft.nl/uuid:1f889837-0d94-415c-8137-6065c0a44245","Ultra-thin mems fabricated tynodes for electron multiplication","Prodanovic, V. (TU Delft EKL Processing)","Sarro, Pasqualina M (promotor); van der Graaf, H. (promotor); Delft University of Technology (degree granting institution)","2019","For decades, photomultiplier tubes (PMTs) have been the most common choice in single photon detection, covering the spectral range from deep-ultraviolet to near-infrared. A PMT is a vacuum tube with three crucial components: a photocathode, a chain of dynodes and an anode. At the photocathode, photons are converted to electrons via the photoelectric effect, after which they are directed to the dynode chain. The material and geometry of the dynodes are chosen to efficiently amplify the charge through secondary electron emission (in reflection mode). Finally, the created avalanche of electrons is collected and measured by the anode. The Timed Photon Counter (TiPC) is a novel vacuum-based photomultiplier proposed to overcome the limitations of PMTs in terms of size, speed, spatial resolution and operation in the presence of a magnetic field. The key novelty of TiPC is the tynode – a large-size array of ultra-thin, free-standing membranes which, in contrast to dynodes, multiply electrons in transmission mode. Due to the short and straight crossing paths of electrons between subsequent tynodes, the time resolution of the TiPC can be in the order of 10⁻¹² s. The set of tynodes is placed under the photocathode, and on top of a CMOS detecting chip. With such a design, TiPC represents a light, compact and ultra-fast photodetecting device with high relevance for solid state, atomic and molecular physics experiments, medical imaging and 3D optical imaging. The focus of this thesis is the microelectromechanical systems (MEMS) fabrication of the tynodes.
To our knowledge, this is the first time MEMS technology is employed as a powerful tool for the production of large arrays of free-standing membranes, with thicknesses of only a few nanometers, to be used in photodetection. Detailed analyses of the mechanical, optical, electrical and structural properties were performed in order to discern the most suitable material for the TiPC application among the investigated candidates. The transmission SEY (TSEY) of the released tynodes is analysed with a dedicated setup, specifically developed in our group, inserted in a scanning electron microscope (SEM). Low pressure chemical vapour deposition (LPCVD) was employed as a technique to grow silicon nitride (SiN) tynodes with varied layout, elemental stoichiometry and thicknesses in the range from 25 to 40 nm. Due to its inability to produce good-quality films with thicknesses lower than 20 nm, LPCVD was replaced by atomic layer deposition (ALD). It was found that SiN performs poorly in terms of secondary electron emission (SEE), and we selected Al2O3 (alumina) as the next tynode material. The ALD of alumina is investigated in the temperature range from 300 down to 100 °C, with the goal of determining its viability in the coating of temperature-sensitive substrates such as photoresist. We demonstrated the fabrication of 5 – 25 nm-thick ALD alumina tynodes which exhibited moderately high TSEY. Apart from SiN and alumina, other materials subjected to SEE analysis in this work were: chemical vapour deposited (CVD) ultrananocrystalline diamond (UNCD), monocrystalline silicon and LPCVD silicon carbide (SiC). Applying atomic layer deposited magnesium oxide (MgO) as the tynode material resulted in a TSEY of up to 5.5, making it the most efficient electron multiplier among the materials considered in this work.
During the fabrication of tynodes, SEE films were exposed to different MEMS processing steps, and thus inevitably underwent a surface modification that alters the SEE properties. On that account, we conducted a study on ALD MgO films subjected to various chemical and thermal treatments and explored methods to further enhance their SEE. For the final application in the TiPC, stacked tynodes should provide the focusing of electrons. To meet this requirement, the emission film was grown on a pre-patterned substrate, which enabled a hemispherical shape of the released membranes. Finally, for the vertical stacking and alignment of the tynodes, steps for the formation of V-grooves were added to the standard fabrication flowchart.","tynodes; ultra-thin membranes; timed-photon counter; secondary electron emission; atomic layer deposition","en","doctoral thesis","","978-94-6384-085-9","","","","","","2020-08-01","","","EKL Processing","","",""
"uuid:03d70c5d-596d-4c8c-92da-0398dd8221cb","http://resolver.tudelft.nl/uuid:03d70c5d-596d-4c8c-92da-0398dd8221cb","Language-Parametric Methods for Developing Interactive Programming Systems","Konat, G.D.P. (TU Delft Programming Languages)","Visser, Eelco (promotor); Erdweg, S.T. (promotor); Delft University of Technology (degree granting institution)","2019","All computers run software, such as operating systems, web browsers, and video games, which are used by billions of people around the world. Therefore, it is important to develop high-quality software, which is only possible through interactive programming systems that involve programmers in the exchange of correct and responsive feedback. Fortunately, for many general-purpose programming languages, integrated development environments provide interactive programming systems through code editors and editor services.
On the other hand, Domain-Specific Languages (DSLs) are programming languages that are specialized towards a specific problem domain, enabling better software through direct expression of problems and solutions in terms of the domain. However, because DSLs are specialized to a specific domain, and there are many problem domains, we need to develop many new DSLs, including their interactive programming systems!
Ad-hoc development of an interactive programming system for a DSL is infeasible, as developing one requires a huge development effort. Therefore, our vision is to create and improve language-parametric methods for developing interactive programming systems. A language-parametric method takes as input a description of a DSL, and automatically implements (parts of) an interactive programming system, reducing development effort, thereby making DSL development feasible. In this dissertation, we develop three language-parametric methods throughout the five core chapters.
We develop a language-parametric method for incremental name and type analysis, in which language developers specify the name and type rules of their DSL in meta-languages (languages specialized towards the domain of language development). From such a specification, we automatically derive an incremental name and type analysis, including editor services such as code completion and inline error messages.
We develop a language-parametric method for interactively bootstrapping the meta-language compilers of language workbenches. We version meta-language compilers, explicitly denote dependencies between them, and perform fixpoint bootstrapping, where we iteratively self-apply meta-language compilers to derive new versions until no change occurs, or until a defect is found. These bootstrapping operations can be started and rolled back (when a defect is found) in the interactive programming system of the language workbench.
Finally, we develop PIE, a parametric method for developing interactive software development pipelines, a superset of interactive programming environments. With PIE, pipeline developers can concisely write pipeline programs in terms of tasks and dependencies between tasks and files, which the PIE runtime then incrementally executes. PIE scales down to many low-impact changes and up to large dependency graphs through a change-driven incremental build algorithm.","","en","doctoral thesis","","978-94-6366-210-9","","","","","","","","","Programming Languages","","",""
"uuid:e0c3246b-5e6e-47f4-b012-3393fd47fc90","http://resolver.tudelft.nl/uuid:e0c3246b-5e6e-47f4-b012-3393fd47fc90","Cooperative Multi-Vessel Systems for Waterborne Transport","Chen, L. (TU Delft Transport Engineering and Logistics)","Negenborn, R.R. (promotor); Hopman, J.J. (promotor); Delft University of Technology (degree granting institution)","2019","This PhD thesis investigates V2V, V2I, and I2I cooperation of CMVSs for improving the safety and efficiency of waterborne transport. A predictive motion control framework and a generic negotiation framework are proposed to achieve consensus among controllers. Different applications provide insights into the impact of CMVSs on the performance of the waterborne transport systems. Specifically, four types of cooperation and their applications to the Port of Rotterdam and the metropolitan area of Amsterdam are investigated, i.e., Vessel Train Formation (VTF), Cooperative Floating Object Transport (CFOT), Waterway Intersection Scheduling (WIS), and Cooperative Waterway Intersection Scheduling (CWIS).","Cooperative Multi-Vessel Systems; Autonomous Vessels; Vessel train formation; Cooperative Object Transport; Distributed Model Predictive Control","en","doctoral thesis","TRAIL Research School","978-90-5584-257-5","","","","TRAIL Thesis Series T2019/15, The Netherlands TRAIL Research School","","2019-11-18","","","Transport Engineering and Logistics","","",""
"uuid:18c6ac5f-fed3-470d-a222-fa95ce423037","http://resolver.tudelft.nl/uuid:18c6ac5f-fed3-470d-a222-fa95ce423037","Supporting Human-Machine Interaction in Ship Collision Avoidance Systems","Huang, Y. (TU Delft Safety and Security Science)","van Gelder, P.H.A.J.M. (promotor); Delft University of Technology (degree granting institution)","2019","Ship collision is a classical problem for maritime practitioners and researchers. Human error is a major cause of collision accidents, which motivates researchers to develop automation systems replacing navigators on board. However, before autonomous ships fully replace conventional ships, supporting the situational awareness of human operators remains a pressing need. Moreover, how to prevent automation from performing outside the human’s expectations (e.g., violation of regulations) is another challenge. Therefore, the design of human-machine interactions (HMIs) becomes crucial.
This dissertation developed the Human-Machine Interaction oriented Collision Avoidance System (HMI-CAS), which allows human operators and automation to share their intelligence. Specifically, the HMI-CAS not only offers one (optimal) solution to human operators but also visualizes the solution space with both dangerous and feasible solutions. Thus, the decision process of the automation becomes transparent to human operators. The human operators can not only read and understand the solutions offered by the machine but also validate and modify them via the interface of the HMI-CAS. Without human interventions, the HMI-CAS can also work automatically. Moreover, to support humans in taking evasive action in time, a measure of collision risk utilizing a concept called “room-for-maneuver” is proposed, which offers alerts before collisions become inevitable.
In brief, instead of replacing humans on board, the proposed HMI-CAS aims at bridging the intelligence of humans and machines, which enriches the choice of collision avoidance systems for supporting human operators and for developing autonomous ships.
The first appearances of the term broaching-to date back to the 18th century. Sailors have always been frightened by the potentially devastating consequences of sailing windward, but this phenomenon has only been studied systematically since the 1950s. Several naval architects highlighted the main characteristics of the physical phenomenon of broaching-to in following seas and developed useful and accurate techniques meant to predict the behaviour of a vessel sailing in those scenarios. Despite the great efforts spent on research into this subject, there is still some uncertainty about the causes of a broaching-to event, and about the characteristics of a vessel that might lead to unsafe behaviour in following waves. This thesis aims to investigate these aspects, with the final desirable result of providing designers and shipbuilders with guidelines for safer vessels.","Manoeuvrability-in-waves; high-speed craft; broaching-to; following sea; captive model tests; panel method","en","doctoral thesis","","978-94-6366-212-3","","","","","","2019-11-14","","","Ship Hydromechanics and Structures","","",""
"uuid:9b434ac4-ebcd-4e23-812d-354d836fdcb3","http://resolver.tudelft.nl/uuid:9b434ac4-ebcd-4e23-812d-354d836fdcb3","Complexity is in the Eye of the Beholder","Kashiwagi, I.J. (TU Delft Marketing and Consumer Research)","Santema, S.C. (promotor); Plugge, A.G. (copromotor); Delft University of Technology (degree granting institution)","2019","The Information Communications Technology (ICT) industry has been identified as having poor project outcomes (NATO Science Committee, 1969; Standish, 2016). ICT project complexity has been reported by suppliers and clients as a cause of the poor project outcomes (Sauer & Cuthbertson, 2003; Whittaker, 1999). As the ICT industry becomes more integrated into society through technological advances and automation, firms require approaches and solutions to handle project complexity in order to stay in operation (Bakhshi et al., 2016; Ireland, 2016; Qureshi & Kang, 2014; Ramasesh & Browning, 2014)...","","en","doctoral thesis","","978-0-9985836-6-2","","","","","","","","","Marketing and Consumer Research","","",""
"uuid:7c2aa066-02aa-40fe-939c-343e14599de0","http://resolver.tudelft.nl/uuid:7c2aa066-02aa-40fe-939c-343e14599de0","Accelerated screening and orientation sensitive chromatographic modeling of biopharmaceuticals","Kittelmann, J. (TU Delft BT/Bioprocess Engineering)","Ottens, M. (promotor); Hubbuch, Jürgen (promotor); Delft University of Technology (degree granting institution)","2019","The downstream process development for biopharmaceuticals is faced with increasing challenges. A growing market of drug candidates and new molecule families, as well as a rising trend to personalized medicine, leads to an increase in market diversity. At the same time, more purification techniques and materials become available, resulting in an exponential growth in potential parameter combinations and conditions to be considered and screened for. The establishment of high-throughput screening (HTS) technologies and automated liquid handling stations (LHS) has driven standardization in experiments, data handling and data quality assessment in the last decade. Despite the establishment of automation technologies for almost all purification process steps throughout the field of DSP development, miniaturization beyond the scale of 96-well plates has not been reached, as sample handling and pipetting accuracy fell short with established LHS...","","en","doctoral thesis","","978-94-6384-075-0","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:f741b590-8cd7-4e52-9641-b471954db5b2","http://resolver.tudelft.nl/uuid:f741b590-8cd7-4e52-9641-b471954db5b2","Obtaining well-posedness in mathematical modelling of fluvial morphodynamics","Chavarrias Borras, V. (TU Delft Rivers, Ports, Waterways and Dredging Engineering)","Blom, A. (promotor); Uijttewaal, W.S.J. (promotor); Delft University of Technology (degree granting institution)","2019","","","en","doctoral thesis","","978-94-6384-063-7","","","","","","","","","Rivers, Ports, Waterways and Dredging Engineering","","",""
"uuid:48ed7edc-934e-4dfc-b35c-fe04d55caee1","http://resolver.tudelft.nl/uuid:48ed7edc-934e-4dfc-b35c-fe04d55caee1","Indoor swarm exploration with Pocket Drones","McGuire, K.N. (TU Delft Control & Simulation)","de Croon, G.C.H.E. (promotor); Tuyls, K.P. (promotor); Kappen, Hilbert (promotor); Delft University of Technology (degree granting institution)","2019","Pocket drones, weighing less than 50 grams, are small, agile and inherently safe. This makes them suitable for several surveillance tasks such as search and rescue, greenhouse monitoring and pipeline inspection. For a more efficient search, a swarm of pocket drones would be ideal to explore these types of areas faster. Current methods and navigation techniques are not suitable due to their extensive requirements on the platform's computational capabilities and memory storage. This dissertation therefore focuses on designing a new strategy for a swarm of pocket drones for both low-level and high-level navigation in an indoor environment. The first part of this dissertation focuses on the low-level navigation capabilities of the swarm of pocket drones, by first looking at the individual. We developed Edge-FS (Flow & Stereo), which enabled the stereo-camera to detect both obstacles and the drone's velocity at the same time. A further necessity for swarm operations is for multiple pocket drones to avoid each other. An on-board relative localization scheme based on the Received Signal Strength Indicator (RSSI) of the inter-drone communication was developed to make this possible. Two pocket drones with a forward-facing stereo-camera communicated with each other by means of Bluetooth and fused the RSSI with their velocity (estimated by Edge-FS). With this, two pocket drones were able to fly together in a room while avoiding the walls and each other. The second part of this dissertation focuses on high-level navigation. 
Since conventional navigation strategies cannot fit on-board the pocket drones, we investigated an alternative method: bug algorithms. We present a literature survey and comparison of the existing techniques and an evaluation of their suitability for deployment in real-world scenarios. We found that with increasing sensor errors and estimation drift, the performance of all existing bug algorithms decreased. This provided us with valuable insights for the design of a novel bug algorithm for high-level navigation. Finally, we developed and demonstrated a bug-algorithm-based navigation strategy for multiple pocket drones for indoor exploration and homing. We named this technique the swarm gradient bug algorithm (SGBA); it enabled the pocket drones to explore a floor inside a building and return to their original position by following the RSSI gradient of a radio beacon. Once two pocket drones come into each other's proximity, one will avoid the other and coordinate its own preferred search direction based on the information it has received (from the other).","Micro Aerial Vehicle; Swarm robotics; Autonomous navigation; Pocket Drones; Stereo Vision; Bug Algorithms; Optical flow","en","doctoral thesis","","978-94-6182-976-4","","","","","","","","","Control & Simulation","","",""
"uuid:b1f4d743-95c1-45ec-ae91-af62224e1d7c","http://resolver.tudelft.nl/uuid:b1f4d743-95c1-45ec-ae91-af62224e1d7c","On Hardware-Accelerated Maximally-Efficient Systolic Arrays: Acceleration and Optimization of Genomics Pipelines Through Hardware/Software Co-Design","Houtgast, E.J. (TU Delft Computer Engineering)","Al-Ars, Z. (promotor); Bertels, K.L.M. (promotor); Delft University of Technology (degree granting institution)","2019","Developments in sequencing technology have drastically reduced the cost of DNA sequencing. The raw sequencing data being generated requires processing through computationally demanding suites of bioinformatics algorithms called genomics pipelines. The greatly decreased cost of sequencing has resulted in its widespread adoption, and the amount of data that is being generated is increasing exponentially, projected to soon rival big data fields such as astronomy. Therefore, acceleration and optimization of such genomics pipelines is becoming increasingly important.
The BWA-MEM genomic mapping algorithm is a critical first step of many genomics pipelines, as it maps the raw input sequences onto a reference genome, thereby reconstructing the sample's original genetic assembly. A major part of overall BWA-MEM execution time is spent performing Seed Extension, an algorithm closely related to the Smith-Waterman pairwise sequence alignment algorithm. The standard approach for the heterogeneous acceleration of the Smith-Waterman algorithm is to map it onto a systolic array architecture to compute elements of the similarity matrix in parallel. In order for systolic arrays to operate at high efficiency, they require long sequences to be aligned to one another. The BWA-MEM algorithm, in contrast, typically generates very short sequences that then require pairwise alignment through the Seed Extension algorithm. Therefore, in this dissertation, various techniques to improve the efficiency of systolic arrays for short sequence lengths are proposed.
The Variable Logical Length, the Variable Physical Length, and the Variable Logical and Physical Length systolic array architectures are proposed to eliminate the dependence of systolic array efficiency on read sequence length. To eliminate its dependence on reference sequence length, a streaming, implicit synchronizing architecture is introduced. Together, these techniques result in a maximally-efficient systolic array. A Seed Extension kernel has been implemented on both FPGA and GPU with a threefold kernel-level improvement to execution time, resulting in the first FPGA-accelerated and the first GPU-accelerated implementation of BWA-MEM with an overall end-to-end twofold application-level speedup. Moreover, a Smith-Waterman implementation has been developed on FPGA using the above efficiency improvements to the systolic array architecture, resulting in an implementation that has a performance of 214 GCUPS and that is able to attain 99.8% efficiency, which is the highest reported efficiency and performance of any FPGA-accelerated Smith-Waterman implementation to date. Finally, various aspects of these designs are evaluated, including power-efficiency and design-time.","Acceleration; BWA-MEM; FPGA; GPU; Heterogeneous system; Pairwise sequence alignment; Smith-Waterman; Systolic array","en","doctoral thesis","","978-94-6366-203-1","","","","","","2019-11-11","","","Computer Engineering","","",""
"uuid:9f4042e1-ceb3-4894-90c5-3602bd0a1276","http://resolver.tudelft.nl/uuid:9f4042e1-ceb3-4894-90c5-3602bd0a1276","High-Speed Interfaces for Capacitive Displacement Sensor","Xia, S. (TU Delft Electronic Instrumentation)","Nihtianova, S. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis describes the theory, design, and implementation of high-speed capacitive displacement sensor interface circuits. The intended application is to readout the capacitive displacement sensor used in a servo loop, where the measurement time needs to be low to ensure loop stability....
While performing such statistical analyses, we found that there are strong relationships between the different molecular datasets (e.g. mutations, CNAs, methylation and gene expression) and that these relationships can negatively affect our ability to identify biomarkers. Following these results, we have developed TANDEM, a method to identify biomarkers while taking into account these relationships between datasets, and iTOP, a method to infer how different datasets are related to each other.
For difficult cases where the number of cell lines is very small, we have developed a method that predicts drug response simultaneously for all drugs in the screen, thereby gaining statistical power. We based this method on a machine learning methodology called multi-task learning. In contrast to other multi-task learning methods, our approach provides insight into which features are important for a given treatment, thereby allowing us to identify biomarkers from these models.
Finally, we analyzed a screen of 54 drug combinations across 765 cell lines. We report which combinations show synergy (i.e. where the effect of the combination was larger than one would expect based on the individual drug effects) most frequently, hence making them broadly applicable. In addition, for each drug combination, we statistically associated molecular features (i.e. mutations, copy number aberrations, gene expression and proteomics) with the synergy, from which the strongest associations may be good candidate biomarkers.
Our first main contribution is related to temporal correlations. In most of the studies, the influence of time in the SIS spreading process is omitted because the specific value of the infection and curing rates does not influence the first-moment metastable properties, such as the infection probability of each node. Only the ratio between the two rates matters. In this dissertation, we show that the temporal correlation can be analyzed with the mean-field approaches, although mean-field methods are meant to only analyze first-moment properties. We derive the autocorrelation of the nodal infection state both in the steady and transient states under the mean-field approximation. By analyzing the autocorrelation, we indicate the influence of the underlying network and the value of the infection and curing rates on the temporal properties of the spreading process. We also show that the infection and curing rates can be calculated by measuring the infection state of each node.
Second, we relax the Markovian assumption in the SIS process by extending the Poisson infection process to a Weibull renewal process. The Poisson infection process is just a special case of the Weibullian renewal process. Under this Weibullian framework, we can parameterize the non-Markovian infection behavior and show some new features raised by it. We specifically focus on an extreme (limiting) case of the Weibullian SIS process where the distribution of the infection time is a Dirac delta function. The analysis of the extreme case leads to the largest possible epidemic threshold for non-Poissonian infection processes. We further discuss the epidemic threshold for different infection processes with Weibull, lognormal and Gamma distributed infection time, which fit realistic spreading phenomena well, under a previous non-Markovian mean-field method based on renewal theory. We show consistency between our results and previous theory and that those different infection processes behave similarly.
Third, we dive into the localization phenomena in networks from the viewpoint of SIS spreading processes. Localization of the spreading process appears just above the epidemic threshold in networks whose principal eigenvector of the adjacency matrix is localized. In the localized spreading, the prevalence (order parameter), which is the expected fraction of infected nodes, converges to zero with the increase of network size, but the number of infected nodes is non-zero. Thus, the localized spreading forms an interesting phase different from the all-healthy phase (no infection) and the endemic phase (non-zero prevalence). We evaluate the above-mentioned extreme case of the Weibullian SIS process where the time-dependent prevalence is periodic in the long run. Near the epidemic threshold, the ratio between the steady-state maximum and minimum prevalence, which equals the largest eigenvalue of the adjacency matrix, diverges in some networks, but the spreading process is still localized. In other words, the divergent ratio of prevalence, determined by the largest eigenvalue of the network, cannot amplify a zero prevalence to a non-zero one in the thermodynamic limit. The result indicates that the localization of spreading processes may be determined only by the network structure and not by the specific infection process.
Finally, we study the curing strategy for the control of the spreading process, specifically, the pulse curing strategy. Compared to the classical asynchronous curing strategy (for instance, Poissonian), the pulse strategy is an optimized method of suppressing the spreading and is applied broadly in disease control. Here, we study a model composed of a susceptible-infected process and a periodic pulse curing process with a successful curing probability below one. We derive the mean-field epidemic threshold. Based on our analysis, the pulse strategy reduces the number of curing operations by 36.8% compared to traditional asynchronous curing strategies in the Markovian SIS model.
All the above-mentioned theoretical analyses are verified by directly simulating SIS processes.","Spreading Process; Complex Networks; Stochastic Simulation","en","doctoral thesis","","978-94-6384-074-3","","","","","","","","","Network Architectures and Services","","",""
"uuid:cf7fe485-1ef1-426a-bfa1-7925ef11394c","http://resolver.tudelft.nl/uuid:cf7fe485-1ef1-426a-bfa1-7925ef11394c","Mobilization and Displacement of Residual Oil by means of Chemical Enhanced Oil Recovery Processes","Al Saadi, F.S.H. (TU Delft Reservoir Engineering)","van Kruijsdijk, C.P.J.W. (promotor); Wolf, K.H.A.A. (promotor); Delft University of Technology (degree granting institution)","2019","Enhanced oil recovery (EOR) seeks to improve the recovery of oil from existing mature oil fields. It targets the oil left behind after conventional recovery by natural reservoir drive and water injection. The injection of surfactant polymer chemicals can enhance oil recovery by reducing the interfacial tension, allowing more oil to be released from its host rock and improving the flood conformance.
In this study, the principles and parameters of chemical surfactant polymer EOR mechanisms, which mobilise, displace and transport residual oil (i.e. build an effective oil bank) after water injection were investigated. The current understanding of when, and under what conditions, an oil bank is formed and maintained is limited. This is relevant in core-flow experiments that need to be appropriately interpreted and scaled, from the centimetre to the field scale, in various steps. Various factors that influence the dynamics of building a stable oil bank were evaluated, using an extensive core-flow experimental study with the aid of computed tomography scanning....","","en","doctoral thesis","","","","","","","","","","","Reservoir Engineering","","",""
"uuid:8264bbbd-376c-4d08-8067-39255ab6fb03","http://resolver.tudelft.nl/uuid:8264bbbd-376c-4d08-8067-39255ab6fb03","Docking of Surgical Guides","Mattheijer, J. (TU Delft Medical Instruments & Bio-Inspired Technology)","Dankelman, J. (promotor); Nelissen, R.G.H.H. (promotor); Delft University of Technology (degree granting institution)","2019","Malalignment of implant components is a root cause of knee prosthesis failure. Patient Specific Surgical Guides (PSSGs) are used to improve the postoperative position of the implant relative to a preoperatively planned position. The PSSGs are tailor-made to match the patient’s bony anatomy and align drill holes and saw slots relative to the bone. However, correct PSSG alignment (and thus prosthesis alignment) is heavily dependent on the geometric fit with the matching anatomy. This thesis provides methods for preoperative optimization of the matched contact and intraoperative adjustments for improved alignment.","","en","doctoral thesis","","978-94-028-1758-4","","","","","","2020-10-30","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:aa1ef96f-4da9-41ff-bff8-30186ef2a541","http://resolver.tudelft.nl/uuid:aa1ef96f-4da9-41ff-bff8-30186ef2a541","Optimizing the exploitation of persistent scatterers in satellite radar interferometry","Dheenathayalan, P. (TU Delft Mathematical Geodesy and Positioning)","Hanssen, R.F. (promotor); Delft University of Technology (degree granting institution)","2019","Time-series synthetic aperture radar interferometry (InSAR) has evolved into a widely preferred geodetic technique for measuring the topography and surface deformation of the earth. In the last decades, time-series InSAR methodologies were developed to extract information from persistent scatterers (PS) and distributed scatterers (DS). Methodologies based on DSs extract information from pixels from the natural terrain. Persistent Scatterer Interferometry (PSI) extracts information from PSs, which are found in abundance in areas with man-made infrastructure. However, a satisfactory geodetic application of these methodologies requires a complete understanding of the measurement principles, an identification of radar scatterers in the physical world, and an interpretation of the estimated deformation. Moreover, for areas not suitable for coherent imaging, adding new measurements is not trivial. In consideration of the above challenges, the two main objectives of this study are: (i) to develop a systematic method to decode PSI measurements, i.e., identify PSs in the object space in order to interpret the estimated deformation (kinematics), and (ii) to assess the feasibility of encoding artificial radar scatterers, i.e. adding new measurements using radar reflectors, at places where no coherent InSAR measurements exist. We review the contents of the SAR resolution cell and the time-series processing methodologies with a special focus on the Delft implementation of PSI processing. 
A physical interpretation of the time-series InSAR results is shown to be possible by decoding what the radar has measured and understanding the deformation phenomena. We employ two approaches to perform this decoding. The first is to identify the source of the radar reflection by characterizing and associating PSs with a target type. Using only InSAR data, we apply an iterative classification method to discriminate radar scatterers between ground level and elevated infrastructure. We combine the limited classification output with deformation rates and identify various deformation phenomena such as shallow compaction, no relative motion, autonomous structural motion, local land subsidence, and inter-structural deformation. In particular, we introduce a parameter known as the RDI (Relative Deformation Index) to detect, quantify and analyze regions subject to relative deformation for infrastructural stability analysis. The feasibility of this approach is successfully demonstrated with underground gas-pipe and water-pipe network monitoring applications over Amsterdam and The Hague, respectively. The second is a point-level (object or sub-object level) linking of radar reflections to real-world objects. For this step, a precise 3D position of the scatterers is derived. By applying corrections for various position error sources, an accurate 3D position of the scatterers is achieved for high-resolution and medium-resolution SAR imagery. A standard Gauss-Markov approach is applied to facilitate error propagation and quality assessment and control. The 2D and 3D positioning capabilities are validated using trihedral corner reflector field experiments. In order to precisely associate radar scatterers with physical objects, we introduce an approach that uses a 3D building model of the physical objects. Linking of scatterers to parts of infrastructure is demonstrated for high-resolution and medium-resolution imagery. 
Finally, we propose the concept of small radar reflectors to introduce new coherent reflections. The small reflectors are designed such that they are visible from both ascending and descending imaging directions, enabling vector decomposition of deformation measurements. These small radar reflectors act as weak point scatterers. To achieve a desired SCR (Signal to Clutter Ratio), many small reflectors are distributed over an area and averaged. The detection of small reflectors is achieved by distributing them in a predefined spatial pattern. In this study, a new interferometric phase expression is derived to estimate a phase standard deviation for low-SCR and high-SCR targets. The proposed concept is experimentally validated using X-band satellite data over a grassy terrain in the Netherlands. The results indicate that distributed corner reflectors can provide deformation measurements with millimeter precision.","spaceborne radar interferometry; surface deformation; persistent scatterers; positioning; corner reflectors; geometric calibration; infrastructure monitoring","en","doctoral thesis","","978-94-6384-083-5","","","","","","","","","Mathematical Geodesy and Positioning","","",""
"uuid:7aab28bf-76cc-46ed-b305-b55d19723a5f","http://resolver.tudelft.nl/uuid:7aab28bf-76cc-46ed-b305-b55d19723a5f","On Probing Appearance: Testing material-lighting interactions in an image-based canonical approach","Zhang, F. (TU Delft Human Information Communication Design)","Pont, S.C. (promotor); de Ridder, H. (promotor); Delft University of Technology (degree granting institution)","2019","Materials are omnipresent. Recognizing materials helps us with inferring their physical and chemical properties, for instance if they are compressible, slippery, sweet and juicy. Yet in literature, much less attention has been paid to material perception than to object perception. This dissertation presents studies on a method to systematically measure human visual perception of opaque materials and test the influence of lighting and shape on material perception. In our studies, we applied multiple psychophysical methods such as matching, discriminating, and perceptual scaling to test the visual perception of materials for human observers. Beyond just matte and glossy material variations that were commonly tested in material perception literature, we included in total four canonical material modes to account for a wide range of materials, namely diffuse, asperity, forward, and mesofacet scattering for ""matte"", ""velvety"", ""specular"", and ""glittery"" material modes, respectively. For the lightings, we included three canonical lighting modes within a spherical harmonics and perception based framework, namely ""ambient"", ""focus"", and ""brilliance"" lighting. Based on the spherical harmonics analysis of the global lighting environment, we were able to quantify the “diffuseness” and “brilliance” of the light maps by using Xia’s diffuseness metric and a novel brilliance metric we proposed. 
Combining the four material modes and three lighting modes, we presented a canonical set that in combination with optical mixing supports a painterly approach in which key image features could be varied directly. With this method we were able to test and predict light-material interactions using both photographs of the real objects and computer rendered stimuli. We first introduced a new type of non-spherical appearance probe, implementing the painterly approach. Moreover, we developed an interactive interface that integrated the probe for an asymmetric matching task, where observers adjusted sliders to vary each material mode in the probe. The interface was found to be intuitive for inexperienced users and allowed purely visual quantitative measurements. Performances were generally well above chance and robust across experiments and observers, validating the approach. We further developed the material probe and expanded it to allow optical mixing of canonical lighting modes. In a light matching experiment and a 4-category discrimination experiment we found asymmetric perceptual confounds between judgments of material and lighting. Specifically, observers were found to be less sensitive to light changes than to material changes. Moreover, using this canonical approach, we were able to test and predict light-material interactions in two perceptual scaling experiments. To this aim a novel spherical harmonics based metric was introduced for quantifying the ""brilliance"". Lastly, we compared results from our probing method and results from other psychophysical experimental methods, namely perceptual scaling and discrimination, in which semantic information (for material attributes) was involved. Robust effects of light, shape, and light map orientation were found, in a material dependent way. 
To conclude, our research mainly contributed to 1) the development of a novel probing method that mixes image features of the proximal stimulus in a fluent manner instead of varying the distal physical properties of the stimuli, plus a validation that it works and that it allows quantitative measurements of material perception and material-lighting interactions; 2) understanding of visual perception of opaque materials and material-light-interactions in a wide ecological variety; 3) a validated model for predicting the material dependent lighting effects for matte, specular, velvet and glittery materials; and 4) the interpretations of the material perception results in a manner relating to shape and light. Our findings can be further applied to many subjects, such as industrial design, education, e-commerce, computer graphics, and future psychophysical studies.","Visual perception; Material perception; Lighting","en","doctoral thesis","","","","","","","","","","","Human Information Communication Design","","",""
"uuid:77784d7c-8f01-4222-bb83-87ee1de52930","http://resolver.tudelft.nl/uuid:77784d7c-8f01-4222-bb83-87ee1de52930","In-plane dynamics of high-speed rotating rings on elastic foundation","Lu, T. (TU Delft Railway Engineering)","Metrikine, A. (promotor); Tsouvalas, A. (copromotor); Delft University of Technology (degree granting institution)","2019","Rotating ring-like structures are very commonly used in civil, mechanical and aerospace engineering. Typical examples of such structures are components in turbomachinery, compliant gears, rolling tyres and flexible train wheels. At the micro-scale, rotating ring models find their applications in the field of ring gyroscopes, in which high accuracy of modelling is required. The in-plane vibrations of rotating rings are of particular interest since such structural components are usually subject to in-plane loads. The focus in this thesis is therefore placed on the in-plane dynamics of rotating rings. While the radial and circumferential motions of a stationary ring are coupled due to curvature, a steadily rotating ring, as any gyroscopic system, is subject to two additional fictitious forces induced by the gyroscopic coupling due to rotation, i.e. the Coriolis and centrifugal forces. Among them the centrifugal force associated with the steady rotation of the ring (quasi-static force) introduces an axi-symmetric radial expansion and a hoop stress; the latter has the tendency to stiffen the ring. In contrast, the dynamic part of the centrifugal force has the tendency to soften the system. Next to that, the Coriolis force bifurcates the natural frequencies of the ring. The proper consideration of the rotation effects is essential to determine the dynamic behaviour of rotating rings, such as stability of free vibrations and resonance of rotating rings under stationary loads. 
Although various models exist, the considerations of rotation effects are not always in agreement, resulting in distinct theoretical predictions of the critical speeds associated with instability and resonance of rotating rings. In addition, in all the existing rotating ring models, the equations of motion were derived assuming the inner and outer surfaces of the ring to be traction-free. However, when one considers a ring whose inner surface is elastically restrained by distributed springs, this assumption is violated. The traction at the inner surface can significantly influence the stress distribution along the thickness of the ring, and this effect has to be properly accounted for, since the internal stresses may show a strong gradient from the inner surface to the outer surface, especially in the case of rings rotating at high speeds or when the latter are supported by a stiff foundation. The primary aim of this thesis is to develop a highly accurate rotating ring model that properly accounts for both the rotation and boundary effects, with a rigorous mathematical derivation, to fill the gap regarding the modelling and prediction of the dynamic behaviour of high-speed rotating rings. 
To achieve this aim, the following four objectives are set: (i) identify the reasons for the disagreements between various existing rotating ring models and clarify the mathematically sound derivations of the governing equations; (ii) develop a high-order rotating ring model which properly accounts for the rotation effects, as well as the non-zero tractions at the boundaries; (iii) close the debate on the prediction of critical speeds associated with instability of free vibrations and resonance of forced vibrations; and (iv) apply the developed high-order model to predict the steady-state response of rotating rings under stationary loads and the stability of the rotating ring-stationary oscillator system.","high-speed; rotating rings; elastic foundation; in-plane vibration; stability; high-order theory; traction boundary effects; critical speeds; steady-state response; ring-oscillator system","en","doctoral thesis","","978-94-6323-850-2","","","","","","2020-02-01","","","Railway Engineering","","",""
"uuid:7760f5ad-958e-4f18-8a1c-d35ae63e9c44","http://resolver.tudelft.nl/uuid:7760f5ad-958e-4f18-8a1c-d35ae63e9c44","Elongated particles in fluidized beds: From lab-scale experiments to constitutive models","Mahajan, V.V. (TU Delft Intensified Reaction and Separation Systems)","Padding, J.T. (promotor); Eral, H.B. (copromotor); Delft University of Technology (degree granting institution)","2019","Gas-solid fluidized beds are widely used in various industries due to their favourable mixing, and mass and heat transfer characteristics. Fluid catalytic cracking, polymerization, drying, and granulation are a few examples of their applications. In recent years, there has been increased application of fluidized beds in biomass gasification and clean energy production. Fluidization has been extensively studied, experimentally, theoretically and numerically, in the past. However, most of these studies focused on spherical particles, while in practice granules are rarely spherical. Particle shape can have a significant effect on fluidization characteristics. It is therefore important to study the effect of particle shape on fluidization behavior in detail. One of the main reasons we still do not completely understand the fluidization phenomenon is its complex hydrodynamic interactions and large separation of scales. Industrial fluidized bed reactors of tens of meters in diameter can have hydrodynamic scales varying from micrometers to meters. Experimental setups of such large size are extremely expensive and therefore not practical. On the other hand, theoretical and empirical correlations are not accurate for scale-up and are rarely available for non-spherical particle shapes. Because of this, we need a different approach: one that takes advantage of both experimental measurements and numerical simulations. 
The tasks are divided into three parts based on scales, each focusing on a particular aspect: DNS (direct numerical simulation), CFD-DEM (computational fluid dynamics - discrete element model) and TFM (two fluid model) or MP-PIC (multi-phase - particle in cell). In this thesis, the focus is on CFD-DEM modelling, a 'bridge' that connects the DNS and TFM/MP-PIC models.","","en","doctoral thesis","","978-94-6375-618-1","","","","","","","","","Intensified Reaction and Separation Systems","","",""
"uuid:f8b1542e-1fd2-4bee-be8b-e748718a8051","http://resolver.tudelft.nl/uuid:f8b1542e-1fd2-4bee-be8b-e748718a8051","Catalytic control in out-of-equilibrium assembly systems","van Rossum, S.A.P. (TU Delft ChemE/Advanced Soft Matter)","Eelkema, R. (promotor); van Esch, J.H. (promotor); Delft University of Technology (degree granting institution)","2019","Nature is capable of constantly adapting some of its assembled structures in response to external and internal signals. For instance, microtubuli grow and shrink upon cell division and cellular transport. Furthermore, actin fibers play a major role in muscle contraction and cell signaling. To achieve these transient functions, such assembled structures operate in an out-of-equilibrium state. Energy input and dissipation enables structure growth and subsequent collapse. Regulating the energy input with fuel concentration and the activity of associated enzymatically catalyzed processes leads to a high level of kinetic control in biological out-of-equilibrium processes....","","en","doctoral thesis","","978-94-028-1708-9","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:77070b57-9493-4aa6-a9a5-7fed52e45973","http://resolver.tudelft.nl/uuid:77070b57-9493-4aa6-a9a5-7fed52e45973","Designing Creative Space: A Systemic View on Workspace Design and its Impact on the Creative Process","Thoring, K.C. (TU Delft Methodologie en Organisatie van Design)","Badke-Schaub, P.G. (promotor); Desmet, P.M.A. (promotor); Delft University of Technology (degree granting institution)","2019","Work and study environments that facilitate creative design processes, the so called creative spaces, have been gaining increased interest in recent years. The question whether or not the physical environment could support creative activities has attracted the attention of design schools, startups, and global enterprises. This PhD project contributes to this emerging field by providing a holistic investigation of the topic from different angles. The first part of this thesis explores the topic through four empirical studies, in order to gain a broad understanding of creative work and study environments. The second part pursues a practice based design science approach that consolidates the findings in a set of tangible artifacts...","","en","doctoral thesis","","978-94-6384-082-8","","","","","","","","","Methodologie en Organisatie van Design","","",""
"uuid:9a74f4ee-62e9-4117-bcb0-26c5a1a52cb9","http://resolver.tudelft.nl/uuid:9a74f4ee-62e9-4117-bcb0-26c5a1a52cb9","Experimentally validated multi-scale fracture modelling scheme of cementitious materials","Zhang, H. (TU Delft Materials and Environment)","Schlangen, E. (promotor); Šavija, B. (copromotor); Delft University of Technology (degree granting institution)","2019","Cementitious materials are heterogeneous on multiple length scales, from nanometres to metres. Consequently, their macroscopic mechanical properties are affected by material structures at all length scales. In pursuit of a fundamental understanding of the relationship between their multiscale heterogeneous material structure and mechanical properties, testing and modelling are required at all length scales. In this thesis, a series of experimental and modelling techniques for cementitious materials on multiple length scales (micrometre to millimetre) has been developed. This forms an experimentally validated modelling scheme in which experimental results are used to provide input and validation for the numerical model at each length scale. The approach to micro-scale specimen preparation has been developed by combining thin-sectioning and micro-dicing techniques. Mechanical measurements on the prepared micro-scale specimens were performed using a nanoindenter under various test configurations. The micromechanical model has been developed by combining micro X-ray computed tomography and a discrete lattice fracture model. In terms of hardened cement paste (HCP), the micro-cube indentation splitting test technique offers experimental results for the calibration of the micromechanical model. The one-sided micro-cube splitting test was used to validate the calibrated model. Moreover, the one-sided splitting test can offer the nominal splitting strength of HCP.
The micro-cube compression test was developed to validate the modelling results and to provide the compressive strength and Young’s modulus measurements of HCP at the micro-scale. The experimentally validated micromechanical model was further used to predict the uniaxial tensile fracture behaviour of HCP at the micro-scale. It is confirmed by both numerical modelling and experimental measurements that the micromechanical properties (such as compressive strength, tensile strength) of HCP are much higher than the meso-scale properties. With respect to the interfacial transition zone (ITZ), micro-scale HCP-aggregate cantilever beams were fabricated and loaded by the nanoindenter. The measured load-displacement response was used to calibrate the microstructure-informed lattice fracture model. This model was further used to predict the fracture behaviour of the ITZ under uniaxial tension. The volume averaging up-scaling approach has been adopted as a tool to pass the outcome from the micro-scale to the higher scale as input. The micro-beam three-point bending test has been developed to validate this modelling scheme on HCP. The good agreement between modelling and testing shows that this modelling approach can reproduce the experimental results well in terms of fracture pattern, strength and elasticity. This up-scaling approach was further validated by comparing the modelling and testing results of the 10 mm cubic mortar under uniaxial tension. As strength and fracture properties of cementitious materials are size dependent, a size effect study on HCP has been carried out using both one-sided splitting test configurations and the multiscale modelling approach. The size range of specimens that can be experimentally measured and numerically simulated is significantly improved by using these techniques. The experimentally validated multi-scale modelling scheme developed in this thesis is fully quantitatively predictive at the meso-scale.
This modelling scheme is generic. It can be used in the same or similar way for studying systems utilizing other binders or aggregates.","Cementitious materials; Cement paste; Mortar; Micromechanics; Multi-scale modelling; Lattice fracture model; X-ray computed tomography; Size effect; Nanoindenter","en","doctoral thesis","","978-94-6384-071-2","","","","","","","","","Materials and Environment","","",""
"uuid:ae11c3e7-86f2-4c6a-8d53-ee8781d56a72","http://resolver.tudelft.nl/uuid:ae11c3e7-86f2-4c6a-8d53-ee8781d56a72","Consolidation and drying of slurries: A Building with Nature study for the Marker Wadden","Barciela Rial, M. (TU Delft Environmental Fluid Mechanics)","Winterwerp, J.C. (promotor); Griffioen, J. (promotor); Delft University of Technology (degree granting institution)","2019","The Marker Wadden project aims to improve the ecosystem of Lake Markermeer (The Netherlands) by constructing a wetland with sediment from the lake. Sediment is dredged from the bed, and the resulting slurries are pumped into the project area. During this process, segregation and oxidation of the sediment may occur. The native sediment composition and changes induced by the construction process affect the mechanical behavior of the wetland. The initial stress state of the sediment is another variable affecting the behavior. Over time, vegetation may colonize the wetland, also influencing the mechanical properties of the sediment. These factors were studied in physical and numerical experiments as part of this thesis. First, consolidation experiments in settling columns at low initial concentrations below the gel point (virgin consolidation) were performed, and the material parameters were obtained. These parameters were different from the parameters obtained from the Seepage Induced Consolidation (SIC) test because of over-consolidated initial conditions induced by mixing. Numerical simulations were performed with a 1DV consolidation model to quantify the effect of over-consolidation and material parameters on the consolidation behavior. Incremental Loading and Constant Rate of Strain (CRS) tests were performed to analyze the compressibility behavior, and Fall Cone tests were used to determine the undrained shear strength. The fractal theory was found to be a useful tool to normalize and identify the different behavior of samples across all tests. 
The drying behavior was analyzed using a Hyprop test, and the Soil Water Retention Curves obtained were fitted with a van Genuchten model. The model parameters were found to be more influenced by the type of organic matter than by its total amount. Finally, the effect of Phragmites australis (i.e. common reed) on the consolidation and drying was assessed with a newly-designed column device. The reed acted as an ecological engineer, draining the sediment. However, no differences in the thickness of the sediment layer were found, presumably because of armoring by roots. The general conclusion is that over-consolidated initial conditions can be induced by different processes such as mixing and atmospheric drying. Furthermore, the composition of the sediment may change when exposed to segregation and oxidation. In particular, the type of organic matter affects the mechanical behavior of fine sediment at all stages (settling, consolidation, drying) and needs to be characterized. Consequently, the material parameters need to be determined for actual project conditions.","slurry; consolidation; drying; vegetation; nature; cohesive; clay; organic","en","doctoral thesis","","978-94-6384-073-6","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:1a2adab5-0137-47f2-b8a4-a4949f790f60","http://resolver.tudelft.nl/uuid:1a2adab5-0137-47f2-b8a4-a4949f790f60","Experimental investigation of turbulence in canonical wall bounded flows: Pipe flow and Taylor-Couette flow","Gül, M. (TU Delft Fluid Mechanics)","Westerweel, J. (promotor); Elsinga, G.E. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis aims to contribute to the physical understanding of turbulence in canonical flows. For that purpose, experiments are conducted in a statistically steady turbulent pipe flow, ramp-type accelerating/decelerating turbulent pipe flow and turbulent Taylor-Couette flow. For the first two flows, time-resolved data is acquired in the cross section of the pipe with stereoscopic PIV. The databases spanning a friction Reynolds number range of Reτ = 340−1259 are utilised to study coherent motions, in particular the large-scale motions and the internal shear layers bounding them. For the turbulent Taylor-Couette flow study, in addition to torque measurements, high resolution stereoscopic-PIV measurements are performed in the radial-axial planes to investigate the hysteresis phenomenon in the system.","","en","doctoral thesis","","978-94-6366-211-6","","","","","","","","","Fluid Mechanics","","",""
"uuid:c8adfe08-43cd-4b8a-9436-54a9a56b4e14","http://resolver.tudelft.nl/uuid:c8adfe08-43cd-4b8a-9436-54a9a56b4e14","Computational methods for phase retrieval: Non-iterative methods, Ptychography, and Diffractive Shearing Interferometry","Konijnenberg, A.P. (TU Delft ImPhys/Optics)","Coene, W.M.J.M. (promotor); Urbach, Paul (promotor); Delft University of Technology (degree granting institution)","2019","In this thesis, several phase retrieval methods are discussed. Since the focus will mainly be on theory rather than experiment, the structure has been determined by the similarities and differences of the mathematics of these methods. For example, a distinction is made between non-iterative and iterative methods, and between single-shot iterative phase retrieval and multiple-shot iterative phase retrieval (ptychography). However, it must be noted that phase retrieval methods that are mathematically similar are suitable for fundamentally different experimental setups. For example, one can consider setups for lensless imaging, of which an interesting application is metrology using Extreme Ultraviolet (EUV) radiation. In such setups, no focusing optics are used, and one typically computes an image from far-field intensity patterns. On the other hand, there are setups for aberrated imaging. In these setups, one does use focusing optics to form images, but by introducing some sort of variations or perturbations, one can generate a set of images from which a complex-valued field can be computed. For example, regular ptychography and Fourier ptychography are mathematically the same, but the former is used for lensless imaging, while the latter is used for aberrated imaging. Mathematically, the only difference between these two ptychographic approaches is that the roles of object space and Fourier space are interchanged.","Phase retrieval; Ptychography; Computational imaging","en","doctoral thesis","","","","","","","","2019-10-25","","","ImPhys/Optics","","",""
"uuid:23197e0d-8c67-4fe6-a359-be95cc843fcc","http://resolver.tudelft.nl/uuid:23197e0d-8c67-4fe6-a359-be95cc843fcc","Ultrasonic Health Monitoring of Thermoplastic Composite Aircraft Primary Structures","Ochôa, Pedro (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Groves, R.M. (promotor); Villegas, I.F. (copromotor); Delft University of Technology (degree granting institution)","2019","The adoption of composites as aircraft primary structural material has created a real need for a new aircraft maintenance philosophy to enable almost real-time structural condition assessment. It is thus crucial to develop structural health monitoring (SHM) systems capable of performing damage diagnostic and remaining useful life prognostic. At the same time, the urge to reduce production costs has led to consistent developments in thermoplastic composite (TpC) technology. In particular, new possibilities have been unlocked for automated assembly processes based on welding. This context constitutes a unique opportunity for integrating research on SHM into the advances of TpC technology, in order to contribute to a combined reduction of production and maintenance costs, and thus to the development of a truly cost-effective composite airframe. Over the last three decades, ultrasonic guided waves (GWs) have been recognised as having a great potential for detailed quantitative diagnostic of damage in composite structures. However, there are still no certified GW based SHM (GW-SHM) applications for civil aircraft. The reason for that is a limited understanding about the interaction between measurement variability factors associated with real operational environments, damage types, materials and geometric complexity. Therefore, the aim of the research presented in this thesis was to accelerate the bridging of those knowledge gaps and thereby to improve the reliability of GW-SHM for composite aircraft.
The research presented in this thesis has put forward three possible paths for improving the reliability of GW-SHM systems for composite aircraft. First, to enable the early detection of manufacturing defects by investigating the relationship between GW propagation and assembly process parameters at structural element scale. Second, to reduce the uncertainty in the damage diagnostic by increasing the systematisation of the GW-SHM system design. Third, to increase the robustness of damage diagnostic capabilities by studying the effects of real operational-environmental variability factors on GW propagation at real scale.","Ultrasonic guided wave; Structural health monitoring; Thermoplastic composites; Aircraft structures; System reliability","en","doctoral thesis","","978-94-028-1730-0","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:5383f365-fd5d-47ff-8826-e4d557c6e082","http://resolver.tudelft.nl/uuid:5383f365-fd5d-47ff-8826-e4d557c6e082","Small-Angle Scattering by Cellulose: Structural changes in cellulosic materials under chemical and mechanical treatments","Velichko, E. (TU Delft RST/Neutron and Positron Methods in Materials)","Pappas, C. (promotor); Bouwman, W.G. (copromotor); Delft University of Technology (degree granting institution)","2019","Humanity needs to increase its use of renewable sources of materials and energy. Biomass can be used for both of these needs. Cellulose is the main component of biomass. Understanding the multi-level hierarchical structure of cellulose holds the key to multiple applications of this material. One of the promising applications of lignocellulosic biomass is the production of bioethanol as a replacement for fossil fuels. Yearly production of biomass could potentially supply enough bioethanol to completely replace gasoline. However, it would require a dramatic increase in the efficiency of bioethanol production. The main obstacle to the development of bioethanol production into a sustainable process is the recalcitrance of cellulose, which developed throughout plant evolution. In order to overcome this obstacle, an important step was incorporated into the process, i.e. pretreatment of biomass. A multitude of pretreatments have been developed and applied to disrupt the structure of lignocellulosic biomass. However, it is still not clear which structural parameters are responsible for the success of a certain pretreatment technique.","Cellulose; mesostructure; SAXS; SANS","en","doctoral thesis","Ipskamp Drukkers","978-94-028-1720-1","","","","","","","","","RST/Neutron and Positron Methods in Materials","","",""
"uuid:763e8f40-d920-4b22-a105-b392553fe2f4","http://resolver.tudelft.nl/uuid:763e8f40-d920-4b22-a105-b392553fe2f4","Honouring Geological Information in Seismic Amplitude-Versus-Slowness Inversion: A Bayesian Formulation for Integrating Seismic Data and Prior Geological Information","Sharma, S. (TU Delft ImPhys/Acoustical Wavefield Imaging; TU Delft Applied Geology)","Luthi, S.M. (promotor); Verschuur, D.J. (promotor); Delft University of Technology (degree granting institution)","2019","Seismic waves from active experiments carry information regarding the subsurface in the form of reflected data that is recorded at the surface. This recorded data is subjected to sophisticated processing methods to estimate relevant parameters describing the geology of the subsurface. Traditionally the recorded data is used to create an image of the subsurface in terms of reflectivities, using seismic migration, which back-projects the data recorded at the surface into the earth. The resulting image can be interpreted in terms of structures and depositional patterns. There is another route that is followed to quantify the elastic properties of the subsurface by means of inversion of the recorded data. The essence of seismic inversion is to obtain the elastic properties of the earth’s subsurface from a finite set of noisy measurements, by forward modelling based on assumed properties and feed-back that projects the data mismatch onto model parameter space. Full-waveform inversion (FWI) is a special form of inversion that is gaining considerable attention in the last decade, which can be attributed to the advancement in the computational power available. However, several challenges remain for multi-parameter FWI to be successfully implemented on real size data problems in industry or academia at a scale fine enough to be useful in reservoir characterization.","","en","doctoral thesis","","978-94-6384-077-4","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:40632b7a-970e-433d-9b4e-ff2d2249b156","http://resolver.tudelft.nl/uuid:40632b7a-970e-433d-9b4e-ff2d2249b156","Enhancing reliability-based assessments of quay walls","Roubos, A.A. (TU Delft Hydraulic Structures and Flood Risk)","Jonkman, Sebastiaan N. (promotor); Steenbergen, R.D.J.M. (promotor); Delft University of Technology (degree granting institution)","2019","In the coming years, thousands of quay walls will approach the end of their intended fifty-year design lifetime and become part of lifetime extension programmes throughout the world. It is presently unclear how reliable these structures are and whether they are still capable of bearing ship and crane loads. An appropriate assessment of a quay wall’s reliability is essential to safely and responsibly determining its remaining service life. This thesis demonstrates how quay wall reliability can be evaluated and what aspects should be taken into consideration.","","en","doctoral thesis","","978-94-6375-534-4","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:d3bc68cd-76de-4d5c-b6b4-c739e36323ae","http://resolver.tudelft.nl/uuid:d3bc68cd-76de-4d5c-b6b4-c739e36323ae","Understanding bainite formation in steels","Ravi, A.M. (TU Delft (OLD) MSE-3)","Santofimia, Maria Jesus (promotor); Sietsma, J. (promotor); Delft University of Technology (degree granting institution)","2019","The formation of bainite in steels is one of the most intensely researched topics in the field of metallurgy. Despite the importance of bainite among modern-day steels, even a qualitative theory to explain bainite formation still remains a subject of controversy due to the complexity of its formation mechanism. Currently, two competing theories have been proposed by the scientific community regarding the mechanism of bainite formation. One theory suggests that bainite growth occurs via a diffusionless, displacive mechanism, while the other argues that bainite growth is a diffusional process. In this work, the kinetics of bainite formation is studied in detail through the lens of the displacive theory of bainite formation. Using this theory, a novel physically-based model is developed to describe the kinetics of bainite formation. This kinetic description is validated using experimentally obtained kinetic data. The effect of martensite/austenite interfaces, ferrite/austenite interfaces and cementite/austenite interfaces on the kinetics of bainite formation is also understood with the help of a customized set of heat treatments. The results obtained in this doctoral thesis provide significant insight into the effect of various parameters which control the rate of bainite formation in steels.
From a technological perspective, the results also open up new avenues for designing efficient heat treatment routes for the development of multi-phase advanced high strength steels involving bainitic microstructures.","Steel; Bainite; Kinetics; Heat treatments; Nucleation","en","doctoral thesis","","978-94-028-1712-6","","","","","","2020-07-01","","","(OLD) MSE-3","","",""
"uuid:c77fb732-10e7-4e85-9dde-f22d0a76dac4","http://resolver.tudelft.nl/uuid:c77fb732-10e7-4e85-9dde-f22d0a76dac4","Rules or Rapport?: On the governance of supplier-customer relationships with initial asymmetry","Steller, F.P. (TU Delft Marketing and Consumer Research)","Santema, S.C. (promotor); Deken, F. (copromotor); Delft University of Technology (degree granting institution)","2019","","supplier-customer relationship; relationship governance; public procurement; regulated tender environment; rapport","en","doctoral thesis","","978-94-028-1683-9","","","","","","","","","Marketing and Consumer Research","","",""
"uuid:41ef1078-106a-47ca-abc1-99743a0256f4","http://resolver.tudelft.nl/uuid:41ef1078-106a-47ca-abc1-99743a0256f4","Exploring business change: The design of a digital tooling for business model exploration for the automotive ecosystem","Athanasopoulou, A. (TU Delft Information and Communication Technology)","de Reuver, Mark (promotor); Janssen, M.F.W.H.A. (promotor); Delft University of Technology (degree granting institution)","2019","Opportunities such as digital technologies are fundamentally reshaping businesses. Enterprises are moving from selling physical products and services to providing digitally enabled services. Enterprises need to rethink their business models and how to adapt them to take advantage of these opportunities. How to adapt an existing business model is not always obvious, and business model exploration is needed. However, there is a lack of research on business model exploration. With business model exploration, enterprises can discover new business model opportunities, get new business model ideas, and create competitive advantage. Scholars argue that a way to support the transformation of business models is to develop business model tools. However, there is no clear indication of whether business model tooling contributes to the exploration and the transition from an existing business model to a new one. Additionally, existing business model tools are mainly focused on formalizing one specific business model design, rather than facilitating the systematic exploration of alternative business models.","business models; business model exploration; DSR; IoT; automotive industry; experimental design; action research; Q-methodology","en","doctoral thesis","","","","","","","","","","","Information and Communication Technology","","",""
"uuid:30966f68-cea2-4669-93da-23a477d0978b","http://resolver.tudelft.nl/uuid:30966f68-cea2-4669-93da-23a477d0978b","Detection of factors that determine the quality of industrial minerals: An infrared sensor-based approach for mining and process control","Guatame-Garcia, Adriana (TU Delft Resource Engineering)","Buxton, M.W.N. (promotor); Jansen, J.D. (promotor); Delft University of Technology (degree granting institution)","2019","Industrial minerals are essential to human activity. The products derived from them form an integral part of a wide range of materials that are ubiquitously present in our daily lives. The performance and attributes of these materials depend significantly on the properties and quality of the industrial minerals and the products generated from them. These characteristics are ensured by the selection and mining of adequate ores, and by using various beneficiation and processing strategies to modify or enhance the original properties of the minerals.
One example of these strategies is calcination, in which the minerals are subject to thermal treatment. The success of the generation of high-quality products by using this technique partly depends on the capability of the plant to detect the factors that can degrade the quality of the raw ore, feed for calcination and final product. It also depends on its ability to inform and adapt the operations according to the presence of such factors. A possible approach for doing this is to characterise the minerals and materials with sensor technologies that can generate information on-site and in real-time, focusing on the identification of the degrading factors. Their timely detection can give operational feedback to the process and aid in the generation of high-quality products.
This Thesis aims to develop methods for the detection of factors that determine the quality of industrial mineral products by using data derived from infrared sensors, which have the potential to be implemented in mining and process control. For doing this, kaolin, perlite and diatomite have been selected as commodities that are relevant to the market and that represent different applications. This research shows the capacity of infrared sensor-based technologies to retrieve information, directly or indirectly, about the factors that affect the quality of industrial minerals at a lower cost and with comparable efficiency to other analytical methods.
We employed a combination of ultrafast spectroscopy techniques (transient absorption spectroscopy and spectroelectrochemistry) and computational methods (Monte-Carlo and Density Functional Theory simulations) to obtain information about the energy of electronic levels and the dynamics of carrier transport, relaxation and transfer between different materials. Our results provide evidence of previously unreported effects: hot-electron transfer between different QD species, a hole contribution to the bleach of cadmium chalcogenide QDs and the presence of a temperature de-activated mobility for carrier hopping in InP QD films. Furthermore, we demonstrate the possibility to achieve band alignment in QD heterojunctions via control of the ligand passivation on the QD surface.","","en","doctoral thesis","","","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:cab44efd-8a4c-4b4d-8168-bd64738adb64","http://resolver.tudelft.nl/uuid:cab44efd-8a4c-4b4d-8168-bd64738adb64","Characterization and development of high energy density Li-ion batteries","Harks, P.P.R.M.L. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Mulder, F.M. (promotor); Delft University of Technology (degree granting institution)","2019","Due to the electricity storage facilities required for a future powered on renewable energy, and due to the high performance batteries necessary for electric vehicles and mobile electronics, battery research and development is more urgent than ever. In this thesis battery technology was investigated both on the material-, as on the electrode-level. This research was carried out to elucidate working principles of new battery materials and to develop new fabrication methods as to contribute to improving Li-ion batteries....","lithium batteries; in situ techniques; immersion precipitation; neutron depth profiling","en","doctoral thesis","","978-94-028-1759-1","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:a79de52b-66ee-4d1b-91cc-3c53b7832e7e","http://resolver.tudelft.nl/uuid:a79de52b-66ee-4d1b-91cc-3c53b7832e7e","Dynamics and control of Atomic Force Microscopy","Keyvani Janbahan, A. (TU Delft Computational Design and Mechanics)","van Keulen, A. (promotor); Goosen, J.F.L. (copromotor); Delft University of Technology (degree granting institution)","2019","The technique of Atomic Force Microscopy (AFM) is one of the major inventions of the twentieth century which substantially contributed to our understanding of the nanoscale world. In contrast to other microscopy techniques, the AFM does not operate based on electromagnetic waves, but on nano-mechanical interactions between the sample surface and a sharp probe. Therefore, its resolution is not fundamentally limited by the diffraction limit of light, but by the sharpness of the probe tip, which can be as small as a few atoms. The images and data obtained by AFM have had crucial importance for scientists in the fields of biology, material science, and experimental physics. However, AFM experiments have always involved some challenges. In particular, the limited imaging speed and the probability of damaging the samples hinder scientists from extracting the necessary information on the samples. Besides its applications as a research tool, the AFM could potentially solve some of the challenges in the semiconductor industry as a metrology and inspection tool; however, the aforementioned limitations are even more restrictive for any industrial use. Therefore, it is imperative to develop apparatus and methods which can increase the speed and reliability of AFM. In this thesis, we try to understand the physics of AFM and contribute to its development towards a potential industrial and clinical tool, from the perspective of dynamics and control of its cantilever.","","en","doctoral thesis","","978-94-6384-068-2","","","","","","","","","Computational Design and Mechanics","","",""
"uuid:6127d95f-4fe1-4417-949c-5158ddb1fb33","http://resolver.tudelft.nl/uuid:6127d95f-4fe1-4417-949c-5158ddb1fb33","Scientific progress in sediment and water quality assessment: Implementation of practical case studies","Wijdeveld, A.J. (TU Delft Geo-engineering)","Heimovaara, T.J. (promotor); Chassagne, C. (copromotor); Delft University of Technology (degree granting institution)","2019","The management of sediment, soil and water in the Netherlands dates back to the first settlements in the lower Northern and Western parts of the Netherlands. Around 500 B.C. farmers constructed ‘terps’ (artificial dwelling mounds) to protect against floodwater. The Romans (50 B.C. – 250 A.D.) reshaped natural waterways to improve transport by ship. This also meant that river embankments were constructed and waterways and harbours had to be dredged. With the construction of river dikes in the period 700 – 1200 A.D., the sediment challenge began. The peatlands behind the dikes dried out, creating land below the average sea level. Dikes and dike maintenance therefore became crucial, and in 1255 A.D. the first official governmental bodies, the public utility boards (waterschappen), were created. These public utility boards, together with the Dutch government, provinces and cities, are still responsible for sediment and water quantity and quality. With this timeframe in mind, the scope of this thesis, examining developments in sediment and water quality management in the Netherlands over the past 25 years (1993 – 2018), is relatively short. A critical focus during these past 25 years was the heritage of industrial pollution of waterways and sediments from the early twentieth century. During the past 25 years, changes took place in the way the risks of contaminants in sediment and water were evaluated, partly due to scientific progress on the ecotoxicological impact of contaminants. As important as the scientific progress were the policy and legislation changes.
These changes are driven by a broader spectrum of societal needs, like safety against flooding, scarcity of public funds and the need to change to a more circular economy using sediment as a resource. The goal of this thesis is to help water managers to understand the mechanisms
that change the ecotoxicological risks in their water and sediment systems, providing tools that go beyond the legislation requirements to assess these risks.","Sediment; water quality; metals; ecotoxicology; quality standard; legislation","en","doctoral thesis","","9789081013604","","","","","","","","","Geo-engineering","","",""
"uuid:2c3f8d00-ca74-4028-a02b-050922ba7aa3","http://resolver.tudelft.nl/uuid:2c3f8d00-ca74-4028-a02b-050922ba7aa3","Functionalized Hybrid Ceramic Membranes for Organic Solvent Nanofiltration","Amirilargani, M. (TU Delft OLD ChemE/Organic Materials and Interfaces)","Sudhölter, Ernst J. R. (promotor); de Smet, L.C.P.M. (copromotor); Delft University of Technology (degree granting institution)","2019","","","en","doctoral thesis","","978-94-6384-066-8","","","","","","","","","OLD ChemE/Organic Materials and Interfaces","","",""
"uuid:3d7bc400-2447-4a88-8768-3025d7b54b7f","http://resolver.tudelft.nl/uuid:3d7bc400-2447-4a88-8768-3025d7b54b7f","The impact of API evolution on API consumers and how this can be affected by API producers and language designers","Sawant, A.A. (TU Delft Software Engineering)","van Deursen, A. (promotor); Bacchelli, A. (promotor); Delft University of Technology (degree granting institution)","2019","The practice of software engineering involves the combination of existing software components with new functionality to create new software. This is where an Application Programming Interface (API) comes in: an API is a definition of a set of functionality that can be reused by a developer to incorporate certain functionality into their codebase. Using an API can be challenging; for example, adopting a new API and correctly using its functionality takes effort. One of the biggest issues with using an API is that the API can evolve, with new features being added or existing features being modified or removed. Dealing with this challenge has led to an entire line of research on API evolution.
In this thesis, we seek to understand to what extent API evolution, more specifically API deprecation, affects API consumers and how API consumers deal with the changing API. API producers can impact consumer behavior by adopting specific deprecation policies; to uncover the nature of this relationship, we investigate how and why the API producer deprecates the API and how this impacts the consumer. Deprecation is a language feature, i.e., one that language designers implement. Its implementation can vary across languages, and thus the information that is conveyed by the deprecation mechanism can vary as well. The specific design decisions taken by the language designers can have a direct impact on consumer behavior when it comes to dealing with deprecation. We investigate the language designer perspective on deprecation and the impact of the design of a deprecation mechanism on the consumer. In this thesis, we investigate the relationship between API consumers, API producers, and language designers to understand how each has a role to play in reducing the burden of dealing with API evolution.
Our findings show that out of the projects that are affected by deprecation of API elements, only a minority react to the deprecation of an API element. Furthermore, out of this minority, an even smaller proportion reacts by replacing the deprecated element with the recommended replacement. A larger proportion of the projects prefer to roll back the version of the API that they use so that they are not affected by deprecation; another faction of projects is more willing to replace the API containing the deprecated element with another API. API producers have a direct impact on this behavior, with the deprecation policy of the API having a direct impact on the consumer's decision to react to deprecation. If the API producer is more likely to clean up their code, i.e., remove the deprecated element, then the consumers are more likely to react to the deprecation of the element. This shows us that even for non-web-based APIs, the API producers can impact consumer behavior. We also observe that the nature and content of the deprecation message can have an impact on consumer behavior. Consumers prefer to know when a deprecated feature is going to go away, what its replacement is, and the reason behind the deprecation (informing them of the immediacy of reacting to the deprecation). The design of the deprecation mechanism needs to reflect these needs, as the deprecation mechanism is the only direct way in which API producers can communicate with the consumer.","API evolution; API deprecation; API mining; Language features","en","doctoral thesis","","978-94-6380-552-0","","","","","","","","","Software Engineering","","",""
"uuid:7faee5f5-21ed-4dba-b539-9427907fd55b","http://resolver.tudelft.nl/uuid:7faee5f5-21ed-4dba-b539-9427907fd55b","Strategies for the implementation of ion implantation doping technique in c-Si wafer-based solar cells","Limodio, G. (TU Delft Photovoltaic Materials and Devices)","Zeman, M. (promotor); Isabella, O. (copromotor); Delft University of Technology (degree granting institution)","2019","Electricity consumption has increased over the last twenty years. It is therefore crucial to find alternative, renewable and sustainable sources to sustain this development. Solar energy has proved to be a valid alternative for producing clean energy. The enormous development of solar energy technology over the last twenty years has led to a scenario in which solar energy is no longer a dream or an experimental demonstration of scientific principles, but is fully integrated in energy production. To make it competitive with other sources, prices are lowered by employing mass production and simpler technologies. One of the available options is to use crystalline silicon wafers as the absorber layer in combination with the ion implantation doping technique, a technology inherited from the integrated circuit industry. The aim of this thesis is therefore to explore ion implantation as a means to simplify the processing of c-Si solar cells. Indeed, the features of this doping technique make it possible to speed up the processing of the current industrial standard, the passivated emitter rear contact (PERC) cell. Moreover, the integration of so-called carrier-selective passivating contacts with ion implantation is of great interest for the future evolution of the c-Si solar cell industry.","c-Si wafer-based solar cells; ion implantation; carrier-selective passivating contacts; passivation","en","doctoral thesis","","","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:5d91d546-2b0b-401a-ac9a-de7dc60dd22d","http://resolver.tudelft.nl/uuid:5d91d546-2b0b-401a-ac9a-de7dc60dd22d","Light-induced Charge Carrier Dynamics in Metal Halide Perovskites","Guo, Dengyang (TU Delft ChemE/Opto-electronic Materials)","Savenije, T.J. (promotor); Houtepen, A.J. (promotor); Delft University of Technology (degree granting institution)","2019","Metal halide perovskites (MHPs) are interesting candidates for application in photovoltaic devices. During the past decade, the efficiency of perovskite solar cells has improved enormously, now exceeding 24%. In an MHP absorber layer, charge carriers are generated upon optical excitation, and decay by band-to-band recombination or by other undesired loss mechanisms. In this thesis, the relationship between material properties, including the constituents and morphology of the MHP layer, and these processes is studied. The charge carrier dynamics describing these light-induced processes are studied by the time-resolved microwave photoconductivity (TRMC) technique, in combination with standard optical techniques such as time-resolved photoluminescence (TRPL)…","","en","doctoral thesis","","978-94-028-1733-1","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:c4db06fd-30f6-419f-a05f-bf6fbb76a421","http://resolver.tudelft.nl/uuid:c4db06fd-30f6-419f-a05f-bf6fbb76a421","Utilizing dynamic context semantics in smart behavior of informing cyber‐physical systems","Li, Y. (TU Delft Cyber-Physical Systems)","Horvath, I. (promotor); Rusak, Z. (promotor); Delft University of Technology (degree granting institution)","2019","Context is interpreted as a body of information dynamically created by a pattern of entities and relationships over a history of situations. Computational handling of dynamically changing contexts, and the consideration of rapidly changing situations in awareness building, situated reasoning, and proactive adaptation of smart cyber-physical systems, has been recognized as an important research challenge. The main reason is that there are many real-life processes whose smart control and self-* behavior require quasi-real-time processing of context information. Though the processing of time-varying context information has been addressed in the literature, domain-independent solutions for reasoning about time-varying complex and critical activity scenarios are scarce. Thus, explicit generation and utilization of dynamic context semantics in the smart behavior of informing cyber-physical systems (I-CPSs) is a frontier endeavor.","Informing cyber‐physical systems; context information representation; dynamic context management; semantic inference; indoor fire evacuation application","en","doctoral thesis","","9789463840194","","","","","","","","","Cyber-Physical Systems","","",""
"uuid:f5694794-e299-4eb5-ab72-7dae3768b1fa","http://resolver.tudelft.nl/uuid:f5694794-e299-4eb5-ab72-7dae3768b1fa","Understanding the Impact of Human Interventions on the Hydrology of Nile Basin Headwaters, the Case of Upper Tekeze Catchments","Gebremicael, T.G. (TU Delft Water Resources)","van der Zaag, P. (promotor); Abbas Mohamedali, Y. (copromotor); Delft University of Technology (degree granting institution)","2019","The availability and distribution of water resources in catchments are influenced by various natural and anthropogenic factors. Human-induced environmental changes are key factors controlling the hydrological flows of semi-arid catchments. Land degradation, water scarcity and inefficient utilization of available water resources continue to be important constraints for socio-economic development in the headwater catchments of the Nile river basin, in particular the Ethiopian catchments. This research investigates the impact of anthropogenic landscape changes on the hydrological processes in the Upper Tekeze basin (a tributary of the Nile). The hydrology of the basin is investigated through analysis of hydro-climatic data, remote sensing techniques, new field measurements and parsimonious hydrological models.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-42508-1","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of IHE Delft Institute for Water Education.","","","","","Water Resources","","",""
"uuid:c3958573-4de3-4e41-b512-e7a383a14a5e","http://resolver.tudelft.nl/uuid:c3958573-4de3-4e41-b512-e7a383a14a5e","Dependable Network Topologies","Joshi, P.D. (TU Delft Computer Engineering)","Hamdioui, S. (promotor); Bertels, K.L.M. (promotor); Delft University of Technology (degree granting institution)","2019","Networks such as road networks, utility networks, computer and communication networks and even social networks are the backbone of human civilization. Network analysis enables quantitative measurement of the important criteria such
as delays, ease of routing and fault tolerance, and is required to build efficient and robust networks. Computer networks have evolved over the last five decades in parallel with technology, which has grown exponentially tracking ‘Moore’s Law’, which projected exponential performance growth in computing. Notably though, the supercomputers of today pushing exascale performance are doing so not primarily because of the improved performance of the microprocessors, but overwhelmingly due to the ability to network tens of millions of these microprocessors in systems. These systems depend very heavily on robust network topologies to achieve the exponentially growing performance seen over the last few decades. The network topologies in the world’s top performing supercomputers have evolved over the years with a focus on boosting performance by binding together an increasing number of processors in efficient networks. Popular topologies have included tori, hypercubes, fat trees and some combinations thereof. The biggest drawbacks of the rapidly increasing number of devices networked together are the increased message delays, the declining ability to withstand various faults, and security issues. Building the supercomputers of today comes at very high cost, and it is imperative that their utilization is maximized. This requires these high-performance systems to be highly dependable as well. This forms the motivation for the work in this thesis.","","en","doctoral thesis","","978-94-028-1709-6","","","","","","","","","Computer Engineering","","",""
"uuid:b46cc724-c6cf-4282-b3ba-3817e8bebf87","http://resolver.tudelft.nl/uuid:b46cc724-c6cf-4282-b3ba-3817e8bebf87","Synthetic biology meets liposome-based drug delivery","Soler Canton, A. (TU Delft BN/Christophe Danelon Lab)","Danelon, C.J.A. (promotor); Dogterom, A.M. (promotor); Delft University of Technology (degree granting institution)","2019","Synthetic biology is an emerging and rapidly expanding field of research focused on the assembly of novel biological systems with new functionalities tailored for different applications. Genetic circuits have been re-wired or constructed with elements from different organisms, and metabolic pathways have been engineered to endow cells with non-natural capabilities. One of the most exciting goals of synthetic biology is bringing solutions to biomedical challenges. Though this research area is still in its infancy, translational medicine is already witnessing the first steps towards the development of therapies based on synthetic biology.
Cell-free synthetic biology (or in vitro synthetic biology), a branch of synthetic biology, makes use of cell-free gene expression systems to create biological networks that operate outside the chassis of a living cell. More than a decade ago, the convergence of synthetic biology, cell-free gene expression systems and liposome technology gave rise to the creation of artificial vesicle bioreactors that can synthesize genetically-encoded molecules. Though the primary motivation of these studies was the assembly of a semi-synthetic cell, the application of such technologies for different therapeutic purposes has been envisioned. Examples of this are the creation of antigen-producing liposomes as novel vaccination systems, the development of bioreactor liposomes suited for remotely controlled in-situ mRNA or protein production, and the assembly of a PCR-based nanofactory for gene delivery.
Liposomes are the most successful drug delivery scaffolds ever developed, with more than fifteen liposome-based drugs approved. Liposomes have long been investigated in the field of gene therapy as delivery vehicles for nucleic acids to overcome the barriers encountered by these molecules in vivo. In particular, the field of RNA interference-based drugs is very promising, with the first marketed formulation in 2018, and several others on their way.
However, despite their long history of investigation, most of the structural and functional properties of liposome-based drug delivery systems are inferred from bulk measurements. Therefore, the level of heterogeneity within liposome preparations regarding, e.g., encapsulation efficiency and lipid composition remains largely unknown, which complicates the development of new formulations with improved therapeutic efficiency. Another important limitation is the observed membrane leakiness and premature drug release upon administration. Controlling the bio-distribution of a therapeutic drug is essential to minimize toxic side effects and enhance the efficiency of the treatment. To tackle this problem, the creation of targeted delivery systems with stimuli-responsive abilities can improve the biodistribution profile of a drug and allow its delivery on demand.
This work contributes to the convergence of the fields of synthetic biology, single-liposome biophysics and biomedicine, presenting different possible applications of vesicle bioreactors for the improvement of current RNA-based gene delivery systems.","synthetic biology; liposomes; gene therapy; RNA; therapeutic nanofactories; light activation; single vesicle analysis","en","doctoral thesis","","978-90-8593-418-9","","","","Casimir PhD series, Delft-Leiden 2019-35","","","","","BN/Christophe Danelon Lab","","",""
"uuid:4e2900cd-1fa1-4bce-b0f5-c99f23a13c6c","http://resolver.tudelft.nl/uuid:4e2900cd-1fa1-4bce-b0f5-c99f23a13c6c","An Institutional Approach to Peri-Urban Water Problems: Supporting community problem solving in the peri-urban Ganges Delta","Gomes, S.L. (TU Delft Policy Analysis)","Thissen, W.A.H. (promotor); Hermans, L.M. (copromotor); Delft University of Technology (degree granting institution)","2019","Water resources in the Ganges delta are undergoing drastic change as a result of urbanisation. Increasing demand for water due to urban expansion around cities like Kolkata (India) and Khulna (Bangladesh) is affecting groundwater access in nearby peri-urban communities. Peri-urban areas lie outside the formal purview of urban institutions, while their prevailing rural institutions are not equipped to deal with the changing urbanisation context. To support problem solving efforts by peri-urban communities, this thesis offers the ‘Approach for Participatory Institutional Analysis’ or APIA. This participatory and structured approach explores problems using an institutional lens. It offers communities insight into the underlying institutions or rules in their most pressing problems, the actors involved, and strategies to address them. This book outlines the need for such an approach, particularly in peri-urban areas, and tells the story of how the APIA helped peri-urban communities examine their groundwater-based drinking water problems. Case-study applications in Bangladesh and India provide insights into APIA’s potential as a capacity building tool and ways to further improve its design and use with stakeholders in different contexts.
The first experiment describes the creation and measurement of a 2x2 quantum dot array. Historically, most experiments with quantum dots have been performed with linear arrays due to the relative ease of fabrication. We introduce a bi-layer gate structure, facilitated by the lift-off of sputtered silicon nitride, to create the 2x2 dot array. This gate design enables us to achieve unprecedented tunability of the tunnel coupling between all nearest-neighbor pairs of dots in 2d arrays. We also demonstrate individual control over the chemical potential and the electron occupation of each dot along with accurate measurement of the on-site and inter-site interaction terms. The use of virtual gates significantly aids in the tuning of tunnel coupling and chemical potential. The demonstrated high degree of control of the system along with fast single-shot spin-readout achieved through Pauli spin blockade establish this dot array as a promising simulator of the Fermi-Hubbard model.
The 2x2 dot array is used to simulate Nagaoka ferromagnetism in the next experiment. This form of itinerant ferromagnetism arises from the Fermi-Hubbard model, and was first shown analytically in the limit of infinite interaction strengths and infinite lattices by Nagaoka in 1966. Nagaoka ferromagnetism has been a topic of rigorous theoretical studies ever since, but its experimental signature has eluded us for more than five decades. In this experiment, we load the four-dot plaquette with three electrons and demonstrate the emergence of spontaneous ferromagnetism by measuring the spin correlation of two out of the three electrons. Changing the topology of the array to an open chain is shown to destroy the ferromagnetic signature, consistent with the Lieb-Mattis theorem. We also show indications that this ferromagnetic ground state can be destroyed by applying a perpendicular magnetic field, unlike most other forms of ferromagnetism. However, this ground state shows striking robustness to the offset in the local potential of any dot. This is the first experimental verification of Nagaoka’s prediction as well as the first simulation of magnetism using quantum dot arrays.
The final experiment takes a different approach to simulate the Fermi-Hubbard model with a large 2d array of quantum dots. The dot array is created using only three gates in a top-down approach. This allows for only global control over the electron filling and tunnel coupling of the dots, contrary to the previous experiments. The readout is performed with capacitance spectroscopy, which allows us to directly probe the density of states of the two-dimensional electron systems. We measure the disorder levels and optimize both substrates and gating strategies to induce a periodic potential, sufficiently stronger than the disorder level, at the 2d electron gas. Although we demonstrate a novel platform for the realization of artificial lattices of interacting particles, this effort is currently limited by the substrate inhomogeneity.","Quantum simulation; Quantum dot array; Fermi-Hubbard Model; Nagaoka ferromagnetism","en","doctoral thesis","","978-90-8593-411-0","","","","Casimir PhD Series, Delft-Leiden 2019-29","","","","","QCD/Vandersypen Lab","","",""
"uuid:2a13956c-e2d3-4641-b79d-db8c62fb65b9","http://resolver.tudelft.nl/uuid:2a13956c-e2d3-4641-b79d-db8c62fb65b9","Towards intrinsically safe microstructures in resistant spot welded advanced and ultra high strength automotive steels","Eftekharimilani, P. (TU Delft (OLD) MSE-5)","Hermans, M.J.M. (promotor); Richardson, I.M. (promotor); Delft University of Technology (degree granting institution)","2019","The potential for weight reduction, ease of manufacturing and improved crashworthiness makes advanced and ultra high strength steels attractive for automotive applications. Resistance spot welding is by far the most widely used joining method in the automotive industry due to the high operating speeds, the reliability of the process and the suitability for automation. Safe microstructures in resistance spot welds in AHSS and UHSS have to be assured to promote acceptance of these steels in the automotive industry. However, the higher alloying contents of AHSS/UHSS steels limit their weldability, and unfavourable modes of weld failure are frequently observed. The main aim of this research is to identify and understand the unfavourable failure of the AHSS welds and to modify the microstructure and thus the mechanical response of the welds. In this PhD thesis, the results of alternative welding schedules to modify the microstructure and mechanical performance of the AHSS resistance spot welds are reported. The effects of a paint bake cycle on the microstructure of the welds have also been investigated and the predominant mechanisms involved were studied. The residual stress within these welds was measured and simulated to facilitate residual stress prediction before welding. Double pulse resistance spot welding with different second pulse current levels was applied to improve the microstructure of the weld edge. A second current pulse equal to the first pulse anneals the weld edge and modifies the weld edge microstructure.
Microstructural analysis was performed using optical microscopy, scanning electron microscopy, electron probe microanalysis (EPMA) and electron backscattered diffraction (EBSD). The double pulse weld showed a reduction in segregation of alloying elements such as phosphorus and a change in grain morphology from dendritic to a more equiaxed shape and a smaller grain size. The results obtained from the mechanical testing, i.e. the cross-tension strength test (CTS) and the tensile shear strength test (TSS), showed enhanced cross-tension strength and energy absorption capability for the double pulse welds.","Resistance spot welding; Advanced high strength steels (AHSS); Microstructure analysis; Steels; EPMA; EBSD analysis; Mechanical properties; TEM study; Phase transformations; residual stress; modelling; Automotive","en","doctoral thesis","","978-94-6380-515-5","","","","","","","","","(OLD) MSE-5","","",""
"uuid:35115149-ed67-4d73-98d1-ff81022cecae","http://resolver.tudelft.nl/uuid:35115149-ed67-4d73-98d1-ff81022cecae","Applications of Optical Birefringence: With Natural-Materials and Meta-Materials","Tang, Y. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); Adam, A.J.L. (copromotor); Delft University of Technology (degree granting institution)","2019","The beauty of optical birefringence lies in the fact that it provides independent control of light over different polarization directions, which leads to many important applications in today's optical systems. This thesis describes the applications of existing naturally-occurring birefringent materials as well as engineered form-birefringent meta-materials made via nano-fabrication. In particular, the applications of birefringent materials in optical trapping, waveguide engineering and imaging are discussed. The investigation methods include numerical simulation, nano-fabrication and experimental validation.","meta-materials; birefringence; waveguide; super-resolution; optical trapping; nanoparticles; imaging","en","doctoral thesis","","978-94-028-1736-2","","","","","","2020-09-30","","","ImPhys/Optics","","",""
"uuid:ff4e3db6-69e5-4b8f-be68-a94c2d84b8bb","http://resolver.tudelft.nl/uuid:ff4e3db6-69e5-4b8f-be68-a94c2d84b8bb","Planning and Operation of Automated Taxi Systems","Liang, X. (TU Delft Transport and Planning)","van Arem, B. (promotor); Correia, Gonçalo (promotor); Delft University of Technology (degree granting institution)","2019","In recent years, technology development has accelerated the future roll-out of vehicle automation. An automated vehicle (AV), also known as a driverless car or self-driving car, is an advanced type of vehicle that can drive itself on existing roads. A possible area of application for AVs is public transport. The concept of automated taxis (ATs) is supposed to offer a seamless door-to-door service within a city area for all passengers. With automation technology maturing, we may see a situation in which hundreds or even thousands of ATs are on the road, replacing the private vehicles that account for the majority of people’s daily trips. However, little attention has been devoted to the usage of a fleet of ATs and their effect on a real-scale road network.
In this thesis, we explore how automated driving can serve mobility and what is the best way to introduce this technology as part of the existing transport networks. This is also the research gap this thesis is going to fill. The objective of this thesis is to contribute to the planning and operational strategies that these AT systems should follow in order to satisfy mobility demand.
This thesis uses mathematical optimization to address the above research problems. A mathematical optimization problem consists of maximizing or minimizing a function by systematically selecting some input values within a defined domain. It aims to find the best available values of the objective function and the corresponding values of the problem input. The purpose of this thesis is to provide a tool to support the decision-making processes both for long-term planning strategies and short-term tactical operations when ATs are going to be applied in the urban transport system.
processes by reducing gas mobility. In fact, foam is the only EOR technology that is
able to fight against both gravity segregation and geological heterogeneity. Surfactant Alternating Gas, or SAG, is the preferred method to place foam into the reservoir for both operational and injectivity reasons. For example, this method of injection avoids the difficulties of having foam in the injection lines. Injecting foam in this manner also offers better injectivity than in foam-injection processes in which pregenerated foam is injected into the reservoir.","Foam; Surfactant-Alternating-Gas; Mobility Control; Injectivity; non-Newtonian; Enhanced-Oil-Recovery","en","doctoral thesis","","978-94-6384-069-9","","","","","","","","","Reservoir Engineering","","",""
"uuid:01110abf-6e9e-4518-abd3-c4e0daa13f6f","http://resolver.tudelft.nl/uuid:01110abf-6e9e-4518-abd3-c4e0daa13f6f","Challenges of end-user programmers: Reflections from two groups of end-users","Swidan, A.A.S. (TU Delft Software Engineering)","van Deursen, A. (promotor); Hermans, F.F.J. (copromotor); Delft University of Technology (degree granting institution)","2019","The goal of this dissertation is to explore, understand, and mitigate when possible, the challenges that end-users face when creating their software programs. To gain a wider perspective of the challenges, we investigated two groups of end-users: spreadsheet users and school-age children learning to program.","End-User Programming; Spreadsheets; Programming Education","en","doctoral thesis","","","","","","","","","","","Software Engineering","","",""
"uuid:99d6611c-ad5a-4eaa-b915-f6834fddd1f5","http://resolver.tudelft.nl/uuid:99d6611c-ad5a-4eaa-b915-f6834fddd1f5","Switch Panel Structural Performance (SP2)","Hiensch, E.J.M. (TU Delft Railway Engineering)","Dollevoet, R.P.B.J. (promotor); Steenbergen, M.J.M.M. (copromotor); Delft University of Technology (degree granting institution)","2019","","switch panel; wear; rolling contact fatigue (RCF); rail; damage model; track friendliness","en","doctoral thesis","","","","","","","","","","","Railway Engineering","","",""
"uuid:5580c747-d8a1-464c-8db8-e16eb2a499f7","http://resolver.tudelft.nl/uuid:5580c747-d8a1-464c-8db8-e16eb2a499f7","Numerical Modelling Of Pile Installation Using Material Point Method","Thi Viet Phuong, N. (TU Delft Geo-engineering)","van Tol, A.F. (promotor); Brinkgreve, R.B.J. (copromotor); Delft University of Technology (degree granting institution)","2019","Structures and buildings built on soft soil require deep foundations often consisting of piles. Through the piles, loads are transferred to deeper soil layers which are capable to mobilise enough bearing capacity for the superstructure. During installation of a displacement pile, the soil around the pile gets distorted which leads to a change of stress, density and soil properties in the distorted zone. The quantification of change in soil properties, soil state and the influenced zone around the pile during installation are yet a remaining uncertainty in geotechnical engineering. This thesis examines the mechanisms that govern pile installation and subsequent loading by numerical analysis. The study focuses on jacked and impact hammer installation techniques in dry and fully saturated sand...","Pile installation; material point method; large deformation; grain crushing","en","doctoral thesis","","978-94-6384-065-1","","","","","","","","","Geo-engineering","","",""
"uuid:af104fd3-ddc5-49b2-8e17-88bcffa52323","http://resolver.tudelft.nl/uuid:af104fd3-ddc5-49b2-8e17-88bcffa52323","Optimisation of dynamic heterogeneous rainfall sensor networks in the context of citizen observatories","Chacon Hurtado, J.C. (TU Delft Water Resources; TU Delft Sanitary Engineering)","Solomatine, D.P. (promotor); Alfonso, Leonardo (promotor); Delft University of Technology (degree granting institution)","2019","Precipitation drives the dynamics of flows and storages in water systems, making its monitoring essential for water management. Conventionally, precipitation is monitored using in-situ and remote sensors. In-situ sensors are arranged in networks, which are usually sparse, providing continuous observations for long periods at fixed points in space, and due to the high costs of such networks, they are often sub-optimal. To increase the efficiency of the monitoring networks, we explore the use of sensors that can relocate as rainfall events develop (dynamic sensors), as well as increasing the number of sensors involving volunteers (citizens). This research focusses on the development of an approach for merging heterogeneous observations in non-stationary precipitation fields, exploring the interactions between different definitions of optimality for the design of sensor networks, as well as development of algorithms for the optimal scheduling of dynamic sensors. This study was carried out in three different case studies, including Bacchiglione River (Italy), Don River (U.K.) and Brue Catchment (U.K.) 
The results of this study indicate that optimal use of dynamic sensors may be useful for monitoring precipitation to support water management and flow forecasting.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-41706-2","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of IHE Delft Institute for Water Education.","","","","","Water Resources","","",""
"uuid:80f98acd-dc72-4aa8-bec6-ce72a26c2c65","http://resolver.tudelft.nl/uuid:80f98acd-dc72-4aa8-bec6-ce72a26c2c65","Successful transfer of algebraic skills from mathematics into physics in senior pre-university education","Turşucu, S. (TU Delft Science Education and Communication)","de Vries, M.J. (promotor); Spandaw, J.G. (copromotor); Delft University of Technology (degree granting institution)","2019","Science teachers have the experience that students in secondary and higher education face difficulties with applying mathematics into science class, leaving less time for their core business of teaching science. This may be frustrating and time-consuming, overshadowing the science content that needs to be taught. In addition, in a large number of countries, science curricula are overloaded, compelling science teachers to fit their program into a seriously reduced instruction time, making inefficient transfer of mathematics in science even more harmful. In this study, we examined students’ transfer of algebraic skills from mathematics into physics in senior pre-university education. We aimed at improving this transfer to solve algebraic physics problems.","mathematics and science education; qualitative and quantitative research; symbol sense and transfer of learning","en","doctoral thesis","","978-94-6380-493-6","","","","","","","","","Science Education and Communication","","",""
"uuid:7642be04-1902-4ff2-8ff7-8b6ef2c574e5","http://resolver.tudelft.nl/uuid:7642be04-1902-4ff2-8ff7-8b6ef2c574e5","Mechanistic Insight into Next Generation Batteries: The Story of Li-oxygen and Zn-aqueous Batteries","Li, Z. (TU Delft RST/Storage of Electrochemical Energy)","Wagemaker, M. (promotor); Ganapathy, S. (copromotor); Delft University of Technology (degree granting institution)","2019","Current Li-ion batteries dominate the market but face great challenges with respect to safety, cost and the higher energy and power densit requirements of electrical vehicles and stationary energy storage. Relevant for mobile electrical transport, Li-O2 batteries in theory offer the highest specific energy among all the lithium electrochemical energy storage systems. Research efforts have been made to address the challenges that impede the functioning of this battery, which include low round trip efficiency, low specific capacity and poor cycling stability. To understand the origin of these issues, attaining a deeper understanding of the mechanism behind the electrochemical reactions is of vital importance. This also forms the foundation for exploring ideal oxygen cathodes and better electrolytes. The aqueous zinc batteries are another potential candidate for large scale electric energy storage owing to its low-cost, high operational safety, and environmental benignity. However, it is not easy to find a suitable insertion cathode for ZIBs, because the electrostatic interaction between divalent Zn ions. The development of host materials for ZIBs is still in its infancy, and in-depth understanding of the electrochemical processes involved is paramount at this early stage. 
The focus of this thesis is on attaining mechanistic insight into the electrochemical processes occurring in working Li-O2 and aqueous zinc batteries, which guides the choice/design of proper electrode materials for these next generation battery systems.","Li-O2 battery; aqueous Zn/VO2 battery; electrochemical reaction mechanism; operando analysis","en","doctoral thesis","","978-94-6380-501-8","","","","","","","","","RST/Storage of Electrochemical Energy","","",""
"uuid:193a4016-5fc7-401b-babe-722ff6a95a6c","http://resolver.tudelft.nl/uuid:193a4016-5fc7-401b-babe-722ff6a95a6c","Experimental Study and Numerical Simulation of the Reaction Process and Microstructure Formation of Alkali-Activated Materials","Zuo, Y. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Ye, G. (promotor); Delft University of Technology (degree granting institution)","2019","Alkali-activated materials (AAMs) show promising potentials for use as building materials. Before the utilization, AAMs must satisfy the properties required from the construction sector, such as good durability and long-term service life. These properties mainly depend on the chemical and physical properties of the microstructure of AAMs. In the literature many studies have been presented about the reaction process and microstructure formation of AAMs, but still some aspects are not given due attention, such as the pore solution composition of AAMs and the origin of the induction period during the reaction of AAMs. Furthermore no computer-based simulation models have been developed so far for simulating the reaction process and microstructure formation of AAMs. It is still a big issue and challenge today to numerically obtain the microstructure of AAMs.This research adopted two routes to study the reaction process and microstructure formation of AAMs, i.e. the experimental study route and numerical simulation route. Ground granulated blast furnace slag and fly ash were used as the aluminosilicate precursors and sodium hydroxide and sodium silicate were used as the alkaline activators. The experimental study route provided new results for better understanding the reaction process and microstructure formation of AAMs. 
Then the new insights from the experimental study route helped to develop the GeoMicro3D model for simulating the reaction process and microstructure formation of AAMs. (1) Experimental study route: Firstly, the pore solutions of alkali-activated slag, alkali-activated fly ash and alkali-activated slag/fly ash pastes with different activators and reaction conditions were studied by means of ICP-OES. It was found that the pore solution composition of AAMs depended on the activation conditions, such as the type and concentration of alkaline activator and the curing temperature. Then, the reaction kinetics of alkali-activated slag, alkali-activated fly ash and alkali-activated slag/fly ash pastes were investigated by using isothermal calorimetry. The sodium content, silica content and curing temperature affected the reaction kinetics of AAMs. The origin of the induction period was different for alkali-activated slag systems and alkali-activated fly ash systems. For alkali-activated slag systems, the presence of soluble Si in the activator slowed down the dissolution of slag and caused an induction period. In contrast, the induction period in alkali-activated fly ash systems was mainly attributed to the passivation of the leached surface layer caused by the absorbed Al. Finally, the microstructure development of alkali-activated slag, alkali-activated fly ash and alkali-activated slag/fly ash pastes was studied using SEM and MIP. It was found that the type and concentration of alkaline activator affected the microstructure formation of AAMs. An increase of Na2O content led to a reduction of the total porosity and a refinement of the microstructure for alkali-activated slag systems. In contrast, an increase of Na2O content did not affect the total porosity of alkali-activated fly ash systems very much. Instead, it altered the microstructure by increasing the amount of large pores and decreasing the amount of small pores.
The SiO2 content strongly affected the microstructure formation of AAMs. In sodium silicate activated systems, the soluble silicate in the activator led to a relatively dense microstructure with separated small capillary pores. This was different from the relatively coarse microstructure with connected capillary pores in sodium hydroxide activated systems. (2) Numerical simulation route: Firstly, the initial particle parking structure of AAMs, as the starting point for simulating the reactions and microstructure formation, was simulated using real-shape particles of slag and fly ash. In comparison with spherical particles, using real-shape particles increased the total surface area (up to 23 %) and the bulk specific surface area (at least 12 %) of the simulated initial particle parking structures. At low liquid/binder ratios (≤ 0.47), using real-shape particles led to a significant shift of the pore size distribution towards small pores as compared to using spherical particles in the simulated initial particle parking structures. Secondly, a dissolution model was developed for simulating the dissolution of aluminosilicate precursors in alkaline solution. The influences of temperature, reactivity of the precursors, alkalinity of the solution and the inhibiting effect of aqueous Al on the dissolution of precursors were taken into account in this model. The simulation results of the dissolution of slag and fly ash in alkaline solution were in agreement with the experimental data. Then, the reactions in AAMs were thermodynamically simulated. A thermodynamic model, i.e. N-(C-)A-S-H_ss, was developed to describe the N-A-S-H gel. With this model and the C-(N-)A-S-H_ss model in the literature it is possible to perform thermodynamic modelling of the reactions in alkali-activated fly ash systems and alkali-activated slag/fly ash systems.
The simulated pore solution compositions of AAMs were in line with the experimental results. Finally, the GeoMicro3D model was built up from the numerical modules for the initial particle parking structure, dissolution of aluminosilicate precursors, thermodynamic modelling and nucleation & growth of reaction products. As a case study, GeoMicro3D was implemented to simulate the reaction process and microstructure formation of alkali-activated slag with three different alkaline activators. The simulated reaction kinetics (degree of reaction of slag) and pore solution chemistry (element concentrations) were in agreement with the experimental results. The simulated volume evolution of solid phases by GeoMicro3D was consistent with the results calculated by GEMS with regard to the primary reaction products (C-(N-)A-S-H) and some crystalline reaction products, such as the hydrotalcite-like phase and katoite. Besides the volume evolution of solid phases, GeoMicro3D also provided the volumes of adsorbed water and gel pore water that were retained by the C-(N-)A-S-H gel. To sum up, the reaction process and microstructure formation of AAMs were studied experimentally and numerically. Based on the insights obtained from the experimental study, numerical models were developed and validated. With these models it is possible to simulate the initial particle parking structure of slag/fly ash in alkaline activator, the dissolution of slag/fly ash, the chemical reactions and the microstructure formation of AAMs with reasonable accuracy.","alkali-activated materials; slag; fly ash; pore solution composition; reaction kinetics; heat release; microstructure formation; numerical simulation; initial particle parking structure; spherical harmonics; dissolution; lattice Boltzmann method; thermodynamic modelling; GeoMicro3D; nucleation and growth","en","doctoral thesis","Delft University of Technology","978-94-6384-062-0","","","","","","","","","Materials and Environment","","",""
"uuid:c97f643c-002f-4dc9-a243-2df4f9a784b1","http://resolver.tudelft.nl/uuid:c97f643c-002f-4dc9-a243-2df4f9a784b1","Low Noble Metal Content Catalysts for Hydrogen Fuel Technology","Westsson, E.E. (TU Delft RST/Storage of Electrochemical Energy; TU Delft ChemE/Advanced Soft Matter; TU Delft ChemE/Chemical Engineering)","Koper, G.J.M. (promotor); Picken, S.J. (promotor); Delft University of Technology (degree granting institution)","2019","The increasing energy demand of the world population in combination with tangible climate change effects stemming from rising carbon dioxide emissions is currently characterizing a large portion of the political and societal debate. Despite huge technological advancement in the field of renewable energy resulting in energy prices lower than that of fossil based energy, the rate of greenhouse gas emissions has not even levelled off but rather kept increasing. A part of the problem lies in the very nature of season and weather dependent energy conversion technologies producing electricity peaks that are hard to buffer. The solar and wind powered scenario is not yet able to completely replace the relatively demand flexible fossil fuel based power plants. The gap between energy production and energy use, in essence meaning storage and distribution of sustainable energy, constitutes one of the largest challenges of our times. Hydrogen has been proposed as a molecule with the potential of being an important energy carrier in a renewable energy based economy. In a fuel cell, hydrogen can be electrochemically oxidized to water, releasing its chemical energy without the emission of combustion by-products like carbon dioxide. Commonly platinum is used as a catalyst to speed up the anode and cathode reactions in a fuel cell. Reversibly, an electrolyser uses electricity to electrochemically split water into its constituents; hydrogen and oxygen. 
Ideally, hydrogen could be produced where and when the electricity is available or cheap and be stored or transported to the location where it is needed, although technical challenges as well as infrastructural hurdles are still to be solved. If electrochemical devices, such as fuel cells, are to play a major role in the future energy landscape a better understanding of catalytic processes along with cheap and scalable non-noble metal catalysts are still needed.","Fuel cell; Catalysis; Core-shell nanoparticles","en","doctoral thesis","","978-94-028-1696-9","","","","","","","","ChemE/Chemical Engineering","ChemE/Advanced Soft Matter","","",""
"uuid:99b15acb-e25e-4cd9-8541-1e4056c1baed","http://resolver.tudelft.nl/uuid:99b15acb-e25e-4cd9-8541-1e4056c1baed","Vortex Generators for Flow Separation Control: Wind Turbine Applications","Baldacchino, D. (TU Delft Wind Energy)","van Bussel, G.J.W. (promotor); Ferreira, Carlos (promotor); Delft University of Technology (degree granting institution)","2019","Vortex generators have become a ubiquitous sight on the modern wind turbine blade. These small, passive devices can increase the energy extraction potential of a rotor, but their subtle footprint disguises the technical difficulties associated with designing and integrating them onto wind turbine blades. The complexity of rotor inflow and the blade-bound flow present specific challenges for the design of vortex generators. Flow three dimensionality effects along the blades have conventionally been factored into design tools using correction factors for two-dimensional airfoil performance characteristics. However, the introduction of local perturbations in the form of streamwise vortices adds an additional layer of complexity. Indeed, the interaction of the vortex generator and flow three-dimensionality is ill-understood, and thus, so are its design implications. Furthermore, the passive nature of vortex generators means that a lot of variables influence their performance, making design optimisation a costly process. This thesis aims to improve the physical understanding of vortex generator physics in the context of wind energy applications, paving the way for more effective engineering tools. The objective is tackled by reviewing the state of the art, benchmarking existing tools and experiments, defining, measuring and simulating relevant test cases, and developing a new design tool. A measurement campaign is conducted in a boundary layer wind tunnel using non-intrusive PIV measurements for assessing the details and dynamics of streamwise vortices. 
A second measurement campaign maps the performance of the DU-97-W300 airfoil section with vortex generators in a conventional closed-loop wind tunnel. Inviscid vortex theory is employed for modelling vortex dynamics. Xfoil features throughout, both as a design tool and as the basis of an improved airfoil design tool incorporating vortex generators.","flow control; vortex generator; asymmetric vortices; streamwise vortices; stall dynamics; integral boundary layer","en","doctoral thesis","","978-94-6384-056-9","","","","","","","","","Wind Energy","","",""
"uuid:4470eb1d-c71a-4de1-b11e-36d93a77ad78","http://resolver.tudelft.nl/uuid:4470eb1d-c71a-4de1-b11e-36d93a77ad78","Towards understanding and supporting complex decision-making by using game concepts: A case study of the Dutch railway sector","Bekius, F.A. (TU Delft Organisation & Governance)","de Bruijn, J.A. (promotor); Meijer, S.A. (promotor); Delft University of Technology (degree granting institution)","2019","Decision-making on large infrastructural projects is complex. The complexity arises from the interdependencies within and between technical and social systems. The dynamic environment in which the decision-making takes place adds to this complexity. In this dissertation, we use game concepts to understand complex decision-making processes, and to support these processes. Game concepts describe decision-making situations as a game in which decision makers make (strategic) choices that lead to outcomes. They are particularly useful for addressing the agency of actors, i.e., their responsibility, and the dynamics of the process. Different decision-making processes of the Dutch railway sector are described and interpreted by using game concepts. The game concepts explain a large part of the process and analyses reveal a classification of game concepts over time, decision levels, and interactions. Additionally, this thesis has shown that game concepts are valuable for decision makers themselves to gain insight in the actor complexity of the process and to formulate next steps. To model these decision-making situations, one of the game concepts, the Multi-Issue game, has been formalized.","","en","doctoral thesis","","978-94-028-1661-7","","","","","","","","","Organisation & Governance","","",""
"uuid:1947fa3e-2e3c-4007-bc4f-12cb93f34be6","http://resolver.tudelft.nl/uuid:1947fa3e-2e3c-4007-bc4f-12cb93f34be6","Image-based representations for efficient rendering and editing","Scandolo, L. (TU Delft Computer Graphics and Visualisation)","Eisemann, E. (promotor); Delft University of Technology (degree granting institution)","2019","Over the years, technological improvement has led to the ability to acquire, create and store vast amounts of geometrical and appearance data to use in graphical applications. Nevertheless, efficiently creating images when using such large amounts of data remains an ongoing topic of research, given the computational resources required. This dissertation will focus on a particular kind of algorithms in order to tackle this problem: image-based representations.
Entities represented in a virtual 3-dimensional world can be projected into a regular 2-dimensional grid to form an image. This results in a much more compact, albeit incomplete, representation of the original information, since an image is a piece-wise constant discretization of the underlying data. Nevertheless, computing properties of the 3-dimensional data via its 2-dimensional projection is much more efficient. Still, it is important to understand when and how to use these types of representations, since the discretization of the original data can lead to artifacts in the computed results.
This work will focus on three key aspects of computer-graphics algorithms where using appropriate image-based representations can result in increased performance: memory requirements, computational efficiency, and interactivity. To do so, it will describe image-based solutions to different relevant problems in the field of computer graphics. Firstly, it will explore how using 2-dimensional intermediate representations can reduce memory requirements for storing large static shadow information in comparison to state-of-the-art voxel representations. Secondly, a hierarchical image-based approach for efficiently computing diffraction patterns will be demonstrated, which can outperform FFT-based solutions. Lastly, an efficient interactive optimization method for editing disparity when creating stereographic images will be described.
These algorithms will be described and evaluated in detail, in order to give the reader insight into the usage of image-based representations.","graphics; rendering; compression; diffraction; stereo","en","doctoral thesis","","","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:88322666-9cf1-4120-bbd6-4a93438bca74","http://resolver.tudelft.nl/uuid:88322666-9cf1-4120-bbd6-4a93438bca74","Cultura: Achieving intercultural empathy through contextual user research in design","Hao, C. (TU Delft Design Conceptualization and Communication)","Stappers, P.J. (promotor); van Boeijen, A.G.C. (copromotor); Delft University of Technology (degree granting institution)","2019","In today’s globalizing world an increasing number of companies design products and services for overseas markets and users. Designers face the challenge of creating solutions that fulfil users’ needs, and this becomes more important as the cultural distance between them continues to grow. Unsuccessful international endeavours which resulted in market disasters have already been noted. For example, the Italian fashion company Dolce & Gabana recently made an advertisement that featured a Chinese model struggling to eat ‘the great traditional Margherita Pizza‘, by using ‘this kind of small stick-shaped tableware’- chopsticks, and many Chinese customers took cultural offense at being depicted in the caricature (Ng, Lam & Jane, 2018). As can be imagined, if designers do not carefully consider the local cultural context for which they hope to create designs, their solutions are likely to be mismatched, or perhaps even harmful to their users. To avoid such situations, rich stories about everyday experiences, shared by users, are valuable resources to help designers to develop empathic understanding of users. Contextual user research, using generative techniques, has been demonstrated to be an effective way of collecting insightful user stories and communicating them to designers in order to create meaningful solutions, and has become a recognizable part of design practice. However, most of the reported work has been with users and designers with a shared a cultural background, so that empathic understanding can be built on a tacit shared cultural basis. 
When conducting contextual studies with a cross-cultural dimension, we found the problem to be twofold: first, these tools and techniques, when employed in user research, sometimes failed to facilitate social interactions or bring out expressions of users, due to mismatches with cultural inclinations. Second, designers found it difficult to empathise with the individual aspects of user insights (such as quotes and anecdotes) from a culture they had little experience with. This thesis focuses on conducting contextual user research in a cross-cultural setting. It investigates ways of supporting users in telling rich and relevant stories, and designers in building empathic understanding (the design goal of this thesis). By investigating the issues mentioned above, a framework will be proposed and various tools and techniques to support users and designers will be created. A new and rewarding process for conducting intercultural contextual user research, called Cultura, will be developed at the end of the research.","Design tools and techniques; Cultura; Contextual user research; Intercultural empathy","en","doctoral thesis","","9789402816600","","","","","","","","","Design Conceptualization and Communication","","",""
"uuid:bb07b47f-9e2c-448a-a235-9f29baed2d5d","http://resolver.tudelft.nl/uuid:bb07b47f-9e2c-448a-a235-9f29baed2d5d","Unravelling Mode and Route Choice Behaviour of Active Mode Users","Ton, D. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Cats, O. (promotor); Duives, D.C. (copromotor); Delft University of Technology (degree granting institution)","2019","Due to increasing urbanisation rates worldwide combined with growing transportation demand, liveability of the urban environment is under pressure (UN, 2018). In response, many governments worldwide have set goals for increasing the share of trips made using sustainable modes of transport, such as walking and cycling. The use of active modes (i.e. walking and cycling) provides health benefits for individuals due to increased activity levels, and on a network level these modes (standalone or in combination with public transport) can potentially reduce traffic jams and the associated externalities (including air and noise pollution) when substituting the car. To achieve the desired increase in active mode shares, targeted policies need to be implemented. This requires a better understanding of who currently uses these modes, who could be persuaded to switch to active modes, and which determinants are driving active mode choice.
This intended change towards active modes requires an adequate representation of walking and cycling in the transportation planning models in order to assess the effect of active mode policies on modal shares and distribution over the network. However, this is often not the case. Moreover, integration of active modes in these models occurs very slowly. Walking and cycling are often missing in transportation planning models, treated as a ‘rest’ category, or combined into slow/active modes, all of which result in incorrect estimates of the active mode shares, making it impossible to correctly identify the impact of potential policy measures on active mode shares. Examples of these policy measures are introduction of new infrastructure or changes to existing infrastructure, which impact route choice and distribution over the network, and reimbursement of using the bicycle to go to work, which impacts the mode choice of individuals.
Investigating mode and route choice of active mode users increases the knowledge on active mode choice behaviour. By bridging this gap, the transportation planning models can potentially be improved. The objective of this thesis is ‘to understand and model mode and route choice behaviour of active mode users’. We identify six topics that are imperative to travel choices. First, we investigate the daily mobility patterns of individuals in relation to attitudes towards modes, because attitudes are considered to influence travel behaviour (Chapter 2). Afterwards, we zoom in on individual trips. We aim to understand which determinants drive the choice to walk or cycle (Chapter 3). In this topic we define the mode choice set as all feasible modes per individual and trip. However, not all feasible modes are used by individuals. Therefore, the third topic focuses on modes used over a long period of time, which we coin the experienced choice set. We investigate which determinants are relevant for including or excluding modes in this choice set (Chapter 4). Regarding cyclists’ route choice, we investigate the determinants influencing this choice (Chapter 5). This research is based on the experienced choice set. Accordingly, we compare this method to frequently used choice set generation methods to identify the added value of the experienced choice set (Chapter 6). Finally, we perform a literature review on how mode and route choice can be modelled simultaneously (Chapter 7).","active modes; route choice determinants; mode choice determinants; discrete choice analysis; choice set formation; experienced choice set","en","doctoral thesis","TRAIL Research School","978-90-5584-254-4","","","","TRAIL Thesis Series no. T2019/12, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:b4288708-cc36-40de-966a-534b7090a7fc","http://resolver.tudelft.nl/uuid:b4288708-cc36-40de-966a-534b7090a7fc","Light trapping in Si thin film solar cell","Ahmadpanahi, S.H. (TU Delft Photovoltaic Materials and Devices)","Isabella, O. (promotor); Zeman, M. (copromotor); Delft University of Technology (degree granting institution)","2019","This thesis is structured in six distinct chapters. In Chapter 1 a general introduction is given to address the main challenge in thin-film silicon solar cells and to motivate the need for light trapping. This chapter also describes the main focus of this thesis and the urge to understand the light behaviour inside a periodic waveguide thin film. This is followed by Chapter 2, which provides the mathematical background and the frame work which has been used throughout the thesis. This chapter presents some practical details and calculation techniques which have been used to obtain our result. In Chapter 3, a semi-analytical approach is introduced to calculate the contribution of guided and nonguided resonances to total absorption of a grating waveguide structure under normal incidence. In this approach, we use Fourier expansion to calculate the energy spectral density of the electric field inside the absorber. In this way, the weight of each resonance in total absorption is defined for a large wavelength range for TM and TE polarization. Additionally, the proposed mathematical model is supported by numerical and rigorous calculations, using a software based on the finite element method. This approach is extended for oblique incidence in Chapter 4. In this chapter it is explained howthe variation of tangential and normal components for TM electric field under oblique incidence influences the accuracy of numerical calculation. The correlation between the density of modes and the absorption peaks due to guided mode excitation is also presented in this chapter. 
Chapter 5 focuses on calculating the maximum absorption enhancement achieved by each type of resonance in a waveguide structure with symmetric and asymmetric gratings. In this chapter a different approach is introduced to count the number of resonances in a grating waveguide structure at each frequency. Then, temporal coupled-mode theory is used to calculate the maximum absorption enhancement for each diffraction order. This approach is extended to a thin film with double-sided texturing. Chapter 6 provides the conclusions of the thesis.","Light trapping; Diffraction and gratings; Thin film; Solar Cells; Absorption enhancement","en","doctoral thesis","","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:ccf28881-efca-4301-8397-acf32d609737","http://resolver.tudelft.nl/uuid:ccf28881-efca-4301-8397-acf32d609737","Regimes of Urban Transformation in Tehran: The Politics of Planning Urban Development in the 20th Century Iran","Mashayekhi, A. (TU Delft Spatial Planning and Strategy)","Zonneveld, W.A.M. (promotor); Stead, D. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis contributes to a detailed understanding of the urbanisation of Tehran, and offers a new perspective on its complexities and specificities. This perspective builds on the work of urban scholars who have critically questioned the Eurocentric understanding of cities and have moved beyond an artificial hierarchy of cities that pushes for ‘backward/underdeveloped’ cities to become like ‘advanced/developed’ cities, even if that is inappropriate to their specific material and cultural conditions. The thesis considers the urbanisation of Tehran over the course of the whole 20th century and argues that Tehran’s urban change and development is a multi-scalar process which, despite its particularities and differences, has been connected to wider global development processes. In doing so, this research examines the historic trajectory of Tehran’s urbanisation and development through the lens of international development discourse (such as state-led industrialisation or long-term economic development planning and privatisation) and shows how the interconnection between Iranian city-making practices and international development ideas has shaped Tehran’s urban spaces and social structure. Throughout the last century, many international perspectives on Iran diagnosed the country and its capital city as being ‘backward’, ‘Third World’, and ‘undeveloped’. 
Thus, the pressure to ‘catch up’ with developed and economically powerful nations has been crucial in the ways in which Iranian government regimes, political elites, experts, and citizens have dealt with the ‘problem of underdevelopment’. In this thesis, the discourse of development refers not only to how development is defined or described, but also to how it is measured and practiced. Since the beginning of the 20th century, shifts in the global political economy have caused new discourses, institutions, and actors of development to emerge – bringing important implications for the formulation of national development policies and urban planning practices in cities across the Global South, including Tehran. Therefore, this study seeks to reveal how the interplay between the global discourse on development and Iran’s development policies has resulted in particular urban plans and development projects for Tehran, whose outcomes have had long-lasting effects on trajectories of urbanisation and urban transformation in Iran. This research provides a historic perspective which first interrogates the relationship between urbanisation and development discourses and problematizes their perceived positive relationship in studies of cities in the Global South. Then a series of theoretical debates is presented on the history of urbanisation in non-Western cities, and on shifts in international development discourse and their implications for the formulation of national development policies and urban planning practices in cities across the Global South. These theoretical discussions have offered conceptual lenses and an analytical framework that have helped to create a multi-scalar approach which frames the history of Tehran’s urbanisation as an intertwined local and global process. It is particularly through the application of this analytical framework that this thesis provides a novel understanding of Tehran’s urban development and the role of urban planning and design. 
The multi-scalar analysis of Tehran’s urbanisation is divided into three major periods over the last century. The beginning and end of each of these chronological periods is defined by crucial socio-political changes in Iran, each of which was motivated by the ambition to develop a modern and independent Iran that resists Western hegemony. Moreover, each period has been structured around the key shifts in international development discourses, as well as key national development policies, urban plans and projects for Tehran, their political and economic purposes, and the experts and institutions involved in making them. The empirical analysis of each period unpacks the differing agendas for the construction of nation-statehood and traces the conflicts and alliances between state and non-state actors and agencies in the process of negotiating and implementing national development policies and urban plans for Tehran. Finally, the role of local and Western urban planners and experts, and the ideas and principles that guided their work, are examined through tracing the institutionalisation of expertise and the plan-making processes. By revealing the pathway of Tehran’s urbanisation and its specific historical trajectory, this study finds that the development of Tehran as a capital city during the 20th century has been one of the key mechanisms with which the Iranian state has constructed itself, and fortified its role as a builder and engineer of new ‘modern’ urban spaces. In each of the different periods, extensive powers to organise the territory were in the hands of the state, which can be seen as a way of maintaining and extending its authority and legitimacy.
The case of Tehran’s urbanisation uncovers a mutual relationship between national development plans and urban change. The historical study of a series of key national development plans shows how Iranian ruling elites and chosen experts responded to dominant international development discourses and attempted to nurture a locally interpreted version of the ‘developed’ city. These efforts had direct implications for planning procedures and city-making practices. As such, the case of Tehran deepens knowledge about the role of state- and nation-building processes in shaping urban planning practices and the urbanisation of southern cities, and also offers a counter-narrative to the common views in urban studies which suggest that large cities are bypassing their nation-states in driving economic growth and becoming strategic actors in the global economy.
Ultimately, by interrogating state power in producing Tehran’s urbanism, we highlight the importance of and need for more research on the role of state (formal) and non-state (informal) actors in shaping Tehran’s urban development trajectory and the politics of city-making practices. This is particularly pertinent to the careful investigation of the role of revolutionary charitable foundations in planning and development, as these foundations cannot be defined simply as public or private sector. In more general terms, it is important to further research the role of religious-political groups (as non-state actors) or any other developmental organisation with ideological orientations in shaping urban spaces and spatial practices of Middle Eastern cities. In fact, no planning reform will be possible without considering the crucial role these ideological groups and organisations play in the socio-economic development of these cities.","","en","doctoral thesis","","","","","","","","2021-08-27","","","Spatial Planning and Strategy","","",""
"uuid:48b7649c-5062-4c97-bba7-970fc92d7bbf","http://resolver.tudelft.nl/uuid:48b7649c-5062-4c97-bba7-970fc92d7bbf","Planning Support Tools in Urban Adaptation Practice","McEvoy, S. (TU Delft Policy Analysis)","Slinger, J (promotor); van de Ven, F.H.M. (promotor); Delft University of Technology (degree granting institution)","2019","","","en","doctoral thesis","","978-94-632-3736-9","","","","","","","","","Policy Analysis","","",""
"uuid:ba02810b-e380-4c88-a4ed-d6bd2598ab2f","http://resolver.tudelft.nl/uuid:ba02810b-e380-4c88-a4ed-d6bd2598ab2f","Computation-in-Memory based on Memristive Devices","Du Nguyen, H.A. (TU Delft Computer Engineering)","Hamdioui, S. (promotor); Taouil, M. (copromotor); Delft University of Technology (degree granting institution)","2019","In-memory computing is a promising computing paradigm due to its capability to alleviate the memory bottleneck. It has even higher potential when implemented using memristive devices or memristors with various beneficial characteristics such as nonvolatility, high scalability, near-zero standby power consumption, high density, and CMOS compatibility. Exploring in-memory computing architectures in combination with memristor technology is still in its infancy. Therefore, it faces challenges with respect to the development of devices, circuits, architectures, compilers and applications.
This thesis focuses on exploring and developing in-memory computing in terms of architectures (including classification, limited schemes of instruction set, micro-architecture, communication and controller, as well as automation and simulator), and circuits (including logic synthesis flow and interconnect network schemes).","Computer architecture; resistive computing; Computation-in-Memory","en","doctoral thesis","","978-94-6384-060-6","","","","","","","","","Computer Engineering","","",""
"uuid:81ce7e8c-2965-42a0-bac6-b8ba6f6faee3","http://resolver.tudelft.nl/uuid:81ce7e8c-2965-42a0-bac6-b8ba6f6faee3","Cavity optomagnonics: Manipulating magnetism by light","Sharma, S. (TU Delft QN/Bauer Group; TU Delft QN/Theoretical Physics)","Bauer, G.E. (promotor); Blanter, Y.M. (promotor); Delft University of Technology (degree granting institution)","2019","We discuss theoretically the coupling of magnetization and infrared photons in whispering gallery mode cavities.","","en","doctoral thesis","","978-90-8593-413-4","","","","Casimir PhD Series, Delft-Leiden 2019-30","","","","","QN/Bauer Group","","",""
"uuid:5e700fc1-7620-4ab0-9b72-859e2db7926b","http://resolver.tudelft.nl/uuid:5e700fc1-7620-4ab0-9b72-859e2db7926b","Vessel Route Choice Model and Operational Model based on Optimal Control","Shu, Y. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Ligteringen, H. (promotor); Daamen, W. (promotor); Delft University of Technology (degree granting institution)","2019","Modeling is a promising approach to understand and predict the safety and efficiency of maritime traffic in ports and waterways. Different types of models have been developed over the years. Nevertheless, several important scientific challenges still remain. For instance, few models consider vessel behavior in ports and waterways under the influence of internal factors, including vessel type and size, and external factors, such as wind and visibility. More data and research are needed to understand the influence of internal and external factors on vessel behavior, including speed, course and path, in ports and waterways; more research is also needed to explore the human behavior of the bridge team for vessel maneuvering in ports and waterways. To address these needs, this thesis focuses on analyzing the influence of wind, visibility, current and vessel encounters on vessel speed, course and path using Automatic Identification System (AIS) data. Based on this analysis, a new maritime traffic model has been developed that considers both internal and external factors and aims to better predict individual vessel behavior. The model can be used to provide data for the safety and efficiency assessment of vessel traffic in ports and inland waterways. In recent decades, the AIS system, an onboard autonomous and continuous broadcast system that transmits vessel data between nearby vessels and shore stations, has been developed; it is now used by almost all vessels. 
Therefore, AIS data, including vessel speed, course and path, can serve as a valuable data source to investigate vessel behavior. In this thesis, AIS data from a part of the port of Rotterdam is analyzed to investigate the influences of different factors, such as vessel size and type, external conditions and vessel encounters, on vessel behavior. First, vessels are classified into influenced and unhindered vessels based on certain thresholds obtained from the AIS data. The influenced vessel behavior is compared with the behavior of unhindered vessels, which are not influenced by other vessels or by strong external influences of wind, visibility and current. The analysis provides evidence that vessel behavior, including vessel speed, course and path, is influenced by various factors. Ship speed and path are influenced by internal factors (including vessel type, size, waterway geometry and navigation direction) and external factors (including wind, visibility, current, and overtaking and head-on encounters), while ship course is only influenced by overtaking and head-on encounters. It can also be concluded that AIS data is a useful source for gaining insight into vessel behavior.","","en","doctoral thesis","TRAIL Research School","978-90-5584-253-7","","","","TRAIL Thesis Series no. T2019/11, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:d25fd4fd-02d7-4811-b675-615badbb3c05","http://resolver.tudelft.nl/uuid:d25fd4fd-02d7-4811-b675-615badbb3c05","Designing context-aware architectures for business-to-government information sharing","van Engelenburg, S.H. (TU Delft Information and Communication Technology)","Janssen, M.F.W.H.A. (promotor); Klievink, A.J. (copromotor); Delft University of Technology (degree granting institution)","2019","New developments in Information and Communication Technology (ICT), such as big data, the Internet of Things (IoT), and blockchain technology provide opportunities for businesses and government organisations to benefit from business-to-government (B2G) information sharing. For example, big data analytics might provide government organisations with knowledge on how to assess risks using the information they receive from businesses. However, B2G information sharing can entail risks as well. Sensitive data could fall into the wrong hands, allowing a competitor of the business to obtain it. In addition, B2G information sharing could be unlawful...","Information sharing architecture; E-government; International container shipping; Blockchain; Design method; Context; Context-aware systems; Business-to-government information sharing","en","doctoral thesis","","978-94-6380-462-2","","","","","","","","","Information and Communication Technology","","",""
"uuid:389f453e-f7ff-4fea-a353-32755cf9a9e1","http://resolver.tudelft.nl/uuid:389f453e-f7ff-4fea-a353-32755cf9a9e1","Abstraction as a Tool to Bridge the Reality Gap in Evolutionary Robotics","Scheper, K.Y.W. (TU Delft Control & Simulation)","de Croon, G.C.H.E. (promotor); Mulder, Max (promotor); Delft University of Technology (degree granting institution)","2019","Automatically optimizing robotic behavior to solve complex tasks has been one of the main, long-standing goals of Evolutionary Robotics (ER). When successful, this approach will likely fundamentally change the rate of development and deployment of robots in everyday life. Performing this optimization on real robots can be risky and time-consuming. As a result, much of the work in ER is done using simulations, which can operate many times faster than real time. The only downside of this is that, due to the limited fidelity of the simulated environment, the optimized robotic behavior is typically different when transferred to a robot in the real world. This difference is referred to as the reality gap...","Evolutionary Robotics; Reality Gap; Abstraction; Robust behavior; MAV","en","doctoral thesis","","978-94-6366-197-3","","","","","","","","","Control & Simulation","","",""
"uuid:62197dda-cb0e-402f-afa6-bc1a7d69ad3d","http://resolver.tudelft.nl/uuid:62197dda-cb0e-402f-afa6-bc1a7d69ad3d","Critical behavior of Ising spin systems: Phase transition, metastability and ergodicity","Fukushima Kimura, B.H. (TU Delft Applied Probability)","Ruszel, W.M. (promotor); Ruszel, Wioletta (copromotor); Delft University of Technology (degree granting institution)","2019","Physical phenomena commonly observed in nature, such as phase transitions, critical phenomena and metastability, when studied from a mathematical point of view may give rise to a rich variety of behavior whose study becomes interesting in itself. In Chapter 1 we illustrate the phase transition phenomenon at low temperatures for one-dimensional long range Ising models with inhomogeneous external fields. More precisely, we consider Ising spins arranged on the one-dimensional integer lattice, where such spins interact via ferromagnetic pairwise interactions whose strength is inversely proportional to their distance to the power α; furthermore, the system is put under the influence of an external magnetic field that vanishes with polynomial power δ as the distance between the spin and the origin increases. In that case we show that a phase transition manifests itself in the form of the existence of two distinct infinite-volume Gibbs states, obtained by means of the thermodynamic limit considering “plus” and “minus” boundary conditions respectively, whenever the temperature is low and an inequality involving α and δ holds. The proof of this result is done by means of a Peierls contour argument adapted to one-dimensional long range Ising models, first introduced by J. Fröhlich and T. Spencer in 1982 and later modified by M. Cassandro, P.A. Ferrari, I. Merola and E. Presutti in 2005. 
Our results improve on those obtained by the latter authors, since we managed to avoid the assumption of large nearest-neighbor interactions and added the influence of an external field, showing an interplay between the constants α and δ that guarantees the manifestation of the phase transition.","Gibbs measures; Long range Ising model; Metastability; Probabilistic cellular automata","en","doctoral thesis","","978-94-6380-487-5","","","","","","","","","Applied Probability","","",""
"uuid:bcd8f7e7-55f5-43d1-a90f-4679603dcbe5","http://resolver.tudelft.nl/uuid:bcd8f7e7-55f5-43d1-a90f-4679603dcbe5","Role of the organic cation in hybrid halide perovskites: A computational study","Maheswari, S. (TU Delft ChemE/Opto-electronic Materials)","Siebbeles, L.D.A. (promotor); Grozema, F.C. (promotor); Delft University of Technology (degree granting institution)","2019","Hybrid halide perovskites are currently among the most studied optoelectronic materials. They have been successfully employed as the active material in solar cells. Despite the achieved success of these materials, the properties of these hybrid frameworks of an inorganic lattice that includes organic cations are not fully understood. This is because of the multiple complex processes that are operative in these materials, which are very hard to unravel on the basis of experiments alone. Therefore, computational studies of these materials are important to gain insight into the material structure, the electronic structure and the processes dictated by these properties. An additional advantage of computational studies is that properties can be predicted without actually making the materials in the lab. Such computational studies thus give insight into the functioning of hybrid perovskite materials and give direction to their further development. Of particular interest in this thesis is the role of the organic cation. In some earlier studies it has been pointed out that the role and presence of the organic cations is just limited to stabilizing the structure of hybrid perovskites without influencing the electronic energy states. In this thesis we examine the role of the organic cation in detail, demonstrating that the organic cation has a distinct effect on the electronic structure of hybrid halide perovskites.","","en","doctoral thesis","","978-94-6332-547-9","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:6d7dc81b-f374-4600-a902-58026bb19708","http://resolver.tudelft.nl/uuid:6d7dc81b-f374-4600-a902-58026bb19708","Into darkness: From high density quenching to near-infrared scintillators","Wolszczak, W.W. (TU Delft RST/Luminescence Materials)","Dorenbos, P. (promotor); Delft University of Technology (degree granting institution)","2019","In many aspects measurement of the α/β ratio has advantages over other methods. It provides higher precision and higher density of excitation than is available with Compton or photoelectric effect electrons. It has been shown that the α/β ratio follows the same trends and patterns as previously found for the nonproportionality of the electron/gamma photon response. The α/β ratio also correlates with the intrinsic energy resolution measured with 10 keV gamma photons. Materials with a high α/β ratio have high intrinsic energy resolution at high density of excitation. The same trend is observed for 662 keV gamma photons, with the exception of alkali halides and ZnSe:Te. We have found that alkali halides have low intensity of quenching and perform better than LaBr3:Ce and LaCl3:Ce at high density excitation (with α particles or 10 keV electrons). The superiority of LaBr3:Ce and LaCl3:Ce over alkali halides probably comes not from high resistance to high density quenching, but from the lack of a low density quenching which is responsible for the ""hump"" in an electron/gamma nonproportionality curve. We can conclude that halide-based scintillators are the most promising for discovering new highly proportional materials.","scintillator; α/β ratio; digital signal processing; pulse shape discrimination; alpha particle; non-radiative energy transfer; near-infrared scintillator","en","doctoral thesis","","978-94-6332-533-2","","","","","","","","","RST/Luminescence Materials","","",""
"uuid:e73259c9-a0df-44f4-b24d-0a14b049197c","http://resolver.tudelft.nl/uuid:e73259c9-a0df-44f4-b24d-0a14b049197c","Urban informality shaped by labor: Addressing the spatial logics of favelas","Chagas Cavalcanti, A.R. (TU Delft Space & Type)","van Gameren, D.E. (promotor); Rocco, Roberto (copromotor); Delft University of Technology (degree granting institution)","2019","This doctoral thesis mainly consists of a series of journal publications written by the author between 2015 and 2019. The doctoral thesis presents the results of ten years of research on informal settlements, with particular reference to Brazilian favelas. The research aimed to understand the social dynamics of the production of space in these settlements. To this purpose, the author took residence in favelas and performed field research for a total of six years, including the witnessing of a resettlement process from a favela to a formal social housing development in the city of Maceió, in Brazil. The social dynamics that produce and influence the space of the favelas, observed in the field, were systematically codified in a new pedagogic tool by the author. As main findings from the analysis, it emerged that labor primarily shapes, plans and governs space in informal settlements. Working activities explain the emergence of these settlements, influence the dynamics of space inside the domain of the house, and influence the shape of streets up to the margins of the favelas, but also have influence on city and global scales. From the residents’ perspective, labor both represents a means to earn their subsistence and livelihoods, and underscores their inner self-esteem as human beings. Working practices originally present in the favelas were in fact restored, with their original domestic function, in the social housing development to which citizens were relocated. 
According to this thesis, the labor practices of inhabitants of informal settlements must be addressed when designing housing solutions for deprived citizens fighting for their survival, and must be considered a housing right. The reasons why current housing approaches do not contemplate work are understood in context and interpreted according to their historic and economic backgrounds. An architectural and planning approach to housing aimed at restoring the combination of working and domestic functions of human beings is proposed instead.","work; labor; informal settlements; slums; architecture; planning; livelihood; housing","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-199-7","","","","A+BE | Architecture and the Built Environment No 9 (2019)","","","","","Space & Type","","",""
"uuid:11a7a961-9f25-46f5-bf07-dc5765afcd59","http://resolver.tudelft.nl/uuid:11a7a961-9f25-46f5-bf07-dc5765afcd59","Electron beam technology for single nanometer fabrication","Scotuzzi, M. (TU Delft ImPhys/Charged Particle Optics)","Hagen, C.W. (promotor); Kruit, P. (promotor); Delft University of Technology (degree granting institution)","2019","To follow Moore’s law and the trend of devices to keep shrinking, the nanotechnology industry is challenged to find a suitable technique for the mass production of integrated circuits with critical dimensions in the sub-10 nm range. At the time this project started, EUV lithography (EUVL), which employs light with a wavelength of 13.5 nm and thus requires the scanner to be kept under vacuum, was encountering difficulties in making its way into high volume manufacturing. Therefore different technologies were being explored as alternatives to EUVL. The European project Single Nanometer Manufacturing for beyond CMOS devices (SNM) [1] aims at developing a manufacturing platform that routinely provides sub-10 nm resolution. To achieve that, fabrication processes based on Nano Imprint Lithography (NIL) are investigated. NIL is a low cost, high resolution and high throughput patterning technique, suitable for the mass production of devices. In NIL, a UV-transparent stamp is pressed on top of a substrate covered with polymer, called the NIL resist. The features on the stamp are imprinted in this polymer, which hardens under exposure to UV light. The features in the polymer are then transferred into the underlying substrate using etching processes. To fabricate these stamps, we propose to use electron beam induced deposition (EBID)...","","en","doctoral thesis","","978-94-6380-454-7","","","","","","","","","ImPhys/Charged Particle Optics","","",""
"uuid:7f6985bb-3383-48f6-bcd3-f2758f35e3c2","http://resolver.tudelft.nl/uuid:7f6985bb-3383-48f6-bcd3-f2758f35e3c2","Genome analysis and engineering of industrial lager brewing yeasts","Gorter de Vries, A.R. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Daran, J.G. (promotor); Delft University of Technology (degree granting institution)","2019","Lager beer, also referred to as Pilsner, is the most popular alcoholic beverage in the world, with an annual consumption of almost 200 billion litres. To make lager beer, brewer’s wort is fermented with the yeast Saccharomyces pastorianus. This microorganism converts wort sugars into ethanol and contributes key flavour compounds to the beer. S. pastorianus is an interspecific hybrid which likely formed about 500 years ago by spontaneous mating between an ale-brewing S. cerevisiae strain and a wild S. eubayanus contaminant.
The genome of lager brewing yeast is exceptionally complex: not only does it contain chromosomes from the two parental species, but these have also undergone extensive recombination and are present in varying copy numbers, a situation referred to as aneuploidy. The S. eubayanus ancestor was only discovered in 2011, enabling an improved understanding of the complex genome and convoluted evolutionary ancestry of S. pastorianus. Furthermore, recent advances in whole-genome sequencing technology and in gene editing tools have simplified the genetic accessibility and amenability of Saccharomyces yeast genomes. The aim of this thesis was to leverage these advances to investigate how the genetic complexity of current S. pastorianus strains emerged and how it contributes to industrial lager brewing performance, and to develop new methods for strain improvement of brewing yeasts.","Yeast; Biotechnology; Brewing; Fermentation; Genetics; Evolution; Molecular biology; CRISPR-Cas; Neofunctionalisation; Saccharomyces eubayanus × Saccharomyces cerevisiae hybrids; Saccharomyces cerevisiae; Saccharomyces pastorianus; Nanopore sequencing","en","doctoral thesis","","978-94-6380-407-3","","","","","","2019-09-06","","","BT/Industriele Microbiologie","","",""
"uuid:7b17a968-1414-4b84-bbf3-9a0c1197e1fd","http://resolver.tudelft.nl/uuid:7b17a968-1414-4b84-bbf3-9a0c1197e1fd","Intelligent control systems: Learning, interpreting, verification","Lin, Q. (TU Delft Cyber Security)","Verwer, S.E. (copromotor); van den Berg, Jan (promotor); Delft University of Technology (degree granting institution)","2019","Automatic control is a technique for designing control devices that control machinery processes without human intervention. However, devising controllers using conventional control theory requires first-principles design on the basis of a full understanding of the environment and the plant, which is infeasible for complex control tasks such as driving in highly uncertain traffic environments. Intelligent control offers new opportunities for deriving the control policy of human beings by mimicking our control behaviors from demonstrations. In this thesis, we focus on intelligent control techniques from two aspects: (1) how to learn a control policy from supervisors with the available demonstration data; (2) how to verify that the controller learned from data will safely control the process.","intelligent control; hybrid automata learning; safety verification","en","doctoral thesis","","","","","","","","","","","Cyber Security","","",""
"uuid:2c876b61-a850-4ae1-b47d-38a60a576006","http://resolver.tudelft.nl/uuid:2c876b61-a850-4ae1-b47d-38a60a576006","Molecular Modeling of Supramolecular Structures","Piskorz, T.K. (TU Delft ChemE/Advanced Soft Matter)","van Esch, J.H. (promotor); de Vries, A.H. (copromotor); Delft University of Technology (degree granting institution)","2019","In this thesis, we explore the application of various molecular simulations techniques to give insights into the self-assembly of supramolecular systems.
Chapter 1 explains the importance of molecular simulation for studying the self-assembly process. In Chapter 2 we study the self-assembly of a derivative of 1,3,5-triamidocyclohexane (CTA) using common techniques: simulations of self-assembly from randomly distributed molecules and simulations of the final structure. The results show the importance of the choice of force field and the limitations of conventional molecular dynamics in giving insight into processes which occur on a long timescale. In Chapter 3 we tackle the timescale issue by using an adaptive sampling method. The results provide a unique insight into the kinetic pathways of the self-assembly process. Moreover, we were able to provide insights into the next stages of the self-assembly. Although the method provides insight at a level of detail hardly accessible by any other technique, it is limited to rather small systems. In Chapter 4 we study the self-assembly of long functionalized alkanes on a graphite flake. We use coarse-grained molecular dynamics to tackle both temporal and spatial scales instead of the high resolution of the all-atomistic model. These results give insights into the mechanism of self-assembly of monolayers on graphite. Chapter 5 is an extension of Chapter 4. Here, we study the last stage of the self-assembly process, Ostwald ripening, responsible for correction of the structure, which leads to high-quality long-range ordered assemblies.
The results presented in this thesis have two major outcomes: (a) a methodology to simulate self-assembly, and (b) insights into the self-assembly process.","","en","doctoral thesis","","978-94-028-1665-5","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:9148465a-0855-4cb1-a5da-de8d23dabe81","http://resolver.tudelft.nl/uuid:9148465a-0855-4cb1-a5da-de8d23dabe81","Rank-based optimization techniques for estimation problems in optics","Doelman, R. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2019","Aberrations in optical systems, such as telescopes and microscopes, degrade the quality of the images that can be produced by these systems. For example, an object that is positioned out of focus produces a blurred image on a camera sensor and the turbulent air in the earth’s atmosphere reduces the imaging performance of telescopes. In this thesis we only consider wavefront aberrations. AO can be used to compensate for these wavefront aberrations. The working principle of AO is to quantify by measuring or estimation the wavefront aberration and to dynamically adjust wavefront modulating devices, such as Deformable Mirrors (DMs), to counteract the aberration and thereby improving the optical performance. The estimation of the wavefront aberration based on images of a point source is called phase retrieval, which is a highly nonlinear estimation problem. The success of the estimation usually depends on the (type of) algorithm, the available information on the aberration that is incorporated in the estimate, and the degree to which the model of the optical system corresponds to reality. In this thesis we propose a convex optimization-based method for phase retrieval. The method allows for easy inclusion of many types of prior information on the aberration. Furthermore, we develop an efficient implementation of the optimization. The robustness of the approach against measurement noise is investigated and compared with several other state of the art algorithms. Experimental validation shows the algorithmis well able to estimate aberrations in real-life circumstances. 
A new type of prior information is introduced to estimate dynamic wavefront aberrations. In literature and in practice, the optical path is split between either a wavefront sensor and a camera, or between multiple cameras in order to reliable estimate an aberration. The inherent problem is that between the sensor and cameras the aberration can differ (Non-Common Path (NCP) errors), and a wrong estimate is used in the compensation by the AO system. We propose a method to estimate the aberration from measurements by a single camera, by assuming that the aberration evolves according to (non-specific) model, i.e. the dynamics are contained in a model-set. At the same time that we estimate the aberration, we also identify the dynamics according to which the aberration evolves over time. The estimation of the wavefront aberration based on images of an unknown object is called blind deconvolution if both the aberration and object are estimated. Like phase retrieval, this too is a highly nonlinear estimation problem. We propose the first convexoptimization based estimation method for blind deconvolution problems that estimate aberration and object when the images are acquired using coherent illumination. The method allows for the inclusion of many existing types of prior information on the object and/or aberration. Finally, we analyze controllers for segmented mirrors in large ground-based telescopes. These mirrors consist of many interconnected hexagonal segments. This distributed nature of the system warrants the investigation into whether the controller that keeps the segments aligned can be designed in such a way that it can be distributed over the segments as well, essentially resulting in a distributed controller where local controllers communicate with each other. 
What complicates the analysis is that the dynamics across segments are not necessarily decoupled: the wind load can be correlated and the flexibility in the supporting structure of the segments can cause dynamic coupling. We investigate the design of a distributed controller that incorporates these global dynamics. Furthermore, we investigate the performance of the distributed controller and howit relates to the communication and interconnection pattern of the local controllers.","Optimization; Phase retrieval; Identification; Blind deconvolution; Distributed control; Primary mirror control","en","doctoral thesis","","","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:f2470c09-e8f0-4c1e-b4e4-effc17ecd373","http://resolver.tudelft.nl/uuid:f2470c09-e8f0-4c1e-b4e4-effc17ecd373","Experimental-numerical material characterization of adobe masonry: Tests and simulations on various types of earthen bricks and mortar in statics and dynamics","li Piani, T. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Weerheijm, J. (copromotor); Delft University of Technology (degree granting institution)","2019","Research in this thesis is aimed at comprehensively characterizing the mechanical performance of adobe components. Adobe is a traditional masonry made of sundried bricks and mortar. Bricks are made of soil mixed with fibres and joined together by mud mortar. Adobe is largely spread in areas of the world prone to seismic risk or involved in military conflicts. Its low environmental impact attracts scientific attention also for sustainable applications in current building industry. Unfortunately, the material and structural properties of adobe are still hardly assessed, as a result of centuries of progressive abandonment of this building technology in western countries after introduction of modern building materials in the market. In this doctoral research, a combined experimental and numerical approach was followed. It has been aimed at fulfilling experimental data and knowledge gaps in the study of the main properties of this material. Experimental tests have been performed on bricks and mortar characterized by different mineralogical compositions, fibre percentages and moisture content. Mechanical tests consisted of bending and compression tests. Tests in compression have been performed at different rates of deformation from statics to high velocity impact. Data derived from tests have constituted a solid dataset aimed at interpreting and modelling the mechanical performance of adobe. Experimental trends resulted in physical theories concerning the main features of the quasi brittle response of adobe. 
In particular, the role of fibres and water content in the mixture on the mechanical response of adobe bricks and mortar has been addressed in this study in the static and dynamic regimes of the spectrum of strain rate induced loadings. The main mechanical parameters in compression and tension for adobe have been statistically determined from the static and dynamic tests. Mechanical properties and physical theories have been framed in several models that interpret the response of adobe for different applications. Constitutive models have been derived to address the uniaxial response in compression at different strain rates of adobes of different mineralogical composition and water contents. A finite element damage model has been developed to simulate the main failure modes specifically observed in earthen bricks at different loading conditions and rates, including high velocity impacts. The numerical study has been devoted at ensuring objectivity of analysis to the results of simulations performed using different mesh refinements of the geometrical model of the tested brick. Furthermore, engineering ballistic models that address the response of adobe walls to small caliber penetrations have been developed in this doctoral research. This thesis contains the description of the performed experiments, the analysis of data, the theoretical interpretations and the models developed for the material characterization of adobe masonry.","","en","doctoral thesis","","978-94-6323-789-5","","","","","","","","","Applied Mechanics","","",""
"uuid:ed0bee26-e6f7-45c0-a56d-1b3fdfbac1e2","http://resolver.tudelft.nl/uuid:ed0bee26-e6f7-45c0-a56d-1b3fdfbac1e2","Catalytic methane conversion with single-site porous catalysts: a computational approach","Szécsényi, A. (TU Delft ChemE/Catalysis Engineering)","Pidko, E.A. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2019","In this thesis we presented comprehensive studies on methane activation by binuclear Fe-oxo sites located in porous MOF and zeolite frameworks. Many of such studies have been previously performed, however the main focus was usually on the C-H bond activation of methane. Here we focused on the whole reaction mechanism including the activation of the Fe site and the overoxidation of methanol, as well as the effects of the porous framework.","methane conversion; zeolite; DFT; methane to methanol","en","doctoral thesis","","978-94-028-1630-3","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:b0382c6a-52af-42a5-b5bf-91368fd9c284","http://resolver.tudelft.nl/uuid:b0382c6a-52af-42a5-b5bf-91368fd9c284","Managing startle and surprise in the cockpit","Landman, H.M. (TU Delft Control & Simulation)","van Paassen, M.M. (promotor); Groen, Eric L. (promotor); Delft University of Technology (degree granting institution)","2019","After several recent flight safety events, such as the accident of Air France flight 447 in 2009, investigators determined that surprise and startle can severely disrupt pilot responses. They concluded that pilots need to be better prepared for unexpected and potentially startling situations. In response, aviation safety authorities have recommended and mandated that startle and surprise should receive more attention in pilot training. However, there is insufficient scientific data available on pilots’ behavior in startling and surprising situations, and on how they can best be trained for these situations. This thesis addresses this problem, by studying startle and surprise in pilots, and by investigating which training interventions can strengthen the pilots’ response to unexpected situations...","Aviation; Mental models; Performance; Pilots; Resilience; Simulation; Stress; Training; Upset recovery","en","doctoral thesis","","978-94-6182-963-4","","","","","","","","","Control & Simulation","","",""
"uuid:ef966464-6b76-434d-b147-81ec247b023c","http://resolver.tudelft.nl/uuid:ef966464-6b76-434d-b147-81ec247b023c","Towards optimum swirl recovery for propeller propulsion systems","Li, Q. (TU Delft Flight Performance and Propulsion)","Veldhuis, L.L.M. (promotor); Eitelberg, G. (promotor); Delft University of Technology (degree granting institution)","2019","In a propeller propulsion system, due to the torque working on the propeller, a rotational motion of the fluid is generated. This rotational motion, expressed as a swirl component in the slipstream, does not result in any useful propulsive power, but causes a decrease in propeller efficiency. By recovering the momentum in the crosswise direction with other aerodynamic components located in the slipstream, either extra thrust can be produced or the overall drag of the aircraft can be reduced with the same power input from the propeller. This dissertation provides aerodynamic design and investigation of swirl recovery for both uninstalled and installed propeller propulsion systems. Swirl recovery vanes (SRVs) are a set of stationary vanes located behind a propeller, by which the angular momentum contained in the propeller slipstream can be recovered and thereby extra thrust can be generated. In this thesis, a design framework of SRVs is developed based on a lifting line model. The design method features a fast turnaround time, which makes it suitable for system level design and parameter studies. As a test example, a set of SRVs was designed for an uninstalled six-bladed propeller at a high propeller loading condition. A parametric study was performed of the SRV performance as a function of the blade count and radius. In order to validate the design routine, an experiment was performed with a propeller and the SRVs in a low-speed open-jet wind tunnel. The thrust generated by the SRVs was measured at different propeller loading conditions. 
The experimental results show that the SRVs provided thrust at all the measured propeller advance ratios. Since the SRVs did not require any extra power input, the propulsive efficiency of the system (propeller + SRVs) has improved accordingly for all the loading conditions considered. For an installed tractor-propeller propulsion system, both the downstream wing and the SRV have the ability of recovering the swirl of propeller slipstream. In the first case of swirl recovery from the trailing wing, reduction of wing induced drag can be achieved. In order to determine the optimum wing shape for maximum drag reduction, a multi-fidelity optimization procedure is developed, where the low-fidelity method corresponds to the potential flow-based method, and the high-fidelity method is based on an analysis by solving Euler equations. As a test case, the twist distribution of the wing is optimized at the cruise condition of a typical turboprop aircraft. Compared to the baseline wing (untwisted), the induced drag of the optimized wing has decreased by 1.4% of the propeller thrust. In the second case of swirl recovery from the SRV, extra thrust can be generated by the vanes. Four different cases of SRVs installation positions are investigated (with assumption of inviscid flow) with different axial and azimuthal positions relative to the wing. An optimum configuration is identified where SRVs are positioned on the blade-downgoing side downstream of the wing. For the identified optimum configuration, a set of SRVs was designed taking the effect Summary II of viscosity into account. The SRV design is subsequently validated by RANS simulation. Good agreement is observed in the lift, circulation, and thrust distributions of the SRV between the lifting line prediction and the RANS result. A thrust of 1.6% of propeller thrust from SRVs was validated by the RANS simulation. 
Comparing the two ways of swirl recovery, further investigation has shown that for the installed propeller propulsion system, due to the different aerodynamic consequences of the two (drag reduction of the wing compared with thrust enhancement from the SRV), they can be algebraically added up.","propeller; swirl recovery vane; propeller integration","en","doctoral thesis","","978-94-6323-805-2","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:14048e52-00ad-49e8-9964-aa14e33673fd","http://resolver.tudelft.nl/uuid:14048e52-00ad-49e8-9964-aa14e33673fd","Intelligent rail maintenance decision support system using KPIs","Jamshidi, A. (TU Delft Railway Engineering)","Dollevoet, R.P.B.J. (promotor); Li, Z. (promotor); Nunez, Alfredo (copromotor); Delft University of Technology (degree granting institution)","2019","Key Performance Indicators (KPIs) enable the infrastructure manager to keep the performance quality of the infrastructure at an acceptable level. A KPI must include specific features of the infrastructure such as functionality and criticality. The KPIs can be classified into three performance levels: (1) technical level KPIs; (2) tactical level KPIs and (3) global level KPIs. For instance, some KPIs are related to individual rail components (technical level) and some correspond to a bigger picture of the rail including multiple components (tactical level). The global level also gives an overview indication of the full-length rail based on what the infrastructure manager requires. Hence, to use every KPI correctly, the infrastructure manager should be aware of the proper KPIs level.
In this dissertation, an intelligent rail maintenance decision support system using KPIs is proposed. The thesis is composed of three parts: design of KPIs, rail degradation model and condition-based maintenance decision system.","Key Performance Indicators; Rail Infrastructure; Rail Surface Defects; Axle Box Acceleration; Maintenance Decision Support System","en","doctoral thesis","","978-94-6384-059-0","","","","","","","","","Railway Engineering","","",""
"uuid:4f4e5174-92f4-47ab-a173-4e6e2bedd005","http://resolver.tudelft.nl/uuid:4f4e5174-92f4-47ab-a173-4e6e2bedd005","Impact damage repair decision-making for composite structures: Predicting impact damage on composite aircraft using aluminium data","Dhanisetty, V.S.V. (TU Delft Air Transport & Operations)","Curran, R. (promotor); Verhagen, W.J.C. (promotor); Delft University of Technology (degree granting institution)","2019","There is a growth in the use of composites for the new generation of wide-body aircraft such as the Boeing 787 and Airbus A350. This shift from using aluminium as the primary material is motivated by the benefits of using composites in design, manufacturing and operations. Composites offer the aircraft manufacturer the ability to create more complex shapes and optimise the design such that it is light-weight. This, in tandem with other design improvements, leads to lower fuel burn. Consequently, airlines see the advantage of these new aircraft to reduce their operational cost. Therefore, as airlines continue to renew their ageing fleets of aluminium aircraft, there is going to be an increased need for composite maintenance. However, fulfilling the increased demand for composite repairs is impeded by limited availability of historical damage data, due to the young operational age of these aircraft. Composites are particularly sensitive to impact damage, and understanding the likelihood and the consequence of this type of damage is valuable for maintenance processes such as repair decision-making. The purpose of this dissertation is to predict the risk of impact damage for future composite aircraft and use it to substantiate maintenance decision-making in an operational setting...","Multi-criteria decision-making (MCDM); Aircraft maintenance; Impact damage; Composite; Aluminium; Impact risk","en","doctoral thesis","","978-94-028-1620-4","","","","","","","","","Air Transport & Operations","","",""
"uuid:380d0ac3-b3c9-4156-a2c1-a2e78b79b9e7","http://resolver.tudelft.nl/uuid:380d0ac3-b3c9-4156-a2c1-a2e78b79b9e7","The Socio-Spatial Aesthetics of Space Formation: A New Perspective on the Concepts and Architecture of Walter Gropius and Aldo van Eyck","Sack, O. (TU Delft Space & Type)","van Gameren, D.E. (promotor); Avermaete, T.L.P. (promotor); Delft University of Technology (degree granting institution)","2019","This dissertation deals with ‘architectural space formation’, which is understood as the part of architectural and urban design that concerns the creation and structuring of physically defined spaces of inside and outside character separately as well as in relation to each other and to open space. Furthermore, it focuses on the fundamental significance of space formation in architectural design and aesthetics as well as the question of how Walter Gropius and Aldo van Eyck referred to space formation in their approaches towards architectural design and aesthetics separately, compared to each other, and in relation to the discussion of architectural space and space formation at the beginning of the twentieth century.","","en","doctoral thesis","","","","","","","","","","","Space & Type","","",""
"uuid:c720afdc-71d1-492e-97fe-7e226a493379","http://resolver.tudelft.nl/uuid:c720afdc-71d1-492e-97fe-7e226a493379","Arsenic Removal for Drinking Water Production in Rural Nicaraguan Communities","Gonzalez Rodriquez, B.J. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); van Halem, D. (copromotor); Delft University of Technology (degree granting institution)","2019","","","en","doctoral thesis","","978-94-6384-053-8","","","","","","","","","Sanitary Engineering","","",""
"uuid:e336a982-fd4e-48f3-aa18-7e15db7cde32","http://resolver.tudelft.nl/uuid:e336a982-fd4e-48f3-aa18-7e15db7cde32","Multilevel Solvers for Stochastic Fluid Flows","Kumar, P. (TU Delft Aerodynamics)","Dwight, R.P. (promotor); Oosterlee, C.W. (promotor); Delft University of Technology (degree granting institution)","2019","Uncertainty is ubiquitous in many areas of science and engineering. It may result from the inadequacy of mathematical models to represent the reality or from unknown physical parameters that are required as inputs for these models. Uncertainty may also arise due to the inherent randomness of the system being analyzed. For many problems of practical interest, uncertainty quantification (UQ) can involve computations that are intractable even for the modern supercomputers, if conventional mathematical techniques are utilized. The reason is typically a product of complexity factors associated with many samples needed to compute the statistics, and for each sample, complexity associated with the spatio-temporal scales characteristics to the system. The main objective of this research is to obtain multilevel solvers for stochastic fluid flow problems with high-dimensional uncertainties. In our approach, the complexity arising due to sampling is overcome by the multilevel Monte Carlo (MLMC) method and complexity due to spatio-temporal scales is eliminated via the multigrid solver. Historically, Monte Carlo (MC) type methods have been proven to be the methods of choice for problems with a large uncertainty dimension as they do not suffer from the curse of dimensionality. A well-known computational bottleneck associated with the plain MC method is the slow convergence of the sampling error. For problems involving a wide range of space and time scales, ensuring a low mean square error will require a large number of MC samples on a very fine computational mesh making the estimator very expensive. 
Inspired by the multigrid ideas, the MLMC method generalizes the standard MC to multiple grids, exhibiting an exceptional improvement. The efficiency of the MLMC method comes from solving the problem of interest on a coarse grid and subsequently adding corrections based on fewer mesh resolutions. On the coarsest grid, a large number of samples can be computed inexpensively. The corrections computed on fewer grids, have smaller variances and can be estimated accurately using only fewer samples. The estimates at different levels are then combined using a telescopic sum...","","en","doctoral thesis","","978-94-6366-189-8","","","","","","","","","Aerodynamics","","",""
"uuid:216d5572-d287-4aa7-8b9c-37c211e6ed98","http://resolver.tudelft.nl/uuid:216d5572-d287-4aa7-8b9c-37c211e6ed98","Mesoscopic transport in 2D materials and heterostructures","Papadopoulos, N. (TU Delft QRD/Goswami Lab; TU Delft QN/Steele Lab)","Steele, G.A. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis presents an experimental work on electronic transport in two-dimensional (2D) van der Waals systems. The variety of the physics of the chapters reflects the exploratory character of the Ph.D. study and demonstrates the versatility and the unique properties of some of the members of the vast group of van der Waals materials. In the experiments we explore the fundamental properties of carriers and states in 2D materials via electrical transport. The first three chapters include experimental work on planar transport in multiterminal devices from transition metal dichalcogenides (2H-MoS2) and trichalogenides (TiS3). In the fist chapter, we study how intravalley spin relaxation and the phase coherence affects weak localization in boron nitride encapsulatedMoS2. In TiS3 we explore its electronic properties and in order to avoid disorder induced localization, we protect the devices with hexagonal boron nitride. An improvement in the quality of the transport and signatures of charge-density-wave transition are observed. Lastly, multi-terminal transport in 1T/1T0-MoS2 and its carrier transport mechanism are investigated, with special emphasis on how to establish low-temperature electrical contacts to 2H-MoS2. The next part of the thesis shifts to vertical transport in van der Waals heterostructures. Firstly, we use WS2 as tunneling barrier between monolayer graphene and metal contact. We observe sequential tunneling through localized states. By studying the ground and excited states, we gain information about their spatial sizes and the magnetic moments of these states. 
Lastly, we explore heterostructures of graphene on WSe2 and potential effects on the band structure due to the dielectric environment and the proximity induced spin-orbit coupling.","2D materials; heterostructures; transport; magnetotransport; tunneling; spin-orbit coupling; quantum Hall effect; variable range hopping; weak localization","en","doctoral thesis","","978-90-8593-409-7","","","","","","","","","QRD/Goswami Lab","","",""
"uuid:7a58d495-6045-4f4c-a19e-3850d2c26a12","http://resolver.tudelft.nl/uuid:7a58d495-6045-4f4c-a19e-3850d2c26a12","Computer-Based Social Anxiety Regulation in Virtual Reality Exposure Therapy","Hartanto, D. (TU Delft Health, Safety and Environment; TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); Brinkman, W.P. (copromotor); Delft University of Technology (degree granting institution)","2019","Social anxiety disorder (SAD), commonly referred to social phobia, is one of the most an immense and unreasonable fear of social interaction. Cognitive behaviour therapy (CBT) is the most thoroughly studied nonpharmacologic approach to the treatment of SAD patients. In CBT, patients are gradually, in vivo, exposed to anxiety-provoking real-life situations until habituation occurs and patients’ fear dissipates. Although effective for most patients, there are some clear limitations: some specific and required social situations are difficult to arrange due to unpredictability and possibly short duration of these naturally occurring social interactions, the therapist has limited control over anxiety provoking elements during the exposure, and individuals with social phobia have high refusal rate for in vivo exposure to the dreaded and fearful social situation.
Virtual Reality Exposure Therapy (VRET) has been suggested as an alternative to overcome many shortcomings of in vivo exposure. In contrast to in vivo exposure, in a VRET system the therapist can manipulate the exposure elements in a safer, more manageable and cost-effective way. A VRET system presents fear-eliciting stimuli to the patient in a Virtual Reality environment, where the parameters of the anxiety-evoking stimuli can easily be manipulated gradually by a therapist. The state of the art and recent advances in Internet and VR technology seem able to meet the ever-increasing demand for more accessible and efficient mental healthcare services by bringing the VRET system directly into the patient’s home. This thesis presents the development and evaluation of such an envisioned home-based VRET system for SAD patients.
The blueprint of the envisioned system design entails several key elements that need to be established. All these key elements were investigated and evaluated empirically in three separate studies, followed by a feasibility study. The first key element identified is the system’s ability to measure the patient’s anxiety level automatically. Traditionally in VRET this is done using self-reported anxiety measurements, where patients are asked to report their anxiety every four or five minutes. Without the direct involvement of a therapist, it is up to the system to determine the appropriate timing; the timing is therefore a crucial element. An empirical study involving 24 participants investigated the effects of three different types of automatic self-reported anxiety timing mechanisms (dialogue dependent, speech dependent and context independent). The results showed that the participants preferred the dialogue dependent timing mechanism over the speech dependent and context independent mechanisms, since it was considered less interruptive. Moreover, the study also confirmed the need for an accurate automatic self-reported anxiety timing mechanism, as it could affect people’s experience and their behaviour in a dialogue with a virtual human.
The second key element is the system’s ability to elicit and control the anxiety-evoking stimuli within the social scene. This was investigated in two successive empirical studies. The first study investigated whether exposure to various virtual social scenarios was associated with different levels of anxiety. The 24 participants were exposed to a free-speech dialogue interaction with a virtual character in a neutral world, a blind date and a job interview setting. The results showed that the participants’ level of anxiety increased significantly from the neutral world to the blind date to the job interview. This indicates that various virtual social scenarios are indeed able to evoke different levels of anxiety. The second study investigated anxiety control within a dialogue in a VRET system. For this, the study assessed the association between the ratio of negative and positive dialogue responses made by a virtual character and an individual’s level of anxiety. Twenty-four participants were exposed to two different experimental conditions: a positive and a negative virtual job interview condition. In the positive condition, the number of positive responses from the virtual character increased over time while the negative responses decreased. In the negative condition, the opposite happened. The results showed that the manipulation of the dialogue style in both conditions had a significant effect on people’s level of anxiety, their attitude, their speech behaviour, their dialogue experiences, their own emotion, and how they perceived the emotion of the virtual human. These findings demonstrate that social dialogues in a virtual environment can be effectively manipulated for therapeutic purposes.
The third key element of the envisioned system is the possibility to introduce autonomous anxiety regulation. Traditionally, in the clinic a therapist tries to regulate the patients’ anxiety. However, in a home situation, a system would have to do this regulation automatically. A third empirical study was conducted to investigate and evaluate the ability and effectiveness of an automatic feedback-loop regulation mechanism for maintaining an individual’s anxiety at a predefined target level. A group of 24 participants was exposed to two different system response conditions: a static and a dynamic condition. In the static condition participants were exposed to a static set of virtual reality stressors, while in the dynamic condition they were exposed to a set of virtual reality stressors that changed dynamically, aiming at keeping the anxiety of the participants at a stable level. In the static condition, the anxiety dropped, as indicated by decreased self-reported anxiety, decreased heart rate, increased heart rate variability, and longer answers. In contrast, in the dynamic condition, the participants’ anxiety level was maintained around a pre-set anxiety reference level. These findings demonstrate that an individual’s level of anxiety can be regulated automatically using an automatic feedback-loop mechanism.
Besides these three key elements of the system design blueprint, the envisioned system also has a number of important practical elements, such as a virtual health agent, a therapist application, and a secure remote database server. Together these elements lay the foundation for a home-based VRET system. To evaluate the feasibility of the proposed system to treat people with SAD at home, an empirical study was conducted. The home-based VRET system was evaluated with a group of five social anxiety disorder patients. All patients received a complete home-based VRET system and were scheduled to perform 10 treatment sessions at home. The study findings showed that the proposed system could evoke the required anxiety, which, as expected, dropped over time as patients’ self-reported anxiety and heart rate gradually decreased during the exposure sessions. To conclude, this thesis argues that the proposed home-based VRET system could evoke the required anxiety in patients with a substantial level of presence. By meeting the above-mentioned key challenges of our study, we showed that an effective home-based VRET system can be built and provided in due course. These findings suggest that delivering a home-based VRET system is indeed possible, which could provide numerous benefits for both patients and therapists.
Thanks to the progress in high-performance computing, automated turbomachinery design based on computational fluid dynamics is becoming an increasingly viable option to tackle complex design problems. Because of the inherently unsteady nature of turbomachinery flows, optimization methods that are able to account for accurate time resolution of the flow features offer an increased level of simulation fidelity compared to methods that assume steady-state flows. In this respect, unsteady-based optimization can lead not only to higher fluid dynamic performance, but can also be seen as a key enabler for addressing complex multi-disciplinary design problems.
To date, however, most turbomachinery optimization methods are based on the assumption of steady-state flows, as a consequence of the high computational cost associated with unsteady fluid dynamic simulations. Reduced-order methods offer a computationally efficient solution for shape optimization in unsteady flows.
This dissertation documents research on reduced-order methods for unsteady adjoint-based shape optimization of turbomachinery. In particular, the reduced-order methods considered are the harmonic balance method and the look-up table method for the estimation of thermo-physical fluid properties.
The research work resulted in an optimization framework based on a novel harmonic balance discrete adjoint solver, implemented in the open-source code SU2. Results show the computational efficiency and effectiveness of the proposed optimization method in dealing with unsteady turbomachinery design problems. For the exemplary test cases considered, the unsteady-based optimization led to increased fluid dynamic performance compared to optimization results based on steady-state computations. Furthermore, the method was successfully employed for design problems of turbomachinery operating with non-ideal compressible flows.","Turbomachinery; Optimization; Adjoint Method; CFD; Computational Fluid Dynamics; Discrete Adjoint; Unsteady","en","doctoral thesis","","978-94-6375-455-2","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:9e5bc7a2-fc0e-47ef-aee3-f0fce4809e5c","http://resolver.tudelft.nl/uuid:9e5bc7a2-fc0e-47ef-aee3-f0fce4809e5c","Human factors of monitoring driving automation: Eyes and Scenes","Cabrall, C.D.D. (TU Delft Intelligent Vehicles)","de Winter, J.C.F. (promotor); Happee, R. (promotor); van der Helm, F.C.T. (promotor); Delft University of Technology (degree granting institution)","2019","This PhD thesis is a collection of several of my published (and submitted) peer-reviewed journal articles from the Human Factors of Automated Driving (HF Auto, PITN-GA-2013-605817) seventh framework programme (FP7) project of the European Commission. The topics include: human factors, automotive road safety, autonomous/automated driving technology, human supervisory control, adaptive automation, driver state monitoring, and scene-tied (situated) eye-based assessments of attention. Beyond the publications, summary, introduction, and conclusion chapters, as well as contribution appendices, tie all the related work together. Summary: As with fatigue and distraction driving aids before it, the advent of additional driving automation/autonomy poses new challenges for protecting road users against vigilance decrements. Within the larger Human Factors of Automated Driving (HFAuto) project, the goal of this thesis was ‘to develop a system that is able to monitor the driver’s vigilance’. The approach taken was to investigate vigilance from a cognitive systems engineering (ecological) perspective. Instead of conceptually restricting vigilance to some kind of internal cognitive state/property of a driver, this thesis treated vigilance as a state/property of a system (i.e., the relationship between a driver and driving scene/situation). This thesis contains seven research papers in the form of literature reviews and experiments with eye-tracking, driving video clips, driving simulation, and on-road semi-naturalistic observation.
It can be concluded from this thesis that, to develop driver monitoring systems (DMS) of driving vigilance, eye measurements (especially of movement distances) and scene contents (especially road curvatures and collision hazards) are important and relatable factors. Furthermore, it is concluded that these factors are obtainable in viable ways for future research and development efforts. Specifically, the studies suggest means for DMS to be targeted to protect and maintain a foundational level, or inner-most loop, of driving attention at a behavioral level (rather than the interactive implicit cognitive layers and representational experiences that can be added on top). An applied observational, data-driven, and behavioral/situated approach is expected to better avoid higher-order cognitive ambiguities/dilemmas, and so makes end-user-acceptable DMS more tractable.","automated driving; eye tracking; traffic safety; human computer interaction; human supervisory control","en","doctoral thesis","","978-94-6323-714-7","","","","","","","","","Intelligent Vehicles","","",""
"uuid:a03c8a30-4ab9-4bc1-9512-dd75cd3b5314","http://resolver.tudelft.nl/uuid:a03c8a30-4ab9-4bc1-9512-dd75cd3b5314","Monolithic integration of silicon and polymer microstructures for Organ-on-Chip applications","Quiros Solano, W.F. (TU Delft Electronic Components, Technology and Materials)","Sarro, Pasqualina M (promotor); Delft University of Technology (degree granting institution)","2019","Drug development is a complex, time-consuming (10–15 years) and expensive process. For a new medicine to reach the market, the net expenses covered by the pharmaceutical industry have been estimated at around $2.6 billion. Nevertheless, the risk of finding adverse effects or toxicity cases once the drug is already on the market is still high. Thus, pharmaceutical companies have been keenly looking for means to eliminate this risk at an early stage of the development process. Recently, Organs-on-Chips (OOCs) emerged as a potential alternative to traditional drug screening. These devices promise, in the medium term, to enhance in vitro screening and, in the long term, to reduce and eventually eliminate animal models in safety and efficacy assays. Nevertheless, the fabrication methods for most of these devices are hardly adaptable to scalable fabrication processes for in vitro screening applications, as they rely strongly on manual techniques. This thesis demonstrates the successful development of diverse microstructures for Organ-on-Chip applications using scalable IC- and MEMS-based fabrication techniques. Chapter 3 demonstrates the development of microfabricated porous PDMS membranes for barrier modelling. A simple and reproducible method to fabricate and transfer porous PDMS membranes with high control of pore size, porosity and thickness is shown. Very thin (thickness <10 µm) porous membranes with feature sizes down to 2 µm and porosity up to 65% can thus be fabricated and successfully transferred with high reproducibility.
The presented results on cell transmigration, topology and barrier formation demonstrated the biocompatibility of the porous PDMS membranes. Chapter 4 shows further efforts towards the realization of manufacturable OOCs. A monolithically microfabricated OOC device, an alternative to the available devices capable of addressing many more applications, was demonstrated. Preliminary biological experiments indicate its biocompatibility, as cells (HUVECs, cardiomyocytes) are successfully cultured and maintained viable in the microchannels and the silicon cavity. Finally, Chapter 5 demonstrates other possibilities enabled by the use of IC and MEMS techniques. The integration of microstructures that enable transduction mechanisms to monitor the cell microenvironment is shown. Specifically, strain gauges for stress sensing are presented as an alternative means to monitor in situ strain in microfabricated OOCs. Relative resistance changes of approximately 0.008% and 1.2% have been observed for titanium and polymeric strain gauges, respectively. The technological advances shown in this thesis form a significant contribution towards manufacturable fabrication of Organs-on-Chips and the standardization of OOCs as routinely used tools for drug development.","Organ-on-Chip; MEMS; Silicon; PDMS; Membranes; Cell; Strain; Stress","en","doctoral thesis","","978-94-6384-051-4","","","","","","2025-07-08","","","Electronic Components, Technology and Materials","","",""
"uuid:a8577854-a254-44a4-bdb2-b63218454828","http://resolver.tudelft.nl/uuid:a8577854-a254-44a4-bdb2-b63218454828","Uncertainty analysis in integrated catchment modelling","Moreno Rodenas, A. (TU Delft Sanitary Engineering)","Clemens, F.H.L.R. (promotor); Langeveld, J.G. (promotor); Delft University of Technology (degree granting institution)","2019","The adoption of increasingly restrictive water quality standards is directed at maintaining natural ecosystems in a good status. Complying with such standards requires significant investments in water infrastructure and operations. Consequently, mathematical simulation is usually applied to assist in the decision-making process for such large-scale interventions. In particular, environmental models are proposed to represent the wastewater cycle in natural water bodies, such that the effect of different pollution mitigation alternatives can be estimated. Integrated catchment models (ICM) aim at simulating water quality dynamics by representing the link between urban drainage networks, wastewater treatment operations, rural hydrology and river physical-biochemical processes. However, these subsystems present dynamics across multiple spatiotemporal scales and many relevant processes are still not fully understood. System observations are scarce and often insufficient to identify most model representations. As a result, ICM studies often produce significant output uncertainties.","","en","doctoral thesis","","978-94-92801-89-0","","","","","","","","","Sanitary Engineering","","",""
"uuid:dcaea760-bb97-4f7e-b16a-e268a6d1c678","http://resolver.tudelft.nl/uuid:dcaea760-bb97-4f7e-b16a-e268a6d1c678","Percutaneous interface tissue removal for hip refixation: The first step in instrument design","Kraaij, G. (TU Delft Medical Instruments & Bio-Inspired Technology)","Dankelman, J. (promotor); Nelissen, R.G.H.H. (promotor); Valstar, E.R. (promotor); Delft University of Technology (degree granting institution)","2019","In the Netherlands, about 36,000 total hip prostheses are implanted every year. Survival of these prostheses at 10-year follow-up is 90% in patients older than 70 years at the index operation. In younger patients these results decrease to about 80–85% at 10-year follow-up. The main cause of failure in total hip replacement is aseptic (mechanical) loosening, which is caused by a biological response to wear products of the articulation of the joint. This foreign-body reaction is associated with periprosthetic bone resorption and subsequent formation of periprosthetic fibrous (interface) tissue. As a result, the implant becomes increasingly loose, causing debilitating pain on ambulation. At present, patients with loosened prostheses can only undergo revision surgery. This procedure is often extensive (3–5 hr of surgery and over 1 liter of blood loss), due to the necessity of removing the prosthesis and all interface tissue; thereafter a new prosthesis is implanted. This revision surgery has a high complication rate in elderly patients with comorbidities (e.g., cardiovascular disease, diabetes), and can even result in death in a small percentage of cases. Because of this high complication rate, this demanding procedure cannot be performed in patients with poor general health, so these patients are left with their debilitating pain.
Therefore, we investigated the possibilities of an alternative minimally invasive refixation procedure that leaves the prosthesis in place, but relies on removing the periprosthetic interface membrane and replacing it with bone cement. Before the refixation procedure can be executed this way, an instrument to remove the interface tissue needs to be developed.","","en","doctoral thesis","","978-94-028-1548-1","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:068a1257-dfc2-4389-964e-f665aa5cb213","http://resolver.tudelft.nl/uuid:068a1257-dfc2-4389-964e-f665aa5cb213","Analysis and modelling of Morphodynamics of the Yangtze Estuary","Chu, A. (TU Delft Coastal Engineering)","Wang, Zhengbing (promotor); Aarninkhof, S.G.J. (promotor); Delft University of Technology (degree granting institution)","2019","The flow and sediment transport in the Yangtze Estuary are intrinsically complex because various processes and mechanisms are involved over a large range of temporal and spatial scales. In this thesis, the interaction of the river discharge and the tidal wave with the corresponding sediment transport in the Yangtze Estuary is investigated. The objective is to gain further understanding of the processes and mechanisms dominating the sediment transport in the estuary. Based on this understanding of flow and sediment transport, a morphodynamic model is established and tested to simulate the morphological change of the Yangtze Estuary. Supported by a literature survey reviewing previous studies, the observed data (including water levels, currents, salinity, sediment concentration, sediment samples, etc.) at various stations under different conditions (spring/neap tide, dry/wet season, etc.) are first analyzed to investigate the characteristics of flow and sediment transport in the Yangtze Estuary. Subsequently, a process-based model based on Delft3D is set up for the estuary. After being calibrated and validated against measurements under various conditions, the model is used to simulate the sediment transport at the mouth bar of the Yangtze Estuary. Scenarios of model simulations are designed to account for different combinations of processes and mechanisms contributing to sediment transport. The results demonstrate that taking salinity processes into consideration is a prerequisite to understanding how fine sediment has been trapped in the mouth bar area of the Yangtze Estuary in the last half century.
It is also concluded that flocculation of fine-grained sediment in suspension enhances the sediment deposition in the mouth bar area. The net effect of all sediment transport processes is typically sedimentation in the wet season and erosion in the dry season, with net deposition annually. A decreasing trend in the annual net deposition has recently become visible; the deposition rate at present is down to about one third of its past magnitude.","","en","doctoral thesis","","978-94-6366-191-1","","","","","","","","","Coastal Engineering","","",""
"uuid:14b55d5e-586a-4641-8990-55a397674db8","http://resolver.tudelft.nl/uuid:14b55d5e-586a-4641-8990-55a397674db8","High-Fidelity Load and Gradient Corrections for Static Aeroelastic Tailoring of Composite Wings","Jovanov, K. (TU Delft Aerospace Structures & Computational Mechanics)","De Breuker, R. (promotor); Bisagni, C. (promotor); Delft University of Technology (degree granting institution)","2019","Multi-disciplinary design optimization (MDO) is a field that has gained traction in recent years. It can be characterized as a methodology that promotes a simultaneous change in design features associated with multiple disciplines in order to achieve the best possible design of a coupled multi-disciplinary problem. A typical application in aircraft design is aero-structural design optimization, where the interaction of fluids and structures on load-carrying components is considered. The recent progress in this field can be largely attributed to increasing computational resources and parallel computing in the form of High Performance Computing Clusters. Nevertheless, the computational bottleneck which persists to this day in high-fidelity aero-structural design is the gradient computation, a fundamental component in the design of systems with several thousand design variables and constraints.","Optimization; Aeroelasticity; Sensitivity Analysis","en","doctoral thesis","","978-94-028-1565-8","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:1986ad09-6730-44ca-b62d-6d10426ed820","http://resolver.tudelft.nl/uuid:1986ad09-6730-44ca-b62d-6d10426ed820","Direct numerical simulations of flow around non-spherical particles","Pacha Sanjeevi, S.K. (TU Delft Complex Fluid Processing)","Padding, J.T. (promotor); Breugem, W.P. (promotor); Delft University of Technology (degree granting institution)","2019","This work focuses on creating a recipe for parametrizing flow around assemblies of non-spherical particles. A multi-relaxation-time lattice Boltzmann method (MRT-LBM) is used to simulate the flow. The research focuses on three different developments. First, different boundary conditions available in the literature for LBM are tested to identify the best one for the flow problem. The second part of the thesis focuses on developing more widely applicable scaling laws for the drag and lift of various isolated non-spherical particles. In the third part, a recipe to describe hydrodynamic forces on assemblies of axisymmetric, non-spherical particles is proposed. With the described parameters, drag, lift and torque correlations are proposed accordingly. This research is funded by the European Research Council under its consolidator grant scheme, contract no. 615096 (NonSphereFlow).","Direct numerical simulations; Particulate flows; Non-spherical particles; Lattice Boltzmann method","en","doctoral thesis","","978-94-6375-435-4","","","","","","2019-07-02","","","Complex Fluid Processing","","",""
"uuid:c8259a08-bbee-4af0-b570-1350a2dd8d89","http://resolver.tudelft.nl/uuid:c8259a08-bbee-4af0-b570-1350a2dd8d89","Incremental sliding mode flight control","Wang, Xuerui (TU Delft Control & Simulation)","Mulder, Max (promotor); van Kampen, E. (copromotor); Delft University of Technology (degree granting institution)","2019","The swift growth of air traffic volume stresses the importance of flight safety enhancement. Statistical data show that fly-by-wire technology with automatic flight control systems can effectively reduce the fatal accident rate of loss of control in flight. Although the dynamics of an aircraft are nonlinear and time-varying, it is common practice to design flight control laws based on local linear time-invariant (LTI) dynamic models and apply the gain-scheduling method. Here, the flight envelope is divided into many smaller operating regimes, and LTI model-based controllers are designed and tuned for each of them. However, this approach is cumbersome and cannot guarantee flight stability and performance in between operating points. In view of the challenges encountered by LTI model-based control, nonlinear control methods have attracted attention from the flight control community. Nonlinear dynamic inversion (NDI) and backstepping (BS) are two frequently used nonlinear control methods in flight control. These two approaches cancel the nonlinearities in the closed loop using a nonlinear model of the system. However, mismatches between the model and the real dynamics inevitably exist, especially when an aircraft encounters atmospheric disturbances and when sudden actuator faults or even structural damage occur. To enhance the robustness of model-based nonlinear control methods to model mismatches, a commonly adopted approach is to augment them with online model identification. This process, however, is computationally intensive and requires sufficient excitation, which can make an impaired aircraft fly out of the diminished safe flight envelope.
In consideration of these challenges, the main goal of this thesis is: To design a stability-guaranteed nonlinear flight control framework with reduced model dependency and enhanced robustness.","Incremental control; nonlinear control; fault-tolerant control; aeroservoelastic system; sliding mode control; sliding mode disturbance observer; quadrotor flight control","en","doctoral thesis","","978-94-6384-046-0","","","","","","","","","Control & Simulation","","",""
"uuid:8c1c8eae-43ca-405b-b075-6f0a7a8d2ab9","http://resolver.tudelft.nl/uuid:8c1c8eae-43ca-405b-b075-6f0a7a8d2ab9","On predicting individual video viewing experience: The value of user information","Zhu, Y. (TU Delft Multimedia Computing)","Hanjalic, A. (promotor); Heynderickx, I.E.J.R. (promotor); Redi, J.A. (copromotor); Delft University of Technology (degree granting institution)","2019","Experience prediction is one key component in today’s multimedia delivery. Knowing a user’s viewing experience allows online video service providers (e.g., Netflix, YouTube) to create value for their customers by providing personalized content and service. However, individual experience prediction is a challenging problem, since viewing experience (defined as Quality of Experience in this thesis) is a multifaceted quantity that is rather personal and subjective. The existing methods for quantifying Quality of Experience (QoE) aim at estimating how the video quality is perceived by users, neglecting the hedonic part of experience (the degree of enjoyment of a user watching a video). Quite naturally, these methods consider only factors related to video perceptual quality (purely from the video), which is insufficient to properly assess viewing experience. The research reported in this thesis attempts for the first time to shift the paradigm of perceptual quality modeling towards measuring and predicting the level of enjoyable viewing experience a user has with a video. In particular, it focuses on exploiting the potential value of user factors (information from users) and investigates their influence on QoE prediction.
The goal of this thesis is to develop a feasible method for predicting individual viewing experiences in terms of perceptual quality and enjoyment by taking multiple influencing factors into account. Here, the influencing factors are taken from both the video (e.g., related to perceptual quality) and the user (user factors, e.g., interest, personality). We take three major steps to accomplish this goal. We first conduct a subjective experiment to understand the relationship between perceptual quality and enjoyment, and how their influencing factors form the final viewing experience. With a set of identified influencing factors, we then propose a new QoE prediction model which processes both user and video information to predict individual experience (i.e., either perceptual quality or enjoyment). We show that combining information from video and user enables better prediction performance compared to considering only video information related to perceptual quality. Our third step tackles the problem of reliable data collection for individual QoE research. We developed an open-source Facebook application, named YouQ, as an experimental platform for automatic user information collection from social media while performing an online QoE subjective experiment. We show that YouQ can produce reliable results compared to a controlled laboratory experiment, both in terms of QoE and of quantification of user factors and traits. As a result, a complete, feasible method for individual QoE prediction is presented in this thesis.
Based on the findings presented in this thesis, we reflect on the contribution and make recommendations for future research directions, which we think are substantial and promising for individual QoE prediction.","","en","doctoral thesis","","978-94-6384-052-1","","","","","","","","","Multimedia Computing","","",""
"uuid:1b5cc3e5-0436-44a1-a3a0-08402b56d5f9","http://resolver.tudelft.nl/uuid:1b5cc3e5-0436-44a1-a3a0-08402b56d5f9","Container transport inside the port area and to the hinterland","Hu, Q. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Wiegmans, B. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis discusses the connection between container terminals and the hinterland railway system. Mathematical models are proposed to formulate the various relevant operations, and methods are developed to provide solutions to improve the system performance. This thesis could provide suggestions to decision makers regarding improvements in both the inter-terminal transport system and the hinterland railway system, i.e., increasing the number of containers delivered on time at lower costs.","Freight transport; port-hinterland transport networks; Operations research; Heuristic algorithms","en","doctoral thesis","TRAIL Research School","978-90-5584-250-6","","","","TRAIL Thesis Series no. T2019/9, the Netherlands Research School TRAIL","","","","","Transport Engineering and Logistics","","",""
"uuid:4f6bf5d8-e75f-4627-ab0d-5e859f37a5fd","http://resolver.tudelft.nl/uuid:4f6bf5d8-e75f-4627-ab0d-5e859f37a5fd","Traffic management optimization of railway networks","Luan, X. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); De Schutter, B.H.K. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis adopts optimization approaches to tackle the traffic management problem for railway networks, aiming at achieving better performance of railway operations, in terms of punctuality, reliability, non-discrimination, capacity utilization, and energy efficiency. Specifically, the following four aspects are considered: - Non-discriminatory traffic control; - Traffic control cooperating with a preventive maintenance plan; - Traffic control integrating with train control; - Distributed optimization of traffic control for large networks.","","en","doctoral thesis","TRAIL Research School","978-90-5584-252-0","","","","TRAIL Thesis Series no. T2019/10, The Netherlands TRAIL Research School","","","","","Transport Engineering and Logistics","","",""
"uuid:a6dbbf0f-de8b-4c39-a13f-4904f206ace6","http://resolver.tudelft.nl/uuid:a6dbbf0f-de8b-4c39-a13f-4904f206ace6","Finite Element Methods for Seismic Imaging: Cost Reduction through mass matrix preconditioning by defect correction","Shamasundar, R. (TU Delft Applied Geophysics and Petrophysics)","Mulder, W.A. (copromotor); Delft University of Technology (degree granting institution)","2019","Demand for hydrocarbon fuel is predicted to keep increasing in the coming decades, in spite of easily accessible alternative fuels, due to shifting geopolitical and economic situations. In order to find new hydrocarbon pockets, we need sharper images of the earth’s subsurface. The exploration of other sources of energy, such as geothermal, will also benefit from better models of what lies beneath the surface. One way to obtain better images is to use superior numerical methods for forward modelling. Finite-element methods (FEM) are one such class of methods, but their accuracy comes at the cost of increased computational expense. This thesis explores means to reduce this cost and adapt FEM to large-scale problems in geophysics. The finite difference (FD) method is the most popular numerical approximation scheme used in subsurface imaging problems. Representing the wave as the solution of individual motion and material equations is advantageous in terms of accuracy and stability and leads to the natural inclusion of density variations in the medium. This representation is referred to as the first-order formulation of the wave equation in this document. Finite element (FE) methods are commonly derived for second-order equations because of the nature of the variational formulation. Finite-element discretisations of the acoustic wave equation in the time domain often employ mass lumping to avoid the cost of inverting a large sparse mass matrix. Unfortunately, for a first-order system of equations, mass lumping destroys the superconvergence of numerical dispersion for odd-degree polynomials.
In chapter 3 of this thesis, we consider defect correction as a means to restore the convergence. We adapt the defect correction method to FEM by solving the consistent mass matrix with the lumped one as preconditioner. For the lowest-degree element, fourth-order accuracy in 1D can be obtained with just a single iteration of defect correction. In this chapter, we analyse the behaviour of the error in eigenvectors as a function of the normalized wavenumber in the form of leading terms in its series expansion and find that this error exceeds the dispersion error, except for the lowest degree, where the eigenvector error is zero. We also present results of numerical experiments that confirm this analysis. Chapter 3 concluded that defect correction can improve the convergence properties of finite elements in the first-order system of acoustic equations in 1D; the inexpensive linear elements showed the same performance as a fourth-order scheme. However, for realistic problems we need to ensure that the same improvement holds in higher dimensions. Based on the results of the earlier chapter, we conjecture that defect correction should work for 2D problems. In the first half of chapter 4, we analyze the 2-D case. Theoretical results imply that the lowest-degree polynomial provides fourth-order accuracy with defect correction, if the grid of squares or triangles is highly regular and the material properties are constant. However, numerical results converge more slowly than theoretical predictions. Further investigation demonstrates that this is due to the activation of error-inducing wavenumbers in the delta-source representation. In the second half of the chapter, we provide a solution to this problem in the form of a tapered-sinc source function. In chapter 5, we consider isotropic elastic wave propagation with continuous mass-lumped finite elements on tetrahedra with explicit time stepping.
These elements require higher-order polynomials in their interior to preserve accuracy after mass lumping and have only recently been discovered up to degree 4. Global assembly of the symmetric stiffness matrix is a natural approach but requires large memory. Local assembly on the fly, in the form of matrix-vector products per element at each time step, has a much smaller memory footprint. With dedicated expressions for local assembly, our code ran about 1.3 times faster for degree 2 and 1.9 times faster for degree 3 on a simple homogeneous test problem, using 24 cores. This is similar to the acoustic case. For a more realistic problem, the gain in efficiency was a factor of 2.5 for degree 2 and 3 for degree 3. For the lowest degree, the linear element, the expressions for both the global and local assembly can be further simplified. In that case, global assembly is more efficient than local assembly. Among the three degrees, the element of degree 3 is the most efficient in terms of accuracy at a given cost. In chapter 6, we consider cubic Hermite elements as interpolants in place of Legendre polynomials. By nature of their C1 continuity, they might offer a solution to the problems of ‘spurious’ wavenumbers seen in earlier chapters with conventional interpolation schemes. Results show acceptable convergence properties on homogeneous media, but the representation needs to be altered to suit discontinuities in density, which makes for interesting future work.","Finite Element Method; Acoustic Modeling; Forward Modeling; Subsurface Imaging","en","doctoral thesis","","978-94-6366-185-0","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:91c635b5-a35a-4d25-a874-b4dc8094579e","http://resolver.tudelft.nl/uuid:91c635b5-a35a-4d25-a874-b4dc8094579e","Rethinking Faecal Sludge Management in Emergency Settings: Decision Support Tools and Smart Technology Applications for Emergency Sanitation","Zakaria, F. (TU Delft BT/Environmental Biotechnology)","Brdjanovic, Damir (promotor); Delft University of Technology (degree granting institution)","2019","Technology development in the emergency sanitation sector has not been emphasised sufficiently, considering that the management of human excreta is a basic requirement for every person. The lack of technology tailored to emergency situations complicates efforts to cater for sanitation needs in challenging humanitarian crises. Sanitation response, together with the provision of clean water and hygiene promotion, is considered a life-saving effort in emergencies. Nevertheless, in an emergency there is often a lack of means and limited planning time available to provide an effective and safe sanitation response. A review of existing practices shows that emergency toilet options consist of very basic provisions, primarily trench and pit latrines. Whenever it is not possible to dig a pit or trench, the remaining option is container-based sanitation. This type of sanitation in particular requires a collection or emptying plan, and a subsequent treatment and safe-disposal plan, which is usually overlooked in an emergency, where there is limited time to plan for any requirements beyond toilet provision.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-36181-5","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of IHE Delft Institute for Water Education.","","","","","BT/Environmental Biotechnology","","",""
"uuid:23aa4a2e-8c3d-40bf-95f9-22e88e83c758","http://resolver.tudelft.nl/uuid:23aa4a2e-8c3d-40bf-95f9-22e88e83c758","Welgelegen: Analyse van de Hollandse buitenplaatsen in hun landschappen (1630-1730)","Verschuure, G.A (TU Delft Landscape Architecture)","Luiten, E.A.J. (promotor); van Thoor, M.T.A. (copromotor); Delft University of Technology (degree granting institution)","2019","In the seventeenth century, the urban elite had country estates and landed estates laid out on a large scale in the lowlands of Holland. They chose the most favourably situated (‘welgelegen’) places for their summer residences, with garden pavilions, tree-lined avenues and clipped hedges. The preference for living in each other's vicinity was often prompted by a shared use or by a comparable urban or landscape experience of the surroundings. Through these identical choices, the elite put their stamp on the landscape. By combining historical maps and inventories with explanations from historical sources, the most important settlement and composition factors, together with the associated motives of use and experience, have been determined. Using maps and historical prints, the analysis shows how the layout of country estates was influenced at the time by urban and landscape innovations (tow canals, land reclamations, sand excavations, hunting and walking, urban living on streets and squares amid greenery, and the like). On the basis of the factors and motives mentioned above, the most characteristic estate landscapes between 1630 and 1730 have been identified; these are summarized in Hollands Tempe, the pleasure landscapes of Holland. In recent decades, provincial spatial heritage policy on country estates has paid increasing attention to preserving the surroundings of estates in estate biotopes or estate landscapes. These groups of country estates, the estate landscapes, often form the basis of our green structures and parks in and around the cities. For the future of these green structures, it is necessary to underline not only the climatic, ecological and economic values but also the cultural-historical and identity values. This research aims to contribute to a greater understanding of these heritage landscapes for future use.","buitenplaatsen; landgoederen; buitenplaatsenlandschappen; buitenplaatsbiotopen; landschap; stad; zeventiende eeuw; Holland; stedelijk groen; buitenplaatsenzone; landschapsarchitectuur","nl","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-183-6","","","","A+BE | Architecture and the Built Environment No 7 (2019)","","","","","Landscape Architecture","","",""
"uuid:00f46cda-0b41-48a1-a7c4-f050c13d90fb","http://resolver.tudelft.nl/uuid:00f46cda-0b41-48a1-a7c4-f050c13d90fb","Multiscale study of microstructural evolution and damage in rail steels","Kumar, A. (TU Delft (OLD) MSE-3)","Sietsma, J. (promotor); Petrov, R.H. (promotor); Delft University of Technology (degree granting institution)","2019","In this PhD thesis, we investigate the microstructural evolution and damage in different steel grades used in railway applications such as switches, crossings and curved tracks, which are key components in the rail transportation industry around the world. The damage in these components leads to maintenance costs of billions of euros per year worldwide, and in the worst case it can pose severe safety threats to passengers. The damage mechanisms in these components depend on the microstructure of the steels used. The increasing demand for high-speed rail transportation and the increasing traffic intensity require a thorough understanding of the damage mechanisms in the steel grades currently used in switches, crossings and curved tracks. This understanding can provide guidelines for the design of sustainable, damage-resistant materials with an improved lifetime. Different rail steels, such as fine pearlitic steels (R350HT), cast Hadfield steels and Continuously Cooled Carbide-Free Bainitic Steels (CC-CFBS), have been investigated from the macro to the atomic scale to understand the physical mechanisms of the damage in relation to their particular microstructure.","","en","doctoral thesis","","","","","","","","","","","(OLD) MSE-3","","",""
"uuid:d3d6a8ba-45ff-406d-a81a-087e8bc8dbf0","http://resolver.tudelft.nl/uuid:d3d6a8ba-45ff-406d-a81a-087e8bc8dbf0","Bisimilar stochastic systems","Tkachev, I. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Abate, A. (promotor); Mohajerin Esfahani, P. (copromotor); Delft University of Technology (degree granting institution)","2019","Stochastic systems have been widely investigated and employed in numerous applications in areas such as finance, biology and engineering, as they allow accounting for the imprecisions so often faced in practical tasks. Often such a task requires finding the best action sequence in order to optimize the outcome. When the model is small, one can efficiently employ algorithmic techniques to synthesize such a control policy. For more complex models, however, instead of solving control tasks on them directly, one may want to approximate them with simpler models and then apply those algorithms. This method is called abstraction, for it abstracts the original “physical” model to an “abstract” one, needed only to ease the computations. Ideally, this abstract model is similar to the original one, as we want to extrapolate results achieved over the former to the setting of the latter. One way this similarity can be ensured is by means of (bi)simulation methods, which give sufficient conditions for the closeness of behaviors of the two systems being compared. Such techniques became popular in discrete non-stochastic models, then advanced to continuous ones and started making steps towards discrete stochastic systems. Yet definitive results had not been achieved for abstractions of continuous stochastic models. There have been attempts to extend ideas from the continuous non-stochastic framework, or the discrete stochastic one, but they were mostly fragmentary. This thesis brings those methods together to build a unified framework and shows the immediate benefits of doing so.
To define the closeness between the systems, we look at their path-wise properties, which cover most of the tasks whose relevance has been emphasized in the literature. That comprises both additive cost-like criteria and formal specifications, e.g. those encoded by LTL formulae of the kind “reach the goal set through the safe set while avoiding dangerous states”. We derive guarantees on the approximation error and suggest how to build an abstraction for a given tolerance level. These guarantees work mostly for finite-time-horizon properties; for the rest we develop task-dependent solution methods, further connecting with the existing literature. Besides those concrete results, we also put some effort into developing the conceptual side of the bisimulation framework for stochastic systems. For example, we know how important it is to choose a definition of behavior here, since bisimilarity is useful only as long as it guarantees the closeness of the behaviors one is interested in.
We hence stress the importance of keeping the final goal in mind while extrapolating abstract solution methods, and show which issues may arise when this goal is forgotten. We also extend the framework beyond bisimulation of stochastic systems alone: we provide a formalization of approximate relations and their connections with pseudo-metrics, prove several theorems in probabilistic approximation whose generality extends beyond the scope of this thesis, and provide a category-theoretical basis for bisimulations of stochastic systems, thereby opening one more door through which this problem can be approached.","","en","doctoral thesis","","978-94-6384-048-4","","","","","","","","","Team Bart De Schutter","","",""
"uuid:1de25f4d-c0cb-42d6-9685-d75b300c0aad","http://resolver.tudelft.nl/uuid:1de25f4d-c0cb-42d6-9685-d75b300c0aad","The situated Design Rationale: of a social robot for child's disease self-management","Looije, R. (TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); Hindriks, K.V. (promotor); Delft University of Technology (degree granting institution)","2019","A young boy with type 1 Diabetes Mellitus is supported by a social robot on the road to self-management. The robot has knowledge of the goals that the boy needs to reach, as discussed with his health care professional. The robot also knows the boy’s activity options and preferences. It suggests activities based on this knowledge, but also encourages the boy to try new approaches. For parents, such a social robot means that they can be less of a teacher and more of a parent, and for the health care professionals it means they can focus on the emotional aspects instead of the knowledge aspects during visits.
Finally, the boy sees the robot as something fun and as a peer, rather than as someone or something with higher authority. The robot supports relatedness and a feeling of competence, the different activities provide a feeling of autonomy, and less butting in by the parents reduces stress for the whole family. This all supports the boy in seeing diabetes as his own responsibility and in feeling that he has enough competence and autonomy to take care of his diabetes himself. In support of this vision, this thesis looks at the design and evaluation of a social robot.","Social robot; Cognitive engineering; Design rationale; Diabetes; Children","en","doctoral thesis","","978-94-6366-157-7","","","","","","","","","Interactive Intelligence","","",""
"uuid:95a09bb2-2665-4098-8428-62cad20dfa56","http://resolver.tudelft.nl/uuid:95a09bb2-2665-4098-8428-62cad20dfa56","Regional Design: Discretionary Approaches to Planning in the Netherlands","Balz, Verena Elisabeth (TU Delft Spatial Planning and Strategy)","Zonneveld, W.A.M. (promotor); Nadin, V. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis elaborates on the role and position of regional design in spatial planning. Building upon the argument that design in this realm aims to improve planning guidance by judging its implications for particular situations, the thesis develops an analytical framework for an enhanced understanding of how design both influences, and is influenced by, prevailing planning rationales. The analytical framework is applied to a set of regional design initiatives that evolved in the context of Dutch national plans between 1988 and 2012. Significantly, the analysis reveals aspects of spatial planning frameworks that shape the performance of design practice, of particular importance being the flexibility of planning frameworks and the involvement of actors in initiating, conducting and judging design. In theoretical terms, the thesis contributes to the integration of planning and design theory. The societal relevance of this dissertation stems from the increasing use of regional design-led practices in Dutch spatial planning since the mid-1980s.","Regional design; Spatial planning; Governance","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-182-9","","","","A+BE | Architecture and the Built Environment No 6 (2019)","","","","","Spatial Planning and Strategy","","",""
"uuid:ad56c73d-4986-4358-9dd2-2929004cea6f","http://resolver.tudelft.nl/uuid:ad56c73d-4986-4358-9dd2-2929004cea6f","Studies on foam in porous media and the effect of oil","Hussain, A.A.A. (TU Delft Reservoir Engineering)","Rossen, W.R. (promotor); Vincent-Bonnieu, S.Y.F. (copromotor); Delft University of Technology (degree granting institution)","2019","Foam flooding can be applied in soil-remediation techniques or to improve oil recovery processes in petroleum reservoirs. Models exist that aim to predict the behaviour of foam in the presence of oil, both in bulk and in porous media; however, these models are not very reliable. In this work, we investigate the different ways in which a specific crude oil impacts a specific foam in a porous medium. Furthermore, we model surfactant depletion by the gas-water interface, which can partly explain the transition from the low-quality to the high-quality regime of foam in porous media.","Foam; Foam EOR; EOR; Porous Media; crude oil; oil; Surfactant concentration; Surfactant","en","doctoral thesis","","978-94-6366-190-4","","","","","","","","","Reservoir Engineering","","",""
"uuid:008b44a0-d52c-40db-8fad-8d5a6a5ae1dc","http://resolver.tudelft.nl/uuid:008b44a0-d52c-40db-8fad-8d5a6a5ae1dc","Institutional Change through Social Learning: Climate Change Policy Gaming in Kenya","Onencan, A.M. (TU Delft Policy Analysis)","van de Walle, B.A. (promotor); Enserink, B. (promotor); Kortmann, Rens (copromotor); Delft University of Technology (degree granting institution)","2019","Complex and uncertain societal problems cannot be addressed by technical solutions that rely solely on predictions. Institutions that rely entirely on predictions repeat the same actions (routine), with little reflection on the impact of these technological solutions on the socio-technical system. Though routine is beneficial for the stability and continuity of any institution, it may stifle reflection and make change harder. When an institution does not change, it can neither innovate nor adapt to changing circumstances.
Social learning (SL) has been proposed to facilitate institutional change. SL is a change in societal understanding, achieved through social interactions, which eventually becomes situated within broader social networks. In principle, SL holds promise for addressing the problem of routinised, non-adaptive institutions. Nevertheless, there is limited evidence on whether SL does indeed lead to institutional change.
This PhD research uses policy gaming to assess whether SL can lead to institutional change in the Nzoia River Basin. The results indicate that SL has the potential to change routine-based institutions and generate adaptive capacity. The outcomes also indicate the need for the following profound institutional changes in the Nzoia River Basin:
Artefacts: Replace the current WRM structures with configurations that respect the river and support the sustainable management of the drainage basin as a whole.
Values: Value water more than spatial, agricultural and energy-production plans and make water the structuring element within the Nzoia River Basin. This means that any proposed laws, regulations, practices and norms that intend to utilise the scarce water resources unsustainably should not be supported.
Underlying Assumptions: Question underlying assumptions, and make transformations to existing laws, regulations, values, norms and actor-networks to build adaptive capacity.","Water Governance; Social learning; Institutional change; Team interdependence; Climate change adaptation; Policy Gaming; Situation awareness; Diversity; Cooperation; Trust; Cognitive learning; Relational learning; Epistemic learning; Nzoia River Basin; Water resources management","en","doctoral thesis","","978-94-6366-178-2","","","","","","2019-12-26","","","Policy Analysis","","",""
"uuid:49ccc901-a128-424f-9e9d-e2f8c63ad31b","http://resolver.tudelft.nl/uuid:49ccc901-a128-424f-9e9d-e2f8c63ad31b","Investigation of Pressure Assisted Nanosilver Sintering Process for Application in Power Electronics","Zhang, H. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); van Driel, W.D. (promotor); Delft University of Technology (degree granting institution)","2019","High-power electronics with wide-band-gap semiconductors are becoming the most promising devices in new energy power supplies and converters. Highly reliable die-attach materials, serving as one of the interconnections, play critical roles in power electronic packages and modules. Among these, nanosilver paste/film has become a promising die-attach material, with the main advantages of high thermal and electrical conductivity as well as high-temperature stability. Previous work has mostly focused on pressure-free sintered silver nanoparticles, which have low bonding strength and high porosity. Alternatively, pressure-assisted sintering has exhibited great advantages in enhancing the bonding quality of nanosilver sintered joints, but knowledge of the sintering properties of pressure-sintered silver nanoparticles and of the application of this technology in power electronics packaging is still lacking. In this thesis, comprehensive research is performed on the pressure-assisted sintering of silver nanoparticles. The results indicate that the sintering pressure has a significant effect on enhancing the bonding strength of sintered silver nanoparticles. Furthermore, increasing the sintering pressure from 5 MPa to 30 MPa improves the resistance to plastic deformation and creep of the nanosilver sintered joint. In addition, the designed nanosilver sintering technology is successfully employed in fabricating a double-side sintered power package, and the nanosilver sintering process is designed and employed in ceramic packaging.","nanosilver sintering; pressure; Shear strength; Nanoindentation; stress distribution","en","doctoral thesis","","978-91-6366-176-8","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:7aa8438c-6106-4c0f-a33f-0ceb8782ad23","http://resolver.tudelft.nl/uuid:7aa8438c-6106-4c0f-a33f-0ceb8782ad23","Photovoltaic Windows: Theories, Devices and Applications","Gao, Y. (TU Delft Photovoltaic Materials and Devices)","Zeman, M. (promotor); Zhang, Kouchi (promotor); Isabella, O. (copromotor); Delft University of Technology (degree granting institution)","2019","The current photovoltaic (PV) industrial chain mainly serves conventional utility-scale PV power stations. Rigid and opaque silicon-based PV modules have dominated the market so far. As distributed PV capacity expands, PV modules tend to be integrated with existing infrastructure (mostly buildings). To adapt to the building environment, innovative design is required from the cell level to the system level. This dissertation specifically deals with window-integrated photovoltaics. Two types of PV windows, those with opaque PV shading elements and those with semi-transparent PV (STPV) glazing, are explored in terms of the relevant performance aspects. A mathematical model of solar irradiance and a geometrical model of a reference office are built in Chapter 2. One-axis PV blinds and the total input power are modeled and analyzed with regard to annual power generation and glare protection. An optimal sun-tracking angle has been found that achieves both maximum power generation and glare-free daylighting. An optimal cell layout is also proposed to avoid shading from window frames. Compared with conventional quasi-perpendicular sun tracking, the proposed sun-tracking methods improve the annual energy generation by 12.00% and the annual average efficiency by 8.52%. In Chapter 3, PV shading elements with extra degrees of freedom (DOFs) are modeled and analyzed in a similar way as in Chapter 2. Two-DOF PV shading elements are proven to share the same optimal sun-tracking positions as one-axis PV blinds.
PV shading elements with three-DOF sun-tracking abilities are demonstrated to be capable of meeting all the requirements, i.e. gaining maximum power generation, protecting from glare, and avoiding shadows from the window frame. A corresponding variable-pivot three-DOF (VP-3-DOF) sun-tracking algorithm is given in the form of an analytical solution. Following the aforementioned two chapters, the overall energy performance of the reference office with one-axis PV blinds is analyzed over an entire year in Chapter 4. Simulations show that, using the optimal shade-free tracking method, the net energy consumption of the building, considering PV production, artificial lighting, heating and cooling, is reduced by 10.49% compared to the perpendicular tracking method. In Chapter 5, PV windows are applied to the skylights of Dutch greenhouses. Unlike the vertically mounted PV windows mentioned above, the greenhouse PV panels are installed on a pitched roof to regulate the sunlight for plants rather than for people. PV layouts in high and low densities are evaluated under four special sun-tracking positions with regard to power generation and interior irradiance. The simulation results provide guidelines for balancing PV power generation and food production in greenhouses. In Chapter 6, semi-transparent thin-film amorphous silicon solar cells are designed and fabricated for PV windows. Using the optical model GenPro4, we provide a simulation method to optimize the configuration of such solar cells. Based on the optimized results, we fabricate a single-junction amorphous silicon solar cell showing an average transmittance of 20.04% with a conversion efficiency of 6.94%. When additionally attached to a polymer-dispersed liquid crystal (PDLC) film, which can switch from an opaque to a transparent state within a second by applying an alternating-current (AC) voltage, the transmittance of the PV window can be further controlled.
A prototype house model containing the STPV-PDLC system has been built to demonstrate the feasibility of such a combination. Besides building applications, PV windows can be applied to any application that requires both light transmittance and power supply, such as electric vehicles, aircraft, billboards, and even mobile phones. The applications of PV windows could be beyond imagination.","photovoltaic windows; building-integrated photovoltaic; building energy; agrivoltaic; thin-film solar cells; smart windows","en","doctoral thesis","","978-94-6366-175-1","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:16e8c6f0-d2c4-47bd-b4e1-1564105c0a94","http://resolver.tudelft.nl/uuid:16e8c6f0-d2c4-47bd-b4e1-1564105c0a94","Imaging DNA nanostructures with advanced TEM techniques","Kabiri, Y. (TU Delft BN/Cees Dekker Lab)","Dekker, C. (promotor); Zandbergen, H.W. (promotor); Delft University of Technology (degree granting institution)","2019","The low contrast of biomolecules in TEM has been a great obstacle to their structure determination and hence to the understanding of their structure-function relation. Historically, single DNA strands have remained one of the most difficult classes of biomolecular specimens to image, due to the low electron-scattering strength of their constituent elements. The common practice was to image them either freely suspended (without any support) or shadow-imaged with the negative-staining technique. Those remedies are limited in their applicability to different DNA nanostructures and also pose difficulties in sample preparation. For example, making 2D DNA nanostructures freestanding would not be a viable solution for imaging them. This thesis provides a general study tackling the challenges in imaging nucleic acids with TEM...","transmission electron microscopy; graphene; DNA nanostructures","en","doctoral thesis","","978-90-8593-4080","","","","","","","","","BN/Cees Dekker Lab","","",""
"uuid:230fffcb-b313-4f9f-bec0-e619bc62b0d4","http://resolver.tudelft.nl/uuid:230fffcb-b313-4f9f-bec0-e619bc62b0d4","From micro-mechanisms of damage initiation to constitutive mechanical behavior of bainitic multiphase steels","Shakerifard, B. (TU Delft (OLD) MSE-3)","Kestens, L.A.I. (promotor); Galan Lopez, J. (promotor); Delft University of Technology (degree granting institution)","2019","Global warming, the continuing demand for energy from fossil fuels as a limited natural resource, and customers’ high expectations regarding product quality are three global challenges that the automotive industry is facing. Reducing fuel consumption has a significant impact on preserving fossil fuels, lowering fossil-fuel dependency and the CO2 emissions that result in global warming. Weight reduction of car bodies, the so-called Bodies-In-White (BIW), is one possible solution in which the automotive industry can invest. However, there are other parameters conflicting with weight reduction, such as passenger safety and formability, which need to be considered simultaneously...","steel; damage; dynamic properties; crystal plasticity","en","doctoral thesis","","978-94-028-1514-6","","","","","","2019-12-24","","","(OLD) MSE-3","","",""
"uuid:32a7a099-94c3-45f7-955b-b14628996011","http://resolver.tudelft.nl/uuid:32a7a099-94c3-45f7-955b-b14628996011","The nature of photoexcitations and carrier multiplication in low-dimensional semiconductors","Kulkarni, A. (TU Delft ChemE/Opto-electronic Materials)","Siebbeles, L.D.A. (promotor); Delft University of Technology (degree granting institution)","2019","The aim of this thesis is to study the nature of photoexcitations and carrier multiplication in low-dimensional semiconductors using ultrafast spectroscopy techniques. Going from a macroscopic scale to the nanoscale, the electronic and optical properties of a material become size dependent and differ significantly from their bulk counterparts due to quantum confinement and the surrounding dielectric environment. Electrons and holes in nano-semiconductors can attract each other to form neutral bound electron-hole pairs known as excitons. Stable, robust excitons are useful for achieving optical gain and lasing. The Coulomb interaction between electrons and holes in nano-semiconductors is enhanced due to quantum confinement, which can lead to the creation of multiple electron-hole pairs per single absorbed photon via a process called carrier multiplication (CM). CM is beneficial for achieving a power conversion efficiency in a solar cell beyond the Shockley-Queisser limit.","","en","doctoral thesis","","978-94-028-1559-7","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:6e705a6a-36d8-427c-8ee5-9e51f0ce41bc","http://resolver.tudelft.nl/uuid:6e705a6a-36d8-427c-8ee5-9e51f0ce41bc","High-Purity Digitally Intensive Frequency Synthesis Exploiting Millimeter-Wave Harmonics","Zong, Z. (TU Delft Electronics)","Staszewski, R.B. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis focuses on improving the phase noise and power efficiency of millimeter-wave (mm-wave) frequency synthesizers in nanometer CMOS. The mm-wave frequency spectrum is widely adopted in various upcoming high-volume commercial wireless applications. These new applications provide more interconnection between the physical and digital worlds. This entails a demand for high-speed data communications and accurate object sensing, which are enabled by the large bandwidth available at mm-wave frequencies. These systems also require a good signal-to-noise ratio (SNR) on mm-wave transceivers, which sets stringent phase noise specifications on the mm-wave frequency synthesizers. On the other hand, the power budget of the mm-wave frequency synthesizers is limited for long battery lifetime and/or thermal reliability. The low phase noise should therefore be achieved at high power efficiency. Advanced nanometer CMOS technologies are preferred for the integration of mm-wave frequency synthesizers. The scaled transistor size favors co-integration with baseband circuits and large-scale SoCs. The growing speed of the MOSFETs also extends the upper limits on the operating frequency of CMOS circuits. On the other hand, the performance of mm-wave frequency synthesizers suffers from various constraints and imperfections in nanometer CMOS technologies. For example, mm-wave oscillators are inferior in phase noise due to the low quality-factor LC tank and exacerbated flicker-noise upconversion. Mm-wave frequency dividers/multipliers are power hungry and limit the power efficiency of the frequency synthesizers. There is a clear gap in performance between mm-wave and RF frequency synthesizers.","","en","doctoral thesis","","978-94-6384-050-7","","","","","","","","","Electronics","","",""
"uuid:08556a7c-ef90-43f7-998b-f103ceea6267","http://resolver.tudelft.nl/uuid:08556a7c-ef90-43f7-998b-f103ceea6267","Migrating Target Detection in Wideband Radars","Petrov, N. (TU Delft Microwave Sensing, Signals & Systems)","le Chevalier, F. (promotor); Yarovoy, Alexander (promotor); Delft University of Technology (degree granting institution)","2019","Modern surveillance radars are designed to detect moving targets of interest in an adverse environment, which can encompass strong unwanted reflections from the ground or sea surface, clouds, precipitation, etc. Detection of weak and small moving targets in environmental clutter remains, however, a challenging task for existing radar systems. One of the main directions for modern radar performance improvement is the application of wideband high-resolution waveforms, which provide detailed range information about objects in the observed scene. Together with such inherent advantages of wideband waveforms as multi-path separation, clutter reduction and improved target classification, additional benefits can be obtained by exploiting target range migration (range walk), which is essential for fast moving targets in the high-resolution mode. This thesis aims at the development of novel signal processing techniques for migrating target detection in wideband radars. It involves both resolving range-velocity ambiguities and improving target discrimination from ground clutter by accounting for target range migration. It is demonstrated that wideband radars can resolve the range-velocity ambiguity by transmitting a single long pulse burst with a low pulse repetition frequency (PRF) and exploiting the target range-walk phenomenon during the burst. The ambiguity function of such a waveform still has strong residuals at the locations of the ambiguities, called ambiguous sidelobes, which have to be considered in the processing of wideband data.
The presence of ground clutter in the observation scene has a detrimental effect on the wideband radar performance. The impact of the clutter Doppler spectrum and waveform parameters on target detection at clutter ambiguities has been investigated. The improvement over the conventional waveform is demonstrated for narrow clutter Doppler spectrum; in the presence of clutter with a wide Doppler spectrum, the conventional staggered-PRF waveform is preferable. Performance degradation at ambiguous-to-clutter velocities is validated on real data sets. Modern high-resolution non-parametric spectrum estimators – IAA (Iterative Adaptive Approach) and SPICE (SParse Iterative Covariance-based Estimation) – are proposed for the reconstruction of the observed scene from wideband radar measurements with no velocity ambiguities. These algorithms demonstrate significant improvement in rejection of ambiguous sidelobes over the conventional techniques. For the clutter-limited case, the covariance-aware SPICE is introduced with improved capability to discriminate targets from clutter. The advantages of the proposed methods are demonstrated in numerical simulations and real data processing. The ambiguous sidelobes can cause severe problems for detection of multiple targets located at similar ranges. A dedicated detector for a dense target scenario has been introduced. It can detect multiple closely spaced targets and mitigate false detections due to their ambiguous sidelobes, holding the false alarm probability at the required level. The improvement over conventional processing is demonstrated. Special attention is then devoted to clutter suppression in the high range resolution mode. At meter or sub-meter range resolution, the observed ground clutter, modeled by a compound-Gaussian process, may have significant fluctuations over the range interval traversed by the target. An advanced detector for range-migrating targets in compound-Gaussian clutter is developed.
It performs two-dimensional clutter filtering – in Doppler frequency and in range – and benefits from clutter spatial diversity, obtained for a target passing over different patches of clutter. A significant improvement in the detection of fast moving targets in spiky clutter is achieved in comparison to existing methods. The attained gain depends on clutter characteristics and target velocity: fast moving targets are easier to detect than slow ones with equal signal-to-clutter ratio. A generalized approach for the detection of range-extended migrating targets is provided. The performed research provides fundamental insights for the implementation of new radar architectures utilizing wideband waveforms.","","en","doctoral thesis","","978-94-028-1578-8","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:59434a52-d848-4e9d-9a86-1a9414737125","http://resolver.tudelft.nl/uuid:59434a52-d848-4e9d-9a86-1a9414737125","Design touch matters: Bending and stretching the potentials of smart material composites","Barati, B. (TU Delft Emerging Materials)","Hekkert, P.P.M. (promotor); Karana, E. (copromotor); Delft University of Technology (degree granting institution)","2019","In the past decade, the interest in collaborative materials development projects with designers and materials scientists has gained momentum. Designers’ involvement in early materials development is expected to inform the development process about the potentials of a new material, beyond the values of efficiency and convenience. This paper-based PhD thesis is an attempt to understand what design can do for materials development through studying and questioning current practice. The research has evolved in the specific context of the EU project, Light.Touch.Matters (LTM) that put into practice a proposed methodology for organizing such collaborative projects. The LTM project and its organization set the departure point for further investigations into the new design situation.","","en","doctoral thesis","","978-94-028-1577-1","","","","","","","","","Emerging Materials","","",""
"uuid:b5953a17-322d-49e6-87ba-e299673e8b84","http://resolver.tudelft.nl/uuid:b5953a17-322d-49e6-87ba-e299673e8b84","Evidence-based development and evaluation of haptic interfaces for manual control","Fu, W. (TU Delft Control & Simulation)","van Paassen, M.M. (promotor); Mulder, Max (promotor); Delft University of Technology (degree granting institution)","2019","At present, the rapid development of automation technologies allows robots remarkable precision and endurance, as well as the strength in accomplishing repetitive tasks. Despite this, manual control is still indispensable in many domains where robots and humans play complementary roles, as humans demonstrate superior competence in improvisation and flexibility, as well as the excellent ability to take on tasks where things cannot be fully specified. Haptic interfaces provide a prime example which combines the strengths of these two elements, allowing them to interact and merge into a highly integrated control loop. A haptic interface is usually created by providing force feedback related to the task on a control device. The haptic feedback makes performing manual control more intuitive, allowing the operator to physically act upon what (s)he feels, rather than generating the control activity through only interpreting other sensory inputs, such as visual and auditory cues. Over the last few decades, haptic interfaces have gained popularity as being powerful tools to facilitate manual control. By analogy with a visual interface, one can interpret a haptic interface as the display that presents information to and accepts commands from a human operator. While giving input through the interface, the neuromuscular system of the operator also acts as the eye that perceives the information being presented by a display. This highly interactive nature underlines the importance of orienting the development of all haptic systems towards humans, particularly towards what humans feel and how they need to act. 
To facilitate future development of haptic interfaces, this thesis focuses on two of the main challenges that have not been adequately addressed from such a human-centric perspective: (i) among various possibilities, how can we select the one that works more effectively with humans, i.e., using understanding of human control behavior (how humans act) to guide the development of the philosophy of the design?, and (ii) how can we know whether a device ensures a transparent haptic interaction, i.e., incorporating the characteristics of human haptic perception (what humans feel) into the evaluation of the quality of the display? ...","Haptic interface; haptic perception; manual control behavior; neuromuscular system; mechanical properties; mass-spring-damper systems; haptic display transparency","en","doctoral thesis","","978-94-6366-177-5","","","","","","","","","Control & Simulation","","",""
"uuid:43d7992a-7077-47ba-b38f-113f5011d07f","http://resolver.tudelft.nl/uuid:43d7992a-7077-47ba-b38f-113f5011d07f","Declarative Syntax Definition for Modern Language Workbenches","de Souza Amorim, L.E. (TU Delft Programming Languages)","Visser, Eelco (promotor); Erdweg, S.T. (promotor); Delft University of Technology (degree granting institution)","2019","Programming languages are one of the key components of computer science,
allowing programmers to control, define, and change the behaviour of computer
systems. However, programming languages require considerable effort to design, implement, and maintain. Fortunately, declarative approaches can be used to define programming languages, facilitating their development and implementation.
Commonly, the first step to define a programming language consists of specifying its syntax. Syntax definition formalisms are based on grammars, defining rules that specify the words that belong to a language and how these words must be structured to construct valid programs. Grammars are multipurpose, i.e., they provide an understandable source of documentation,
and can also be used to derive language implementations. Language workbenches assist language engineers to develop and prototype programming languages by deriving syntactic services from a syntax definition formalism.
Many challenging problems still exist when using declarative syntax definitions
in a language workbench. To enable truly declarative syntax definitions, parsers and other tools must support grammars in their natural form, i.e., they must be able to handle ambiguous grammars. Parsers that support ambiguous grammars lack a clear semantics for disambiguation, restricting their parsing performance and the languages they can successfully implement. Complementary to parsing, editor services such as pretty-printing and code completion, often need to be implemented by hand, increasing the cost of maintaining and evolving a language.
Our goal is to use declarative syntax definitions to effectively define the syntax of programming languages and generate efficient tools. To address the above problems, we propose a new semantics for disambiguating context-free grammars, particularly the subset of grammars that define expressions. We study how often these ambiguities occur in real programs, showing the need for efficient disambiguation. Finally, we implement this semantics, generating a parser that performs disambiguation with near-zero performance overhead.
Moreover, we develop a technique to automatically derive parsers and pretty printers for layout-sensitive languages from the syntax definition. By enabling the declarative specification of layout-sensitive languages, we tackle important issues, including usability, performance, and tool support, which prevent the adoption of these languages in tools such as language workbenches.
Finally, we propose a principled approach to derive syntactic code completion from the syntax definition. The current implementation of completion services is often ad-hoc, unsound, and incomplete. By using a principled approach, we are able to reason about soundness and completeness of code completion, opening up a path to richer editing services in language implementations.","syntax definition formalisms; parsing; Language Workbenches","en","doctoral thesis","","978-94-6366-171-3","","","","","","","","","Programming Languages","","",""
"uuid:ed0af513-7621-4007-9a34-1a3e17370952","http://resolver.tudelft.nl/uuid:ed0af513-7621-4007-9a34-1a3e17370952","Building blocks of quantum repeater networks","Rozpedek, F.D. (TU Delft QID/Wehner Group)","Wehner, S.D.C. (promotor); Hanson, R. (copromotor); Delft University of Technology (degree granting institution)","2019","","Quantum information; Quantum networks; Quantum communication; Quantum repeater; Quantum key distribution; Quantum entanglement; Uncertainty principle","en","doctoral thesis","","978-94-6384-043-9","","","","","","","","","QID/Wehner Group","","",""
"uuid:97f225ec-4a10-4e10-a36e-576876fd3887","http://resolver.tudelft.nl/uuid:97f225ec-4a10-4e10-a36e-576876fd3887","Gyroscopic Assistance for Human Balance","Lemus Perez, D.S. (TU Delft Biomechatronics & Human-Machine Control)","Vallery, H. (promotor); van der Helm, F.C.T. (promotor); Delft University of Technology (degree granting institution)","2019","Over the past few decades, there has been an increasing trend in the development of wearable robotics for rehabilitation and human augmentation. Although most such devices have been envisioned and realized to extend human capabilities, they do not primarily target balance control. For a wide range of physiotherapy recipients, impaired balance, rather than a lack of muscle strength, is the main impediment to functional recovery. Recently I proposed and realized a novel wearable robotic device, the GyBAR, that is capable of assisting balance during standing and walking without obstructing the lower extremities; this is achieved by exerting free torques on the upper body with a gyroscopic actuator that is worn like a backpack. This thesis presents a study into the feasibility of control moment gyroscopes (CMGs) as wearable devices for balance assistance in human beings. Here I identify and focus on sensing, actuation and control as the three main components of the GyBAR.","CMG; Balance control; Rehabilitation devices","en","doctoral thesis","","978-94-6384-049-1","","","","","","2020-03-31","","","Biomechatronics & Human-Machine Control","","",""
"uuid:8a8ab38e-ac7a-4e73-afaa-4de6b8f0b429","http://resolver.tudelft.nl/uuid:8a8ab38e-ac7a-4e73-afaa-4de6b8f0b429","Determining Mexican climate-adaptive environmental flows reference values for people and nature: A hydrology-based approach for preventive environmental water allocation","Salinas Rodriquez, Sergio (TU Delft Water Resources; TU Delft Water Management)","van de Giesen, N.C. (promotor); McClain, M.E. (promotor); Delft University of Technology (degree granting institution)","2019","Environmental flows (e-flows) science has significantly advanced in the last decades. In Mexico, a standard for e-flow assessments was recently published as a regulatory instrument to support water planning and management. However, the appropriateness of the technical procedure in a climate change context has not been investigated. Do the e-flows cope with the non-stationary challenge of the flow regime and the water availability shifts in the long term? This thesis aimed to determine Mexican climate-adaptive e-flows reference values for people and nature. The research was based on state-of-the-art environmental water science and practice, and on the current national standard for conducting desktop and on-site assessments (Chapter 2). A novel frequency-of-occurrence approach for assessing e-flows and integrating regimes into volumes for water allocation was developed for perennial rivers (Chapter 3), and adjusted for intermittent and ephemeral streams. This was based on the magnitude of the contribution of hydrological wet, average, dry and very dry low flow conditions (inter-annual and seasonal variability), as well as a flood regime per stream type (Chapters 4 and 5). River discharge, basin rainfall trends, and e-flow regimes were examined in a set of 40 study cases selected according to climate, geography and hydrology representativeness. 
Hydrology-based likely environmental reserve volumes for preventive water allocation, expressed as a percentage of the mean annual runoff, were obtained based on a central range distribution approach. The performance assessment of these reference values demonstrated that the impact on water availability for allocating such volumes is no different from the current method (baseline), though significantly improved in avoiding under- and over-estimations.","Flow regime; Inter-annual & seasonal variability; Environmental flows; Environmental water reserve; Hydrology-based desktop approach","en","doctoral thesis","","978-94-028-1561-0","","","","","","","","Water Management","Water Resources","","",""
"uuid:f80750ee-db68-480e-8c58-2c167bd24ee5","http://resolver.tudelft.nl/uuid:f80750ee-db68-480e-8c58-2c167bd24ee5","Tools for Developing Cognitive Agents","Koeman, V.J. (TU Delft Interactive Intelligence)","Hindriks, K.V. (promotor); Jonker, C.M. (promotor); Delft University of Technology (degree granting institution)","2019","Agent-oriented programming (AOP) is a programming paradigm introduced roughly thirty years ago as an approach to problems in Artificial Intelligence (AI). An agent is a piece of software that can perceive its environment (e.g., through sensors) and act upon that environment (e.g., through actuators). A cognitive agent is a specific type of agent that executes a decision cycle in which it processes events and selects actions based on cognitive notions such as beliefs and goals. Often, multiple agents are used, which is referred to as a multi-agent system (MAS). MAS is generally advertised as an approach to handling problems that require multiple problem solving methods, multiple perspectives, and multiple problem solving entities. Tools and techniques for the programming of cognitive agents need to be based on the underlying agent-oriented paradigm, which is a significant challenge, as unlike more traditional paradigms, they should for example take into account that agents execute a specific decision cycle and operate in non-deterministic environments. Therefore, in this thesis, we take existing AOP theories a step further by designing tools for the development of cognitive agent programs with an explicit focus on usability. Each development tool we propose is extensively evaluated on hundreds of (novice) agent programmers. In the context of AOP, the process of detecting, locating and correcting mistakes in a computer program, known as debugging, is particularly challenging. As a large part of the effort of a programmer consists of debugging a program, efficient debugging is an essential factor for both productivity and program quality.
In this thesis, we contribute both to the process of locating mistakes in agent programs as well as the process of identifying misbehaviour of an agent in the first place...","","en","doctoral thesis","","9789463661676","","","","","","","","","Interactive Intelligence","","",""
"uuid:49763197-fec0-49e6-a496-6ac0068585db","http://resolver.tudelft.nl/uuid:49763197-fec0-49e6-a496-6ac0068585db","The Effect of Oil on Foam for Enhanced Oil Recovery: Theory and Measurements","Tang, J. (TU Delft Reservoir Engineering)","Rossen, W.R. (copromotor); Delft University of Technology (degree granting institution)","2019","Foam has a unique microstructure in pore networks and reduces gas mobility significantly, which considerably improves the sweep efficiency of gas injection. Foam injection is thus regarded as a promising enhanced oil recovery (EOR) technology. One key to the success of foam EOR is foam stability in the presence of oil. This thesis seeks to understand fundamentally both steady-state and transient foam flow with oil in porous media through theoretical analysis and coreflood measurements. A quantitative modeling study is conducted to illustrate how the two algorithms (""wet-foam"" model and ""dry-out"" model) represent the effect of oil on foam. Experimental observations evidently show that the two foam regimes without oil also apply to foam with oil, i.e. high- and low-quality regimes depending on foam quality. Oil affects both regimes, with a stronger effect on the high-quality regime. Model fitting to data shows that currently applied implicit-texture (IT) foam models are suitable to represent foam flow with oil; both the wet-foam model and the dry-out model are needed to describe the effect of the oil on the two foam regimes. Three-phase fractional-flow theory together with the wave curve method (WCM) is applied to understand foam displacements with oil. Theoretical solutions suggest a foam displacement cannot build up an oil bank with oil saturation greater than an upper limit for stable foam. A critical phenomenon, i.e. that some injection conditions correspond to more than one possible foam state as predicted by the IT model, has been analyzed with fractional-flow theory and the WCM.
We show how to determine the unique displacing state; the choice of the displacing state depends on the initial state. Fundamentally, a boundary curve in ternary saturation space is defined that captures the nature of the dependence of the displacement on the initial state. In addition, a new capillary number definition for micromodels is derived from a force balance on a ganglion trapped by capillarity. The definition in particular accounts for the impact of pore geometry, and its validity is verified using two-phase flow data in micromodels. Based on current findings, some open questions concerning foam-oil interactions in porous media are defined and summarized at the end of this thesis.","Enhanced oil recovery; Foam flow in porous media with oil; Coreflood study; Implicit-texture modeling; Fractional-flow theory; Capillary number for micromodels","en","doctoral thesis","","978-94-6366-179-9","","","","","","","","","Reservoir Engineering","","",""
"uuid:0777f2f6-55fb-4670-b3ed-2318070cc7e7","http://resolver.tudelft.nl/uuid:0777f2f6-55fb-4670-b3ed-2318070cc7e7","Clay-laden subaqueous gravity flows: Flow structures, deposits, and run-out distance","Hermidas, N. (TU Delft Applied Geology)","Luthi, S.M. (promotor); Eggenhuisen, Joris T. (promotor); Delft University of Technology (degree granting institution)","2019","Submarine gravity flows constitute the last link in the source-to-sink sediment transport chain. They are the main mechanism for the transportation of sediment from the shallower to the deeper parts of the ocean. Due to their great volume, mobility, and power, they pose a formidable threat to offshore infrastructure, and can generate tsunamis which can result in human mortality and cause great damage to onshore structures. In addition, deposits of ancient submarine gravity flows host many hydrocarbon reservoirs. The quality of these reservoirs is primarily controlled by the grain size and the clay concentration of the flows that deposited the sediments. Due to the growing population and rise in the per capita energy consumption, connecting the dynamics of clay-laden density flows to their depositional characteristics has become important for oil and gas exploration purposes. The principal questions that were investigated in this study were: (1) How are the dynamics of subaqueous gravity flows related to their deposits?, and, (2) Why are these flows able to travel so far? ...","gravity flows; thixotropy; clay suspension; viscosity bifurcation; run-out distance; flow structures; debris flows","en","doctoral thesis","","978-94-6323-690-4","","","","","","","","","Applied Geology","","",""
"uuid:9397d964-1674-4838-a13a-504742dba55e","http://resolver.tudelft.nl/uuid:9397d964-1674-4838-a13a-504742dba55e","Wave attenuation in coastal mangroves: Mangrove squeeze in the Mekong Delta","Phan Khanh, L. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Aarninkhof, S.G.J. (promotor); Zijlema, Marcel (copromotor); Delft University of Technology (degree granting institution)","2019","This study explores the influence of the wave characteristics on the attenuation process of waves through coastal mangroves, which are threatened by the coastal mangrove squeeze phenomenon. Coastal mangrove squeeze is the phenomenon where coastal regions, even when sediment availability is sufficient, are eroding due to a lack of accommodation space caused by the land use on the landside and by sea level rise on the waterside. Along the Mekong Delta Coast, only a narrow strip of mangroves of less than 140 m is left at the locations where strong erosion of up to 100 m/yr is observed. Furthermore, observations along the south-eastern and eastern coasts of the Mekong Delta indicate that a mangrove width ranging from approximately 30 m to 260 m, with 140 m on average, appears to be stable. Therefore, a hypothesis regarding coastal mangrove squeeze is proposed based on the empirical relationship between mangrove forest width and coastline evolution. This hypothesis states that a minimum width of coastal mangroves is required for a sustainable development of the mangrove forest...","Coastal mangroves; Coastal squeeze; Wave attenuation; Erosion; Laboratory experiment; Numerical modeling","en","doctoral thesis","","978-94-6384-045-3","","","","","","","","","Coastal Engineering","","",""
"uuid:aa1e9f1e-456b-4ac9-b541-991d7f6baaa6","http://resolver.tudelft.nl/uuid:aa1e9f1e-456b-4ac9-b541-991d7f6baaa6","Optomechanical devices in the quantum regime","Marinkovic, I.","Groeblacher, S. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis explores the possibility of controlling the quantum states of high frequency mechanical resonators using infra-red laser pulses. Chapter 1 gives an overview of quantum technologies based on mechanical resonators relevant for this thesis. Chapter 2 provides the basics of the theoretical background of optomechanics. The Hamiltonian that describes the interaction between the moving mirror and electromagnetic waves will be described. The second part of the chapter will deal with methods borrowed from quantum optics and used to demonstrate non-classical behaviour of a mechanical resonator. Chapter 3 presents a physical implementation of the optomechanical Hamiltonian in the form of silicon nanobeam devices. Additional hardware that needs to be integrated with optomechanical devices in order to perform experiments will be described. This includes optical waveguides and fibers used for coupling light into the cavity. We will start the discussion on how effects beyond the simple optomechanics model impact these devices. Chapter 4 describes methods used to microfabricate nanobeam devices on a chip. This chapter aims to give tips and hints toward the successful fabrication of optomechanical devices. Chapter 5 presents the results of an experiment demonstrating non-classical behaviour of a single optomechanical device. We will use a heralding scheme to prepare the non-classical state of the mechanical resonator and use optical detection to confirm its non-classicality. Chapter 6 describes the measurement of a Bell inequality between two optical and two mechanical modes.
Chapter 7 is the conclusion chapter, where I discuss the results of the experiments presented, as well as the potential directions of future experiments and how control over the quantum state of mechanical resonators can be improved.","cavity optomechanics; optomechanical crystals; interferometry; Bell inequality","en","doctoral thesis","","978-90-8593-401-1","","","","","","","","","QN/Groeblacher Lab","","",""
"uuid:af94d535-1853-4a6c-8b3f-77c98a52346a","http://resolver.tudelft.nl/uuid:af94d535-1853-4a6c-8b3f-77c98a52346a","Open Aircraft Performance Modeling: Based on an Analysis of Aircraft Surveillance Data","Sun, Junzi (TU Delft Control & Simulation)","Hoekstra, J.M. (promotor); Ellerbroek, Joost (copromotor); Delft University of Technology (degree granting institution)","2019","A large number of stakeholders exist in the modern air traffic management ecosystem. Air transportation studies benefit from collaboration and the sharing of knowledge and findings between these different players. However, not all parties have equal access to information. Due to the lack of open-source tools and models, it is not always possible to undertake comparative studies and to repeat experiments. The barriers to accessing proprietary tools and models create major limitations in the field of air traffic management research. This dissertation investigates the methods necessary to construct an aircraft performance model based on open data, which can be used freely and redistributed without restrictions. The primary data source presented in this dissertation is aircraft surveillance data that can be intercepted openly with little to no restriction in most regions of the world. The eleven chapters in this dissertation follow the sequence of open data, open models, and performance estimations. This order corresponds to the three main parts of the dissertation. In the first part of the dissertation, open surveillance data is explored. Methods are developed to decode and process this data. Extraction of information is also made possible thanks to machine learning algorithms. The second part of the dissertation examines the main components of the open aircraft performance model. Models related to kinematics, thrust, drag polar, fuel flow, and weather are investigated. 
The third part of the dissertation looks into the possibility of using surveillance data to estimate aircraft performance parameters, for example, aircraft turn performance, aircraft mass, and thrust settings, for individual flights. With the goal of making future air traffic management studies more transparent, comparable, and reproducible, the models and tools proposed in this dissertation are fully open. The final aircraft performance model, OpenAP, proposed in this dissertation has proven to be an efficient open alternative to current closed-source models.","Aircraft Performance; Air Traffic Management; ADS-B; Drag Polar; Dynamic Model; Engine Fuel Flow; Kinematic Model; Meteo-Particle; Mode-S; Open Data; State Estimation; Thrust","en","doctoral thesis","","978-94-6384-030-9","","","","","","2019-12-09","","","Control & Simulation","","",""
"uuid:c5af67c1-a665-49df-8b23-0f16d7185fa3","http://resolver.tudelft.nl/uuid:c5af67c1-a665-49df-8b23-0f16d7185fa3","Integrating a Photovoltaic Panel and a Battery Pack in One Module: From concept to prototype","Vega Garita, V.E. (TU Delft DC systems, Energy conversion & Storage)","Bauer, P. (promotor); Zeman, M. (promotor); Delft University of Technology (degree granting institution)","2019","Photovoltaic (PV) solar energy is variable and not completely predictable; therefore, different energy storage devices have been researched. Among the variety of options, electrochemical cells (commonly called batteries) are technically feasible because of their maturity and stability. However, PV-battery systems face multiple challenges such as high cost and complexity of installation. Cost is the main concern when trying to enable new solutions for the solar market, especially when competing with other renewable technologies, but most importantly, with fossil fuels to reduce the effects of climate change. As a consequence, a new concept that integrates all the components of a PV-battery system in a single device is introduced. By integrating a power electronics unit and a battery pack at the back of a PV panel, referred to as the PV-battery Integrated Module (PBIM), the cost of the total system can decrease and become a viable alternative for the solar market. Because the concept is relatively new and not all the challenges have been previously addressed, this dissertation strives to prove the feasibility of the concept and to fill the gaps that have been identified in the literature review. Firstly, an off-grid PV-battery system was selected, and a sizing methodology was proposed to investigate the limitations and boundaries of the integrated device. Having sized the system, the thesis explored the implementation of an energy management system in order to smartly control the direction and magnitude of the power delivery.
Then, a thermal model was developed to characterize the thermal response of the PBIM and to recommend a thermal management system to decrease the operating temperature of the battery pack and power electronics. Finally, by testing a PBIM prototype and developing an integrated model that reproduces the expected temperature and power flows, a battery testing methodology was developed for finding a suitable battery technology that can comply with the requirements set by the expected operating conditions of the device. Therefore, the research carried out in this dissertation proves that the integration of a PV-battery system in one device is technically feasible.","Photovoltaic energy; batteries; integration; thermal management; battery testing","en","doctoral thesis","","978-94-6366-170-6","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:aa2a287e-e55f-4af5-be23-f0bdb7188512","http://resolver.tudelft.nl/uuid:aa2a287e-e55f-4af5-be23-f0bdb7188512","Preparation of polyelectrolyte-coated proteins for controlled drug delivery via supercritical fluid processing","Yu, M. (TU Delft BT/Environmental Biotechnology)","Witkamp, G.J. (promotor); Jiskoot, W (promotor); Delft University of Technology (degree granting institution)","2019","During the past few decades, numerous protein-based pharmaceuticals to treat chronic and life-threatening diseases have emerged. The short plasma half-life of therapeutic proteins requires frequent administration, usually via parenteral routes. This shortcoming is proposed to be solved by the development of an injectable microparticulate drug delivery system (DDS) in which the proteins are encapsulated to control the release of the drugs after administration. One way of preparing a protein DDS is through the interaction of proteins and biocompatible coating materials, where the coating materials hinder the quick degradation and release of the proteins...","","en","doctoral thesis","","9789463237017","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:9d6f4c39-2439-46cc-bf1a-ba42674bd101","http://resolver.tudelft.nl/uuid:9d6f4c39-2439-46cc-bf1a-ba42674bd101","Mobilizing Young Researchers, Citizen Scientists, and Mobile Technology to Close Water Data Gaps: Methods Development and Initial Results in the Kathmandu Valley, Nepal","Davids, J.C. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); Bogaard, T.A. (promotor); Delft University of Technology (degree granting institution)","2019","Data gaps as educational opportunities - mobilizing young researchers, citizen scientists, and mobile technology in data- and resource-scarce areas. This dissertation chronicles these themes through the lessons learned along the fledgling journey of SmartPhones4Water (S4W) and S4W-Nepal, from inception through the first few years of implementation. S4W mobilizes young researchers and citizen scientists with simple field data collection methods, low-cost sensors, and a common mobile data collection platform that can be standardized and scaled. S4W's ultimate goal is to improve lives by strengthening our understanding and management of water. If thoughtfully done, this process of filling data and knowledge gaps in data- and resource-scarce regions can also serve to improve the quality and applicability of young researchers' and citizen scientists' education. S4W’s first pilot project, S4W-Nepal, initially concentrated on the Kathmandu Valley, and is now expanding into other regions of the country. S4W-Nepal facilitates ongoing monitoring of precipitation, stream and groundwater levels and quality, and freshwater biodiversity, as well as several short-term measurement campaigns focused on monsoon precipitation, land use changes, stone spout flow and quality, streamflow, and stream-aquifer interactions. 
This research contains both methodological components that investigate novel methods for generating hydrometeorological data (Chapters 2 through 4) and initial applications of these methods to answer specific science questions (Chapters 5 and 6).","Citizen Science; Young Researchers; Mobile Technology; Nepal; Kathmandu Valley; Smartphones; Open Data Kit (ODK); SmartPhones4Water; S4W; S4W-Nepal; Subsampling; Streamflow; Salt Dilution; Precipitation; Land-use; Water Quality; Stream-Aquifer Interactions","en","doctoral thesis","","978-94-028-1547-4","","","","","","","","","Water Resources","","",""
"uuid:72cb7195-8d71-499f-96de-2cddad15ce76","http://resolver.tudelft.nl/uuid:72cb7195-8d71-499f-96de-2cddad15ce76","The Spatial Dimension of Household Energy Consumption","Mashhoodi, B. (TU Delft OLD Urban Compositions)","van Timmeren, A. (promotor); Stead, D. (copromotor); Delft University of Technology (degree granting institution)","2019","The vast majority of previous studies on household energy consumption (HEC) have presumed that the influencing factors of HEC are similar in each and every location, regardless of location-specific circumstances. In other words, they assume that some generalizable facts explain the level of HEC and energy poverty across all areas of a city, country, region, and/or continent. At the national scale, the Third National Energy Efficiency Action Plan for the Netherlands has introduced a variety of policy measures and incentives for the reduction of HEC, among them an energy tax, a reduced VAT rate on the labour cost of dwelling renovation, and an energy-saving agreement for the rental sector. Furthermore, the policy document emphasises that the geographic scope of all policy measures is “the Netherlands”. In this respect, the Third National Energy Efficiency Action Plan for the Netherlands introduces an identical set of measures and instruments for all areas of the Netherlands, regardless of their location-specific circumstances. The objective of this thesis is to examine the validity of this presumption through five different studies, four of which have been published in scientific journals and one of which has been accepted for publication. To do so, the impacts of a variety of determinants of HEC in Dutch neighbourhoods are studied and compared. The results of the studies show whether the impact of each determinant is spatially homogeneous (i.e. similar across all neighbourhoods in question) or spatially heterogeneous (i.e. varying from one neighbourhood to another).","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-181-2","","","","A+BE | Architecture and the Built Environment No 5 (2019)","","","","","OLD Urban Compositions","","",""
"uuid:886b99da-bd19-4a1a-b8b7-dcfe39cb863f","http://resolver.tudelft.nl/uuid:886b99da-bd19-4a1a-b8b7-dcfe39cb863f","Monitoring and operation of ozonation-BAC filtration","Ross, P.S. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); Delft University of Technology (degree granting institution)","2019","The primary goal of a drinking water company is to produce safe drinking water that meets the quality standards defined in national and international guidelines. Depending on the source water, one or multiple treatment steps are required to produce safe drinking water. Generally, drinking water treatment plants (WTP) are very robust and over-sized and, as a result, have not required stringent advanced control based on the incoming water quality. However, water companies are facing increased challenges due to changes in feed water qualities and increased (micro-)pollution loads. When the source water originates from surface water, an extensive treatment system is required, since surface water can be characterized by seasonal variations influenced by temperature, algae blooms, rainfall run-off containing pathogens, solids and pesticides, environmental spills upstream and, more recently, the increased threat of endocrine-disrupting compounds. This continuously changing source water requires improved monitoring and operation of the WTP, anticipating disturbances in the process. THESIS OBJECTIVE: Improved monitoring and operation of ozonation and biological activated carbon (BAC) filtration for the removal of pathogens and organic matter, for the optimisation of drinking water production from surface water. Ozonation and BAC filtration processes are susceptible to changes in the feed water quality. In addition, these processes have several control options, and interactions between the two processes exist. 
The main objective for ozonation is disinfection and oxidation of organic matter, which results in an increase in the biodegradability of the natural organic matter (NOM). The main objective for BAC filtration is the removal of organic micropollutants and biodegradation of NOM to ensure the production of biologically safe and stable drinking water. In this thesis, pilot plant research was carried out at Waternet, the water cycle company for Amsterdam and surrounding areas, location Weesperkarspel, the Netherlands. Easily assimilable organic carbon (AOC) is frequently used for the assessment of biological stability of drinking water, which is an important consideration in the control of bacterial growth in distribution networks. The first AOC bioassay was developed in 1982 and is based on growth of two bacterial strains (Pseudomonas fluorescens P17 and Spirillum spp. NOX) in drinking water relative to their growth on acetate. Since the original method was developed, several new methods for the determination of AOC have been published with the aim of being faster, more reliable and cheaper. Application of these assays raises legitimate questions about the comparison of AOC data from different studies. In this thesis, a round-robin test was performed to evaluate the correlation between three established AOC methods. A total of 14 water samples covering a wide range of AOC concentrations were analysed with the original “van der Kooij” method, the “Werner & Hambsch” method and the “Eawag” method. Good correlations were found between AOC concentrations measured with the various methods. The data suggest an acceptable compatibility between different AOC methods, although deviations between the methods call for careful interpretation and reporting of AOC data. The results from the round-robin test emphasized the need to understand which measurement method was used to obtain the reported concentration, since the method gives insight into the actual meaning of the results. 
Sampling of the drinking water is carried out on a regular (almost daily) basis, to ensure the produced drinking water meets the quality standards. There is a trade-off between having a high probability of detecting a deviation and minimizing the measuring effort. In order to determine which measurements should be put in place, a seven-step design methodology was developed in this thesis. This enabled the determination of the required water quality monitoring strategy around ozonation and BAC filtration. It was shown how the previous on-line monitoring program of the treatment plant Weesperkarspel was optimised. Evaluation of on-line water quality sensors showed that the parameters typically measured to show compliance with the WHO standards were commonly available. Direct measurements of more complex parameters, such as AOC and bromate, were not available on-line. It was shown that real-time information on the actual Ct value and the bromate and AOC concentrations was necessary for continuous optimization of the applied ozone dosage. To address this gap, algorithms were developed for the on-line estimation of the Ct value and the formation of bromate and AOC during ozonation, based on the measured change in the UV-Vis spectrum before and after ozonation. It was shown that these algorithms allow for the calculation of the optimal ozone dosage and provide a reliable indication of the amount of bromate and AOC formed during ozonation. Besides using these soft-sensors as surrogate sensors for parameters currently not available on-line, they also provide a cost-effective alternative when used to determine multiple parameters through one single instrument. BAC filters are frequently used in the production of drinking water for the removal of organic micropollutants and organic matter, especially when produced from (humic-rich) surface water. 
Differences in filter feed water quality are the result of differences in pre-treatment steps (coagulation/flocculation, ozonation and phosphate addition) commonly applied in the production of drinking water. Understanding how BAC filters react to a change in feed water quality helps to identify where the focus in operating the BAC filters should lie. In this thesis, the immediate response of the BAC filters to a rapid change in feed water quality was investigated, as well as the long-term effects. The immediate response showed that all filters were able to mitigate a sudden change in feed water quality, either through improved adsorption or increased activity of the biomass on the filter. As a result of this resilience against sudden changes, it was concluded that there is no direct need for very stringent on-line monitoring and continuous adjustments of the feed water quality of the BAC filters. Only the pressure drop and the pH and oxygen concentration in the effluent should be measured. The long-term effects of changes in feed water quality were compared to previously published research and confirmed the need for sufficient nutrients (readily available carbon and phosphate) in the feed water for optimal performance. The addition of phosphate resulted in the lowest dissolved organic carbon (DOC) concentration in the effluent of the BAC filters. In this study, the influence of intact cells in the feed water on the performance of the BAC filters was shown to be limited. One parameter in the BAC filters that requires enhanced control and understanding is the clogging of the filters, especially when the water temperature starts to increase in spring and summer. In the BAC filters, clogging takes place as a result of the physical, chemical and biological processes occurring in the filters. A simplified model based on retention and mass balances was developed to predict the filter run times. 
Results showed that clogging in the BAC filters was a combination of chemical and biological mechanisms. Even small concentrations of AOC resulted in the development of a biofilm. This biofilm caused the formation of a cake layer and accounted for the majority of the pressure build-up in the BAC filters. The simplified model was able to accurately predict the head loss development. The model subsequently allowed for optimising the process control of the full-scale BAC filters in terms of predicting the pressure drop and subsequently optimising the backwash sequence, saving 32% backwash water. In this thesis, it was shown that the current monitoring strategy of ozonation-BAC filtration could be improved through implementation of the established design methodology in combination with the developed algorithms allowing for on-line estimation of AOC, bromate and the Ct value around ozonation, based on the measured change in the UV-Vis spectrum. The operation of BAC filters could be improved through a better understanding of the direct response of the BAC filters to a change in feed water quality and the use of simplified models to optimise the operational strategy around backwashing of BAC filters. At Waternet, location Weesperkarspel, some of the findings from this and previous studies have been taken and tested at full scale. As a result, on-line monitoring of ozonation was extended with UV-Vis sensors before and after ozonation and with ozone-in-water sensors in the influent of the first ozonation contact chambers.","","en","doctoral thesis","","978-94-6323-655-3","","","","","","","","","Sanitary Engineering","","",""
"uuid:0bca20cc-98b8-49a0-b9af-94656f73ebd5","http://resolver.tudelft.nl/uuid:0bca20cc-98b8-49a0-b9af-94656f73ebd5","Dynamic Security Region Assessment","Oluic, M. (TU Delft Energie and Industrie)","Ghandhari, M (promotor); Herder, P.M. (promotor); Delft University of Technology (degree granting institution); KTH Royal Institute of Technology (degree granting institution); Comillas Pontifical University (degree granting institution)","2019","Among a wide variety of topics that are covered by Dynamic Security Assessment (DSA), maintaining synchronous operation and acceptable voltage profiles stand out. These two stability categories are mostly jeopardized in the seconds after a large contingency occurs. Therefore, this thesis tackles the two aspects of large-disturbance stability of power systems on the short-term time scale. The classical DSA methods deal with the short-term loss of synchronism by analyzing one operating point and one contingency at a time. However, a small change in operating point may turn a stable system unstable. The first part of the thesis addresses this gap by proposing the idea of parametrizing the stability boundary. The newly introduced method constructs the parametrized security boundaries in polynomial forms based on a reduced amount of Time Domain Simulation (TDS) data. Such a method retains the positive traits of TDS while being able to estimate a measure of stability even for those points that do not belong to the “training” set. The polynomial coefficients are further improved via SIME parametrization, which has a physical meaning. Finally, when subjected to a constraint by means of Quadratic Programming (QP), SIME parametrization also becomes competitive with direct methods in the sense of conservativeness. Nevertheless, if TDS fails, any TDS-based DSA approach is useless. Most often, the dynamics of the non-linear power system are described by a set of Differential Algebraic Equations (DAE). 
TDS can face problems when the DAE model experiences singularity due to the loss of voltage causality. This thesis introduces the Voltage Impasse Region (VIR) as the state-space area where voltage causality is lost due to the non-linear modeling of static loads. The entrance of a dynamic trajectory into a VIR was shown to be accompanied by non-convergence issues in TDS and significant voltage drops. Appropriate Voltage Collapse Indicators (VCIs) are also derived for each load model of interest. The thesis concluded that the VIR is a structural problem of the DAE model that should always be accounted for when short-term stability is assessed.","parametrization; power system transient stability; security boundary; short-term voltage instability; Voltage Impasse Region (VIR)","en","doctoral thesis","","978-91-7873-149-7","","","","","","","","","Energie and Industrie","","",""
"uuid:68bc7aa0-914c-4d2d-8e32-9d0ef996fdcc","http://resolver.tudelft.nl/uuid:68bc7aa0-914c-4d2d-8e32-9d0ef996fdcc","Optimizing photon utilization in LED-based photocatalytic reactors","Khodadadian, F. (TU Delft Intensified Reaction and Separation Systems)","Stankiewicz, A.I. (promotor); Lakerveld, Richard (promotor); van Ommen, J.R. (promotor); Delft University of Technology (degree granting institution)","2019","Photocatalysis involves the absorption of photons by a semiconductor to enhance chemical reactions. Examples of important applications include the degradation of hazardous chemicals, reduction of carbon dioxide to valuable chemicals and (partial) oxidation of hydrocarbons. Despite many successful demonstrations of this technology at lab-scale, its industrial application has been hindered by the low overall efficiency of the process due to several challenges that need to be resolved. One of the main challenges is efficient utilization of light within a photocatalytic reactor, which affects the economic feasibility of the process especially when using artificial light sources. In the last few years, the feasibility of using UV-LEDs as an alternative to conventional UV lamps, such as mercury and xenon lamps, has been shown for applications in the gas and liquid phase. Yet, strategies that would allow for optimal light utilization within LED-based reactors during design and operation are lacking. Therefore, the focus of this thesis is on the efficient use of photons by development and validation of novel approaches for the design, optimization, and control of LED-based photocatalytic reactors. The photocatalytic degradation of toluene in the gas phase is adopted as the model reaction, since toluene is one of the most common indoor pollutants threatening human health. 
In the design phase of a LED-based reactor, the flexible positioning of LEDs enabled by their small size, in combination with the reactor design parameters, provides many degrees of freedom. When using all of those degrees of freedom simultaneously, mathematical optimization techniques are a necessity. Hence, a model-based approach for optimization of the design of LED-based photocatalytic reactors is developed. A photocatalytic reaction rate is not only a function of the chemical species adsorbed on the catalytic surface, but also of the rate of photons absorbed by the catalyst. Therefore, an efficient photocatalytic reactor design optimizes both mass transfer and photon transfer. First, an integrated model is developed that describes the distribution of reactants and photons within an annular LED-based photocatalytic reactor. Second, an objective function representing a trade-off between capital and operating costs is defined, and several design variables related to the reactor dimensions and light sources are optimized simultaneously. Furthermore, the capability of the LED-based photocatalytic reactor to control the local reaction rate is shown by changing the objective function of the optimization problem. The results demonstrate the importance of model-based optimization to systematically incorporate the inherent trade-offs that exist in the design and operation of LED-based photocatalytic reactors.
A validated process model is essential for optimization. Furthermore, characterization of process trends is needed when developing operational strategies such as automated control. For this purpose, a mini-pilot plant including an annular LED-based photocatalytic reactor has been developed to experimentally validate the integrated process model, which includes a radiation field, reaction kinetics, and material balances, for the photocatalytic degradation of toluene. Because water is inevitably present in many photocatalytic applications, special focus is placed on the effect of water on reaction kinetics, toluene conversion, mineralization, and catalyst deactivation for characterization of the process trends. The results from parameter estimation studies demonstrate that a competitive reaction rate model can best describe the experimental data with varying water concentration. Furthermore, experimental results demonstrate that toluene conversion is highest at a low water concentration; however, mineralization and catalyst lifetime are enhanced by the presence of water. The validation of the integrated process model and understanding of the role of water allow for improved design and operation of future LED-based photocatalytic reactors.
Following the conclusion from the process characterization study that electron-hole recombination is dominant in the system, the impact of periodic illumination of LEDs on the photonic efficiency of toluene degradation is investigated. It has been suggested that intermittent introduction of photons onto the catalytic surface may reduce the electron-hole recombination and, consequently, improve the photon utilization of the photocatalytic process during operation. Therefore, the impact of light/dark periods and duty cycles is studied. However, no transition or change in the photonic efficiency when moving from a short to a long light/dark time at a fixed duty cycle is observed experimentally for the system studied in this thesis. Furthermore, the results of the experiments at two different periods show an increase in photonic efficiency with a decrease in the duty cycle. However, the photonic efficiency under controlled periodic illumination, regardless of the duty cycle or period, is found to be similar to that under continuous illumination at an equivalent average irradiance, suggesting no mass-transfer limitations in the system. Therefore, it is concluded that periodic illumination does not improve photon utilization in a system where electron-hole recombination is dominant but there is no mass transfer limitation. During operation, the performance of an optimally designed reactor may deviate from optimal conditions because of design uncertainties and disturbances acting on the system. Therefore, the application of automated feedback and feedforward controllers to maintain the reactor conversion close to a desired value by adjusting the photon irradiance within a LED-based photocatalytic reactor is studied. The excellent capability of the feedback controller in tracking different conversion set points is shown in the presence of unmeasured and measured disturbances, which allows for a desired conversion of toluene to be maintained. 
Furthermore, a feedforward controller has been designed based on an empirical steady-state model to mitigate the effect of changing toluene inlet concentration and relative humidity, which are typical measured input disturbances. The results demonstrate that the feedback and feedforward controllers are complementary and can mitigate the effects of disturbances effectively such that the photocatalytic reactor operates close to the desired output at all times. This study delivers the first example of how online analytical technologies can be combined with “smart” light sources such as LEDs to implement automated process control loops that optimize photon utilization. Future work may expand on this concept by developing more advanced control strategies and exploring applications in different areas. This thesis focuses on the development and validation of methods that provide optimal photon utilization within an annular LED-based photocatalytic reactor for design and operation. However, the proposed approaches and findings of this work can in principle be applied to different configurations of LED-based photocatalytic reactors as well. In addition, the suggested mathematical model in this thesis can be applied as a useful tool for the prediction of mass and photon transfer rates during scale-up studies of LED-based photocatalytic reactors. Furthermore, the developed control structures can be transferred to a larger scale, since control structures are generally known to scale up well. Providing approaches for optimum photon utilization, the outcome of this thesis could facilitate the realization of more economically viable photocatalytic processes when transferring the technology from lab scale to industrial applications.","heterogeneous photocatalysis; LED; Model-based optimization; feedback control; Feedforward control; modeling","en","doctoral thesis","","978-94-6375-436-1","","","","","","","","","Intensified Reaction and Separation Systems","","",""
"uuid:8deb1def-c0df-498e-9ff6-25766e547da3","http://resolver.tudelft.nl/uuid:8deb1def-c0df-498e-9ff6-25766e547da3","Adaptable framework methodology for designing human-robot coproduction","Çençen, A. (TU Delft Mechatronic Design)","Geraedts, Jo M.P. (promotor); Horvath, I. (promotor); Verlinden, J.C. (copromotor); Delft University of Technology (degree granting institution)","2019","The research project presented in this thesis is situated in the domain of design research, and focuses on the designers of production systems. In general, it aims to support the research towards a better understanding of design for human-robot coproduction (HRC). The specific objective of this research project was the development of support for novice HRC designers for integrating collaborative robots (Cobots) successfully in existing and new human-driven production systems. At the start of the project, it was assumed that novice HRC designers lacked conceptual design tools for analysing, modelling, simulating and evaluating human-robot coproduction scenarios. Therefore, the design support was realized in the form of an adaptable framework methodology for the conceptual design of HRC. The research for this thesis was executed as a PhD project supported by the EU-FP7 ‘Factory in a day’ project, which enabled the generation and exploration of empirical evidence from the targeted context. In addition, the research had access to a laboratory environment in which two types of Cobots were present.","","en","doctoral thesis","","978-94-6384-047-7","","","","","","","","","Mechatronic Design","","",""
"uuid:3f5e34e1-fb18-42d1-b077-38a1a691a301","http://resolver.tudelft.nl/uuid:3f5e34e1-fb18-42d1-b077-38a1a691a301","Exponential Word Embeddings: Models and Approximate Learning","Kekec, I.T. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Tax, D.M.J. (copromotor); Delft University of Technology (degree granting institution)","2019","The digital era floods us with an excessive amount of text data. To make sense of such data automatically, there is an increasing demand for accurate numerical word representations. The complexity of natural languages motivates representing words with high-dimensional vectors. However, learning in a high-dimensional space is challenging when the training data are noisy and scarce. Additionally, lingual dependencies complicate learning, mostly because computational resources are limited and typically insufficient to account for all possible dependencies. This thesis addresses both challenges by following a probabilistic machine learning approach to find vectors, word embeddings, that perform well under the aforementioned limitations. An important finding of this thesis is that bounding the length of the vector that represents a word, as well as penalizing the discrepancy between vectors representing different words, makes a word embedding robust, which is especially beneficial when only noisy and little training data are available. This finding is important because it shows how current word embedding methods are sensitive to small variations in the training data. Although one might conclude from this finding that more data are no longer necessary, this thesis does show that training on multiple sources, such as dictionaries and thesauri, improves performance. However, each data source should be treated carefully, and it is important to weigh the informative parts of each data source appropriately. 
To deal with lingual dependencies, this thesis introduces statistical negative sampling, with which the learning objective of a word embedding can be approximated. There are many degrees of freedom in the approximated learning objective, and this thesis argues that current embedding strategies rely on weak heuristics to constrain these freedoms. Novel and more theoretically founded constraints, based on global statistics and maximum entropy, are proposed to constrain the approximations. Finally, many words in a natural language have multiple meanings, and current word embeddings do not address this because they are built on the common assumption that a single vector per word can capture all of its meanings. This thesis shows that a representation based on multiple vectors per word easily overcomes this limitation by having different vectors represent the different meanings of a word. Taken together, this thesis provides new insights and a stronger theoretical foundation for word embeddings, which are important to create more powerful models able to deal with the complexity of natural languages.","","en","doctoral thesis","","978-94-6366-172-0","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:87bafdf4-a16e-4abd-8e81-c349cf95f9f2","http://resolver.tudelft.nl/uuid:87bafdf4-a16e-4abd-8e81-c349cf95f9f2","Optimization and model-based control for max-plus linear and continuous piecewise affine systems","Xu, J. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); van den Boom, A.J.J. (copromotor); Delft University of Technology (degree granting institution)","2019","This PhD thesis considers the development of optimization and model-based control techniques for max-plus linear (MPL) and continuous piecewise affine (PWA) systems. The three main topics investigated in this thesis are as follows: 1. Optimistic optimization and planning for model-based control of MPL systems; 2. Optimistic optimization for MPC of continuous PWA systems; 3. MPC for stochastic MPL systems with chance constraints","Model predictive control (MPC); Max-plus linear systems; piecewise affine systems; Optimistic optimization","en","doctoral thesis","","978-94-6186-953-1","","","","","","","","","Team Bart De Schutter","","",""
"uuid:ebfff02c-f87c-4c8a-9edd-53adb12f4403","http://resolver.tudelft.nl/uuid:ebfff02c-f87c-4c8a-9edd-53adb12f4403","Superconducting islands and gate-based readout in semiconductor nanowires","van Veen, J. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Goswami, S. (copromotor); Delft University of Technology (degree granting institution)","2019","Quantum computers can solve some problems exponentially faster than classical computers. Unfortunately, the computational power of quantum computers is currently limited by the number of working qubits. It is difficult to scale up these systems, because qubits are easily affected by noise in their environment. This noise leads to decoherence: loss of the qubit’s encoded information. A possible solution to diminish decoherence is using Majorana box qubits, as these qubits are predicted to be insensitive to local noise. However, this promising type of qubit does not exist yet.
With the research described in this thesis, we aim to contribute to the development of Majorana box qubits (MBQs). In these qubits, Majorana zero modes, the basic elements of MBQs, are contained within a superconducting island to suppress Majorana parity fluctuations caused by quasiparticle poisoning. To enable parity readout of the MBQ, these modes are coupled to quantum dots within a nanowire network.
To help realize MBQs, we need a better understanding of quasiparticles in superconducting islands, parity-readout techniques, and ways to fabricate nanowire networks. These three aspects are the focus of the experiments presented in this thesis.
To study superconducting islands and readout techniques, we used InAs semiconductor nanowires with an epitaxially grown Al shell. Majorana signatures have already been observed in such nanowires. We addressed quasiparticle dynamics in superconducting islands by measuring the gate-charge modulation of the switching current. We found a consistent 2e-periodic modulation at zero magnetic field, and an exponential decrease of parity lifetime with increasing magnetic field. We explored MBQ readout, using a quantum dot level as a proxy for a Majorana zero mode, and measured its charge hybridization with another dot using gate-based readout. We showed that we can rapidly discriminate between two settings with different tunnel couplings, demonstrating the potential of gate-based readout to measure MBQs. And, using gate-based readout, we could study charge-transfer processes occurring in hybrid structures of superconducting islands coupled to quantum dots.
Finally, to find a good material platform for nanowire networks, we characterized two two-dimensional systems. We realized quantum point contacts in InSb, which we used to measure the $g$-factor anisotropy, and effective electron mass in this system. And, we studied the spin-orbit interaction in InAs/GaSb by extracting the difference in density between electrons with different spin orientations.
This thesis finishes with a proposal for a series of experiments to realize MBQs. These experiments make use of superconducting islands and the reflectometry setup we developed for gate-based readout.
We envision that performing sampling and training in hardware will decrease the time required for these processes compared to performing them in software. Finally, if the hardware resources allow us to include multiple neural networks, this can potentially increase the decoding performance. As we presented, dividing the task of decoding into smaller tasks distributed over many neural networks can be beneficial.","Quantum computing; Quantum error correction; Quantum Error Detection; Artificial Neural Network; surface code","en","doctoral thesis","","","","","","","","","","FTQC/Bertels Lab","","",""
"uuid:37157458-338d-4aa9-98df-f0c32470d5d3","http://resolver.tudelft.nl/uuid:37157458-338d-4aa9-98df-f0c32470d5d3","Absolute and Relative Orbit Determination for Satellite Constellations","Mao, X. (TU Delft Astrodynamics & Space Missions)","Visser, P.N.A.M. (promotor); van den IJssel, J.A.A. (copromotor); Delft University of Technology (degree granting institution)","2019","Precise absolute and relative orbit determination, referred to as Precise Orbit Determination (POD) and Precise Baseline Determination (PBD), are a prerequisite for the success of many Low Earth Orbit (LEO) satellite missions. With spaceborne, high-quality, multi-channel, dual-frequency Global Positioning System (GPS) receivers, a precision of the order of a few cm is typically possible for single-satellite POD, and of a few mm for dual-satellite PBD of formation flying spacecraft with baselines up to hundreds of km. The research in this dissertation addresses and expands methods for computing reliable orbits for not only stable satellite formations such as the US/German GRACE (Gravity Recovery And Climate Experiment) and the lower pair of the European Space Agency (ESA) Swarm missions, but also for satellite constellations that include rapidly varying baselines, such as all three Swarm satellites or the combination of the German CHAllenging Minisatellite Payload (CHAMP) and GRACE missions. The POD and PBD solutions are based on an Iterative Extended Kalman Filter (IEKF) that is capable of using relative spacecraft dynamics constraints to enhance the robustness of the solutions. Moreover, the IEKF allows the Double-Differenced (DD) carrier-phase integer ambiguities to be fixed iteratively by the Least-squares AMBiguity Decorrelation Adjustment (LAMBDA) method. A subset fixing strategy allowing for partial ambiguity resolution was used instead of full-set fixing, which accepts ambiguities for an epoch only when all integer ambiguities can be fixed.
The nominal products of the IEKF are reduced-dynamic POD and PBD solutions, but the filter also allows kinematic PBD solutions to be derived afterwards. The internal consistency of the reduced-dynamic and kinematic solutions is used as a quality measure in addition to comparisons with POD and PBD solutions by other institutes.","Satellite Constellation; Antenna Pattern; GPS; Orbit Determination; Precise Baseline Determination","en","doctoral thesis","","978-94-028-1555-9","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:19f390f7-1814-4c28-b01e-a1ff84f20415","http://resolver.tudelft.nl/uuid:19f390f7-1814-4c28-b01e-a1ff84f20415","Sustainability of Deep Sea Mining Transport Plans","Ma, W. (TU Delft Transport Engineering and Logistics)","van Rhee, C. (promotor); Schott, D.L. (promotor); Delft University of Technology (degree granting institution)","2019","Deep sea mining is an emerging activity that attracts significant attention from national governments and large international companies around the world. Although research and development work has been carried out for roughly half a century, no industrial-scale deep sea mining project has been implemented yet, owing to problems such as profitability and marine environmental impacts. The research conducted in this thesis aims at designing an assessment system to evaluate the sustainability of DSM transport plans. To evaluate the sustainability of deep sea mining transport plans, three types of vertical lifting mechanisms are considered in terms of energy consumption, profitability, and the environmental impacts caused: the continuous line bucket lifting system, pipe lifting with centrifugal pumps, and pipe lifting with air pumps. The research starts with a systematic literature review on the current research status of sustainable development in deep sea mining, to identify the existing research gaps and the aspects that influence this goal. Based on the literature review, it is clear that the sustainability of a deep sea mining transport plan can be influenced by its technological performance, e.g., energy consumption and technology maturity, its economic profitability, its environmental impacts and its social impacts.","","en","doctoral thesis","TRAIL Research School","978-90-5584-251-3","","","","TRAIL Thesis Series no. T2019/7, the Netherlands Research School TRAIL","","","","","Transport Engineering and Logistics","","",""
"uuid:fb60dba0-e5f9-451e-b664-e3ca0d45b36b","http://resolver.tudelft.nl/uuid:fb60dba0-e5f9-451e-b664-e3ca0d45b36b","Distributed Convex Optimization: Based on Monotone Operator Theory","Sherson, T.W. (TU Delft Signal Processing Systems)","Kleijn, W.B. (promotor); Heusdens, R. (promotor); Delft University of Technology (degree granting institution)","2019","Following their conception in the mid-twentieth century, the world of computers has evolved from a landscape of isolated entities into a sprawling web of interconnected machines. Yet, given this evolution, many of the methods we use for allowing computers to work together still reflect their inherently isolated origins, with the aggregation of data or master-slave relationships still in common use. While sufficient for some types of applications, these approaches do not naturally reflect the collaboration strategies we observe in nature, raising the question of whether we can do better.
In parallel to the improvements in computer-to-computer communication, the emergence of new paradigms such as the Internet of Things (IoT), Big Data processing and cloud computing in recent years has placed an increasing importance on networked systems in many facets of the modern world. From power grid management, to autonomous vehicle navigation, to even our basic means of interaction through social media, these networks are a pervasive presence in our day to day lives. The vast amounts of data generated by these networks and their ever increasing sizes make it impractical, if not impossible, to resort to traditional centralized processing and therefore necessitate the search for new methods of signal processing within networked systems.
In this thesis we approach the task of distributed signal processing by exploiting the synergy between such tasks and equivalent convex optimization problems. Specifically, we focus on the task of distributed convex optimization, that of solving optimization problems involving groups of computers in a collaborative manner and the development of distributed solvers for such tasks. Such solvers distinguish themselves by only allowing local computations at each computer in a network and the exchange of information between connected computers. In this way, distributed solvers naturally respect the structure of the underlying network in which they are deployed.
In the pursuit of our goal, we approach the task of distributed solver design via the lens of monotone operator theory. Providing a well known platform for the derivation of many first order convex solvers, herein we demonstrate the use of this theory as a means of constructing and analyzing a number of algorithms for distributed optimization. The first major contribution of this thesis lies in the analysis and understanding of an existing algorithm for distributed optimization within the literature termed the primal dual method of multipliers (PDMM). In particular, by demonstrating a novel interpretation of PDMM from the perspective of monotone operator theory we are able to better understand its convergent characteristics and highlight sufficient conditions for which PDMM will converge at a geometric rate. Furthermore we quantify the impact that network topology has on these convergence rates, drawing a direct connection between spectral characteristics of networks and distributed optimization.
Secondly, we explored the space of solver design by proposing novel algorithms for distributed networks. For the family of separable optimization problems, those with separable objectives and constraints, we demonstrated a distributed solver design using a specific lifted dual form. Based on monotone operator theory, the convergence analysis of the proposed method followed naturally from well known results and broadened the class of distributable problems compared to the likes of PDMM. Furthermore, in the case of time-varying consensus problems, we again proposed a new algorithm by combining a network dependent metric choice with classic operator splitting methods. Again the monotone basis of this algorithm facilitated the convergence analysis of this method which empirically was also shown to converge for general closed, convex and proper functions.
Finally, we demonstrated how these methods could be used for practical distributed signal processing in networks by considering the case of multichannel speech enhancement in wireless acoustic sensor networks. By combining a particular modeling of the acoustic scene with the algorithms mentioned above, the proposed method was not only distributable but also offered greater resilience to steering vector mismatch than other standard approaches. This example also highlights the importance of understanding both the target application and the distributed solvers themselves in developing effective solutions.
Overall, this thesis provides a first foray into the world of distributed optimization via the lens of monotone operator theory. We feel that this perspective provides an ideal reference for the analysis of such algorithms while also offering a general framework for convex optimization solver design. While this thesis is not the end of this branch of research, it indicates the potential of monotone operator theory as a unifying method for the development and analysis of distributed optimization solutions.","Distributed Signal Processing; Convex Optimization; Monotone Operator Theory; Wireless Sensor Networks","en","doctoral thesis","","978-94-6384-041-5","","","","","","","","","Signal Processing Systems","","",""
"uuid:799ca1cd-d316-4919-8f0c-27f92db39ac5","http://resolver.tudelft.nl/uuid:799ca1cd-d316-4919-8f0c-27f92db39ac5","Safe surgical signatures","Meeuwsen, F.C. (TU Delft Medical Instruments & Bio-Inspired Technology)","van den Dobbelsteen, J.J. (promotor); Dankelman, J. (promotor); Delft University of Technology (degree granting institution)","2019","In this dissertation, we analyse the safe use of medical technology, considering the training of users, the technical demands of the devices and instruments used, and the surgical workflow. As each of these elements varies depending on the experience of the user, the equipment used, and variations in the environment, a unique set of parameters for safe use emerges: a ‘safe surgical signature’.
The aim of this thesis is to objectively measure the safe application of medical technology in the Operating Room (OR), considering the three main pillars linked to the user, the devices, and the environment. All three pillars need to be addressed, present, and aligned to reach the final goal: safe use.","","en","doctoral thesis","","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:d72c0db9-8463-4098-a796-457aaa88eaa3","http://resolver.tudelft.nl/uuid:d72c0db9-8463-4098-a796-457aaa88eaa3","Magnetic field compatible hybrid circuit quantum electrodynamics","Kroll, J. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); DiCarlo, L. (copromotor); Delft University of Technology (degree granting institution)","2019","Majorana bound states (MBSs) are novel particles predicted to be created when superconductor/semiconductor hybrid structures with strong spin-orbit coupling are subjected to strong magnetic fields. Expected to exhibit non-Abelian exchange statistics, they could form the basis of a new kind of quantum computer that is inherently protected from environmental noise, a common problem that has frustrated other quantum computing platforms. The current techniques used to measure these particles are highly sensitive, having provided the best evidence yet for their existence, but they are intrinsically too slow to form the basis of a useful quantum computer. To remedy this, this thesis integrates exotic materials into high frequency superconducting circuits that have been engineered to be resilient to strong magnetic fields, creating hybrid devices that potentially allow for fast and precise measurement and control of MBSs and their properties.
Several proposals to demonstrate the novel exchange statistics of MBSs use a specific type of superconducting qubit, the `transmon', for fast readout of the state of the MBSs. Problematically, the strong magnetic fields required to induce MBSs would destroy the superconductivity traditional transmons rely on, preventing them from operating as intended. To resolve this, the key constituent components of the transmon, the superconducting resonator and the Josephson junction, have been engineered separately to become resilient to strong magnetic fields.
Chapter 4 explores how nanofabrication techniques and careful consideration of the properties of thin superconducting films can be used to engineer superconducting co-planar waveguide resonators that remain operational in strong parallel magnetic fields of 6 T and perpendicular magnetic fields of 20 mT, an order of magnitude greater than previously reported. Building on the results of Chapter 4, Chapter 5 utilises a graphene based Josephson junction, where the monoatomic thickness of the graphene provides an inherent protection against parallel magnetic fields, allowing us to demonstrate operation of a transmon circuit at a parallel magnetic field of 1 T.
Advances in nanowire material growth intended to improve the signatures of MBS are used in Chapter 6 to create a low power, highly coherent on-chip microwave source. With broad potential applications in superconducting circuits, it demonstrates a platform well suited for the detection of unique radiation that MBSs are predicted to emit. The thesis is concluded by Chapter 7, which describes the engineering and development of a nanowire based transmon qubit capable of measuring key properties of MBSs in the qubit's energy spectrum.
mobility and recombination dynamics of charge carriers in low-dimensional van der Waals materials.","Van Der Waals Materials","en","doctoral thesis","","978-94-6375-413-2","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:d773dbba-870c-4caf-ac55-8d573120669f","http://resolver.tudelft.nl/uuid:d773dbba-870c-4caf-ac55-8d573120669f","Ecological Modelling of River-Wetland Systems: A Case Study for the Abras de Mantequilla Wetland in Ecuador","Alvarez Mieles, M.G. (TU Delft Environmental Fluid Mechanics)","Mynett, A.E. (promotor); Irvine, Kenneth (promotor); Delft University of Technology (degree granting institution)","2019","Wetlands are among the most productive environments in the world. Around 6% of the Earth's land surface is covered by wetlands, which are key to preserving biodiversity. Wetlands provide multiple services, such as water supply and shelter for numerous species of fauna and flora. Wetlands are therefore of immense socio-economic as well as ecological importance. In this research the focus was on the Abras de Mantequilla (AdM) wetland, a tropical wetland system that belongs to the most important coastal river basin of Ecuador. It was declared a Ramsar site in 2000 and was the South American case of the EU-FP7 WETwin project, which provided the starting point of this thesis. A range of tools and approaches was used to develop a knowledge base for the AdM wetland. The research involved a combination of primary data collection (two fieldwork campaigns), secondary data acquisition (from literature), multivariate analyses, and numerical modelling approaches to explore the characteristics of the wetland system in terms of hydrological conditions, hydrodynamic patterns, biotic communities, chemical and ecological processes and fish-habitat suitability.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-34450-4","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the IHE Institute for Water Education.","","","","","Environmental Fluid Mechanics","","",""
"uuid:a98046eb-d641-446d-90af-f40dbacb63cc","http://resolver.tudelft.nl/uuid:a98046eb-d641-446d-90af-f40dbacb63cc","Efficient reduction techniques for a large-scale Transmission Expansion Planning problem","Ploussard, Q. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Olmos, Luis (promotor); Delft University of Technology (degree granting institution); Comillas Pontifical University (degree granting institution); KTH Royal Institute of Technology (degree granting institution)","2019","The aim of Transmission Expansion Planning (TEP) studies is to decide which, where, and when new grid elements should be built in order to minimize the total system cost. The lumpiness of the investment decisions, together with the large size of the problem, makes the problem very hard to solve. Consequently, methods should be put in place to reduce the size of the problem while providing a solution similar to the one that would be obtained considering the full-size problem. Techniques to model the TEP problem in a compact way, also called reduction methods, can reduce the size of the TEP problem and make it tractable. This thesis provides new techniques to reduce the size of the TEP problem in its three main dimensions: the representation made of the grid (spatial dimension), the representation made of the relevant operation situations (temporal representation), and the number of candidate grid elements to consider. In each of the three reduction techniques proposed in this thesis, the first step consists of solving a linear relaxation of the TEP problem. Then, they make use of information that is relevant to the network investment decisions to formulate the TEP problem in a compact way for a certain dimension. I use the potential benefits brought by candidate lines to reduce the size of the representation made of the temporal variability in the problem.
Besides, I reduce the size of the network by preserving the representation made of the congested lines and partially installed lines while computing an equivalent for the other network elements. Lastly, I manage to reduce the set of candidate lines to consider based on the set of expanded corridors and the amount of new capacity built in them. I also compare each of the reduction techniques that I have developed to alternative reduction methods discussed in the literature within various case studies. In each of the three reduction methods proposed, the TEP solution computed by solving the reduced TEP problem is more accurate (efficient) than the ones computed by applying alternative reduction methods. Besides, this solution is almost as efficient as the solution of the original TEP problem, i.e. the TEP problem that has not been reduced by the proposed reduction method. As a next step, one may explore combining the three reduction methods proposed to maximize the reduction achieved in the size of the TEP problem.","energy; electricity; transmission expansion planning; linear programming; integer linear programming; relaxation methods; clustering; dimension reduction; network theory (graphs); partitioning algorithms","en","doctoral thesis","","978-84-09-05576-0","","","","","","","","","Energie and Industrie","","",""
"uuid:90d1300e-60a9-4edd-90f8-25b46f07f5fc","http://resolver.tudelft.nl/uuid:90d1300e-60a9-4edd-90f8-25b46f07f5fc","Ceramic nanofiltration for direct filtration of municipal sewage","Kramer, F.C. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); Heijman, Sebastiaan (promotor); Delft University of Technology (degree granting institution)","2019","Worldwide population growth, water scarcity, and climate change contribute to an urgent need for alternative water sources for irrigation water, industry water, and, in some countries, even drinking water. The implementation of municipal sewage reclamation is an upcoming trend in water treatment. The use of municipal sewage has the advantage of keeping water cycles small. Moreover, there is more to be gained from municipal sewage: nutrients and energy are abundantly present in this water and could potentially be recovered too.
The purpose of this research was to study the potential of ceramic nanofiltration for the treatment of municipal sewage. Ceramic nanofiltration membranes were chosen because of their high mechanical strength and high chemical and thermal resistance. These membranes are expected not to be damaged by high pressures, temperatures, or concentrations of chemicals, which enables vigorous chemical cleaning of the membranes, and they are less prone to irreversible fouling compared to polymeric nanofiltration (NF) membranes.
This research was divided into four parts. First, a preliminary pilot study showed that ceramic nanofiltration membranes have potential for direct treatment of municipal sewage as pretreatment for reverse osmosis. Second, the quality and robustness were thoroughly researched and were lower than expected. Third, the phosphate retention during ceramic NF was notably affected by pH, multivalent counter ions, and a fouling layer on the membrane surface. Fourth, several fouling control methods were tested using ceramic nanofiltration: the highest flux was maintained when applying a reaction-based precoat, resulting in the net highest water production.","ceramic nanofiltration; fouling control; phosphate retention; molecular weight cut-off; sewer mining; water treatment; ceramic membranes","en","doctoral thesis","","978-94-6384-033-0","","","","","","","","","Sanitary Engineering","","",""
"uuid:dd0e5ab0-1427-4ecc-a8d8-4e8f35050fca","http://resolver.tudelft.nl/uuid:dd0e5ab0-1427-4ecc-a8d8-4e8f35050fca","Study of solidification cracking during laser welding in advanced high strength steels: A combined experimental and numerical approach","Agarwal, G. (TU Delft (OLD) MSE-5)","Hermans, M.J.M. (promotor); Richardson, I.M. (promotor); Delft University of Technology (degree granting institution)","2019","Preventing solidification cracking is an essential prerequisite for the safety of a welded structure. An undetected solidification crack has the potential to cause premature failure during service. Two conditions generated by a weld thermal cycle are responsible for the initiation of solidification cracks. The first is the presence of excessive stresses/strains imposed on the solidifying weld metal and the second is the existence of a weak solidifying microstructure. For more than five decades, weld solidification cracking has been a subject of considerable interest. Cracking has been observed in various alloys, used in a wide range of engineering applications. Despite achieving a better understanding over this period, an accurate prediction of the occurrence of solidification cracking under a specific set of conditions remains difficult. An alloy with a high susceptibility to solidification cracking can still exhibit good weldability upon selection of appropriate welding conditions. Conversely, an alloy with supposedly high resistance to cracking can still fail when subjected to inappropriate welding conditions.
The objective of the research work reported in this dissertation is to study and elucidate the solidification cracking phenomenon in two popular and commercially available automotive sheet steels, namely transformation-induced plasticity (TRIP) and dual phase (DP) steels. In particular, the effect of restraint (strain imposed), shape of the weld pool, solidification morphology, segregation, solidification temperature range, dendrite coherency and interdendritic liquid feeding on susceptibility to solidification cracking is considered.","Solidification cracking; hot cracking; hot tearing; laser welding; advanced high strength steels","en","doctoral thesis","","978-94-6366-165-2","","","","","","","","","(OLD) MSE-5","","",""
"uuid:ea79ba64-262f-4696-abda-f7d143b97bc9","http://resolver.tudelft.nl/uuid:ea79ba64-262f-4696-abda-f7d143b97bc9","Planning under Uncertainty in Constrained and Partially Observable Environments","Walraven, E.M.P. (TU Delft Algorithmics)","Spaan, M.T.J. (promotor); Witteveen, C. (promotor); Delft University of Technology (degree granting institution)","2019","Developing intelligent decision making systems in the real world requires planning algorithms which are able to deal with sources of uncertainty and constraints. An example can be found in smart distribution grids, in which planning can be used to decide when electric vehicles charge their batteries, such that the capacity limits of lines are respected at all times. In this particular example there can be uncertainty in the arrival time and charging demand of vehicles, and constraints follow directly from the capacity limits of the distribution grid to which vehicles are connected. Existing algorithms for planning under uncertainty subject to constraints are currently not suitable for these types of applications, and therefore this dissertation aims to improve the applicability of these algorithms by advancing the state of the art in constrained multi-agent planning under uncertainty. The dissertation presents new algorithmic techniques for exact POMDP planning, finite-horizon POMDPs and POMDPs with constraints. Additionally, the dissertation shows how models for constrained planning can be used in smart distribution grids.","planning under uncertainty; smart grids; markov decision process; partially observable markov decision process","en","doctoral thesis","","978-94-6384-034-7","","","","","","2019-05-27","","","Algorithmics","","",""
"uuid:f5ddb664-54ef-4f4f-b810-01c2f727b8b5","http://resolver.tudelft.nl/uuid:f5ddb664-54ef-4f4f-b810-01c2f727b8b5","Raman based identification of on-chip trapped single micro-organisms: A feasibility study","Heldens, J.T. (TU Delft ImPhys/Quantitative Imaging)","van Vliet, L.J. (promotor); Caro-Schuurman, J. (copromotor); Delft University of Technology (degree granting institution)","2019","An important aspect in increasing our health and safety is the development of new sensors for screening drinking water samples for the presence of microbiological contaminants. The main problem associated with the detection and identification of these microbial contaminants is the long process time. The majority of the time is required for their purification and multiplication due to the low concentrations in which they are found. These steps can be avoided by using spectroscopic identification rather than biochemical techniques. Raman spectroscopy provides a fingerprint based on the vibrational states of molecular bonds from which the bio-particles containing these can be identified. The Raman effect is weak, but rich in information and flexible with respect to its excitation wavelength. When working in an aqueous environment it can take advantage of the absorption minima of water to outperform, for instance, IR spectroscopy. Optical trapping allows the immobilization of particles without additional preparation steps and provides much added value to, and is highly compatible with, Raman spectroscopy. Lab-on-a-chip techniques allow for integration and large-scale parallelization of processes, which is unavoidable when performing large-scale identification of microbial contaminants on the level of single cells. To take advantage of established CMOS processing techniques for mass production in electronics, a CMOS-compatible technology for integrated photonics, providing waveguides transparent at Raman-suitable wavelengths, is needed. TripleX is such a waveguide technology.
The research presented in this thesis shows the realization of an integrated dual-waveguide optical trap and the feasibility of its use for the identification of micro-organisms based on their Raman spectrum induced by the same on-chip optical beams used for trapping. It does so in four main steps. Firstly, laser tweezers Raman spectroscopy is used to classify the closely related yeast species Kluyveromyces lactis and Saccharomyces cerevisiae from measurements at the single cell level. Laser tweezers Raman spectroscopy combines optical trapping of a cell and generation of its characteristic Raman spectrum using one laser-beam focus. This enables fingerprinting of a cell’s molecular composition in a harmless fluidic environment within minutes. Laser tweezers Raman spectroscopy is considerably faster than well-known biological techniques based on streak plating or PCR, and requires far less biological material. For each yeast species a training set and a test set were measured. Visual inspection of the spectra showed intra-species variations obstructing division into two classes by eye. Application of a classification rule based on Fisher’s criterion nevertheless led to the successful blind classification of the test-set cells. Finally, a Kolmogorov-Smirnov test indicated that the difference between the distributions of the species was statistically significant, implying biological origin of the classification. This successful extension of laser tweezers Raman spectroscopy to classification of the aforementioned yeasts underlines its applicability in microbiology and will hopefully contribute to the process of its adoption in this discipline. Laser tweezers Raman spectroscopy is not limited to rapid classification of single cells, but may also include e.g. study of the cell metabolism. Secondly, a new approach to the dual-beam geometry for on-chip optical trapping and Raman spectroscopy, using box-shaped waveguides microfabricated in TripleX technology, is demonstrated.
These waveguides consist of SiO2 and Si3N4, so as to provide a low index contrast with respect to the SiO2 claddings and low signal loss, while retaining the advantages of Si3N4. The waveguides enable both the trapping and the Raman functionality with the same dual beams. Polystyrene beads of 1 µm diameter can be trapped with this device. In the axial direction discrete trapping positions occur, owing to the intensity pattern of the interfering beams. Interpretation of the trapping events on the basis of simulated optical fields and calculated optical forces indicates that a strong trap is formed by the beams emitted by the waveguides. Furthermore, the acquisition of Raman spectra of a single trapped bead is demonstrated. The spectra obtained in this manner show distinct polystyrene Raman peaks for integration times as short as 0.25 seconds. Thirdly, the usual procedure of background subtraction is found to be less effective for Raman spectra obtained with the dual-waveguide trap, due to its specific geometry. The differences in the Raman-generating properties between four dual-waveguide traps with varying distances between their waveguide facets are explored using a saturated ascorbic acid solution. Furthermore, the origin of a periodic background observed in the ascorbic acid spectra is investigated. Finally, an alternative method of signal acquisition and processing is presented to deal with the lack of fluidic control in the device and with the periodic background and low signal-to-noise ratio observed in the spectra. The 10 µm and 5 µm traps are used in trapping and Raman generation experiments with biologically relevant particles in the form of Bacillus subtilis spores. These experiments result in noisy spectra for many and few spores in both traps. Using the presented processing, the spectra are identified as Bacillus subtilis spore spectra.
A comparison of the obtained signal-to-noise values to literature benchmarks shows the feasibility of micro-organism identification with the dual-waveguide trap.","Optical trapping; Raman spectroscopy; On-Chip; micro-organisms; Identification","en","doctoral thesis","","978-94-6384-038-5","","","","","","","","","ImPhys/Quantitative Imaging","","",""
"uuid:105f2bd0-e35f-4047-bb6a-bb14ff118e5e","http://resolver.tudelft.nl/uuid:105f2bd0-e35f-4047-bb6a-bb14ff118e5e","Built Utopias in the Countryside: The Rural and the Modern in Franco’s Spain","Lejeune, J.F.M.P. (TU Delft History, Form & Aesthetics)","Hein, C.M. (promotor); van Bergeijk, H.D. (copromotor); Delft University of Technology (degree granting institution)","2019","Anchored by Hüppauf and Umbach’s notion of Vernacular Modernism and focusing on architecture and urbanism during Franco’s dictatorship from 1939 to 1975, this thesis challenges the hegemonic and Northern-oriented narrative of urban modernity. It develops arguments about the reciprocal influences between the urban and the rural that characterize Spanish modernity, and analyzes the intense architectural and urban debates that resulted from the crisis of 1898, as they focused on the importance of vernacular architecture, in particular the Mediterranean one, in the definition of an “other modernity.” This search culminated before 1936 with the “Lessons of Ibiza,” and was revived at the beginning of the 1950s, when architects like Coderch, Fisac, Bohigas, and the cosigners of the Manifiesto de Ia Alhambra brought back the discourse of the modern vernacular as a politically acceptable form of Spanish modernity, and extended its field of application from the individual house and the rural architecture to the urban conditions, including social and middle-class housing. The core of the dissertation addresses the 20th century phenomenon of the modern agricultural village as built emergence of a rural paradigm of modernity in parallel or alternative to the metropolitan condition. In doing so, it interrogates the question of tradition, modernity, and national identity in urban form between the 1920s and the 1960s. 
Regarding Spain, it studies the work of the two Institutes that were created to implement the Francoist policy of post-war reconstruction and interior colonization—the Dirección General de Regiones Devastadas, and the Instituto Nacional de Colonización. It examines the ideological, political, urban, and architectural principles of Franco’s reconstruction of the devastated countryside, as well as his grand “hydro-social dream” of modernization of the countryside. It analyzes their role in nation-building policies in liaison with the early 20th-century Regenerationist Movement of Joaquin Costa, the first works of hydraulic infrastructure under Primo de Rivera, and the aborted agrarian reform of the Second Republic. Inspired by the Zionist colonization of Palestine and Mussolini’s reclaiming of the Pontine Marshes, Falangist planners developed a national strategy of “interior colonization” that, along with the reclamation and irrigation of extensive and unproductive river basins, entailed the construction of three hundred modern villages or pueblos between 1940 and 1971. Each village was designed as a “rural utopia,” centered on a plaza mayor and the church, which embodied the political ideal of civil life under the national-catholic regime and evolved from a traditional town design in the 1940s to an increasingly abstract and modern vision, anchored on the concept of the “Heart of the City” after 1952. The program was an important catalyst for the development of Spanish modern architecture after the first period of autarchy and an effective incubator for a new generation of architects, including Alejandro de la Sota, José Luis Fernández del Amo, and others. Between tradition and modernity, these architects reinvented the pueblos as platforms of urban and architectonic experimentation in their search for a depurated rural vernacular and a modern urban form.
Whereas abstraction was the primary design tool that Fernández del Amo deployed to the limits of the continuity of urban form, de la Sota reversed the fundamental reference to the countryside that characterizes Spanish surrealism to bring surrealism within the process of rural modernization in Franco’s Spain.","","en","doctoral thesis","","","","","","","","","","","History, Form & Aesthetics","","",""
"uuid:2c280a38-6b5c-452b-b13b-d1df582ce327","http://resolver.tudelft.nl/uuid:2c280a38-6b5c-452b-b13b-d1df582ce327","Integrated CMOS Current Sensing Systems for Coulomb Counters","Heidary Shalmany, S. (TU Delft Electronic Instrumentation)","Makinwa, K.A.A. (promotor); Delft University of Technology (degree granting institution)","2019","Coulomb counting is a widely used method to estimate battery state-of-charge (SoC). It involves measuring and integrating the battery’s current to determine its net charge flow.","CMOS; Current Sensor; Coulomb Counter; Delta-Sigma Analogto- Digital Converter; Temperature Sensor","en","doctoral thesis","","978-94-028-1490-3","","","","","","","","","Electronic Instrumentation","","",""
"uuid:ac9f0f74-31d5-46d8-abf3-68c4f1b356c2","http://resolver.tudelft.nl/uuid:ac9f0f74-31d5-46d8-abf3-68c4f1b356c2","Weighted Function Spaces with Applications to Boundary Value Problems","Lindemulder, N. (TU Delft Analysis)","Veraar, M.C. (promotor); van Neerven, J.M.A.M. (copromotor); Delft University of Technology (degree granting institution)","2019","This thesis is concerned with the maximal regularity problem for parabolic boundary value problems with inhomogeneous boundary conditions in the setting of weighted function spaces and related function space theoretic problems.
This in particular includes weighted $L_{q}$-$L_{p}$-maximal regularity, but also weighted $L_{q}$-maximal regularity in weighted Triebel-Lizorkin spaces.
The weights we consider are power weights in time and in space; they yield flexibility in the optimal regularity of the initial-boundary data and make it possible to avoid compatibility conditions at the boundary.
Moreover, the use of scales of weighted Triebel-Lizorkin spaces also provides a quantitative smoothing effect for the solution on the interior of the domain.","anisotropic; Banach space-valued; Bessel potential; elliptic boundary value problem; intersection space; maximal regularity; parabolic boundary value problem; Sobolev; Triebel-Lizorkin","en","doctoral thesis","","978-94-028-1493-4","","","","","","","","","Analysis","","",""
"uuid:24437481-873f-4bc6-84a3-57d0a6e4e0ae","http://resolver.tudelft.nl/uuid:24437481-873f-4bc6-84a3-57d0a6e4e0ae","Music in Use: Novel perspectives on content-based music Retrieval","Yadati, N.K. (TU Delft Multimedia Computing)","Hanjalic, A. (promotor); Liem, C.C.S. (copromotor); Delft University of Technology (degree granting institution)","2019","Music consumption has skyrocketed in the past few years with advancements in internet and streaming technologies. This has resulted in the rapid development of the inter-disciplinary field of Music Information Retrieval (MIR), which develops automatic methods to efficiently and effectively access the wealth of musical content. In general, research in MIR has focused on tasks like semantic filtering, annotation, classification and search. Observing the evolution of MIR over the years, research in this field has been focusing on “what music is” and in this thesis we move towards building tools that can analyse “what music does” to the listener. There is little research on building systems that analyse how music affects the listener or how people use music to suit their needs. In this thesis, we propose methods that push the boundaries of this perspective. The first major part of the thesis focuses on detecting high-level events in music tracks. Research on event detection in music has been restricted to detecting low-level events viz., onsets. There is also an abundance of literature on music auto-tagging, where researchers have focused on adding semantic tags to short music snippets. However, we look at the problem of event detection from a different perspective and turn to social music sharing platform – SoundCloud to understand what events are of importance to the actual listeners. Using a case-study in Electronic Dance Music (EDM), we design an approach to detect high-level events in music. 
The high-level events in our case study have a certain impact on the listeners, causing them to comment about these events on SoundCloud. Through successful experiments, we demonstrate how these high-level events can be detected efficiently using freely available but noisy user comments. The results of this approach inspired further research to investigate other tasks that can give us more insight into how music affects the listener. The second major part of the thesis concerns identifying music that can support different common activities: working, studying, relaxing, working out, etc. A certain type of music is suitable for enabling listeners to perform a certain task. We first investigate which activities, for which music is sought, are important from a listener’s perspective, through a data-driven experiment on YouTube. After illustrating how existing music metadata, such as genre and instrument, is insufficient, we propose a method that can successfully classify music based on the activity categories. An important insight from our experiments is that dividing the music track into short frames is not an effective method of feature extraction for activity-based music classification; this task requires a longer time window for feature extraction. Additionally, the presence of high-level events like the drop can affect the classification performance. After successful validation of our idea on activity-based music classification, we went on to investigate what can potentially distract a listener while doing a task. For this, we gathered valuable input from users of Amazon Mechanical Turk (AMT) on what musical characteristics distract them while doing their tasks. Based on this input, we built a system that can automatically detect a derail moment in a given music track, where the listener could potentially get distracted (derailed). Though this task likely has a subjective component, we demonstrated that there are universal aspects to it as well.
Through a literature survey and computational experiments, we demonstrate that we can automatically detect a derail moment. Throughout the thesis, we also stress the importance of crowdsourcing platforms like AMT and social media sharing platforms like SoundCloud and YouTube in understanding the user’s requirements and gathering data. We believe that our proposed methods and their outcomes will encourage future researchers to focus on this breed of MIR tasks, where the focus is on how music affects the listener. We also hope that the insights gained through this thesis will inspire designers and developers to build novel user interfaces that enable effective access to music.","music as technology; music for activities; music event detection","en","doctoral thesis","","978-94-6375-416-3","","","","","","","","","Multimedia Computing","","",""
"uuid:f2e8ac06-33c0-423e-9617-6eaa87f7abd8","http://resolver.tudelft.nl/uuid:f2e8ac06-33c0-423e-9617-6eaa87f7abd8","CMOS SPAD Sensors for 3D Time-of-Flight Imaging, LiDAR and Ultra-High Speed Cameras","Zhang, C. (TU Delft (OLD)Applied Quantum Architectures)","Charbon-Iwasaki-Charbon, E. (promotor); Delft University of Technology (degree granting institution)","2019","In conventional applications, such as bio-imaging and microscopy, SPAD is typically used as a single-photon counter. However, this advantage has been challenged by other photon-counting technologies, especially from CMOS-based QIS. Comparatively, apart from single-photon counting capability, QIS is superior to SPAD in terms of intrinsic multi-photon counting capability, quantum efficiency, dark noise, pixel size and fill factor. All these features indicate a low cost and high resolution photon counting imager can be built with QIS, which can be a great competition to SPADs. Moreover, QIS has been demonstrated with 1Mjot array, 0.175e- rms read noise and 1000 fps at less than 20 mWpower consumption.","Single-photon avalanche diode; time-of-flight; LiDAR; image sensor; high-speed sensor","en","doctoral thesis","","","","","","","","","","","(OLD)Applied Quantum Architectures","","",""
"uuid:a431659a-da38-42a5-be17-05b8c241e355","http://resolver.tudelft.nl/uuid:a431659a-da38-42a5-be17-05b8c241e355","Unraveling proteins at the single molecule level using nanopores","Restrepo Perez, L. (TU Delft BN/Chirlmin Joo Lab)","Dekker, C. (promotor); Joo, C. (promotor); Delft University of Technology (degree granting institution)","2019","The function and phenotype of a cell is determined by a complex network of interactions between DNA, RNA, proteins and metabolites. Therefore, a comprehensive approach that integrates genomics, transcriptomics, proteomics, and metabolomics is necessary to achieve full understanding of biological processes and disease. Recent technological developments have mostly focused on the study of genomes. DNA sequencing has become fast, cheap, and ubiquitous. The study of other -omes, especially the proteome, remains expensive and time-consuming...","Single-molecule protein; sequencing; protein fingerprinting; proteins; nanopores; post-translational modifications","en","doctoral thesis","","978.90.8593.395.3","","","","","","2019-05-10","","","BN/Chirlmin Joo Lab","","",""
"uuid:8bf73354-7c68-4512-8c2b-a5f060e783f4","http://resolver.tudelft.nl/uuid:8bf73354-7c68-4512-8c2b-a5f060e783f4","Automatic tuning of photonic beamformers: A data-driven approach","Bliek, L. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); Wahls, S. (copromotor); Delft University of Technology (degree granting institution)","2019","Beamforming is a signal processing technique used in highly directional antennas. An array of antenna elements transmits the same signal, but with a different time delay for each element. By providing the right time delays for each antenna element, the whole array transmits a high-powered signal in one desired direction. This technique can be used for example to provide satellite television and Internet connections on board of aircrafts. Recently, developments in the field of integrated microwave photonics have paved the way for broadband, low-loss, and low-weight beamformer systems. These photonic beamformers convert the signals to be transmitted to the optical domain, provide the correct time delays with tunable optical delay lines, and then convert the signal back to the radio frequency domain. The main challenge here lies in tuning the actuators of the tunable optical delay lines in such a way that they provide the desired time delays. Challenges like actuator crosstalk, parameter sensitivity, noise and model errors cause complications when traditional tuning algorithms are used, such as nonlinear optimization routines. All results obtained with these photonic beamformers in the literature so far have been achieved by tuning the whole system by hand, or by applying nonlinear
optimization techniques to a simplified simulation of the system rather than the actual system. In order to find a practical way of tuning a photonic beamformer in real time, this thesis takes a data-driven approach. Instead of relying on perfectly accurate physical models, a surrogate function is used that approximates the relation between the system actuators and a cost function, namely the difference between the measured and desired time delay of each antenna element. By performing nonlinear optimization techniques on this surrogate cost function and by continuously updating the approximation as new measurements are obtained, the time delays of each antenna element should converge towards the desired values. The Data-based Online Nonlinear Extremum-seeker (DONE) algorithm is used to update and optimize the surrogate function in real time. This algorithm is especially designed to optimize cost functions that are costly to evaluate (for example in terms of time), that contain noise, and for which derivatives cannot be easily computed or approximated. The DONE algorithm is applied to a simulation of a photonic beamformer and to the real system, as well as to several other applications. It is shown that the algorithm outperforms comparable methods on several fronts, especially computation time. Furthermore, the theory behind the algorithm is investigated, but practical results are also given, for example rules of thumb for choosing the hyper-parameters. Finally, variations to the DONE algorithm have been developed that are easier to use, can be implemented more efficiently, and can deal with time-varying objective functions.","Photonic beamforming; microwave photonics; surrogate modeling; machine learning; costly and noisy optimization","en","doctoral thesis","","978-94-6323-538-9","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:a2bbb49d-8642-4a32-b479-4f0091f2d206","http://resolver.tudelft.nl/uuid:a2bbb49d-8642-4a32-b479-4f0091f2d206","On the redistribution and sorting of sand at nourishments: Field evidence and modelling of transport processes and bed composition change","Huisman, B.J.A. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Ruessink, Gerben (promotor); de Schipper, M.A. (copromotor); Delft University of Technology (degree granting institution)","2019","Increasingly large sand nourishments are used for the maintenance of sandy coasts around the world. There is, however, still very limited understanding of their behaviour (i.e. redistribution) and effects on the marine environment. This PhD thesis explores the morphological reshaping of shoreface nourishments (i.e. long bunds of sand placed at the sub-tidal bar with volumes of 1 to 5 million m3) based on data at 19 field sites as well as the reshaping of mega nourishments (i.e. temporary land reclamations) using data of the 'Sand Motor' at the Dutch coast (with a volume of 21 million m3). Considerable cross-shore profile change takes place at shoreface nourishments, consisting of a landward movement of the nourishment crest and erosion of the seaward edge of the nourishment, erosion directly landward of the shoreface nourishment (in the first 100 to 150 m) and some accretion in the inner surfzone (at MSL -2m). Especially the water-level gradient driven currents and onshore transport due to wave skewness are responsible for the morphological change, which could be modelled with Xbeach using a lookup table with initial sedimentation-erosion rates for possible climate conditions. Mega nourishments, on the other hand, reshape predominantly in alongshore direction as a result of the alongshore wave-driven current. Design graphs showing the erosion rates, life span and maintenance volumes were made for the mega nourishments, which can be used for the planning phase of projects. 
Making a differentiation between the non-rotating foreshore and active surfzone proved to be essential for an accurate representation of the wave-driven alongshore transport in 1D coastline models. Furthermore, the lifetime of the nourishment is related to the sensitivity of the alongshore wave-driven transport to a shoreline rotation, which is affected especially by the wave energy. The upscaling of the sand nourishment volume in the last decades (i.e. to Sand Motor scale) and increasing anthropogenic pressure also raise questions regarding the impact on the natural environment. This thesis investigated the development of the bed sediment composition at the 'Sand Motor' using field measurements and numerical modelling, since bed composition is relevant for marine ecology. Considerable alongshore heterogeneity of the bed composition (D50) was observed as the Sand Motor evolved over time with (1) a coarsening of the lower shoreface of the exposed part of the Sand Motor (+90 to +150 µm) and (2) a deposition area with relatively fine material (50 µm finer) just North and South of the Sand Motor. The alongshore heterogeneity of the D50 is most evident outside the surfzone (i.e. seaward of MSL -4 m), while alongshore variation in D50 was relatively small in the surfzone itself (i.e. landward of MSL -4 m). Preferential erosion of the finer sand fractions takes place during mild to moderate wave conditions, while a reduction of the local armouring of the bed takes place during storms, which mobilize all sediment fractions and mix the top layer of the bed with the relatively finer substrate. A 3D multi-fraction morphological model gave a good hindcast of 2.5 years of observed spatial and temporal changes in D50 at the Sand Motor, which showed that the coarsening of the bed after construction of the Sand Motor is mainly due to the tidal contraction at the Sand Motor.
The currents preferentially transport the fine grains of the sand mixture, which are more easily suspended than the coarse grains. This difference in suspension behaviour is the main cause of the observed bed composition changes at the lower shoreface. Within the surfzone the difference in suspension behaviour of the size fractions will be smaller, as the energetic conditions can suspend all size fractions. The current findings imply that large-scale bed composition changes can take place at any coastal structure which has a considerable impact on the tidal currents.","Coastal safety; Sand nourishment; Morphology; Bed composition; Sediment sorting; Rip-currents; Numerical modelling","en","doctoral thesis","","978-94-6384-037-8","","","","","","","","","Coastal Engineering","","",""
"uuid:d784c51d-1ff0-48e7-b187-6f761491bd11","http://resolver.tudelft.nl/uuid:d784c51d-1ff0-48e7-b187-6f761491bd11","Structured matrices for predictive control of large and multi-dimensional systems","Sinquin, B. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); Vdovin, Gleb (promotor); Delft University of Technology (degree granting institution)","2019","The extremely large telescopes that should see first light in coming years demand so-called adaptive optics systems to overcome the devastating effect of the atmospheric turbulence on the image quality. A sensor measures the incoming distortion of the light and is used for reshaping the latter using a deformable mirror. Processing the large number of sensor channels to operate the actuators at kilohertz frequencies is challenging on the computational point of view. The correction applied by the mirror and based on the sensor measurements should indeed not be already outdated because the turbulence has evolved during the computation time. In order to reduce the memory storage and the computational requirements, prior knowledge on the system is commonly translated into assumptions on the system matrices. When the sensors are regularly spread on a two-dimensional grid as is the case in adaptive optics, and the underlying function that describes the spatial dynamics is separable in its horizontal and vertical coordinates, a particular matrix representation is studied. This parametrization allows to write the matrices with a linear number of parameters (instead of quadratic without) and especially to derive more efficient algorithms for identifying from data the spatio-temporal dynamics of the turbulent atmosphere. This PhD thesis draws pros and cons of such a parametrization of large matrices for Linear Time Invariant systems, especially from an identification perspective. 
Besides, its close connection with tensors raises new fundamental questions in the analysis of such systems.","System Identification; Large scale systems; Kronecker product; Adaptive optics; LQG control","en","doctoral thesis","","978-94-6323-612-6","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:e06bd615-7fc4-481b-a334-37627f142e3d","http://resolver.tudelft.nl/uuid:e06bd615-7fc4-481b-a334-37627f142e3d","Autogenous shrinkage of early age cement paste and mortar","Lu, T. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Delft University of Technology (degree granting institution)","2019","Concrete is a brittle composite material that easily fractures under tension. Due to the fact that the early-age deformation of the concrete member is restrained by adjoining structures, cracking can occur throughout the concrete prior to application of any load. The cracks would provide preferential access for aggressive agents penetrating in the concrete and then cause corrosion of reinforcement and degradation of concrete. As a result, the service life of concrete structure would be decreased. There are many different types of early-age deformation of concrete, e.g. temperature induced strain, drying shrinkage and autogenous shrinkage. Among these types of early-age deformation, autogenous shrinkage is a consequence of the self-desiccation during the cement hydration process. For a long time autogenous shrinkage was considered negligible compared with drying shrinkage. In recent years, autogenous shrinkage has drawn more and more attention due to the increasing use of concretes with low water-binder ratios. Despite the fact that phenomenon of autogenous shrinkage has been recognized for several decades, the mechanism behind it is still not fully understood and no consensus has yet been reached. Three is a general agreement about the existence of a relationship between autogenous deformation and relative humidity change in the capillary pores of the hardening cement paste. Many simulation models were built based on this relationship to predict the development of autogenous shrinkage. The reliability of these predictions, however, is not always satisfactory. 
The discrepancy between the measured and calculated autogenous deformation becomes very pronounced at later ages. In those simulation models, cement paste was considered an elastic material and only the elastic part of autogenous shrinkage was predicted. In fact, cement paste is not an ideally elastic material. When a cement paste is subjected to a sustained load, it deforms elastically and then continues to deform further with time, a process known as creep. Creep plays an important role in the autogenous shrinkage of hydrating cement paste. Ignoring creep would lead to an underestimation of the autogenous shrinkage. The aim of this project is to study the autogenous shrinkage of Portland cement pastes and blended pastes with supplementary materials. The autogenous shrinkage is assumed to consist of two parts, an elastic part and a time-dependent part (creep), which are simulated separately. Based on the autogenous shrinkage of cement pastes, the autogenous shrinkage of cement mortars and concretes was simulated by taking the restraining effect of rigid sand/aggregate particles into consideration.","autogenous shrinkage; capillary tension; creep; silica fume; fly ash; blast furnace slag; cement paste; cement mortar; concrete","en","doctoral thesis","","978-94-6384-040-8","","","","","","","","","Materials and Environment","","",""
"uuid:4f6a3bb1-c13c-4741-8f31-41dcd0a05caf","http://resolver.tudelft.nl/uuid:4f6a3bb1-c13c-4741-8f31-41dcd0a05caf","The Art of Bridge Design: Identifying a design approach for well-integrated, integrally-designed and socially-valued bridges","Smits, J.E.P. (TU Delft Structural Design & Mechanics)","Nijsse, R. (promotor); Nijhuis, S. (promotor); Delft University of Technology (degree granting institution)","2019","It is hard to imagine a world without bridges. Bridges lie at the heart of our civilization bringing growth and prosperity to our society. It is by virtue of bridges that communities are able to physically connect to new people and to new places that were previously disconnected. However, bridges are more than mere functional assets. A well-designed bridge reflects mankind’s creativity and ingenuity. The way that our bridges are commissioned, designed and procured is rapidly changing. Nowadays, a large number of experts from many different disciplines work on the design during different phases of the project. The segregation of knowledge into discipline-specific fields, and the fragmented approach to bridge procurement, have resulted in a general lack of cohesion in bridge design. The objective of this research is to identify a design approach, through all scales of the design, that leads to bridges that are well-integrated, that are integrally-designed and that are valued by society.
The methodology of this research consists of reviewing numerous projects from my own bridge design practice. By identifying design considerations on four levels, namely the level of the landscape, the level of the bridge, the level of the detail and the level of the material, this research demonstrates how an overall approach to well-integrated, integrally designed and valued bridges can be achieved by addressing each of these scales of the design.
If the changes in the field of bridge design over the past 150 years have taught us one thing, it is that the field of bridge design has become far too complex to be embodied by one person, whether that person is an engineer or an architect. The role that the master builder played up until the late renaissance, bringing together aesthetic design and building craft in one person, is nowadays fulfilled by a team of specialists. You could say that the integrated design team is the contemporary version of the renaissance master builder. The basis of the ideal team naturally consists of a lead architect and a chief engineer. Within this team, the architect should be the design integrator; he or she has the task of securing the equilibrium between Beauty, Utility and Solidity throughout every phase of the design process. This balancing act takes place at all scale levels and through all phases of the design.
In this thesis, we focused on (i) learner modeling and (ii) generation of educational material for both topic-agnostic and topic-specific MOOC platforms.","","en","doctoral thesis","","978-94-028-1482-8","","","","SIKS Dissertation Series No. 2019-13","","","","","Web Information Systems","","",""
"uuid:ef99fd03-4713-493f-b3c3-1f741193eefd","http://resolver.tudelft.nl/uuid:ef99fd03-4713-493f-b3c3-1f741193eefd","Autonomous Onboard Mission Planning for Multiple Satellite Systems","Zheng, Z. (TU Delft Space Systems Engineering)","Guo, J. (promotor); Gill, E.K.A. (promotor); Delft University of Technology (degree granting institution)","2019","With the rising demands from customers and users and the development of ever more advanced technologies, many space missions nowadays require more than one satellite to fulfill their mission objectives. Although replacing single satellite systems (SSSs) with multiple satellite systems (MSSs) offers advantages, such as enhanced spatial and temporal coverage as well as high robustness and multifunctional purposes, it also introduces new challenges. There is no doubt that as the number of satellites in a mission grows, the complexity and operational cost of controlling and coordinating these satellites solely by human (or ground-based) operators will increase dramatically. In addition, for some deep space missions or complex operational tasks, due to the long signal transmission time between the spacecraft and ground-based antennas, or short communication windows, there will not be enough time or resources for operators to sufficiently and efficiently control all of the required onboard functions from mission control centers. Therefore, to enhance the efficiency of operating an MSS, and to reduce the cost of human resources and ground infrastructure, an onboard autonomous system (OAS) for an MSS is a promising solution. For specific missions, the use of an OAS may even be a mission enabler. One important function of an OAS is to provide planning and re-planning services based on different mission requirements. The objective of this research is to develop and characterize onboard autonomous mission planning and re-planning approaches for MSSs. 
Traditional planning approaches have been reviewed and proven to be inappropriate and inefficient for complex planning problems in the harsh space environment, where severe system constraints are enforced and a large number of vehicles constitutes the MSS. To overcome these deficiencies, engineers and researchers have started to develop OASs with the help of Artificial Intelligence (AI) techniques, which are better suited to complex problems due to their broad adaptability and their ability to cope with large-scale variables, to allow for more complex space missions. Based on the relevance of this problem, the following research questions (RQs) have been formulated and will be answered in this thesis. RQ1: What are the strengths of using AI in space missions? How can a centralized AI algorithm be used in a multi-satellite system to decompose mission objectives and perform mission planning for the entire system? RQ2: How can emergency situations which may occur during mission operations be defined? How can AI algorithms be used to handle mission re-planning and re-scheduling problems? RQ3: How can cooperation and negotiation approaches be designed for an MSS to reach an agreement? How can AI algorithms be improved for distributed onboard mission planning problems? To define potential scenarios, a reference mission is introduced in this thesis, called Discovering the Sky at the Longest Wavelength (DSL). The mission is assumed to comprise one Mother Satellite (MS) and eight Daughter Satellites (DSs) in a lunar orbit. Its scientific objective is to observe the universe in the hitherto-unexplored very low frequency (below 30 MHz) electromagnetic spectrum. The DSs collect scientific data only in those parts of the orbit which are shielded from radio frequencies emitted by the Earth. 
These DSs can only transmit the collected data to the MS when they are outside of these shielded orbit sections, to prevent interference caused by communication. This renders mission operations of DSL very complex. The existing body of knowledge on mission planning problems for multi-satellite systems is reviewed. It comprises three categories: classical approaches, heuristic approaches, and advanced techniques (e.g., team negotiation mechanisms, evaluation algorithms). Targeting the complexity of foreseeable DSL planning problems, nine representative optimization algorithms are applied to fourteen test functions. The results indicate that Evolutionary Algorithms (EAs) have a broader adaptability than classical approaches. They are also more efficient than other heuristic approaches. Therefore, the EA family is selected as a suitable candidate for the reference MSS. Eight constrained and six unconstrained test functions are employed as benchmarks. The operations concept of the DSL mission foresees that the initial mission planning is performed by the MS, while the eight DSs initially execute data collection and transmission tasks. 
During this phase, the MSS implements a centralized architecture and the MS conducts a centralized planning approach. By comparing the basic Genetic Algorithm (GA) with several state-of-the-art improved GAs, its weaknesses are revealed. In this thesis, to overcome premature and slow convergence problems, the need to develop a new mutation strategy for the GA is motivated. The proposed novel mutation strategy is called Hybrid Dynamic Mutation (HDM), which contains a standard mutation operator and an escape mutation operator. While the standard mutation operator uses a small mutation rate for approaching the global optimum, the escape mutation operator uses a larger mutation rate to allow an escape from local optima. The simulation results indicate that the proposed HDM can improve the basic GA (yielding the HDMGA), leading to a superior performance in correctness and effectiveness as compared to alternative GAs. Based on these findings, AI-related methods are considered a promising category as compared to classical methods, due to their flexibility and effectiveness in supporting onboard planning for an MSS. In addition, the proposed HDMGA also provides satisfying results for the considered initial mission planning problems. Internal or external causes, e.g. an actuator failure or the challenging space environment, can lead to a satellite malfunction during mission operations. This thesis considers the two most important behaviors of the DSL mission, observation and communication, and proposes potential emergency scenarios to handle possible system failures on the DSs. 
Two re-planning methods, one called the Cyclically Re-planning Method (CRM), the other the Near Real-time Re-planning Method (NRRM), are established and compared. The CRM performs re-planning at the beginning of each orbit and only re-plans for one orbit. The NRRM performs re-planning in a near real-time setting when an emergency occurs; its re-planning covers the rest of the mission. Three simulation study cases are formulated based on assumed emergency scenarios. The two proposed methods are compared on three aspects: the total amount of data observed by all DSs within a certain time frame, the total amount of data the MS received from all DSs within a certain time frame, and the average computation time needed for re-planning. The results indicate that: (1) the NRRM allows observing and transmitting more data than the CRM within a specific operational lifetime; (2) the NRRM requires more computational time than the CRM for emergency situations, while it requires less time than the CRM for nominal situations. This research also covers a much more severe scenario, namely that the MS becomes fully non-functional in an emergency situation. This would render the MS unable to provide mission planning and re-planning services for the MSS. Without the main controller on the MS, all DSs now need to cooperate to jointly solve the mission planning problems. Due to the loss of the MS, both distributed and decentralized architectures, which the MSS could then use, are introduced. In a distributed architecture, each DS is connected with all other DSs, either directly or through a DS which acts as a relay. In a decentralized architecture, each DS can only communicate with its neighbors. 
Considering that the mission allocation problems in different organizational architectures are similar to information games in game theory, a game-theoretical model of the Multi-Satellite Mission Allocation (MSMA) problem is formulated. The Utility-based Regret Play (URP) negotiation mechanism is proposed for an MSMA problem using a distributed architecture. It inherits from the Utility-based Fictitious Play the ability to evaluate individual utility at each negotiation step, and from the Regret Matching Play the ability to regret the current choice and to regret not having proposed particular choices in past negotiation steps. The Smoke Signal Play (SSP) and Broadcast-based Play (BBP) are developed for a decentralized architecture instead. The SSP is inspired by an old communication method, the smoke signal, where each satellite is considered a smoke tower, passing utility information to its succeeding neighbor. The BBP uses broadcasting as the communication method, where each satellite can transmit information to all its neighbors. The simulation results show that the URP can outperform three state-of-the-art mechanisms (Action-based Fictitious Play, Utility-based Fictitious Play, and Regret Matching Play) in the studied cases. For the decentralized architecture, the results reveal that both SSP and BBP can provide valid solutions for mission allocation problems. The BBP mechanism shows a superior performance in computation time as compared to the SSP and a state-of-the-art approach called Market-based Auction. The SSP mechanism, on the other hand, shows the best performance with respect to power consumption. To solve complex optimization problems in distributed mission planning scenarios, an approach named Hybrid Distributed GA (HDGA) is proposed. 
This approach contains two modules: the Local Constraint Satisfaction (LCS) module and the Globally Distributed Optimization (GDO) module. In the LCS module, the greedy best-first search algorithm is employed as the local search heuristic, helping each DS to find suitable solutions which satisfy its individual constraints. This module is designed to generate multiple solutions to form local populations for the GDO. The GDO module employs the HDMGA as the core optimization algorithm, while the individual populations are formed through a local population exchange procedure between one participant and all other participants. For a standard planning case, the results indicate that the HDGA can reduce the computation time while ensuring a higher success rate compared to the HDMGA. Comparing the HDGA with two other state-of-the-art distributed optimization algorithms, the Distributed Ant Colony Optimization (DACO) and the Coevolutionary Particle Swarm Optimization (CPSO), the statistical results reveal that the HDGA is more stable and accurate in handling large-scale planning problems. The HDGA also shows the best performance in computation time among all tested distributed approaches.","Artificial Intelligence; Onboard Autonomy; Mission Planning; Multiple Satellite Systems; Team Negotiation Mechanisms; Centralized and Distributed optimization","en","doctoral thesis","Ipskamp","978-94-028-1480-4","","","","","","","","","Space Systems Engineering","","",""
"uuid:b20beffc-f69f-4723-9c91-77da79861f62","http://resolver.tudelft.nl/uuid:b20beffc-f69f-4723-9c91-77da79861f62","From sequentially linear analysis to incremental sequentially linear analysis: Robust algorithms for solving the non-linear equations of structures of quasi-brittle materials","Yu, C. (TU Delft Applied Mechanics)","Rots, J.G. (promotor); Hoogenboom, P.C.J. (copromotor); Delft University of Technology (degree granting institution)","2019","It is difficult to accurately predict the strength of masonry and concrete structures. The most widely used method for simulating their behaviour is finite element analysis with the Newton-Raphson method and arc-length control. However, the Newton-Raphson method can diverge and fail to produce a result, for example at bifurcation points or during snap-back. In order to enhance the robustness of solving non-linear problems, a new method, called incremental sequentially linear analysis (ISLA), is proposed. The method is based on a combination of the Newton-Raphson method and a total approach called sequentially linear analysis.
In ISLA, local damage is induced by reducing the material secant stiffness of the element that fails a unity check. The load is applied in force increments or displacement increments, which are adjusted to trace the complete structural response.
It has been shown that ISLA can handle non-proportional loading, geometrically non-linear analysis, and transient analysis. The robustness of ISLA has been demonstrated in four examples: a concrete beam with both prestress and vertical load; out-of-plane bending of a masonry wall with overburden; a differential settlement test on a pre-loaded masonry façade; and a 3D pushover analysis of a masonry house.
In order for any quantum processor to be operated, a so-called quantum-classical interface is required for quantum bit (qubit) read-out and control. This interface consists of various electronic blocks, such as analog-to-digital converters, digital-to-analog converters, mixers, amplifiers, and a digital controller. Especially the analog blocks require effort to meet the noise and stability constraints, so as not to disturb the very sensitive qubits and to allow reading of the tiny signals. As the qubits live at extremely deep cryogenic temperatures, long wires connect the cryogenic environment with the room-temperature environment, where most of the electronics is situated. However, for a scalable system, heat injection becomes a serious problem, with many wires running between 300 K and sub-kelvin temperatures. Furthermore, such a number of interconnects is challenging to place mechanically in a dilution refrigerator.
Therefore, in this work, we propose to implement the electronics not at room temperature, but at a temperature much closer to the qubits, for example at 4 K. This not only significantly reduces the wiring between room temperature and the qubits, but also allows us to benefit from the lower electronic noise at such a temperature. The operation of conventional electronics almost 200°C below its normal temperature range is not trivial, as device properties alter significantly and most circuits no longer operate as intended.
In CMOS processes, the main technology in the integrated electronics world, the behaviour of the transistors, required for the implementation of any circuit, deviates considerably at low temperatures. The transistor's threshold voltage goes up, the mobility increases, and the subthreshold slope becomes steeper, to name just a few of these deviations. Although there are improvements in performance, there are also some counter-effects, and characterization of the transistors is needed to observe the changes. Once devices are characterized at such low temperatures, new models can be built, and circuits can be simulated and adapted to operate properly at cryogenic temperatures. Luckily, various commercially available devices can already withstand the cold. We demonstrated several such devices, including a field-programmable gate array (FPGA) implemented in a 28 nm CMOS process, to operate without major concerns at 4 K. Its properties alter only slightly, within 5 to 10%, and all tested circuit implementations working at 300 K also worked at 4 K.
We combined commercially available devices, operating 200 K below their specified temperature range, with custom-designed CMOS circuits to implement a cryogenic read-out platform for spin qubits. This system comprises amplifiers, an ADC, an FPGA, voltage regulators and a clock generator. It allows the tiny signal from the qubits to be amplified and digitized directly in the FPGA, so that the data can be processed locally at 4 K. This system is one of the first systematic attempts at operating part of the quantum-classical interface at cryogenic temperatures and forms the basis for future systems comprising the complete electronic interface for qubits operating at such low temperatures.
One of the main problems to tackle for cryogenic electronic systems is their power consumption. Power budgets are limited to roughly 1 to 2 Watts at 4 K, and are exponentially lower at deeper cryogenic temperatures, thus limiting the size of large-scale electronic systems. Our approach of combining commercial and custom circuits will have to be steadily replaced by a single (custom) technology that meets both power and scalability constraints. One of the best candidates is CMOS, a technology that the industry has relied upon for several decades and that benefits from many optimizations thanks to Moore's law.","cryogenic electronics; FPGA; CMOS; transistor; qubit; quantum processor","en","doctoral thesis","","978-94-6384-029-3","","","","","","2019-05-01","","","OLD QCD/Charbon Lab","","",""
"uuid:28e12122-9c63-4260-aa87-b9e8f7de35fe","http://resolver.tudelft.nl/uuid:28e12122-9c63-4260-aa87-b9e8f7de35fe","Regime shifts in sediment concentrations in tide-dominated estuaries","Dijkstra, Y.M. (TU Delft Mathematical Physics)","Schuttelaars, H.M. (promotor); Wang, Zhengbing (promotor); Delft University of Technology (degree granting institution)","2019","Within estuaries one can often observe areas where the concentration of fine suspended sediments is higher than in the surrounding waters, called estuarine turbidity maxima (ETM). ETM play an important role in the natural and socio-economic value of estuaries. The suspended sediments can, for example, greatly diminish the value of the estuarine ecosystem by negatively affecting the light climate and oxygen level, as well as the economic value by leading to increased dredging costs for maintaining the depth of shipping channels. In at least two tide-dominated estuaries, the Ems River (Netherlands, Germany) and Loire River (France), the suspended sediment concentration has increased dramatically over the course of several decades. This has resulted in a great decline of the ecosystem and an increase in dredging costs. As it is not well understood why these so-called regime shifts in suspended sediment concentrations occurred, it remains unclear whether similar regime shifts can occur in other tide-dominated estuaries. The current leading hypothesis states that the regime shifts in the Ems and Loire are a result of man-made deepening of the estuary in the preceding decades. In addition, the hypothesis states that a similar regime shift can also occur in other tide-dominated estuaries that are subject to deepening. In this thesis, this hypothesis is systematically tested by investigating the main physical processes that drive the sediment dynamics in tide-dominated estuaries and their response to channel deepening. 
This is illustrated by taking examples of two estuaries: the Ems (Netherlands, Germany) and Scheldt (Netherlands, Belgium)…","","en","doctoral thesis","","978-94-6380-311-3","","","","","","","","","Mathematical Physics","","",""
"uuid:76b3c6ed-d581-4487-863c-5e2dbdbaadfb","http://resolver.tudelft.nl/uuid:76b3c6ed-d581-4487-863c-5e2dbdbaadfb","Plastic deformation of self-affine rough metal surfaces under contact loading: A Green’s function dislocation dynamics analysis","Parayil Venugopalan, S. (TU Delft (OLD) MSE-7)","Nicola, L. (promotor); Delft University of Technology (degree granting institution)","2019","","Contact mechanics; Plasticity; Dislocation dynamics; Size effect; Tribology; Self-affine; Fractal; Rough surface","en","doctoral thesis","","978-94-6366-163-8","","","","","","","","","(OLD) MSE-7","","",""
"uuid:2b87c62b-6078-454d-939a-20966fefc464","http://resolver.tudelft.nl/uuid:2b87c62b-6078-454d-939a-20966fefc464","Skyrmions and spirals in cubic chiral magnets","Bannenberg, L.J. (TU Delft RST/Neutron and Positron Methods in Materials)","Pappas, C. (promotor); van Well, A.A. (copromotor); Delft University of Technology (degree granting institution)","2019","Magnetic skyrmions are two-dimensional topologically protected spin textures with particle-like properties which may crystallize in lattices that are typically oriented perpendicular to the magnetic field. Their first observation was in the archetype cubic helimagnet MnSi, where skyrmion lattices appear spontaneously in the A-phase, located just below the transition temperature and under magnetic fields. Single-crystal cubic helimagnets host, besides skyrmion lattices, a variety of helimagnetic phases that are stabilized by three hierarchically ordered interactions: the strongest, the ferromagnetic exchange; the weaker Dzyaloshinsky-Moriya interaction, which is non-zero due to the absence of inversion symmetry; and the weakest, the anisotropy interaction. The competition between these and the Zeeman interaction results in a relatively generic phase diagram for cubic helimagnets. Despite this relatively generic phase diagram, each of these systems has its own particularities. This thesis mainly presents results of magnetization, ac susceptibility and neutron scattering studies of four cubic helimagnets: MnSi, Mn1-xFexSi, Fe1-xCoxSi and Cu2OSeO3.
","","en","doctoral thesis","","978-94-028-1463-7","","","","","","","","","RST/Neutron and Positron Methods in Materials","","",""
"uuid:49ce6618-c9d5-4a46-bbc9-7439484d1ff8","http://resolver.tudelft.nl/uuid:49ce6618-c9d5-4a46-bbc9-7439484d1ff8","Self-healing Polyimides","Susa, A. (TU Delft Novel Aerospace Materials)","van der Zwaag, S. (promotor); Garcia, Santiago J. (promotor); Delft University of Technology (degree granting institution)","2019","Covalent reversible chemistries give rise to polymers with reasonable mechanical properties, yet they require external stimuli to heal. Conversely, supramolecular systems can heal autonomously, but their properties still fall far short of most application requirements. Both downsides need to be addressed before intrinsically healing polymers can emerge from the academic literature and be found in daily-life products. The aims of the research described in this thesis are: i) to develop intrinsic self-healing polymers with mechanical properties that significantly exceed those currently reported in the academic literature but that can nevertheless repair cracks at room temperature; ii) to contribute to a better understanding of the importance of the polymer chain architecture on the physical processes during the repair of mechanical damage...","","en","doctoral thesis","","978-94-028-1431-6","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:38dfa450-82c2-4b63-92d1-26b2eb3adeed","http://resolver.tudelft.nl/uuid:38dfa450-82c2-4b63-92d1-26b2eb3adeed","CMOS Wide-Bandwidth Magnetic Sensors for Contactless Current Measurements","Jiang, J. (TU Delft Electronic Instrumentation)","Makinwa, K.A.A. (promotor); Delft University of Technology (degree granting institution)","2019","This Ph.D. dissertation describes the theory and realization of wide-bandwidth magnetic sensors for current measurements which overcome the conventional offset and noise limitations of CMOS processes. This is achieved by several circuit-level and system-level innovations, including the use of three orthogonal ripple reduction loops (RRLs) in spinning-current Hall sensors, a multipath
architecture with double Hall sensors, and a hybrid magnetic sensor system combining Hall sensors and pick-up coils. The prototypes with these techniques have advanced the bandwidth of state-of-the-art CMOS magnetic sensors by more than two orders of magnitude.","CMOS; Hall sensor; wide bandwidth; current sensor; ripple reduction loop (RRL); pick-up coils; Multipath; Multisensor system; Spinning Current; Hybrid sensor system","en","doctoral thesis","","978-94-6384-025-5","","","","Junfeng Jiang was born in 1986 in Dalian, China. He received his B.Sc. degree in electrical engineering from Dalian University of Technology, Dalian, China, in 2009, and his M.Sc. degree in microelectronics from Delft University of Technology, Delft, the Netherlands in 2011. He expects to receive his Ph.D. degree from the same university in 2019, for his work on CMOS wide-bandwidth magnetic sensors for contactless current measurements. From 2010 to 2016, he was a visiting scholar at Texas Instruments (Formerly National Semiconductor Corporation), Delft, where he worked on CMOS wide-bandwidth magnetic sensors. From 2016 to 2017, he was with Texas Instruments Deutschland GmbH, Freising, Germany. Since November 2017, he has been with the sensing group of Texas Instruments, Dallas, USA. His interests include the design of precision analog systems, magnetic sensors and mixed-signal integrated circuits. Junfeng Jiang received the A-SSCC 2015 Travel Grant, SSCS Predoctoral Achievement Award 2016-2017 and ESSCIRC 2016 Best Paper Award.","","","","","Electronic Instrumentation","","",""
"uuid:193f6664-0f19-488f-af6a-21b17ba75be0","http://resolver.tudelft.nl/uuid:193f6664-0f19-488f-af6a-21b17ba75be0","A Physics-based Approach to Assess Critical Load Cases for Landing Gears within Aircraft Conceptual Design","Wu, P. (TU Delft Flight Performance and Propulsion)","Veldhuis, L.L.M. (promotor); Voskuijl, M. (copromotor); Delft University of Technology (degree granting institution)","2019","The European Union and the United States are proposing to introduce stricter flight vehicle emission criteria in the reports of their high‐level groups on aviation research, i.e. EU Flightpath 2050 and US Destination 2025. More fuel‐efficient aircraft must be developed to achieve this target. Moreover, the increasingly competitive aviation market also expects more fuel‐efficient aircraft to be designed. An efficient and reliable aircraft design with a decreased weight could significantly contribute to the improvement of aircraft economic and environmental performance. Various research studies have highlighted the potential for significant weight savings on the landing gear system. In general, the landing gear accounts for around 5% of aircraft Maximum Landing Weight. In the aircraft conceptual design stage, there are two methods to achieve weight savings on the landing gear system: 1) investigation of conventional designs; 2) introduction of innovative designs. In the use of these two methods, a key step is to verify the design of the landing gear w.r.t. certain critical load cases. A landing gear critical load case is defined as a set of combinations of aircraft flight attitudes and motions, control surface and engine throttle settings, and environmental conditions that could lead to damage and failure of the landing gear structure. These critical load cases reflect the possible extreme conditions that might occur in operation. 
These critical load cases are traditionally obtained by utilizing methods based on statistical data while ignoring specific flight dynamics and landing gear characteristics. These methods can lead to three problems.","Flight Dynamics and Loads; Landing Gear; Load Cases; Multidisciplinary Design; Analysis and Optimization","en","doctoral thesis","","978‐94‐6384‐039‐2","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:ef8425da-0eee-45b3-8e29-086e5fb41ede","http://resolver.tudelft.nl/uuid:ef8425da-0eee-45b3-8e29-086e5fb41ede","Estimation of regional mass balance changes of the Greenland ice sheet using GRACE data and the Input-output method","Xu, Z. (TU Delft Astrodynamics & Space Missions)","Visser, P.N.A.M. (promotor); Schrama, Ernst (copromotor); Delft University of Technology (degree granting institution)","2019","In the 21st century, polar land ice melting became one of the driving factors of global sea level rise, which is widely discussed by the media and the public. Although the shrinking of the ice caps and the accompanying changes in sea level are established facts, the actual amount of polar ice melting still needs to be quantified in separate regions. Sitting on top of bedrock, the Greenland ice sheet (GrIS) is the second largest ice sheet on Earth. With traditional glaciological methods the change of the Greenland ice sheet is difficult to measure directly; however, with the GRACE (Gravity Recovery and Climate Experiment) satellite system the mass changes can be measured directly. There are several sub-drainage areas within the Greenland Ice Sheet. Some of these subsystems may contribute differently to the overall mass changes of the GrIS. For instance, while the mass loss in the GrIS ablation zone has been enhanced during the last decades, the central high-altitude areas experienced increased mass accumulation (Krabill et al., 2000, Thomas et al., 2001, Colgan et al., 2015, Xu et al., 2016). It is important to quantify the regional mass changes because it gives us insight into what is going on beyond the realization that the GrIS is shrinking...","Greenland Ice Sheet; GRACE; RACMO2; mass balance; constraints","en","doctoral thesis","","","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:5cb3dd60-78ab-4272-a820-7a754c2cc6a0","http://resolver.tudelft.nl/uuid:5cb3dd60-78ab-4272-a820-7a754c2cc6a0","The dynamics of microtubule stability: A reconstitution of regulated microtubule assembly under force","Kok, M.W.A. (TU Delft BN/Marileen Dogterom Lab)","Dogterom, A.M. (promotor); Delft University of Technology (degree granting institution)","2019","The microtubule cytoskeleton is an intracellular polymer network involved in cell shape, cell motility, and cell division. Microtubules are self-assembling, dynamic polymers composed of tubulin proteins that alternate between phases of growth and shrinkage, a behaviour known as “dynamic instability”. A key feature of microtubules is their ability to generate pushing and pulling forces through controlled assembly and disassembly, guiding for example cell division. The stability of microtubules is largely governed by the progressive hydrolysis of incorporated tubulin dimers. In this thesis we sought to resolve microtubule assembly under force in the presence of regulatory proteins. Through the in vitro reconstitution of microtubule growth against novel micro-fabricated barriers, we studied the biochemical changes leading up to microtubule destabilization. We found that the resulting microtubule lifetimes can be understood with a simple phenomenological 1D model based on noisy microtubule growth and a single hydrolysis rate. Additionally, we explored the implementation of light-inducible protein-protein interactions to gain spatiotemporal control over microtubule function in particular and over in vitro reconstitutions in general. 
As a proof-of-principle, we developed a light-inducible microtubule gliding assay based on the reversible recruitment of motor proteins to a surface.","Microtubules; Optogenetics; Micro-fabrication; Reconstitution; EB3; CLASP; Monte Carlo simulations; Motor proteins","en","doctoral thesis","Casimir PhD Series","978-90-8593-392-2","","","","","","","","","BN/Marileen Dogterom Lab","","",""
"uuid:17e8a268-130c-4c36-b235-ba9e7747c45f","http://resolver.tudelft.nl/uuid:17e8a268-130c-4c36-b235-ba9e7747c45f","Hydrometallurgical recycling of rare earth elements from secondary resources","Peelman, S. (TU Delft (OLD) MSE-3)","Sietsma, J. (promotor); Yang, Y. (promotor); Delft University of Technology (degree granting institution)","2019","The rare earth elements (REEs) are a material group that is becoming increasingly important in modern day technologies, with applications in electronics (e.g. FeNdB magnets and luminescent phosphors), chemical industry (e.g. REE catalysts), energy industry (e.g. NiMH batteries and windmills) and medicine (e.g. Gd MRI contrast fluid). Considering the importance of REEs, a steady and secure supply is essential. However, the European Union does not have any domestic production of these elements and is reliant on import from China to meet its REE demand. With the volatility of the REE market and potential Chinese export restrictions, the EU has begun exploring secondary low-grade resources to mitigate a potential shortage of REEs.","","en","doctoral thesis","","","","","","","","","","","(OLD) MSE-3","","",""
"uuid:6d8bd168-e057-4ee9-854c-32c84015e4c4","http://resolver.tudelft.nl/uuid:6d8bd168-e057-4ee9-854c-32c84015e4c4","Theoretical and Experimental Investigation of Boundary Layer Ingestion for Aircraft Application","Lv, P. (TU Delft Flight Performance and Propulsion)","Delft University of Technology (degree granting institution)","2019","This thesis presents research on Boundary Layer Ingestion (BLI). BLI is an unconventional aircraft-engine integration technique which aims at integrating the aircraft and the propulsion system such that the overall aircraft fuel consumption can be reduced.
This research begins with a literature survey on propulsion integration. The literature is not limited to recent work on novel aircraft concepts utilizing BLI but also covers previous studies on ship propellers and pusher propellers used for aircraft. Various studies in the literature show that BLI and/or similar configurations can effectively reduce the total power consumption of the propulsion system. However, discrepancies can be identified amongst the various studies with respect to the improvements that BLI could provide in terms of reduced power consumption. Different research methods have been used to investigate this phenomenon in the literature. In particular, the variety of research methods gives an indication that the physics involved in BLI might not be well understood. As a result, the current research aims to enhance the fundamental understanding of BLI. The main research questions are identified below:
- What are the mechanisms by which BLI provides its benefit?
- How large is the benefit of BLI?","","en","doctoral thesis","","","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:f6120a10-0a29-43f8-b900-bb62c856e74b","http://resolver.tudelft.nl/uuid:f6120a10-0a29-43f8-b900-bb62c856e74b","Various aspects of quantum transport through single molecules: A mechanical break-junction study","Stefani, D. (TU Delft QN/van der Zant Lab)","van der Zant, H.S.J. (promotor); Mayor, M. (promotor); Delft University of Technology (degree granting institution)","2019","This dissertation concerns transport measurements in single-molecule junctions using the mechanically controlled break junction (MCBJ) technique. It describes various aspects that play a role in charge transport through single molecules, in order to develop the knowledge necessary to ultimately develop electronic devices based on intrinsic molecular functionality.","Single-molecule; Charge transport; mechanically controlled break-junctions; quantum interference; biomolecules; nanotechnology","en","doctoral thesis","","978-90-8593-396-0","","","","","","2020-04-04","","","QN/van der Zant Lab","","",""
"uuid:88753f69-c90e-4f53-a0cc-bec7fc559455","http://resolver.tudelft.nl/uuid:88753f69-c90e-4f53-a0cc-bec7fc559455","Asset Management Data Infrastructures","Brous, P.A. (TU Delft Information and Communication Technology)","Janssen, M.F.W.H.A. (promotor); Herder, P.M. (promotor); Delft University of Technology (degree granting institution)","2019","Many organizations tasked with managing public utility infrastructure routinely collect and store large volumes of data for decision-making purposes in their management and maintenance processes. This data is collected, stored and analyzed within asset management data infrastructures; however, traditional data management methods are becoming increasingly inadequate. More and more, data is being provided by new sources that can communicate over the internet, collectively known as the Internet of Things (IoT). IoT may benefit the management of public utility infrastructures by providing enough quality data to generate the trusted information required to make the right decisions at the right time, helping asset management organizations improve their decision-making capability. The extensible asset management data infrastructure model presented in this dissertation aims at improving our understanding of asset management through IoT. Using a Duality of Technology lens, this research takes the view that IoT is continually being socially and physically constructed, and discriminates between human activity that affects IoT and human activity that is affected by IoT. Explorative case studies in the asset management domain are used as the main research method. Taking the view that asset management data infrastructures are complex adaptive systems ensures that the resulting model is capable of dealing with the evolution of asset management data infrastructures in the face of new technologies and new requirements. The usability of the model is tested by means of test case studies. 
The tests indicate that the model can be used to improve our understanding of asset management through IoT and to provide actionable insights for the achievement of expected benefits and mitigation of risks of asset management through IoT.
An example of a topological quantum state is the Majorana zero mode. Majorana zero modes can be realized in a 1D system with strong spin-orbit coupling and superconductivity, in an external magnetic field. Such materials are not known in nature, but can be engineered by coupling a semiconductor nanowire to a superconducting material. To use these Majorana zero modes as qubits, multiple nanowires have to be connected to each other in a 2D network. The experiments described in this thesis aim to develop such networks based on InSb (indium antimonide) semiconductor nanowires.
A few necessary theoretical concepts are briefly introduced. Subsequently, the nanofabrication and electrical measurement techniques used to study the nanowires are described, with emphasis on the challenges related to working with hybrid semiconductor-superconductor (InSb-Al) materials. Two methods are then presented to realize nanowire networks. Transport experiments on these networks show strong phase coherence and a hard superconducting gap, demonstrating the high quality of the material.
In addition to the intrinsic quality of the material, the electrostatic environment plays an important role in the functionality of hybrid materials. The coupling between the superconductor (Al) and the semiconductor (InSb) is studied by applying an external electric field. This electric field influences material properties such as the spin-orbit coupling and the Landé g-factor. An essential property of the Majorana zero modes is the fact that their state cannot be described locally. Exploratory experiments with the aim of demonstrating this non-locality are described, followed by theoretical simulations demonstrating the limitations of common experimental practice based on local measurements. Finally, several suggestions for future experiments are made, aimed at demonstrating and manipulating Majorana zero modes.","Majorana zero modes; semiconductor nanowires; superconductors; nanoscale physics","en","doctoral thesis","","978-90-8593-393-9","","","","Casimir PhD Series, Delft-Leiden 2019-11","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:bcf14d54-74cc-4fe8-9285-466cee3936ab","http://resolver.tudelft.nl/uuid:bcf14d54-74cc-4fe8-9285-466cee3936ab","A socio-technical exploration of the Car as Power Plant","Park Lee, E.H. (TU Delft Energie and Industrie)","Lukszo, Z. (promotor); Herder, P.M. (promotor); Delft University of Technology (degree granting institution)","2019","In the transition towards low-carbon energy systems, the growth of variable renewable energy sources (V-RES) such as solar and wind in electricity systems is calling for more flexibility measures. These are needed to cope with the increased uncertainty and variability that affect the residual demand. Flexibility can be offered by traditional players in the sector, through dispatchable generation, storage, demand response, and increased interconnection. However, there are also increasing opportunities for new actors and roles. Aggregators, for example, can exploit the flexibility of small consumers and trade it on their behalf in the electricity markets. This flexibility can also be provided
from other sectors, such as heating and transportation. With the adoption and diffusion of electric vehicles, the aggregated capacity is considered to have significant potential to support the grid in the future. Vehicles are only used 5% of the time for driving. Thus, when parked, they could be used for providing flexibility through storage or providing vehicle-to-grid (V2G) power.","vehicle-to-grid; contracts; fuel cell electric vehicles; Agent-based modeling and simulation","en","doctoral thesis","","978-94-6323-596-9","","","","","","","","","Energie and Industrie","","",""
"uuid:28d1cbd6-d2e0-4ac1-bcad-47f598d9b183","http://resolver.tudelft.nl/uuid:28d1cbd6-d2e0-4ac1-bcad-47f598d9b183","A single wake oscillator model for coupled cross-flow and in-line vortex-induced vibrations of marine structures","Qu, Y. (TU Delft Offshore Engineering)","Metrikine, A. (promotor); Delft University of Technology (degree granting institution)","2019","Vortex-induced vibration (VIV) is a well-known phenomenon for civil and offshore structures. Currently, the prediction of this type of vibration in practice mainly relies on the force-decomposition method. However, the limitations of this method have restricted its applicability, and alternative models are therefore needed to meet the increasing demand for more accurate prediction of VIV under more complicated conditions. The wake oscillator model overcomes the main limitations of the force-decomposition method to some extent, and it is one of the promising models that have gained popularity in recent years. Although the concept of the wake oscillator was first proposed over half a century ago and has been developed much since then, the existing wake oscillator models still have some limitations, which have restricted their applications. The main objective of this study is to improve the wake oscillator model for better modelling of the VIV of cylindrical structures, and efforts are made in this thesis to (a) reproduce the free and forced vibration experiments by introducing nonlinear coupling, and (b) develop a single wake oscillator equation that is coupled to both cross-flow and in-line motions for the prediction of coupled cross-flow and in-line VIV...","vortex-induced vibrations; wake oscillator model; fluid-structure interaction; coupled cross-flow and in-line vibrations","en","doctoral thesis","","978-94-6366-158-4","","","","","","2020-06-30","","","Offshore Engineering","","",""
"uuid:70c2b43b-e2c0-43e1-9bcf-117703e9b23a","http://resolver.tudelft.nl/uuid:70c2b43b-e2c0-43e1-9bcf-117703e9b23a","Downstream Process Development for Bio-based Production of Phenolics","Henriques da Silva, M.D. (TU Delft BT/Bioprocess Engineering)","Ottens, M. (promotor); van der Wielen, L.A.M. (promotor); Delft University of Technology (degree granting institution)","2019","Polyphenols are molecules with a wide range of bioactive properties. They are being applied as nutraceuticals or natural colorants. The growing interest in these molecules has led to the creation of projects such as the BacHBerry project (www.bachberry.eu), with the main goal of discovering novel polyphenols and tapping into their commercial application (Chapter 1). Moreover, the same project targets the synthesis of these secondary metabolites by fermentation, in order to increase process yield and decrease the environmental impact. That shift from the current paradigm, which is based on the recovery of polyphenols by plant extraction, requires the development of different downstream process strategies.","","en","doctoral thesis","","","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:dcf9932a-3391-480a-a900-b2697d796d9e","http://resolver.tudelft.nl/uuid:dcf9932a-3391-480a-a900-b2697d796d9e","Active corrosion protection of aerospace aluminium alloys by lithium-leaching coatings","Visser, P. (TU Delft (OLD) MSE-6)","Mol, J.M.C. (promotor); Terryn, H.A. (promotor); Delft University of Technology (degree granting institution)","2019","For decades, scientists and engineers have been searching for a safe and environmentally friendly alternative to the toxic chromate corrosion inhibitors in active protective coatings for the protection of aerospace aluminium alloys. In this search many different compounds have been investigated as leachable corrosion inhibitors, but no alternative with performance equal to or better than that of chromates has been found yet. In 2010 it was discovered that organic coatings loaded with lithium salts (Li) as a leachable corrosion inhibitor provided very effective and promising corrosion inhibition of aluminium alloys when exposed to industrial accelerated corrosion tests. Initial investigations showed the formation of a corrosion-protective layer on the aluminium alloy in a defect area, which appears to be a key feature of these Li-leaching coatings.","Aluminium; Cr(VI)-free; Lithium; Active Protective Coatings; Corrosion Protection; Leaching","en","doctoral thesis","","978-94-6380-267-3","","","","","","","","","(OLD) MSE-6","","",""
"uuid:69e21c56-6fd4-429e-add4-6d4aa5a8cce1","http://resolver.tudelft.nl/uuid:69e21c56-6fd4-429e-add4-6d4aa5a8cce1","Patterns & Variations: Designerly Explorations in Architectural Composition and Perception","Breen, J.L.H. (TU Delft Space & Type)","Bekkering, H.C. (promotor); Avermaete, T.L.P. (promotor); Delft University of Technology (degree granting institution)","2019","How can we better understand and explain the phenomena of architectural composition and perception? The aim of this research is to systematically and imaginatively (re)consider the conditions of architectural composition, whilst doing justice to the operational and the aesthetic issues of design. The ambition is to contribute towards generating a deeper, more objective understanding of the craft of architectural design and, consequently, the art of architecture. To unravel the expressive themes that are at play in a designed object and to demonstrate their combined workings, formal characteristics were identified and conceptual and analytical models were developed. These were applied and tested in case studies. The Patterns & Variations study as a whole has been a laboratory and testing-ground for a variety of steadily evolving assumptions, interpretations, and applications that have been generated over a number of years. It is the closing piece of an extensive, personal search, which has sought to bridge the gap between practice, education, research and theory.","Architecture; Composition; Perception; Conceptions; Visualisations","en","doctoral thesis","","978-94-6384-031-6","","","","","","","","","Space & Type","","",""
"uuid:427e3ce3-2b01-4fa0-9e80-bf0e9c033213","http://resolver.tudelft.nl/uuid:427e3ce3-2b01-4fa0-9e80-bf0e9c033213","PET detector technologies for next-generation molecular imaging: From single-positron counting to single-photoelectron counting","Venialgo Araujo, E. (TU Delft (OLD)Applied Quantum Architectures)","Charbon-Iwasaki-Charbon, E. (promotor); Delft University of Technology (degree granting institution)","2019","Positron Emission Tomography (PET) is one of the most relevant medical imaging techniques utilized for cancer detection and tumor staging. The success of PET relies on its high sensitivity and accuracy in detecting and quantifying molecular probe concentrations, on the order of picomole/liter. Although there are several positron-emitting molecular probes available, 18F-fludeoxyglucose (18F-FDG) contributes remarkably to the high PET specificity and sensitivity. Since the success of PET imaging is strongly connected to 18F-FDG, this imaging technique is also known as FDG-PET. In FDG-PET imaging three elements are key: the molecular probe, a PET scanner, and an image reconstruction algorithm. The molecular probe is the contrast enhancement agent, which is administered to the patient and absorbed by the target volumes. The emitted radiation produced by electron-positron annihilation is detected by the PET scanner, and the detection information is utilized to reconstruct a volumetric probe distribution. In essence, a PET scanner is a large acquisition system composed of thousands of channels that detect coincident gamma-photons generated during electron-positron annihilations. Typically, a single detection channel is composed of a scintillation material and a photodetector. The scintillation material absorbs the gamma-energy and emits light photons that produce digital or analog signals in the photodetectors. 
Nowadays, novel silicon-based photodetectors known as silicon photomultipliers (SiPMs) have been adopted as the next-generation photodetectors for PET applications. In order to further improve the FDG-PET molecular sensitivity and specificity, next-generation instrumentation requires a more accurate time estimation of the detected gamma-photon. In time-of-flight (TOF) PET, the reconstructed images have an improved signal-to-noise ratio (SNR), which depends on the gamma-photon timemark precision. Additionally, increasing the detection sensitivity improves the statistical quality of information utilized during the image reconstruction process. This thesis introduces the basic concepts of molecular imaging and the key elements of FDG-PET in chapters 1 and 2. A comprehensive theoretical analysis on the utilization of the scintillation light information for gamma-photon timemark estimation is presented in chapter 3. Several estimation methods, such as maximum-likelihood estimation (MLE) and best linear unbiased estimation (BLUE), are presented, as well as a performance comparison with respect to the Cramér-Rao lower bound. Additionally, a detailed study is performed to determine the conditions that allow reaching the Cramér-Rao lower bound. Currently, FDG-PET imaging equipment is not equally available worldwide, and one of the reasons is the high costs involved. Often, the design and implementation of TOF-PET instrumentation requires application-specific integrated circuit (ASIC) designs, which increase the complexity of the design and require long prototyping phases. Chapter 4 describes the design, implementation, and characterization of TOF-PET instrumentation based on off-the-shelf components, configurable time-to-digital converters (TDCs) implemented on field-programmable gate arrays (FPGAs), and analog SiPMs (A-SiPMs). The proposed solution achieves TOF precision with a fully flexible, fast-prototyping, ASIC-less design. 
Recently, digital SiPMs (D-SiPMs) emerged as a next-generation photodetector for PET applications. In particular, the multichannel digital SiPM (MD-SiPM) architecture integrates single-photon avalanche diodes (SPADs), TDCs, and readout logic into a monolithic CMOS photodetector. This type of photodetector confines all the measurement devices and circuits within an integrated solution. Therefore, it allows direct system integration of a large number of channels, since only digital signals are required for its operation. However, D-SiPM research and development requires long development and integration cycles due to the high complexity involved. Chapter 5 presents a comprehensive analysis of a monolithic array of 18x9 MD-SiPMs, covering both its individual building blocks and the full system. Additionally, it describes in detail the methods developed for multiple TDC systems. In chapter 6, the system integration of MD-SiPMs for building PET detector modules is explained. The challenges of utilizing complex photodetectors for building PET modules, the attachment of scintillator matrices, and digital readout strategies are described in a comprehensive manner. Finally, conclusions on the PET technologies investigated throughout this thesis are given. In addition, an outlook on newer detection methods based on Cherenkov-PET, with their corresponding requirements and potential advantages, is discussed.","","en","doctoral thesis","","978-94-6323-595-2","","","","","","2019-10-05","","","(OLD)Applied Quantum Architectures","","",""
"uuid:1d7a10a3-e6af-439e-880b-720b35458f75","http://resolver.tudelft.nl/uuid:1d7a10a3-e6af-439e-880b-720b35458f75","Indonesië op de kaart: De rol van de Nederlandse aanwezigheid in Indonesië bij de ontwikkeling van de geodesie in Nederland","Ekkelenkamp, H. (TU Delft Mathematical Geodesy and Positioning)","Hanssen, R.F. (promotor); van den Doel, H.W. (promotor); Delft University of Technology (degree granting institution)","2019","Met hun grafische weergave van de werkelijkheid hebben kaarten al eeuwenlang tot de verbeelding gesproken. Grote veranderingen rond 1800 op nationaal en koloniaal gebied hebben de wereldwijde vraag naar betrouwbare kaarten aanzienlijk doen toenemen. Met “Indonesië op de kaart” is onderzocht hoe de Indonesische archipel in de periode 1800-1990 in kaart is gebracht. Tot de overdracht aan de Republiek Indonesië in 1949 hebben koloniale verhoudingen invloed uitgeoefend op het ontstaan van kaarten. De invloed van het koloniale verleden op het ontstaan van kaarten en de toegepaste geodetische methoden is nauwelijks bekend. Opeenvolgende Gouverneurs-Generaal en Ministers van Koloniën hebben hun stempel gedrukt op het beleid en zo mede de historische context bepaald voor de vraag naar kaarten. Aan de hand van de activiteiten van een elftal bestuurders zijn politieke en economische ontwikkelingen geschetst, die van invloed waren op het ontstaan van kaarten. Een hoofdvraag is de rol, die de Nederlandse aanwezigheid in Indonesië gespeeld heeft, voor de technische en de organisatorische ontwikkeling van de geodesie en het geodesie-onderwijs in Nederland in de periode 1850-1950. Landmeten en het waterpassen waren eeuwenlang een belangrijk onderdeel van de civiele, militaire en geodetische opleidingen in Nederland. Uitgezonden militaire en civiele experts waren vaak ingenieurs. Zij publiceerden hun technische resultaten in jaarverslagen, rapporten en artikelen. 
Hier laten we zien dat die Nederlandse aanwezigheid in Indonesië grote invloed heeft gehad op de geodesie en het geodesie-onderwijs in Nederland. In het kader van de wetenschapsgeschiedenis zijn de ontwikkelingen in de geodesie gevolgd als basis voor topografische en hydrografische opneming en kartering. Zoals vaak in de wetenschap werd lang uitgegaan van een paradigma, totdat dit niet meer houdbaar was en een nieuw paradigma ervoor in de plaats kwam. Een voorbeeld van een paradigmaverandering in Indië was het gebruik van triangulatie of driehoeksmeting. Die werd aanvankelijk gebruikt ter controle achteraf van de topografische opnemingen, maar werd later vooraf gebruikt als wiskundige basis voor die opnemingen. Drie terreinen zijn gekozen waar kaarten onontbeerlijk waren en waardoor ook veel kaarten ontstaan zijn: steden en openbare werken, spoor- en tramwegen en telecomverbindingen. De groei van de bevolking en de snelle ontwikkeling vanaf 1870 van de infrastructuur hebben de vraag naar kaarten enorm doen toenemen. De geodetische uitdagingen, die de basis voor kaarten vormden, zijn behandeld aan de hand van navigatie en plaatsbepaling, hydrografie, triangulatie, hoogtemeting en fotogrammetrie. Op enkele gebieden speelde Indië een voortrekkersrol en heeft Nederland van de opgedane ervaring kunnen profiteren. Verstorende factoren bij de metingen in de tropische omgeving weken nogal af van die in Nederland. Het onderzoek laat zien dat het tropische klimaat, de andere geografie, atmosferische refractie, schietloodafwijkingen, zwaartekrachtafwijkingen en kaart-projecties een geheel andere benadering dan in Nederland vergden. De geodetische activiteiten zijn beschreven aan de hand van uitgevoerde triangulaties en topografische opnemingen, die de basis voor kaarten vormden. De organisatie en werkwijze van de Topografische Dienst en het Kadaster, zowel in Nederlands-Indië als in Nederland, krijgen alle aandacht. 
Belangrijke personen en hun resultaten worden naar voren gehaald. Hydrografische activiteiten en de oceanografische expedities in het eilandenrijk, laten het grote belang zien van veilige vaarroutes en diepzee-onderzoek. Behalve topografische kaarten is ook het ontstaan van zeekaarten, luchtvaartkaarten en atlassen onderzocht. Aan de hand van een representatieve selectie uit duizenden vervaardigde kaarten, waarvan een deel als annex is opgenomen, wordt een beeld gegeven van de geleverde prestaties. Een globale vergelijking is gemaakt met initiële geodetische activiteiten in Frankrijk, Nederland, Duitsland, Groot-Brittannië en India. India was vergelijkbaar met Indië en werd door de Topografische Dienst in Batavia (nu Jakarta) op de voet gevolgd. Een vergelijking is gemaakt tussen Nederland en Nederlands-Indië voor enkele specifieke geometingen zoals astronomische waarneming, triangulatie, hoogtemeting, fotogrammetrie en hydrografie. Aan het geodesie-onderwijs, met name aan de TH Delft en TH Bandung, is uitvoerig aandacht besteed en de wisselwerking is daarin meegenomen. Vergelijking van studieboeken uit Europa en Indië laat zien dat het kennisniveau goed overeenkwam. De wetenschap in Nederlands-Indië, die op een hoog peil stond, heeft ook invloed op de Nederlandse wetenschap gehad. Zowel de uitwisseling van kennis aan de hand van publicaties en congressen, als de terugkeer van experts hebben hieraan bijgedragen. De conclusies zijn onderscheiden in drie categorieën: geometingen, personen en resultaten. Die hebben een nauwe relatie en tonen aan dat “Indonesië op de kaart” meer is dan een grafische voorstelling van de topografie of hydrografie. De andere lokale omstandigheden hebben in Nederland de geodesie-ontwikkeling en het onderwijs aanzienlijk verbreed. Het zijn echter de mensen geweest, die met bescheiden middelen de resultaten tot stand gebracht hebben. 
De grote hoeveelheid literatuur is daar getuige van.","Indonesië; Nederlands-Indië; VOC; geodesie; landmeten; triangulatie; hydrografie; kartografie; fotogrammetrie; Kadaster; Topografische Dienst; waterwerken; spoorwegen; telecommunicatie; Radio Kootwijk; Radio Bandung; TH Bandung; Technische Universiteit Delft; Universiteit Leiden","nl","doctoral thesis","","978-94-6375-250-3","","","","","","2020-11-01","","","Mathematical Geodesy and Positioning","","",""
"uuid:2aff1a7e-45eb-4d10-9944-8e06ef12b9fa","http://resolver.tudelft.nl/uuid:2aff1a7e-45eb-4d10-9944-8e06ef12b9fa","Learning Analytics Technology to Understand Learner Behavioral Engagement in MOOCs","Zhao, Y. (TU Delft Web Information Systems)","Houben, G.J.P.M. (promotor); Hauff, C. (copromotor); Lofi, C. (copromotor); Delft University of Technology (degree granting institution)","2019","As one of the most prominent examples of technology-enhanced learning, massive open online courses (MOOCs) have attracted extensive attention of learners, educators, and researchers since 2012. However, a low completion rate is a ubiquitous and severe problem in MOOCs, which means that only a small portion of learners got scores higher than or equal to the course requirements in MOOCs. Learner engagement is commonly presumed to be highly related to the completion rates of MOOCs...","","en","doctoral thesis","","978-94-028-1462-0","","","","","","","","","Web Information Systems","","",""
"uuid:bf0d3c09-5b31-403e-af75-daf9c1fb2b96","http://resolver.tudelft.nl/uuid:bf0d3c09-5b31-403e-af75-daf9c1fb2b96","Elephants in the Boardroom?: Sustainable values-based strategic decision-making in a Dutch housing association","Hoomans, S. (TU Delft Housing Management)","Gruis, V.H. (promotor); Remøy, H.T. (copromotor); Delft University of Technology (degree granting institution)","2019","The central research question in this study is: Which meaning is given to sustainability within a Dutch housing association and does making sense of the concept of sustainability lead to sustainable strategic choices? The chosen research strategy is a longitudinal case study in the Dutch housing association Welbions. Data was collected in three periods between 2009 and 2018. Welbions associates sustainability mainly with the financial position, costs and affordability, and interprets the concept as investment measures in energy savings, reducing the usage of gas and CO2-emissions which are aimed at in covenants. From the listed factors influencing strategic decision-making, the economic, technical and personal frames appeared to be used mostly. The organizational and ethical frame were used only once, and the aesthetic frame was not used at all. Noteworthy is that ecological developments were not mentioned. Frames derived from the decision criteria showed a dominating economic frame. Making sense of sustainability does not result in sustainability-based actions, or choice. This indicates that sustainable values have not gained a position in strategic decision-making, compared to traditional values such as cost-efficiency and affordability.","Sustainability; Sensemaking; Strategic decision making; housing association","en","doctoral thesis","","978-94-028-1429-3","","","","","","","","","Housing Management","","",""
"uuid:faa16f1c-524c-4958-b8b0-d9ce4fd7c045","http://resolver.tudelft.nl/uuid:faa16f1c-524c-4958-b8b0-d9ce4fd7c045","Seasonal hydro-and morphodynamics of data-limited bay and coastal inlet systems","Do, T.K.A. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Wang, Zhengbing (promotor); de Vries, S. (copromotor); Delft University of Technology (degree granting institution)","2019","The main objective of this study is to unravel the physical processes that control typical coastal systems in Central Vietnam while challenged by the fact that it is a data limited environment. Inlets and bays are the typical coastal system along the central coast of Vietnam. These coastal systems are strongly influenced by tropical monsoon conditions, which are characterised by variations in seasonal wave conditions and seasonal river flow. These systems are even more vulnerable to extreme weather conditions, such as floods and storms, because of the complex topography of a relatively narrow and steep mountain range which is directly connected to a dense river network in the low-lying coastal plains at the downstream end. Economic and ecological values in the coastal area are under pressure as a result of the intensification of natural disasters and human interventions. Notable examples of this are Cua Dai beach and Da Nang bay. Cua Dai beach lies adjacent to the Cua Dai inlet which is a typical seasonal varying tidal inlet connected to the catchment area of the Vu Gia-Thu Bon River. Cua Dai beach has suffered extreme erosion in the recent decade. Da Nang bay is a complex bay beach headland downstream of the Vu Gia River that discharges into this bay. Also, this system is affected by human interventions. 
Since Cua Dai inlet and the complex Da Nang embayment share the downstream basin of the Vu Gia-Thu Bon River system, and both are typical data-limited coastal systems in Central Vietnam, this study attempts to understand their hydrodynamics and morphodynamics in combination. In order to identify and quantify the main processes governing the evolution of Cua Dai beach to explain the morphological changes and extreme erosion in recent years, a new approach was developed. Historical shoreline positions and sediment budget changes are the two parameters of main importance in the approach to quantify the erosion processes in the Cua Dai coastal inlet. Historical shoreline changes were derived from satellite images and associated sediment budgets were estimated based on shoreline change rates using additional assumptions, such as defining a closure depth and a time-invariant beach profile. To gain insight into the sediment transport along the Cua Dai beach, additional numerical models and empirical equations are used to investigate the variation in alongshore sediment transport induced by waves. Further analysis on how seasonal variation in both waves and river discharge impacts the morphodynamics of the ebb tidal delta and its adjacent coasts is performed based on process-based modelling.","seasonal inlet; data-limited; headland bay beach","en","doctoral thesis","","978-94-6366-154-6","","","","","","","","","Coastal Engineering","","",""
"uuid:df6c0760-89ba-4db0-9621-19c512eb1955","http://resolver.tudelft.nl/uuid:df6c0760-89ba-4db0-9621-19c512eb1955","Dimensionality-Reduction Algorithms for Progressive Visual Analytics","Pezzotti, N. (TU Delft Computer Graphics and Visualisation)","Vilanova Bartroli, A. (promotor); Lelieveldt, B.P.F. (promotor); Eisemann, E. (promotor); Delft University of Technology (degree granting institution)","2019","Visual analysis of high dimensional data is a challenging process. Direct visualizations work well for a few dimensions but do not scale to the hundreds or thousands of dimensions that have become increasingly common in current data analytics problems. Visual analytics is the science of analytical reasoning facilitated by interactive visual interfaces, and it has been proven as an effective tool for high dimensional data analysis. In visual analytics systems, several visualizations are jointly analyzed in order to discover patterns in the data. One of the fundamental tools that has been integrated in visual analytics, is nonlinear dimensionality-reduction; a tool for the indirect visualization aimed at the discovery and analysis of non-linear patterns in the high-dimensional data. However, the computational complexity of non-linear dimensionality-reduction techniques does not allow direct employment in interactive systems. This limitation makes the analytic process a time-consuming task that can take hours, days or even weeks to be performed. In this thesis, we present novel algorithmic solutions that enable integration of non-linear dimensionality-reduction techniques in visual analytics systems. Our proposed algorithms are, not only much faster than existing solutions, but provide richer insights into the data at hand. 
This result is achieved by introducing new data processing and optimization techniques and by embracing the recently introduced concept of Progressive Visual Analytics: a computational paradigm that enables the interactivity of complex analytics techniques by means of visualization of, as well as interaction with, intermediate results. Moreover, we present several applications that are designed to provide unprecedented analytical capabilities in several domains. These applications are powered by the algorithms introduced in this dissertation and have led to several discoveries in areas ranging from biomedical research to social-network data analysis and machine-learning model interpretability.","","en","doctoral thesis","","978-94-6380-274-1","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:850a6ccc-7686-4536-8469-418691dc2dbb","http://resolver.tudelft.nl/uuid:850a6ccc-7686-4536-8469-418691dc2dbb","Multi-Spherical Composite-Overwrapped Cryogenic Fuel Tanks for Hypersonic Aircrafts","Tapeinos, I. (TU Delft Aerospace Manufacturing Technologies)","Benedictus, R. (promotor); Koussios, S. (copromotor); Delft University of Technology (degree granting institution)","2019","In the field of cryogenic storage, the medium inside the pressure vessel is in a liquid state and therefore cannot be further compressed. As a result, the storage tank should be designed in such a way, that it makes the best possible use of the available space (under a minimum weight) where it will be placed (e.g. within a reusable flight vehicle). Unlike conventional cylindrical pressure vessels, conformable pressure vessels provide an effective solution for this application in terms of volumetric and gravimetric efficiency. More specifically, conformable structures in the form of intersecting spheres (multi-sphere)-manufactured from composite materials- would be a beneficial configuration, since they can lead to weight savings associated with equal membrane strains when subjected to uniform pressure. Furthermore, because spheres have the minimum surface area for a given volume, they result in the minimization of passive heat in the tank and fuel boil-off, thus reducing the weight penalty associated with required thermal insulation thickness in cryogenic environments. Therefore a vessel configuration that incorporates partially merged spheres overwrapped with uni-directional (UD) carbon fiber straps applied at the merging points to introduce a uniform strain field would lead to a high volumetric efficiency at a low weight penalty...","","en","doctoral thesis","","","","","","","","","","","Aerospace Manufacturing Technologies","","",""
"uuid:d0423cf6-90b8-4a2a-b4cd-a1cff0f84c96","http://resolver.tudelft.nl/uuid:d0423cf6-90b8-4a2a-b4cd-a1cff0f84c96","Mechatronic design for repeatability of a single-camera alignment system in pick-and-place machines","Verstegen, P.P.H. (TU Delft Mechatronic Systems Design)","Herder, J.L. (promotor); Spronck, J.W. (copromotor); Delft University of Technology (degree granting institution)","2019","The demand for electronic devices like telephones, tablets and computers is still growing and will grow further in the future. All these devices contain electronic components, which are placed on printed circuit boards (PCBs). The assembly of a PCB is performed by several machines in an assembly line. As the accurate placement of several hundreds of these components on a PCB is a complex task, specialised Pick and Place (P&P) machines are used in an assembly line to perform this task. The production time required to assemble a PCB is determined by the a number of components a P&P machine can place per hour. The ”extremely high-volume production machine” AX-5 P&P machine (Assembl´eon B.V.) has a maximum of 20 P&P robots in parallel and is selected as the benchmark machine. Each robot is capable of placing 8,000 components per hour. To double the throughput from 8,000 to 16,000 components per hour per P&P robot a shuttle is added while maintaining the accuracy.","","en","doctoral thesis","","978-90-9024624-6","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:89c0f1a2-d19f-4466-9cc5-52aeb3950e53","http://resolver.tudelft.nl/uuid:89c0f1a2-d19f-4466-9cc5-52aeb3950e53","Resource-constrained Multi-agent Markov Decision Processes","de Nijs, F. (TU Delft Algorithmics)","de Weerdt, M.M. (promotor); Spaan, M.T.J. (promotor); Delft University of Technology (degree granting institution)","2019","Intelligent autonomous agents, designed to automate and simplify many aspects of our society, will increasingly be required to also interact with other agents autonomously. Where agents interact, they are likely to encounter resource constraints. For example, agents managing household appliances to optimize electricity usage might need to share the limited capacity of the distribution grid.
This thesis describes research into new algorithms for optimizing the behavior of agents operating in constrained environments, when these agents have significant uncertainty about the effects of their actions on their state. Such systems are effectively modeled in a framework of constrained multi-agent Markov decision processes (MDPs). A single-agent MDP model captures the uncertainty in the outcome of the actions chosen by a specific agent. It does so by providing a probabilistic model of state transitions, describing the likelihood of arriving in a future state, conditional on the current state and action. Agents collect different rewards or penalties depending on the current state and chosen action, informing their objective of maximizing their expected reward. To include constraints, resource consumption functions are added to the actions, and the agents' (shared) objective is modified with a condition restricting their (cumulative) resource consumption. We propose novel algorithms to advance the state of the art in three challenging settings: computing static preallocations off-line, computing dynamic (re)allocations on-line, and optimally learning model dynamics through safe reinforcement learning under the constraints. Taken together, these algorithms show how agents can coordinate their actions under uncertainty and shared resource constraints in a broad range of conditions. Furthermore, the proposed solutions are complementary: static preallocations can be used as back-up strategy for when a communication disruption prevents the use of dynamic allocations.","Decision making under uncertainty; Multi-agent systems; Optimization; Constraint decoupling; Reinforcement learning","en","doctoral thesis","","978-94-6375-357-9","","","","","","","","","Algorithmics","","",""
"uuid:7c2e1f22-2d40-4974-8be5-be4cec493941","http://resolver.tudelft.nl/uuid:7c2e1f22-2d40-4974-8be5-be4cec493941","Model Predictive Control of fuel-cell-Car-based smart energy systems in the presence of uncertainty","Alavi, F. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); van de Wouw, N. (promotor); Delft University of Technology (degree granting institution)","2019","In this thesis, we design control algorithms for power scheduling of a fleet of fuel cell cars in a microgrid. Fuel cell cars are a relatively new type of vehicles. The driving force of these cars comes from an electrical motor and in order to generate the required electricity for the operation of the motor, the vehicle is equipped with a fuel cell system. The purpose of the fuel cell system is to convert the chemical energy of hydrogen into electricity. By considering the fact that fuel cell cars have the ability to generate electricity from hydrogen, these type of vehicles can be considered as a new type of flexible power plant. The idea of generating electricity inside a parking lot by using fuel cell cars is what we refer to as the Car as Power Plant (CaPP) concept. In this PhD thesis, we consider the power scheduling problem of a fleet of fuel cell cars in the CaPP concept. Several robust model predictive control methods are developed to determine the power generation schedule of the fuel cell cars inside the microgrid.","model predictive control; Energy management systems; fuel cell cars; microgrid; min-max control","en","doctoral thesis","","978-94-6366-149-2","","","","","","","","","Team Bart De Schutter","","",""
"uuid:35430f5f-daa8-49df-999f-bf97addd51ab","http://resolver.tudelft.nl/uuid:35430f5f-daa8-49df-999f-bf97addd51ab","Smooth nonparametric estimation under monotonicity constraints","Musta, E. (TU Delft Statistics)","Jongbloed, G. (promotor); Lopuhaä, H.P. (promotor); Delft University of Technology (degree granting institution)","2019","In this thesis we address the problem of estimating a curve of interest (which might be a probability density, a failure rate or a regression function) under monotonicity constraints. The main concern is investigating large sample distributional properties of smooth isotonic estimators, which have a faster rate of convergence and a nicer graphical representation compared to standard isotonic estimators such as the constrained nonparametric maximum likelihood and the Grenander-type estimator. In the first part, we focus on the pointwise behavior of estimators for the hazard rate in the right censoring and Cox regression models, while the second part is dedicated to global errors of estimators in a general setup, which includes estimation of a probability density, a failure rate, or a regression function. We provide central limit theorems and assess the finite sample performance of the estimators by means of simulation studies for constructing confidence intervals and goodness of fit tests.","Isotonic estimation; Kernel smoothing; Nonparametric estimation; Maximum likelihood estimation; Grenander-type estimator; Cox regression model; global errors; confidence intervals; testing monotonicity; central limit theorem; Weak convergence","en","doctoral thesis","","978-94-6384-012-5","","","","","","","","","Statistics","","",""
"uuid:f090d58f-558c-47ed-8c9c-3152dadbc4ae","http://resolver.tudelft.nl/uuid:f090d58f-558c-47ed-8c9c-3152dadbc4ae","Making light jump: Photonic crystals on trampoline membranes for optomechanics experiments","Pinto Moura, J.P. (TU Delft QN/Groeblacher Lab)","van der Zant, H.S.J. (promotor); Groeblacher, S. (copromotor); Delft University of Technology (degree granting institution)","2019","Cavity optomechanics studies the interaction between mechanical resonators and optical cavities through radiation pressure forces and aims to harness this interaction for applications in the areas of high precision metrology, tests of fundamental quantum mechanics, or quantum information processing. For the most ambitious of these applications it is necessary that the mechanical resonator has a sufficiently high mechanical quality factor such that it can undergo at least a few coherent oscillations before interacting with incoherent thermal phonons. Furthermore, the optomechanical coupling must be large enough to make the interaction between optics and mechanics probable and, ideally, deterministic.
This work pursues both goals using a thin membrane in the middle (MIM) of an optical cavity. This is a common configuration in cavity optomechanics, but most experiments to date have low mechanical quality factors and optomechanical couplings.","Optical cavities; mechanical resonators; silicon nitride; optomechanics; photonic crystal slabs; optomechanical arrays","en","doctoral thesis","","978-90-8593-390-8","","","","Casimir PhD series 2019-06","","","","","QN/Groeblacher Lab","","",""
"uuid:51dde3f6-2a38-47a0-b719-420ff74ded5d","http://resolver.tudelft.nl/uuid:51dde3f6-2a38-47a0-b719-420ff74ded5d","Topology optimization for high-resolution designs: Application in solar cell metallization","Gupta, D.K. (TU Delft Computational Design and Mechanics)","van Keulen, A. (promotor); Langelaar, Matthijs (promotor); Delft University of Technology (degree granting institution)","2019","Due to global population growth and industrial development, there is a rising demand for energy. It is desired that this demand is met in a cleaner and more sustainable way. Among the various renewable energy sources, solar power is experiencing remarkable growth throughout the world. To ensure that solar power can be a sustainable solution for the future energy demands, intensive research is being conducted to make solar cells more efficient and thereby reduce the cost of solar energy. Solar cells have metallization patterns on the front side to collect current generated in the semiconductor layer. The performance of a solar cell significantly depends on the amount of electrode material used for metallization, and the pattern in which it is deposited. There exist several optimization approaches to optimize the metallization distribution on the front surface of solar cells. However, due to the numerical simplifications associated with these methods, only limited gains in power output are observed. Moreover, the applicability of these methods is historically restricted to rectangular or circular domains. There has recently been a drive towards increased freeform photovoltaic installations. 
Given that these shapes can be very arbitrary, the optimal metallization patterns for such geometries can be expected to be complex, and the traditional methods cannot be used to design them.","metallization designs; solar cells; topology optimization; freeform; multiresolution; adaptivity","en","doctoral thesis","","978-94-6366-152-2","","","","","","","","","Computational Design and Mechanics","","",""
"uuid:c71f55e1-9049-47ae-85ba-cda251757064","http://resolver.tudelft.nl/uuid:c71f55e1-9049-47ae-85ba-cda251757064","Self-Sorting and Directed Molecular Self-Assembly towards New Soft Materials","Wang, Y. (TU Delft ChemE/Advanced Soft Matter)","van Esch, J.H. (promotor); Eelkema, R. (promotor); Delft University of Technology (degree granting institution)","2019","Molecular self-assembly has been realized as a powerful approach to control the organization of materials from molecular to macroscopic length scale. While for a long time molecular self-assembly has focused on the investigation of systems involving a single component and under thermodynamic equilibrium. In recent years the interests are shifting towards more complex multicomponent and non-equilibrium self-assembly systems, where the richest functions of the resulted supramolecular objects can be harnessed. In this thesis, multicomponent supramolecular self-assembly and directed molecular self-assembly leading to out-of-equilibrium supramolecular systems are investigated, with the aim to construct new soft functional materials.","","en","doctoral thesis","","","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:1e32cb50-2e71-4a18-956b-f0331270c9b0","http://resolver.tudelft.nl/uuid:1e32cb50-2e71-4a18-956b-f0331270c9b0","Mechanics of marginal solids: Length, strain, and time scales","Baumgarten, K. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Tighe, B.P. (copromotor); Delft University of Technology (degree granting institution)","2019","Network materials, foams, and emulsions are ubiquitous in our daily life. We have a good intuition about how they respond as we handle them, but our theoretical understanding is poor. One of their most interesting features is that they are unusually fragile and appear to switch between solid and liquid state seamlessly.
In fact, foams and emulsions undergo a non-equilibrium phase transition as their packing fraction increases - this is the jamming transition. Networks show a similar transition as their connectivity increases, where the material switches from sloppy to rigid.
The fact that these materials undergo a phase transition opens up the theoretical toolset of statistical mechanics. An important part of current research is therefore dedicated to finding diverging length and time scales and investigating the critical behavior of the systems in detail.
Because the systems in question are highly disordered, analytical modeling is challenging. At the same time, there are significant experimental obstacles to approaching the critical point closely. For this reason, the development of simulation software plays an important role - all data presented in this thesis is generated through simulations. As the subtitle of this dissertation suggests, our findings concern length, strain, and time scales that can be found in the linear response to external forces.","Jamming; Elasticity; Viscoelasticity; Shear Deformation; Emulsions; Foams; Granular Solids; Biological Networks; Polymer Networks","en","doctoral thesis","","978-94-6375-355-5","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:5f58f0f1-0caf-43d8-b054-e07862619817","http://resolver.tudelft.nl/uuid:5f58f0f1-0caf-43d8-b054-e07862619817","Model Reduction for Interactive Geometry Processing","Brandt, C. (TU Delft Computer Graphics and Visualisation)","Eisemann, E. (promotor); Hildebrandt, K.A. (copromotor); Delft University of Technology (degree granting institution)","2019","The research field of geometry processing is concerned with the representation, analysis, modeling, simulation and optimization of geometric data. In this thesis, we introduce novel techniques and efficient algorithms for problems in geometry processing, such as the modeling and simulation of elastic deformable objects, the design of tangential vector fields or the automatic generation of spline curves. The complexity of the geometric data determines the computation time of algorithms within these applications. The high resolution of modern meshes, for example, poses a big challenge when geometry processing tools are expected to perform at interactive rates. To this end, the goal of this thesis is to introduce fast approximation techniques for problems in geometry processing. One line of research to achieve this goal is to introduce novel model order reduction techniques to problems in geometry processing. Model order reduction is a concept to reduce the computational complexity of models in numerical simulations, energy optimizations and modeling problems. New specialized model order reduction approaches are introduced and existing techniques are applied to enhance tools within the field of geometry processing. In addition to introducing model reduction techniques, we make several other contributions to the field. We present novel discrete differential operators and higher order smoothness energies for the modeling of tangential (n-)vector fields. These are used to develop novel tools for the modeling of fur, stroke-based renderings, or anisotropic reflection properties on meshes.
We propose a geometric flow for curves in shape space that allows for the processing and creation of animations of elastic deformable objects. A new optimization scheme for sparsity regularized functionals is introduced and used to compute natural, localized deformations of geometrical objects. Lastly, we reformulate the classical problem of spline optimization as a sparsity regularized optimization problem.","geometry; simulation; computer graphics; model reduction; mathematics","en","doctoral thesis","","978-94-6323-562-4","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:19aa4685-b75a-4fa3-bdfc-54401c6235d6","http://resolver.tudelft.nl/uuid:19aa4685-b75a-4fa3-bdfc-54401c6235d6","Analyzing and Modeling Capacity for Decentralized Air Traffic Control","Sunil, E. (TU Delft Control & Simulation)","Hoekstra, J.M. (promotor); Ellerbroek, Joost (copromotor); Delft University of Technology (degree granting institution)","2019","The current system of Air Traffic Control (ATC) relies on a centralized control architecture. At its core, this system is heavily dependent on manual intervention by human Air Traffic Controllers (ATCos) to ensure safe operations. The capacity of this system is, therefore, closely tied to the maximum workload that can be tolerated by ATCos. Although this system has served the needs of the air transportation industry thus far, the increasing delays and congestion reported in many areas indicate that the current centralized operational model is rapidly approaching saturation levels. To cope with the expected future increases of traffic demand, many researchers have proposed a transition to a decentralized traffic separation paradigm in en route airspaces. Although there are several variants of decentralized ATC, this thesis focuses on a variant known as self-separation. In self-separated airspace, each individual aircraft is responsible for its own separation from all surrounding traffic. To facilitate self-separation, significant research effort has been devoted towards the development of new algorithms for automated airborne Conflict Detection and Resolution (CD&R). However, in spite of over two decades of active research highlighting its theorized benefits, decentralization/self-separation is yet to be deployed in the field. From a technical point of view, a lack of understanding of three open issues, namely airspace design, airspace safety modeling, and airspace capacity modeling, has impeded its further development and implementation.
The goal of this research is to address these three open problems in order to bring self-separated ATC closer to reality. Consequently, the main body of this thesis is divided into three parts, with each part tackling one of the three aforementioned open problems...","Airspace design; Airspace safety; Airspace capacity; Airspace stability; Conflict probability; free-flight; Self-Separation; Air Traffic Management (ATM); Air Traffic Control","en","doctoral thesis","","","","","","","","","","","Control & Simulation","","",""
"uuid:8d8c14e3-cdfb-4e15-8314-35dc296fdbde","http://resolver.tudelft.nl/uuid:8d8c14e3-cdfb-4e15-8314-35dc296fdbde","Influence of inland vessel stern shape aspects on propulsive performance: Derivation of insights and guidelines based on a computational study","Rotteveel, E. (TU Delft Ship Design, Production and Operations)","Hopman, J.J. (promotor); van Terwisga, T.J.C. (promotor); Hekkenberg, R.G. (copromotor); Delft University of Technology (degree granting institution)","2019","This research focuses on identifying the most important stern shape aspects, with regard to resistance and propulsion power, of inland ships. Such information should help designers to determine which hull form aspect to adjust when design requirements demand it. The information is obtained by first conducting a large series of CFD calculations, using the PARNASSOS code, for systematically varied inland ships. Next, response surface techniques are used to identify the most important aspects. This is done by sequentially adding and/or removing parameters from the response surface in order to find the combination of parameters that explains the majority of the variance in the performance data for the tested hull forms. Finally, an optimization algorithm is used to determine the optimal hull forms for varying displacement, showing which parameters should preferably be adjusted in order to increase (or decrease) ship displacement. This, specifically, should aid designers in making the trade-off between displacement (or cargo capacity) and energy consumption.","inland navigation; Shallow water; ship design; Computational fluid dynamics (CFD); optimisation; Response surface methodology; Feature selection; propulsion; ship hydrodynamics","en","doctoral thesis","Delft University of Technology","978-94-6380-242-0","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:609b6589-64bb-4e03-ad53-b04024b281eb","http://resolver.tudelft.nl/uuid:609b6589-64bb-4e03-ad53-b04024b281eb","Producing high-value chemicals in Escherichia coli through synthetic biology and metabolic Engineering","Shomar Monges, H. (TU Delft BN/Greg Bokinsky Lab)","Dogterom, A.M. (promotor); Bokinsky, G.E. (copromotor); Delft University of Technology (degree granting institution)","2019","For millennia, humans have used microbes to produce industrial products of social and economic value through fermentation processes. In recent years, the application of engineering principles to microbiology has dramatically expanded our ability to modify and optimize microbes for the production of a wide variety of commercial products from renewable feedstocks: from food and commodity chemicals to biofuels and fine chemicals such as pharmaceuticals, fragrances, cosmetics or dyes. The use of microbial bioprocesses for the production of natural products represents an attractive and sustainable alternative to current industrial production methods, which mainly rely on chemical synthesis and/or extraction from native producers. Advanced biomanufacturing technologies would not only provide sustainable economic benefits (by reducing the monetary cost of production of useful chemicals), but also offer social and environmental benefits. Synthetic biology has allowed engineering the production of many industrial compounds within microbes that do not naturally produce them – this is called “heterologous microbial biosynthesis”. In addition to replacing current manufacturing processes, heterologous microbial biosynthesis likely offers the only viable platform to produce certain natural products at industrial scales. Indeed, many relevant compounds cannot be viably manufactured through chemical synthesis, and/or are produced at undetectable or insufficient levels in native organisms.
However, many heterologous bioprocesses remain in their infancy and do not yet enable economically viable delivery of relevant natural products to the market. In order to build and sustain the promise of a bioeconomy for the 21st century, metabolic engineering is under pressure to continue to provide large-scale, sustainable and cost-competitive bioprocesses that meet global needs. In this thesis, we focus on the development of microbial strains to accelerate the microbial production of two families of high-value compounds of prominent biotechnological relevance within the established microbial chassis Escherichia coli: antibiotics and isoprenoids. The fight against antimicrobial resistance is considered one of the greatest public health challenges of the 21st century. Recent technologies have uncovered new antibiotics that, if harnessed, might help alleviate this crisis. However, most of these new antibiotic compounds are far too complex for economical chemical synthesis, and are naturally produced by unculturable and/or genetically intractable microbes. Developing new heterologous microbial platforms for antibiotic production may be an efficient solution for harnessing the clinical potential of these molecules and enabling their commercialization. Isoprenoids represent one of the largest families of natural compounds (over 50,000 molecules) with an incredible number of practical uses, and of great commercial value: from high-value compounds such as many pharmaceuticals, fragrances and flavors, to commodity chemicals such as solvents, rubber or advanced biofuels. We focus in particular on relevant obstacles associated with the development of proof-of-principle strains for the laboratory-scale production of these high-value chemicals.","Synthetic biology; metabolic engineering; biomanufacturing; antibiotics; carbapenems; iron-sulfur cluster enzymes; metalloenzymes","en","doctoral thesis","","978-90-8593-386-1","","","","","","","","","BN/Greg Bokinsky Lab","","",""
"uuid:82ba446c-31e9-42fa-bf01-13e60f2003e5","http://resolver.tudelft.nl/uuid:82ba446c-31e9-42fa-bf01-13e60f2003e5","Energy structure of hybrid Semiconductor-superconductor nanowire Based devices","Proutski, A. (TU Delft QRD/Geresdi Lab)","Kouwenhoven, Leo P. (promotor); Geresdi, A. (copromotor); Delft University of Technology (degree granting institution)","2019","Materials possessing superconducting properties are highly sought after due to their potential technological applications. Much of the work has focused on utilising a thin insulating barrier separating a pair of superconducting electrodes, a Josephson junction, as a workhorse in the field of superconductor-based quantum computation. Yet such structures are highly sensitive to their surrounding environment. Furthermore, the associated energy scales, the Josephson coupling and charging energy, are set by the junction geometry. Replacing the insulating barrier with a semiconducting material has the significant advantage of offering tunable energy scales with the aid of an applied electric field. Due to recent advancements in material development, hybrid combinations of superconductors and semiconductors have made it possible to devise various architectures. The present thesis focuses on investigating Josephson junctions and Cooper-pair transistors formed from semiconducting nanowires covered by a superconducting layer.","Superconductor; Semiconductor; Josephson junctions; Radiation; Spectroscopy; Andreev bound states; Cooper-pair transistors","en","doctoral thesis","","978-90-8593-387-8","","","","","","","","","QRD/Geresdi Lab","","",""
"uuid:9c06cd80-10a2-465b-ba58-21bf3ad6795e","http://resolver.tudelft.nl/uuid:9c06cd80-10a2-465b-ba58-21bf3ad6795e","Trust unravelled: In inter-organisational relationships in a regulated tender environment","Smolders, A.L. (TU Delft Railway Engineering)","Dollevoet, R.P.B.J. (promotor); Santema, S.C. (copromotor); Veeneman, Wijnand (copromotor); Delft University of Technology (degree granting institution)","2019","Scientists say that trust in inter-organisational relationships leads to high performance, project success, and better quality in construction work. Later research shows that distrust also plays an important role in preventing excessive trust in relationships. It is better to say that a stable state of trust is the balance between trust and distrust and leads to optimal performance. As an employee at the Dutch rail infrastructure manager (the asset owner), my experience in the rail maintenance market is that trust has a limited role in inter-organisational relationships. Inter-organisational relationships are organised via contracts where the output (performance), minimum standards, and tasks and roles are described in detail. Corporate lawyers regularly discuss the requests of change (technically and financially) from the rail maintenance contractors. Trust may have a larger role in inter-organisational relationships between the key figures who manage the contract to improve the output (performance).","trust; distrust; inter-organisational relationship; contract; monopsony; oligopoly","en","doctoral thesis","","978-94-6384-016-3","","","","","","","","","Railway Engineering","","",""
"uuid:9339474c-3c48-437f-8aa5-4b908368c17e","http://resolver.tudelft.nl/uuid:9339474c-3c48-437f-8aa5-4b908368c17e","Building safety with nature: Salt marshes for flood risk reduction","Vuik, V. (TU Delft Coastal Engineering)","Jonkman, Sebastiaan N. (promotor); Borsje, Bas W. (copromotor); Delft University of Technology (degree granting institution)","2019","Flood risk reduction in coastal areas is traditionally approached from a conventional engineering perspective, where dikes and dams are built to withstand the forces of tides, surges and waves. Recently, a nature-based approach to flood risk reduction is increasingly promoted, in which the benefits of coastal ecosystems for reducing the impact of extreme weather events are utilized. Ecosystems such as salt marshes, mangrove forests, coral reefs and sand dunes are preserved, enhanced or even created, in order to reduce flood risk in coastal areas. Nature-based flood defenses can work stand-alone, like sand dunes, but can also function in combination with engineered defenses, for example when vegetated foreshores reduce wave loads on dikes or dams.","Flood risk; nature-based solutions; foreshore; salt marsh; vegetation","en","doctoral thesis","","978-94-6332-470-0","","","","","","","","","Coastal Engineering","","",""
"uuid:a2314b15-6fee-4be1-bf77-f78b4356d11a","http://resolver.tudelft.nl/uuid:a2314b15-6fee-4be1-bf77-f78b4356d11a","Pricing and calibration with stochastic local volatility models in a monte carlo setting","van der Stoep, A.W. (TU Delft Numerical Analysis)","Oosterlee, C.W. (promotor); Grzelak, L.A. (copromotor); Delft University of Technology (degree granting institution)","2019","A general purpose of mathematical models is to accurately mimic some observed phenomena in the real world. In financial engineering, for example, one aim is to reproduce market prices of financial contracts with the help of applied mathematics. In the Foreign Exchange (FX) market, the so-called implied volatility smile plays a key role in the pricing and hedging of financial derivative contracts. This volatility smile is a phenomenon that reflects the prices of European-type options for different strike prices; the implied volatility tends to be higher for options that are deeper In The Money and Out of The Money than for options that are approximately At The Money. In order for a pricing model to be accepted in the financial industry, it should at least be able to accurately price back the simplest financial derivative contracts, namely European call and put options. In other words, the model should calibrate well to the implied volatility smile observed in the financial market. The calibration should not only be accurate, but also reasonably fast. Another feature we wish the financial asset model to possess is an accurate pricing of so-called exotic financial products. Exotic products are not traded on regular exchanges, but over-the-counter, i.e. directly between two parties without the supervision of an exchange. An example is a barrier option, a financial contract whose payoff depends on whether the underlying asset price hits a certain pre-determined level.
The model prices of these path-dependent contracts are determined by the transition densities of the relevant underlying asset(s) between future time-points. These transition densities are reflected by the forward volatility smile the model implies; in order for the model to accurately price exotic products, it should yield realistic forward volatilities.","","en","doctoral thesis","","","","","","","","","","","Numerical Analysis","","",""
"uuid:5e9805ca-95d0-451e-a8f0-55decb26c94a","http://resolver.tudelft.nl/uuid:5e9805ca-95d0-451e-a8f0-55decb26c94a","Declarative Specification of Information System Data Models and Business Logic","Harkes, D.C. (TU Delft Programming Languages)","Visser, Eelco (promotor); Delft University of Technology (degree granting institution)","2019","Information systems are systems for the collection, organization, storage, and communication of information. Information systems aim to support operations, management and decision-making. In order to do this, these systems filter and process data according to business logic to create new data. Typically these information systems contain large amounts of data and receive frequent updates to this data. Over time requirements for information systems change, from the decision making logic to the number of users interacting with the system.
As organizations evolve, so must their information systems. Our reliance on information systems to make decisions, together with ever-changing requirements, poses the following challenges for information system engineering. _Validatability:_ how easy is it for information system developers to establish that a system 'does the right thing'?
_Traceability:_ can the origin of decisions made by the system be verified?
_Reliability:_ can we trust the system to consistently make decisions and not lose our data?
_Performance:_ can the system keep responding promptly to the load of its users?
_Availability:_ can we trust that the system performs its functionality all of the time?
And finally, _modifiability:_ how easy is it to change the system specification when requirements change?
In this dissertation we show the feasibility and usefulness of declarative programming for information systems in light of these challenges.
Our research method is _design research_.
This iterative method repeats four phases: analysis, design, evaluation, and diffusion.
We _analyze_ the challenges of information system engineering, _design_ a new programming language to address these, _evaluate_ our new programming language in practice, and _diffuse_ our knowledge through scholarly articles.
This resulted in four new declarative languages: the Relations language, IceDust, IceDust2, and PixieDust.
Our contributions can be summarized by the new features of these languages.
_Native multiplicities, bidirectional relations, and concise navigation_ improve information system validatability and modifiability over object-oriented and relational approaches.
_Derived attribute values_ improve traceability.
_Incremental and eventual computing_ based on path analysis and _calculation strategy switching_ improve information system modifiability without sacrificing performance and availability over object-oriented and relational approaches.
_Calculation strategy composition_ improves validatability, modifiability, and reliability over reactive programming approaches.
And finally, _Bidirectional derived relations_ improve information system validatability over relational approaches.
The results of this dissertation can be applied in practice.
We applied IceDust2 to the learning management information system WebLab.
We found that validatability, traceability, reliability, and modifiability were considerably improved while retaining similar performance and availability.
Moreover, the fact that IceDust and PixieDust work in different domains, business logic and user interfaces respectively, suggests that our language features could be applied to more domains.","","en","doctoral thesis","","978-94-6366-146-1","","","","","","","","","Programming Languages","","",""
"uuid:5986ee2b-e9e6-42ed-a932-bddd5e78648e","http://resolver.tudelft.nl/uuid:5986ee2b-e9e6-42ed-a932-bddd5e78648e","Circuit Quantum Electrodynamics in a Magnetic Field","Lüthi, F. (TU Delft QCD/DiCarlo Lab)","DiCarlo, L. (promotor); Kouwenhoven, Leo P. (promotor); Delft University of Technology (degree granting institution)","2019","Quantum computers promise to solve certain problems such as quantum chemistry simulations much more efficiently than their classical counterparts. Although it is still unclear what material system will ultimately host large-scale quantum computers, solid-state systems are promising candidates due to their inherent scalability and advanced fabrication techniques that can be adapted from comparable technologies. Crucially, a future quantum computer will
depend on the quality of its most fundamental building block, the quantum bit, or qubit. Qubits, although ideally insensitive to potential noise, are very susceptible to slight changes in their environment. Therefore, they not only form the building block for quantum computers, but also serve as precise sensors.
One of the most studied solid-state implementations of a qubit is the transmon, a weakly anharmonic oscillator based on superconducting capacitive and nonlinear inductive elements. Typically, Al-AlOx-Al superconductor-insulator-superconductor Josephson junctions are used for the latter. The interaction of the transmon with the control circuitry, typically superconducting resonators, is described by circuit quantum electrodynamics. In this PhD thesis, a more recently demonstrated type of qubit is further developed and studied in detail using circuit quantum electrodynamics. In these qubits, the Josephson element of the transmon is replaced with indium arsenide nanowires, forming a superconductor-normal metal-superconductor junction. In addition to the standard flux tunability, these qubits can also be voltage tuned. Due to the compatibility of all the materials used with an applied magnetic field, this type of qubit is a good candidate to be used as a precise and accurate sensor in a magnetic field. The goal of this work is to introduce the in-plane magnetic field as a new tuning knob to the toolbox of circuit quantum electrodynamics.
Advances in material science, especially the epitaxial growth of an aluminum shell directly on the indium arsenide nanowire, have enabled the fabrication of nanowire transmons with state-of-the-art coherence. An understanding of their workings in a zero-field environment is important before applying a magnetic field. Thus, we characterize the noise these qubits are subject to (Chapter 4) and find, in addition to the expected weakly coupled flux and voltage noise, a strong coupling of charge two-level systems to their Josephson energy.
Applying a magnetic field reveals that coherence in these qubits can be observed up to 70 mT, substantially above the superconducting gap of bulk aluminum (Chapter 5). Effects limiting the performance include the thick and fully covering aluminum shell, and the alignment and stability of the magnetic field. The use of different nanowires, the installation of a persistent-current vector solenoid and additional magnetic shielding then enables the operation of voltage- and flux-tunable devices in a magnetic field (Chapter 6). This constitutes a good starting point for circuit quantum electrodynamics experiments in a magnetic field, such
as the investigation of the microscopic origin of flux-noise.","","en","doctoral thesis","","978-90-8593-388-5","","","","","","","","","QCD/DiCarlo Lab","","",""
"uuid:fc159c49-40c7-43e4-b0a5-3ebf0bbd8b53","http://resolver.tudelft.nl/uuid:fc159c49-40c7-43e4-b0a5-3ebf0bbd8b53","Symmetries and Boundary Conditions of Topological Materials: General Theory and Applications","Rosdahl, T.O. (TU Delft QN/Akhmerov Group)","Akhmerov, A.R. (promotor); Wimmer, M.T. (copromotor); Delft University of Technology (degree granting institution)","2019","","","en","doctoral thesis","","978-94-6366-144-7","","","","","","2019-03-25","","","QN/Akhmerov Group","","",""
"uuid:a638c550-0d30-41df-9d49-4f935890bd2b","http://resolver.tudelft.nl/uuid:a638c550-0d30-41df-9d49-4f935890bd2b","Hazard Relative Navigation: Towards safe autonomous planetary landings in unknown hazardous terrain","Woicke, S. (TU Delft Astrodynamics & Space Missions)","Mooij, E. (promotor); Visser, P.N.A.M. (copromotor); Delft University of Technology (degree granting institution)","2019","Many successful landings have been performed on celestial bodies such as Mars, the Moon, and Venus. All of these had in common that they were designed to land in regions that were supposedly free of any hazards, or that a certain level of risk was accepted. However, while rocks and other geological features are the nightmares of any landing engineer, they are the dream targets of scientists. Landing-site selection is therefore currently a trade-off between the scientists’ wishes and the engineers’ fears. To bring the engineering capabilities closer to what the scientists desire, landing capabilities need to be advanced. This work therefore tries to answer the research question: Are autonomous safe landings in hazardous and potentially unknown environments possible? This leads to the following two sub-questions: 1. How can a landing vehicle autonomously assess the safety of a potentially unknown and unmapped landing site? 2. How can a landing vehicle ensure a safe touchdown while avoiding autonomously detected hazards?","Hazard detection; hazard relative navigation; terrain relative navigation; planetary landing; Moon","en","doctoral thesis","","978-94-028-1413-2","","","","","","2019-03-25","","","Astrodynamics & Space Missions","","",""
"uuid:bc4fe937-2711-4ee0-95b7-baad7c5d234c","http://resolver.tudelft.nl/uuid:bc4fe937-2711-4ee0-95b7-baad7c5d234c","Energy flux method for identification of damping in high-rise buildings subject to wind","Sánchez Gómez, S. (TU Delft Offshore Engineering)","Metrikine, A. (promotor); Delft University of Technology (degree granting institution)","2019","Buildings are becoming taller, lighter, slenderer. These changing characteristics make tall buildings more sensitive to environmental loads, including wind gusts. A building is considered ""tall"" when its height and slenderness influence the design. Given the demand of improving building performance, the serviceability limit state (SLS) has become the most important design criterion of tall buildings. The structural serviceability is directly related to the building motions generated by wind gusts. These motions can influence the well-being of the building occupants. Whereas the human perception of movement is related to the jerk sensation, acceleration is the widely accepted parameter for measuring comfort level. In literature, a few well-established criteria for determining human perception to building vibrations can be found. In this work, the van Koten criteria are used to study human perception of building vibrations, using data collected from full-scale measurements of several high-rise buildings in The Netherlands. Whereas results clearly show that acceleration levels are barely perceptible, people still often feel insecure in the interior of high-rise buildings, meaning that human perception is extremely subjective.
Dynamic systems are governed by their mass, damping, and stiffness. Damping can be understood as the energy dissipation in a system. Therefore, it determines the maximum acceleration that can be felt. Given its physical complexity, damping is the most uncertain parameter to be predicted. Presently, there are several damping predictors to determine damping in high-rise buildings. The resultant damping obtained by means of damping predictors is the result of the contribution of two main energy dissipation sources: the soil-foundation interaction and the internal damping in the structure. Using these predictors, damping related to the soil-foundation interaction is a constant value, whereas structural damping increases with the amplitude of vibration. Unfortunately, the use of these predictors results in large scatter compared to the experimentally identified damping values of buildings located in The Netherlands. Given that the parameters of these predictors are tuned based on full-scale experimental values, the discrepancy between the experimentally identified damping of the buildings and the resultant values obtained by means of damping predictors is not easy to explain. In this work, a predictor based on the same principles, and tuned to fit the data collected from the full-scale measurements, is presented and applied. Unfortunately, this predictor does not give enough insight to understand the behaviour of the dissipation mechanisms in a tall building.
It is therefore the aim of this work to develop a tool for better assessing the energy dissipation in high-rise buildings to improve damping prediction. In a tall building, there are three types of energy dissipation (i.e., structural energy dissipation, soil energy dissipation, and energy dissipation caused by the wind around the building). In this work, the aerodynamic damping caused by the wind around a building is considered negligible. To get a better overall damping prediction, an attempt to identify the contribution of the different damping sources to the overall damping is carried out. However, given the fact that wind loads cannot excite higher frequency modes in a tall building, the energy dissipation of specific areas of the structure cannot be adequately identified by using modal-based techniques. Therefore, a different approach is needed to identify the energy dissipated in local areas without a modal description of the structure. In this work, the energy-flux analysis is proposed as a damping identification tool. This approach isolates a certain area of the structure to formulate an energy balance around it. The connection between this local area and the rest of the structure is made via the energy flux, which accounts for the energy coming into and going out of the local area. By doing this analysis, the energy dissipation of a local area can be identified. In Chapters 4 and 5, an energy-flux analysis is used to identify the energy dissipation in local areas of the structure. Then, a damping operator can be quantified. Another advantage of this approach is the added possibility of studying the behaviours of different damping operators by computing their energy dissipation. To validate the method, three structures are used: a lab-scale beam, a lab-scale steel-frame building, and a full-scale high-rise building. This is done in the following manner.
First, the structures are instrumented using accelerometers in the case of the lab-scale beam, and accelerometers and strain gauges in the case of the lab-scale steel frame and high-rise building. Then, equivalent viscous damping is experimentally identified by means of the collected data. Second, a model representative of the structure to be analysed is developed. The model is made with continuous and discrete structural elements (e.g. beams, springs, dashpots). These models are used to interpret energy change, energy flux and dissipation energy. The energy balance can be formulated around a specific area of the model. Then, by making use of experimental data, the energy enclosed in this specific area can be computed, and energy dissipation can be identified. To compare percentages of critical damping, the energy dissipation is formulated in terms of a damping operator. This operator can be used to compute equivalent viscous damping based on the energy-flux analysis, which can then be compared to the experimentally identified equivalent damping values. Based on the results presented in this work, it is shown that this approach provides a consistent framework for damping identification.
In Chapter 6, a basic model for tall-building damping assessment during the design phase is presented. The model combines two sub-models: the cone model describes the soil-foundation interaction, and an Euler-Bernoulli beam model represents the building. Assuming a small vibration field, the mechanism responsible for the energy dissipation in the building is presumed to be directly related to the building's deformation. Therefore, the influence of building damping is studied based on the bending of the beam model used to describe the building. This influence varies with the change in the building deformation caused by different foundation stiffnesses. Likewise, the influence of soil-building interaction damping varies when changing the soil-foundation stiffness. Results provide evidence that the soil-foundation interaction of tall buildings may play an important role in the overall damping identification for certain soil characteristics, like the ones present in The Netherlands.","","en","doctoral thesis","","","","","","","","","","Offshore Engineering","","",""
"uuid:12961f87-eeff-41b5-8688-df28e0ad9860","http://resolver.tudelft.nl/uuid:12961f87-eeff-41b5-8688-df28e0ad9860","Optimization in the Photolithography Bay: Scheduling and the Traveling Salesman Problem","Janssen, T.M.L. (TU Delft Discrete Mathematics and Optimization)","Aardal, K.I. (promotor); van Iersel, L.J.J. (copromotor); Delft University of Technology (degree granting institution)","2019","In a semiconductor factory, integrated circuits (or chips) are constructed on top of slabs of silicon, called wafers. The construction of these wafers is complicated and many different processing steps are needed to gradually build the chip layer by layer. Of these steps, photolithography uses the most expensive equipment. Therefore, the photolithography equipment is often the bottleneck of the factory. Photolithography is used to transfer the geometric pattern of a chip on a wafer. First a light-sensitive photoresist is put on the wafer. Then UV light is sent through a photomask on the photoresist. The exposed parts of the photoresist will chemically react, creating the pattern. After the exposure, chemical reactions and metal depositions make a layer of circuits on the wafer. In this thesis, we try to increase the production of the semiconductor factory by reducing the time needed for the photolithography. In the first part, we look at the machine level. The time to process a wafer on a lithography stepper machine is determined by different elements of the process (Chapter 2). It turns out that the blade movement required in the exposure step has a significant impact on the total time required to process a wafer. The blade movement in turn depends on the order in which the different images are processed. Hence, we want to find an ordering of the images such that the blade movement is minimized. This problem turns out to be equivalent to the a priori traveling salesman problem in the scenario model.
The practical problem instances found are solved in a limited amount of time using an integer linear programming solver, and the average blade movement is reduced by approximately 20%, which reduces the average exposure time by 1.6%.","","en","doctoral thesis","","978-94-6332-460-1","","","","","","","","","Discrete Mathematics and Optimization","","",""
"uuid:70778c5c-3f00-4854-a0fb-98b24c9ed1cb","http://resolver.tudelft.nl/uuid:70778c5c-3f00-4854-a0fb-98b24c9ed1cb","Database-driven online safe flight envelope prediction and protection for enhanced aircraft fault tolerance","Zhang, Y. (TU Delft Control & Simulation)","Mulder, Max (promotor); Chu, Q. P. (promotor); de Visser, C.C. (copromotor); Delft University of Technology (degree granting institution)","2019","Among all the contributors to fatal accidents, in-flight loss of control (LOC-I) remains one of the largest categories, as indicated by statistics of investigations into past civil aircraft accidents. In-flight LOC generally refers to accidents in which the flight crew was unable to maintain control of the aircraft in flight, resulting in an unrecoverable deviation from the intended flight path. Compared with other accident occurrence categories, LOC-I is more challenging to predict and prevent, since it is often the result of a highly complex combination of a wide range of contributing factors. Much in-depth research into loss-of-control accidents has been conducted to find out how these events unfold, and to develop effective intervention strategies for preventing LOC.","flight envelope; loss-of-control; database; fault tolerance; machine learning","en","doctoral thesis","","978-94-028-1418-7","","","","","","","","","Control & Simulation","","",""
"uuid:d165937b-4e6d-459d-acfb-5d45e46d4edf","http://resolver.tudelft.nl/uuid:d165937b-4e6d-459d-acfb-5d45e46d4edf","Imitating nature to produce nacre-inspired composite materials with bacteria","Schmieden, D.T. (TU Delft BN/Marie-Eve Aubin-Tam Lab)","Meyer, A.S. (promotor); Aubin-Tam, M.E. (copromotor); Delft University of Technology (degree granting institution)","2019","In this study, a method for the bacterial production of a nacre-mimicking composite material was developed. Nacre (mother-of-pearl) is an organic-inorganic composite found in the inner lining of many mollusk shells and in pearls. It has a brick-and-mortar structure consisting of 95% aragonite (calcium carbonate) platelets and 5% organic matrix. Serving as a protective structure against e.g. predators, nacre has developed into an extremely strong and tough material, despite largely consisting of ceramic calcium carbonate. Numerous mechanisms have been proposed to explain the outstanding mechanical properties of nacre, such as crack deflection and local strain hardening. Many groups are pursuing the aim of developing new materials which mimic nacre’s structure and mechanical properties. Nacre is produced by mollusks at ambient temperatures with easily obtainable materials and with low expenditure of energy. In contrast, human methods usually require extensive energy input, high temperatures and/or pressures, and environmentally damaging chemicals.","Biomimetics; nacre; biomaterials; synthetic biology; 3D printing; bioprinting","en","doctoral thesis","","978-90-8593-385-4","","","","","","","","","BN/Marie-Eve Aubin-Tam Lab","","",""
"uuid:0af59938-45cc-444d-bd62-289a662f854d","http://resolver.tudelft.nl/uuid:0af59938-45cc-444d-bd62-289a662f854d","Image analysis methods for dynamic hepatocyte-specific contrast enhanced MRI","Zhang, T. (TU Delft ImPhys/Quantitative Imaging)","van Vliet, L.J. (promotor); Stoker, Jaap (promotor); Vos, F.M. (promotor); Lavini, Cristina (promotor); Delft University of Technology (degree granting institution)","2019","Patients with colorectal cancer are frequently presented with liver metastases for which (partial) resection is often the best therapy. However, the future remnant liver, the remaining part of the liver after resection, should allow adequate liver function to avoid liver failure. This thesis presents novel methods for the accurate voxel-wise estimation of the future remnant liver’s function based on pharmacokinetic modeling of dynamic contrast-enhanced (DCE) MRI. The methods comprise a variety of novel techniques for DCE-MRI of the liver: 1) 4D registration of the DCE series; 2) delineation of the liver, the liver vasculature and the liver’s anatomical segments; 3) pharmacokinetic (PK) modeling of the perfusion based on the intra-cellular contrast agent Gd-EOB-DTPA (Primovist); 4) assessment of the relation between DCE-MRI and hepatobiliary scintigraphy (HBS). Spatial alignment of the voxels in the 4D DCE-MRI is an important requirement for PK modeling. We exploit the proximity of deformation fields to sequentially register images in an ordered fashion. The global liver displacement helps in predicting the deformation ‘tendency’ along the time axis. The deformation tendency allows us to obtain a better starting point for the registration. Such a method aims to start the registration optimization close to the optimum and avoid getting trapped in a local minimum.
We apply a liver-specific contrast agent, due to which the liver shows","DCE-MRI; liver metastases; colorectal cancer","en","doctoral thesis","","978-94-6375-351-7","","","","","","","","","ImPhys/Quantitative Imaging","","",""
"uuid:35d2e152-0cfe-439e-a276-da4a69b11acd","http://resolver.tudelft.nl/uuid:35d2e152-0cfe-439e-a276-da4a69b11acd","A Novel Design of the Transport Infrastructure for Traffic Simulation Models","Tamminga, G.F. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); van Lint, J.W.C. (promotor); Delft University of Technology (degree granting institution)","2019","Over the past decades, transport and traffic models have become powerful tools for transport and traffic practitioners and academics all over the world. Most of these commercial model packages provide proprietary and closed-source software, which generally implies that users are not able to freely edit, modify and extend the code. Specifically for some of the new fields of application, such as the modelling of in-car systems and other ITS systems, and more generally for scientific research, access to the algorithms is often essential. The de facto option then is to code traffic models (or parts thereof) from scratch. To prevent this loss of knowledge and experience, an approach is required where code reuse and proper documentation are stimulated.","","en","doctoral thesis","TRAIL Research School","978-90-5584-247-6","","","","TRAIL Thesis Series no. T2019/4, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:530d6a50-4128-4b89-8d5f-e09d5387a502","http://resolver.tudelft.nl/uuid:530d6a50-4128-4b89-8d5f-e09d5387a502","Integrating Multiple Sources of Information for Improving Hydrological Modelling: an Ensemble Approach","Hartanto, I.M. (TU Delft Water Resources)","Solomatine, D.P. (promotor); van Andel, S.J. (copromotor); Delft University of Technology (degree granting institution)","2019","The availability of Earth observation (EO) and numerical weather prediction data for hydrological modelling and water management has increased significantly, creating a situation in which, for the same variable, estimates may today be available from two or more sources of information. Precipitation data, for example, can be obtained from rain gauges, weather radar, satellites, or outputs from numerical weather models. Land use data can be obtained from land survey, satellite imagery, or a combination of the two. Each of these data sources provides an estimate of a catchment characteristic and related hydrological model parameters, or of a hydrometeorological variable. Estimates from each data source vary in magnitude or temporal and spatial variability. It is not always
possible to judge which data source is the most accurate. One data source may perform poorly in one situation but give an accurate estimate in another. Yet, in hydrological modelling, usually a particular set of catchment characteristics and input data is selected, possibly ignoring other relevant data sources. One of the reasons may be that, despite vast research and development efforts in integration methods for sub-sets of the available data sources, there is no comprehensive data-model integration framework that assumes the existence of, and enables the effective use of, multiple data sources in hydrological modelling.
The main objective of this thesis, therefore, is to develop such a data-model integration framework, and test it on a case study.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-26543-4","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Water Resources","","",""
"uuid:738b9b01-d130-4ae4-bc51-c989824a8760","http://resolver.tudelft.nl/uuid:738b9b01-d130-4ae4-bc51-c989824a8760","Planetary Radio Interferometry and Doppler Experiment (PRIDE) for radio occultation studies: A Venus Express test case","Bocanegra Bahamon, T.M. (TU Delft Astrodynamics & Space Missions)","Vermeersen, L.L.A. (promotor); Gurvits, L. (promotor); Delft University of Technology (degree granting institution)","2019","The thesis that you are about to read deals with the implementation of a technique to study atmospheres of planets or moons in the Solar System. We use radio telescopes on Earth to track spacecraft that are orbiting planets, and use the signal the spacecraft emits, as it crosses the planet’s atmosphere, to investigate its physical characteristics. Amazing, isn’t it? Often, we get lost in our daily routines and we lose sight of the overall picture. We forget how truly astounding the experiments we are able to undertake are, using the universe as our lab. I feel very privileged to have been able to do this as part of the work that led to this dissertation.","Doppler and VLBI spacecraft tracking; planetary missions; radio science applications; radio occultation; planetary atmospheres","en","doctoral thesis","","978-94-6375-341-8","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:79ff3197-2134-4057-8e6c-c3239e2f2a7b","http://resolver.tudelft.nl/uuid:79ff3197-2134-4057-8e6c-c3239e2f2a7b","Incremental nonlinear control of hydraulic parallel robots: An application to the SIMONA research simulator","Huang, Y. (TU Delft Control & Simulation)","Mulder, Max (promotor); Chu, Q. P. (promotor); Pool, D.M. (copromotor); Delft University of Technology (degree granting institution)","2019","In advanced robotic applications such as robotic locomotion, vehicle and flight simulators, and material test devices, there are higher requirements on stiffness, robustness and power capability for the mechanical structure and the actuator. Hence, it is common for such applications to use parallel manipulators and hydraulic actuators, due to their advantages in these aspects over their counterparts of serial manipulators and electrical actuators. When high-precision motion control is required for such systems, advanced model-based controllers, including feedback linearization and adaptive control, have been proposed in state-of-the-art studies for both hydraulic and parallel mechanical systems. However, the high complexity, nonlinearity and model uncertainty of these systems raise significant challenges for their motion control accuracy.","Parallel Robots; Motion Control; Hydraulic Robots; Force Control; Nonlinear Systems; Model Uncertainty; Robustness; Incremental Nonlinear Dynamic Inversion","en","doctoral thesis","","978-94-028-1419-4","","","","","","","","","Control & Simulation","","",""
"uuid:19e3bd77-d4cb-48a5-9b6a-4e96d95f39ec","http://resolver.tudelft.nl/uuid:19e3bd77-d4cb-48a5-9b6a-4e96d95f39ec","Meshless numerical methods applied to multiphysics and multiscale problems","Lukyanov, A. (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); Delft University of Technology (degree granting institution)","2019","In many fields of science and engineering, such as fluid or structural mechanics, and nanotechnology, dynamical systems at different scales need to be simulated, optimized or controlled. They are often described by discretizations of systems of nonlinear partial differential equations yielding high-dimensional discrete phase spaces. For this reason, in recent decades, research has mainly focused on the development of sophisticated analytical and numerical (linear and nonlinear) tools to help understand the overall multiscale system behavior. Various models and numerical methods have been developed to simulate different physical processes at different scales. The choice of these methods will depend largely on the problem, the available computational resources and the constitutive equations. Smoothed particle hydrodynamics (SPH) was developed a few decades ago to model inviscid fluid and gas flow dynamics in astrophysical problems. SPH is an interpolation-based numerical technique that can be used to solve systems of partial differential equations (PDEs) using either Lagrangian or Eulerian descriptions. The nature of the SPH method allows incorporating different physical and chemical effects into the discretized governing equations with relatively small code-development effort. In addition, geometrically complex and/or dynamic boundaries, and interfaces can be handled without undue difficulty.
In the SPH numerical procedure, state variables (i.e., density, velocity, and gradient of deformation) are computed as weighted averages of values in a local region.","meshless methods; shock waves; multiscale linear solver; high order discretization","en","doctoral thesis","","","","","","","","","","","Numerical Analysis","","",""
"uuid:0458fd29-920b-43cb-8c2b-e04be8db0dc7","http://resolver.tudelft.nl/uuid:0458fd29-920b-43cb-8c2b-e04be8db0dc7","POD-Based Deflation Method For Reservoir Simulation","Diaz Cortes, G.B. (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); Jansen, J.D. (promotor); Delft University of Technology (degree granting institution)","2019","Simulation of flow through highly heterogeneous porous media results in large ill-conditioned systems of equations. In particular, solving the linearized pressure system can be especially time-consuming. Therefore, extensive efforts to find ways to address this issue effectively are required. In this work, we introduce a POD-based deflation method that combines the advantages of two state-of-the-art techniques: Proper Orthogonal Decomposition (POD) and the deflation method. The dominant features of the system are captured in a set of POD basis vectors, used later to accelerate the solution of linear systems with a deflation procedure.
If all of the system information is contained in the POD basis, the deflation method converges in one iteration. This behavior was compared with the usual choices of deflation vectors, which require more than 18 iterations for the same number of deflation vectors. If only part of this information is obtained, the POD-based deflation method gives a good initial solution; after one iteration, the error of the solution is of order 10^{-4}. The applicability of the POD-based deflation method does not depend on the test case. It is implemented for reservoir simulation problems, but it can be implemented for any time-varying problem. Furthermore, we study its applicability for various 2L-PCG methods, but it can also be implemented together with many other linear solvers, e.g., multigrid, multilevel, and domain decomposition techniques. The implementation can also be extended to include various preconditioners.","Deflation; POD; Reservoir Simulation; Krylov Methods; Linear Solvers","en","doctoral thesis","","978-94-6380-284-0","","","","","","","","","Numerical Analysis","","",""
"uuid:024d9d2e-cfbd-4753-b7cf-587799110824","http://resolver.tudelft.nl/uuid:024d9d2e-cfbd-4753-b7cf-587799110824","Mitigating salt damage in lime-based mortars by built-in crystallization modifiers","Granneman, S.J.C. (TU Delft Heritage & Technology)","van Hees, R.P.J. (promotor); Lubelli, B. (copromotor); Delft University of Technology (degree granting institution)","2019","Damage due to the crystallization of salts is a common problem in porous building materials. Mortars used in the restoration of historic buildings are also often subjected to a high salt load, resulting in rapid degradation and high maintenance costs. Especially lime-based mortars, used for example as render, plaster, bedding mortar or pointing mortar, are vulnerable to this type of damage. Despite the extensive research efforts, no definitive solution yet exists to tackle the problem of salt decay in building materials. Existing solutions to improve mortar resistance to salt decay, such as modifying the moisture transport properties of the mortar (e.g. a plaster with water-repellent properties) or increasing the mechanical strength of the mortar (e.g. by using a different binder such as cement), often show compatibility problems and might cause even more damage to the (historic) fabric to be restored. Recently, a new approach, based on the use of crystallization modifiers to alter the salt crystallization process, has been proposed. The aim of modifiers is to prevent or mitigate salt crystallization damage in building materials. Modifiers are ions or molecules that can keep the salts longer in solution (inhibitors), facilitate the precipitation of a certain crystal phase (promoters) and/or change the shape and size of the grown crystals (habit modifiers). These effects are often present in combination, i.e.
a promoter or inhibitor can at the same time act as habit modifier.","","en","doctoral thesis","","978-94-028-1394-4","","","","","","2019-09-16","","","Heritage & Technology","","",""
"uuid:ea1b4101-0e55-4abe-9539-ae5d81cf9f65","http://resolver.tudelft.nl/uuid:ea1b4101-0e55-4abe-9539-ae5d81cf9f65","A guideline for selecting MDAO workflows with an application in offshore wind energy","Sanchez Perez Moreno, S. (TU Delft Wind Energy)","van Bussel, G.J.W. (promotor); Zaaijer, M B (copromotor); Delft University of Technology (degree granting institution)","2019","A system is a set of interconnected components whose individual behaviour and interactions determine the overall performance of the set. Wind farms are amongst the most complex systems deployed worldwide, based on their uncertainty, heterogeneity and complexity. Moreover, many technical and social disciplines may simultaneously describe the performance of a complex system such as wind farms.","Offshore wind farm design; systems engineering; MDAO workflows","en","doctoral thesis","","978-94-6366-138-6","","","","","","","","","Wind Energy","","",""
"uuid:8062f124-6fb4-43fe-8bd6-c2122c872409","http://resolver.tudelft.nl/uuid:8062f124-6fb4-43fe-8bd6-c2122c872409","Three-dimensional ozone distribution based on assimilation of nadir-sounding UV-VIS satellite observations","van Peet, J. (TU Delft Atmospheric Remote Sensing)","Levelt, Pieternel Felicitas (promotor); van der A, R.J. (promotor); Delft University of Technology (degree granting institution)","2019","Ozone (O3) directly and indirectly affects human health (depending on the altitude it is sometimes referred to as “good” or “bad” ozone) and has an important role in the temperature structure of the atmosphere. Because of the impact of ozone on air quality and climate change, the objective of this thesis is to improve our understanding of the global distribution of atmospheric ozone in space and time, not just in the stratosphere, but also in the troposphere, where it directly affects living organisms.
In this thesis, ozone is measured with satellite-based instruments that measure reflected solar light in the Ultra Violet - VISible (UV-VIS) wavelength range (280 < λ < 330 nm). In the UV-VIS, the absorption cross-section of ozone varies by several orders of magnitude, providing the altitude information for the ozone distribution. The ozone profiles are retrieved from the measured radiation with the optimal estimation technique. To make optimal use of the advantages of both observations and atmospheric models, they are combined using the Kalman filter data assimilation technique. The assimilation output consists of regularly gridded 3D ozone fields without missing data at regular time intervals.","ozone; trace gas; satellite observation; data assimilation","en","doctoral thesis","","978-94-6384-027-9","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:af7008fb-c8e2-42d4-b9da-c077366e59ac","http://resolver.tudelft.nl/uuid:af7008fb-c8e2-42d4-b9da-c077366e59ac","Design of the subsurface of land reclamations for freshwater storage and recovery: A new view on land reclamations","van Ginkel, M. (TU Delft Water Resources)","Olsthoorn, T.N. (promotor); Delft University of Technology (degree granting institution)","2019","Worldwide, land reclamations are constructed for the urban expansion of coastal megacities. Freshwater supply plays an important role in their sustainable development, especially in the light of climate change and the depletion of natural water resources in the hinterland. The subsurface of these new lands offers opportunities for subsurface freshwater storage and recovery. Moreover, designing and constructing the subsurface from scratch makes it possible not only to manage the mixing and buoyancy of a freshwater volume in a saline aquifer operationally, but also to create physical properties of the subsurface that achieve high recovery efficiencies.
In this dissertation, three concepts have been identified that allow managing the mixing and density stratification of a freshwater volume in saline aquifers. These are: 1) the properties of these man-made islands that reduce mixing and density stratification, 2) vertical flow barriers of limited depth that prevent the volume of fresh water from expanding radially, speeding up the formation of the freshwater stock, and 3) saltwater extraction from below the freshwater stock, which prevents the freshwater volume from floating up by counteracting buoyancy. Secondly, insight has been given into the internal structure and hydraulic properties of the porous media of five land reclamations that were constructed by bottom dumping, rainbowing and pipeline discharge.
The increasing number of land reclamations that result from the ongoing worldwide urbanisation of coastal areas, for which a robust freshwater supply must be guaranteed, makes the results of this thesis widely applicable.","Artificial Recharge; Coastal aquifers; Land reclamations; Hydraulic properties; Water resources","en","doctoral thesis","","9789463235419","","","","","","","","","Water Resources","","",""
"uuid:cbb5e163-eac7-44b8-b336-deb94106cfce","http://resolver.tudelft.nl/uuid:cbb5e163-eac7-44b8-b336-deb94106cfce","Long-term Dynamics and Stabilization of Intertidal flats: A system approach","Maan, D.C. (TU Delft Water Resources; TU Delft Coastal Engineering)","Wang, Zhengbing (promotor); van Prooijen, Bram (copromotor); Delft University of Technology (degree granting institution)","2019","Decreasing sediment availability, in combination with sea level rise and human fixation of the coastline, results in losses of the intertidal environment (lying in-between the mean low water and mean high water spring tide). This means a loss of biodiversity and an increased coastal vulnerability to extreme events and sea level rise. Therefore, it is of utmost importance to understand the dynamics of the intertidal wetlands; their response to sea level rise and to different types of human interferences. The better we understand the processes that underlie the evolution of the intertidal system, the more effectively we can manipulate the system, to stimulate its rise and maintain its elevation relative to mean sea level. The long-term morphodynamics is difficult to understand due to the interdependencies of the underlying processes; the morphology is shaped by the hydrodynamic forces, while it influences these forces at the same time. Due to the feedback loops, the components are strongly entangled and the whole system cannot be reduced to the sum of its parts and solved by the traditional reductionist method. In this thesis, system theory and system analysis are applied to get towards an understanding of ‘the intertidal morphodynamical system’. This is the philosophy that states arise that are understandable and can be determined exactly, despite the many interactions between the variables and the apparent complexity of systems. To describe these states, I follow a top-down approach, where I learn from the observed system behavior.
Hence, the observation of conserved properties leads to the important question: ‘why are they conserved?’ The answer to this question can reveal much of the system’s dynamics.","","en","doctoral thesis","","978-94-6323-503-7","","","","","","","","","Water Resources","","",""
"uuid:6c3937b1-aa5b-4860-a58c-f57e87518ce9","http://resolver.tudelft.nl/uuid:6c3937b1-aa5b-4860-a58c-f57e87518ce9","Martingales and stochastic calculus in Banach spaces","Yaroslavtsev, I.S. (TU Delft Analysis)","Veraar, M.C. (promotor); van Neerven, J.M.A.M. (promotor); Delft University of Technology (degree granting institution)","2019","In this thesis we study martingales and stochastic integration of processes with
values in UMD Banach spaces.","martingales; UMD Banach spaces; Fourier multipliers; martingale decompositions; weak differential subordination; Burkholder-Davis-Gundy inequalities; stochastic integration; random measures; Novikov inequalities; Burkholder-Rosenthal inequalities; Hilbert transform","en","doctoral thesis","","978-94-028-1398-2","","","","","","","","","Analysis","","",""
"uuid:a536ba72-441e-42fb-803f-a762a9c25c07","http://resolver.tudelft.nl/uuid:a536ba72-441e-42fb-803f-a762a9c25c07","Superconducting quantum interference in semiconducting Josephson junctions","de Vries, F.K. (TU Delft QRD/Goswami Lab)","Kouwenhoven, Leo P. (promotor); Goswami, S. (copromotor); Delft University of Technology (degree granting institution)","2019","A topological superconductor is a new state of matter that attracts a lot of interest for its potential application in quantum computers. However, there is no single material known to host this state of matter. In this thesis, combinations of superconductors and semiconductors are investigated experimentally with the goal of engineering such a topological superconductor. The materials chosen combine spin-orbit interaction, superconductivity and one-dimensionality. Then, under the influence of a magnetic field, the hybrid superconductor-semiconductor system is predicted to become topological.
First, the theoretical background of the experiments is presented, with special attention to the superconducting quantum interference in semiconducting Josephson junctions. In addition, a description of the different materials used and the fabrication of the devices is provided.
In the first experiment we explore hole transport through GeSi core-shell nanowires. Electronic measurements reveal only two transport channels, which underlines the one-dimensionality of the nanowire. On top of that, high-quality induced superconductivity is observed in both the tunneling and open regime, and evidence for strong spin-orbit interaction is presented.
Then, we switch materials to a two-dimensional electron and hole gas in an InAs/GaSb double quantum well. The spin-orbit interaction is studied by measuring the difference between the densities of electrons with opposite spin orientation. Two types of spin-orbit interaction are identified by tuning the magnitude of one of them with an applied electric field.
InAs quantum wells are known to exhibit enhanced conduction at their edges. We find supercurrent through these edges in Josephson junction devices using superconducting quantum interference measurements. The interference pattern reveals a flux periodicity of h/e. Interestingly, while this periodicity is observed in the trivial regime, it was previously considered a signature of topological superconductivity. We argue and show that nonlocal processes lead to the h/e effect in our devices. The correlated occurrence of enhanced edge conduction and the h/e periodicity is confirmed in Josephson junctions made of InSb flakes.
The final experimental chapter considers a superconducting quantum interference device, fabricated in an InAs quantum well. This geometry allows for control of the superconducting phase difference of the Josephson junction, potentially reducing the magnetic field needed for the device to become topological. Unfortunately, in the measurements we do not observe signatures of topological superconductivity.
Finally, we describe which device geometry and material combination could be used to reach the topological regime. In addition, we discuss ideas for future research on the other material systems used in this thesis.","","en","doctoral thesis","","9789085933847","","","","","","","","","QRD/Goswami Lab","","",""
"uuid:42cf6e19-7746-4d7e-bb6c-b8c357aacaaf","http://resolver.tudelft.nl/uuid:42cf6e19-7746-4d7e-bb6c-b8c357aacaaf","Applications of trajectory-based analysis in optimization and control","Sharifi K., Arman (TU Delft Team Tamas Keviczky)","Keviczky, T. (promotor); Mohajerin Esfahani, P. (copromotor); Delft University of Technology (degree granting institution)","2019","The synergy between optimization and control is a long-standing tradition. In fact, this synergy is becoming more and more apparent because of the multi-disciplinary character of the most pressing current engineering problems, along with the constant development of these two fields.
Historically, optimization methods have helped the control community to achieve their design goals formalized in some sort of objective function. On the other hand, control theory has provided a setting to interpret complicated aspects of optimization algorithms.
In this thesis, we address three problem instances that lie on the boundary of optimization and control. We employ tools from one field to address a problem in the other field. Fundamentally, our proposed methods share a similar character: their analysis techniques are trajectory-based. In simple words, our proposed methods exploit the trajectories generated by the dynamics that represent each problem instance.
The first problem focuses on a second-order, damped ordinary differential equation (ODE). This ODE, along with its numerous variations, has been used to develop or analyze various optimization algorithms, known as fast methods.
As an alternative to the existing methods, we first amend the underlying ODE with two types of state-dependent inputs, and then extend the resulting controlled dynamics to two hybrid control systems.
Employing a trajectory-based analysis, both control laws are constructed to guarantee exponential convergence in a suboptimality measure.
To show that the trajectories generated by each hybrid control system are well-posed, we demonstrate Zeno-freeness of solution trajectories in both cases.
Furthermore, we propose a mechanism to determine a time-discretization step-size such that the resulting discrete-time hybrid control systems are exponentially stable.
Event-based implementation of control laws has received a lot of attention during the past decade.
The reason for this interest is the hope to reduce the conservatism involved in the traditional periodic implementation.
In the second problem of this thesis, we introduce an event-based sampling policy for a constraint-tightening, robust model predictive control (RMPC) method.
The triggering mechanism is a sequence of hyper-rectangles constructed around the optimal state trajectories.
In particular, the triggering mechanism's nature makes the proposed approach a suitable choice for plants without a centralized sensory node.
A key feature of the proposed method is its complete decoupling from the RMPC method's parameters, facilitating a meaningful comparison between the periodic and aperiodic implementation policies.
Furthermore, we provide two types of convex formulations to design the triggering mechanism.
The last problem we focus on in this thesis is also related to the event-based implementation of a control law.
However, the main aim here is to propose an entity that can be utilized by a real-time engineer to schedule tasks in a networked structure.
A common entity provided in the literature related to event-triggering approaches is the minimal inter-execution time (to show the avoidance of a Zeno behavior in the closed-loop system).
Nonetheless, such a quantity is extremely conservative when used for scheduling purposes.
In this problem, we consider an $\mathcal{L}_2$-based triggering mechanism introduced in the literature and propose a framework to construct a timed safety automaton that can capture the triggering instants generated by this mechanism.
In our analysis, we borrow some tools from stability analysis of delayed systems along with reachability analysis to construct the desired timed safety automaton.","","en","doctoral thesis","","978-94-6384-018-7","","","","","","","","","Team Tamas Keviczky","","",""
"uuid:19c9610b-9a72-42a6-8340-2ba01ec78cc6","http://resolver.tudelft.nl/uuid:19c9610b-9a72-42a6-8340-2ba01ec78cc6","Hydro-elastic analysis of flexible marine propellers","Maljaars, P.J. (TU Delft Ship Hydromechanics and Structures)","Kaminski, M.L. (promotor); van Terwisga, T.J.C. (promotor); Delft University of Technology (degree granting institution)","2019","Higher efficiencies, higher cavitation inception speeds and reduced acoustic signature are claimed benefits of flexible composite propellers. Analysing the hydrodynamic performance of these flexible propellers implies that a coupled fluid-structure interaction (FSI) computation has to be performed. An FSI coupling can be monolithic, which means the equations for the fluid and structural sub-problems are merged into one set of equations and solved simultaneously. Another approach is to apply a partitioned coupling, in which the existing fluid and structural sub-problems are solved sequentially. Then, coupling iterations are performed to converge to the monolithic solution. When coupling iterations are omitted, the approach becomes a so-called loose coupling. Due to the relatively high fluid added mass, flexible propeller computations require a strong coupling including coupling iterations. Coupling iterations make these kinds of computations CPU-intensive, and it is therefore important to solve the structural and fluid problems efficiently.","flexible propellers; composite propellers; hydro-elasticity; fluid-structure interaction","en","doctoral thesis","","978-94-6375-233-6","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:b1582672-db92-4b79-b331-4e9e307f42b2","http://resolver.tudelft.nl/uuid:b1582672-db92-4b79-b331-4e9e307f42b2","Computer-Aided Assessment of Longitudinal Fundus Photos for Screening Diabetic Retinopathy","Adal, K.M. (TU Delft ImPhys/Quantitative Imaging)","van Vliet, L.J. (promotor); Vermeer, K.A. (copromotor); Delft University of Technology (degree granting institution)","2019","Diabetic retinopathy (DR) is a complication of diabetes mellitus, which progressively damages small retinal blood vessels and results in vision loss if not treated and controlled in time. Because of the increase in the risk of vision loss with the duration of diabetes and the latency between DR progression and early symptoms, diabetic patients require periodic screening. The required regular screening by a trained clinician, based on fundus photos, is time-consuming, subjective, and resource-demanding. Furthermore, the current practice does not scale well with the global rise in the diabetic population. Computer-aided screening offers a solution to this problem. This thesis presents several building blocks for automated analysis of a series of fundus images for DR.","","en","doctoral thesis","","978-94-6384-017-0","","","","","","","","","ImPhys/Quantitative Imaging","","",""
"uuid:96989fc0-eafb-40c2-a622-f9f1a71faa29","http://resolver.tudelft.nl/uuid:96989fc0-eafb-40c2-a622-f9f1a71faa29","Towards activity descriptors for the methane dehydroaromatization catalyst Mo/HZSM-","Vollmer, I. (TU Delft ChemE/Catalysis Engineering)","Kapteijn, F. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2019","In view of the rising global demand for aromatics as a starting chemical for many commodity goods as well as pharmaceuticals, new routes for their production are being explored. Since the advent of fracking, natural gas has become increasingly cheap, and its direct utilization for aromatics production has gained attractiveness. Methane dehydroaromatization represents the most direct of such utilization routes and could potentially be very carbon-efficient if no oxidants are added, since byproducts such as CO and CO2 are avoided. For the direct non-oxidative conversion, however, fast deactivation due to coke formation and significant thermodynamic limitations still stand in the way of commercialization. Improvements of the catalysts towards coke resistance and overall stability could significantly speed up the development towards large-scale operation of the methane dehydroaromatization process, although innovation in process development is also believed to be important. It is desirable to develop a catalyst that outperforms the state-of-the-art system Mo/HZSM-5. This system, however, continues to perform better than other catalyst formulations, and no significantly better system has been found in 25 years. Thus, this thesis focuses on developing a fundamental understanding of why this catalytic system continues to outperform other systems.
The aim of this thesis was to spot the characteristic traits of this catalyst, which can then be used as guidelines for the development of novel catalysts.","methane dehydroaromatization; molybdenum; zeolite; Catalysis","en","doctoral thesis","","978-94-028-1396-8","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:a48e3fc9-5e7b-4872-9984-22e0eef6686d","http://resolver.tudelft.nl/uuid:a48e3fc9-5e7b-4872-9984-22e0eef6686d","Product emulsification in multiphase fermentations: The unspoken challenge in microbial production of sesquiterpenes","Pedraza de la Cuesta, S. (TU Delft BT/Bioprocess Engineering)","van der Wielen, L.A.M. (promotor); Cuellar Soares, M.C. (copromotor); Delft University of Technology (degree granting institution)","2019","Sesquiterpenes are a versatile group of 15-carbon molecules, traditionally extracted from plants for diverse applications ranging from fuels to fine chemicals and pharmaceuticals. Scarcity of natural resources and the emergence of new applications have encouraged the development of sustainable solutions to produce sesquiterpenes. The recent development of engineered microbial strains able to produce and secrete sesquiterpenes, reaching fermentation titres in the order of grams per litre, is a promising alternative for producing diesel-like biofuels from renewable biomass sources, like sugar cane bagasse. The most attractive aspect of sesquiterpene fermentations is that the extracellular product readily forms an oil phase separated from the aqueous fermentation broth in the reactor. The difference in densities between the aqueous broth and the light product phase opens the opportunity of integrating cost-efficient separation techniques (e.g. gravity separation, hydro-cyclones) with the reactor. This scenario could contribute to significantly lowering equipment and utility costs as well as reducing the cost of raw materials by allowing for cell recycling. The scale-up of sesquiterpene fermentations has unveiled processing challenges that were not prominently present at laboratory scale.","","en","doctoral thesis","","978-94-6375-310-4","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:7402f7bd-b457-47f5-aa0a-fea3bbdb38eb","http://resolver.tudelft.nl/uuid:7402f7bd-b457-47f5-aa0a-fea3bbdb38eb","Silicon dioxide photonic mems: Chip-to-chip alignment with positionable waveguides","Peters, T.J. (TU Delft Micro and Nano Engineering)","Staufer, U. (promotor); Tichem, M. (copromotor); Delft University of Technology (degree granting institution)","2019","This thesis describes the development of a positionable waveguide array realized in a silicon nitride / silicon dioxide (Si3N4 core / SiO2 cladding) photonic platform. The positionable waveguide array is the heart of a novel alignment approach for high-precision multi-channel chip-to-chip interconnects. This alignment approach enables submicron-accurate alignment of an Indium Phosphide (InP) Photonic Integrated Circuit (PIC) and a TriPleX interposer chip. Mechanically flexible waveguides with integrated alignment functionality are realized within the TriPleX interposer chip. Compared to competing alignment approaches, the proposed concept targets higher accuracy and precision and allows for an increased level of automation to lower assembly time and cost. The final alignment of the waveguides is achieved in two stages. In the first stage, both chips are flip-chip bonded on a common substrate. The result of this first stage is a coarse alignment of the waveguides of both chips, as well as mechanical fixation and electrical connection of both chips. In the second stage, the integrated alignment functionality of the positionable waveguide array within the TriPleX interposer chip is used to optimally align the interposer waveguides with the waveguides of the InP PIC. Once aligned, the alignment function of the positionable waveguide array has served its purpose and the positionable waveguide array is mechanically fixed, providing an optimal alignment for the lifetime of the PIC.","","en","doctoral thesis","","978-94-6366-139-3","","","","","","","","","Micro and Nano Engineering","","",""
"uuid:0c847298-0007-4922-aff4-00beb248d664","http://resolver.tudelft.nl/uuid:0c847298-0007-4922-aff4-00beb248d664","Maltose and maltotriose metabolism in brewing-related Saccharomyces yeasts","Brickwedde, A. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Daran, J.G. (promotor); Delft University of Technology (degree granting institution)","2019","haploid S. cerevisiae laboratory strain of the CEN.PK family. The performance of the constructed hybrid, strain IMS0408, was then compared to those of its parents in anaerobic batch cultures grown on different media and at different temperatures. While S. eubayanus displayed significantly higher growth rates than S. cerevisiae in anaerobic batch cultures below 25 °C, the laboratory hybrid IMS0408 performed as well as the best parent or even better at most tested temperatures. In contrast to its S. eubayanus parent, the hybrid strain was further able to consume maltotriose, the second most abundant sugar in wort, in cultures grown on sugar mixtures. This observation showed how acquisition of the S. cerevisiae genome contributed an important brewing related characteristic of the hybrid. The hybrid strain IMS0408 showed a best parent heterosis in two major characteristics that are relevant in the brewing environment. This heterosis illustrates how an early, spontaneous S. pastorianus lager brewing hybrid might have outcompeted other Saccharomyces species, including its parental ones, under the low-temperature, high-maltotriose conditions of lager fermentation processes.","","en","doctoral thesis","","","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:6c84e51e-4111-4638-ae70-24a510db3ca5","http://resolver.tudelft.nl/uuid:6c84e51e-4111-4638-ae70-24a510db3ca5","Numerical investigation of dense condensing flows for next-generation power units","Azzini, L. (TU Delft Flight Performance and Propulsion)","Colonna, Piero (promotor); Pini, M. (copromotor); Delft University of Technology (degree granting institution)","2019","Metastable condensation is the phase transition from vapor to liquid that occurs in a fluid subjected to rapid temperature variations. Under these conditions, the nucleation process is triggered when the fluid is in a supersaturated thermodynamic state. The dispersed phase forming during the process of condensation is not in stable thermodynamic equilibrium with the surrounding vapor. As a consequence, models suitable for condensing flows under large temperature gradients, which are relevant to many scientific studies and industrial applications, are rather complex as they must correctly treat metastable thermodynamic states. Applications of metastable condensation flow models include improved climate models [1], biomedical treatments [2], heat transfer enhancement for industrial purposes [3], natural gas separation [4], power conversion [5] and many others. The scope of the research documented in this dissertation is the numerical investigation of metastable condensing flows in turbomachinery for propulsion and power applications. The flow inside turbomachinery components is highly compressible, with absolute temperature gradients that can reach values of the order of 10^6 K/s [6] in the case of supersonic expansions. In such extreme conditions, metastable phenomena severely impact the component performance in terms of both thermodynamic and fluid dynamic losses and lifetime. The number of technologies for propulsion and power characterised by the presence of condensing mixtures in turbomachinery is increasing.
Considerable research and development efforts are currently concerned with components of next-generation thermal power and refrigeration systems, in which the flow undergoes metastable condensation. The characterization of metastable condensing flows and the development of advanced fluid dynamic design tools capable of treating these complex flow phenomena are fundamental steps towards the commercial application of such promising technologies.","Metastable condensation; Wilson point; supersonic expansion; condensing steam","en","doctoral thesis","","","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:9e61ed41-f8f3-4941-adc1-18dd50aa330c","http://resolver.tudelft.nl/uuid:9e61ed41-f8f3-4941-adc1-18dd50aa330c","The Intangibles: Values of Heritage Products for Design and Sustainability Initiatives","Suib, S.S.S.B. (TU Delft Design for Sustainability)","van Engelen, J.M.L. (promotor); Brezet, J.C. (promotor); Delft University of Technology (degree granting institution)","2019","Values are attributed to products over time and across generations. They are created, compiled, shared, evolved, exchanged, and also discarded. However, creating an explicit theory and analysis on this subject is challenging due to the abstract and multifaceted nature of the topic with a plethora of theories from various research communities. To manage this complexity, this research focuses on the concept of values in association with heritage products: products that are inherited from the previous generation, in material and immaterial forms. The exploration entails identifying values of heritage products and their potential applications in design and sustainability initiatives.
This research has been divided into two parts. Part 1 focuses on understanding and identifying the values of heritage products and Part 2 presents the adaptation of values of heritage products as a creative resource in design and sustainability initiatives. This exploration is framed against the backdrop of the cultural economy which connects the craft and design domains, includes the discourse of the intangible cultural heritage, and brings forward the craft industry as the empirical context.
Although the poor performance of the track in transition zones is frequently reported, transition zones have not received enough attention. First of all, there is no specific experimental method for assessment of the track condition in transition zones. Therefore, transition zones are usually treated as open tracks during inspections. Secondly, the effect of the differential settlement (one of the factors causing the transition zone problem) on the track degradation has not been studied as thoroughly as the effects of the stiffness variation. Also, due to insufficient knowledge of the track behaviour in transition zones, the track settlement in transition zones cannot be predicted precisely. As a result, maintenance is performed in a reactive way. Finally, although many countermeasures have been proposed for transition zones, the tools for assessment of their performance (especially on the long term) are still lacking, which causes difficulties for track designers when selecting countermeasures. Clearly, the knowledge of the measurement, dynamic behaviour, degradation, and assessment of the track in transition zones should be improved.
This study intended to answer the following questions: (1) How to assess the condition of the tracks in transition zones, and using which tool? (2) Which factor contributes more to the track degradation in transition zones: the uneven settlement or the stiffness variation? (3) How to predict the long-term track settlement in transition zones? (4) How to assess the performance of the countermeasures for transition zones?
In an attempt to answer these questions, an integrated methodology combining an innovative experimental method and a numerical model for analysis of the dynamic behaviour and degradation of railway tracks in transition zones has been developed. The methodology consists of the following three parts:
- An advanced measurement technique based on the DIC (Digital Image Correlation) method, used to measure the absolute dynamic displacements of rails/sleepers due to passing trains. The advantage of this technique is that the vertical track displacements are measured simultaneously at multiple points, allowing the dynamic profile of the track section to be obtained. Also, no track possession is required during the measurement. The measurement technique provides a basis for assessment of the track condition in transition zones.
- A novel model for analysis of the dynamic responses in transition zones that uses the explicit Finite Element (FE) method. The track model accounts for both the vertical stiffness variation and the differential track settlement in transition zones. Nonlinear contact elements are used to model the sleeper-ballast interface, which allows the sleeper-ballast interaction to be described more realistically than in the existing models.
- A novel procedure to predict the long-term track behaviour (settlement) in transition zones, which is based on the developed FE model of the transition zones and an empirical settlement model of ballast (developed by Y. Sato). Using this procedure, the track settlement in transition zones due to multiple passages of trains can be predicted, which can provide a basis for planning track maintenance in transition zones.
To demonstrate the developed methodology, it was used in a number of applications in this study such as:
- Assessment of the track condition in various transition zones,
- Numerical analysis of the track behaviour and of the factors influencing initiation and propagation of the track settlement in transition zones,
- Assessment of the performance of various countermeasures for transition zones.
Some additional studies on the effect of the moisture condition on track performance in transition zones and on the feasibility of using satellite radar for structural health monitoring of transition zones have been performed as well. The main conclusions of these studies can be summarised as follows:
o The numerical and experimental results confirmed the higher degradation, observed in situ, of the track near engineering structures in transition zones as compared to the open track.
o The track degradation and the length of the settlement affected zone in the Embankment-Bridge (EB) and the Bridge-Embankment (BE) transitions, which is defined by the train moving direction, are different. That was confirmed by the measurement and numerical results, and by field observations. This phenomenon was explained using the numerical model, namely that the initial location of the track settlement in the EB transition is primarily defined by the pitch motion of the bogies, while in the BE transition it is affected by the ‘gliding’ and ‘bouncing’ motion of the vehicle. The settlement affected zone in the BE transition is longer (depending on the velocity, approx. 2 times for 140 km/h) than the EB transition.
o The track condition in transition zones was successfully assessed using the measurement method. The condition assessment results correlate well with the maintenance history and satellite data of the considered transition zones.
o The performance of various countermeasures for transition zones was successfully assessed using the developed methodology. The numerical results have shown that the sleepers with modified dimensions (preventive countermeasure) and the adjustable fasteners (corrective countermeasure) can significantly improve the track performance, with a 51% reduction in ballast stress and a 93% reduction in the wheel-rail contact force, respectively.
Using the integrated methodology, the research questions have been answered. The proposed methodology provides suitable tools for measurement, assessment, analysis and improvement of the tracks in transition zones. The methodology can be further applied to the design and optimisation of the track in transition zones.","Transition zone; Measurement; Finite Element Method (FEM); Degradation; Prediction; Countermeasure","en","doctoral thesis","","9789463235396","","","","Haoyu Wang was on born in Shenyang, China in 1987. He received a bachelor’s degree in civil engineering and a master’s degree in railway engineering at Beijing Jiaotong University (Beijing, China). He joined the railway section of Delft University of Technology (Delft, the Netherlands) for PhD research in 2011 and worked partly as a teaching assistant in 2015 and 2016. In 2017, he worked as a consultant/railway specialist in Roadscanners (Tampere, Finland). Since 2018, he has worked as a developer/railway specialist in Fugro (Utrecht, the Netherlands). He is specialised in researching and solving problems in the railway industry, focusing on the track structure analysis, condition monitoring, and quality evaluation.","","","","","Railway Engineering","","",""
"uuid:060307ea-39ad-41c8-a160-0edf106594de","http://resolver.tudelft.nl/uuid:060307ea-39ad-41c8-a160-0edf106594de","Dynamical regulation in single cells","Wehrens, M. (TU Delft BN/Sander Tans Lab)","Tans, S.J. (promotor); Delft University of Technology (degree granting institution)","2019","In this thesis, we probe single bacterial cells to further understand both the regulation of cell divisions during adverse conditions and the phenomenon of cellular heterogeneity.","","en","doctoral thesis","","978-94-92323-25-5","","","","","","","","","BN/Sander Tans Lab","","",""
"uuid:9e7d7042-6e2e-46fe-8655-d3d6043d2b9e","http://resolver.tudelft.nl/uuid:9e7d7042-6e2e-46fe-8655-d3d6043d2b9e","Effects of rainfall and catchment scales on hydrological response sensitivity in urban areas","Cristiano, E. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); ten Veldhuis, Marie-claire (promotor); Delft University of Technology (degree granting institution)","2019","Spatial and temporal rainfall variability play an important role in the generation of pluvial flooding. In urban areas, this phenomenon has increased in recent decades, due in particular to intensified urbanization and a higher degree of imperviousness. In fact, the population is growing and moving from rural areas to cities, which are becoming more and more urbanized and densely populated. The increase in urbanization and the related increase in imperviousness, combined with the short and intense rainfall events caused by climate change, result in a fast hydrological response, with a high probability of flooding. Hydrological models can represent the overall flow behaviour, but they remain poorly capable of predicting flow peaks, especially in urban areas. In view of this, better knowledge of the hydrological response of the urban catchment is needed to improve flood prediction and prevent damage caused by pluvial flooding. Due to the high variability of catchment characteristics at small scales, urban runoff processes are particularly sensitive to the spatial and temporal variability of rainfall. For this reason, high-resolution data are required for accurate runoff estimation. Rainfall is generally measured with rain gauges, which provide accurate measurements at a specific point but are not able to fully describe rainfall variability in space. New technologies, such as weather radars, have been used in recent decades to estimate rainfall intensity.
Although these instruments provide an indirect measurement of rainfall and require good calibration and error correction, they can provide the rainfall distribution in space and time, which is fundamental for investigating the hydrological response. Rainfall characteristics, such as intensity, total depth, storm velocity and intermittency, strongly affect the hydrological response of the system, and it is important to characterize them properly to estimate the runoff. Catchment characteristics, such as drainage area, drainage network, imperviousness degree and slope, and their representation in hydrological models, also play an important role in the prediction of the hydrological response. At present, the combined effects of rainfall and catchment characteristics and scales on the urban hydrological response need further investigation…","Rainfall scale; urban hydrology; hydrological modelling","en","doctoral thesis","","978-94-6366-133-1","","","","","","","","","Water Resources","","",""
"uuid:0c094cf9-ba07-4442-8008-7a216f63b1f3","http://resolver.tudelft.nl/uuid:0c094cf9-ba07-4442-8008-7a216f63b1f3","Control of the key phenomena in continuous and batch crystallization processes: Novel process and equipment design","Anisi, F. (TU Delft Intensified Reaction and Separation Systems)","Stankiewicz, A.I. (promotor); Kramer, H.J.M. (copromotor); Delft University of Technology (degree granting institution)","2019","Crystallization is one of the essential downstream steps of the manufacturing of chemical and pharmaceutical compounds when it comes to separation, purification, and final product formation. Although it has been years since it is widely applied in the aforementioned industries, it is known as one of the complex processes where continuous optimization is increasingly necessary. Insufficient understanding of crystallization phenomena and their interactions from one side, and demanding requirements for product specifications in such industries from the other side, form the basis of challenges within this unit operation...","","en","doctoral thesis","","978-94-6361-229-6","","","","","","","","","Intensified Reaction and Separation Systems","","",""
"uuid:0913c6df-9f01-42a5-add2-302ff0f2b156","http://resolver.tudelft.nl/uuid:0913c6df-9f01-42a5-add2-302ff0f2b156","Highly efficient absorption heat pump and refrigeration systems based on ionic liquids: Fundamentals & Applications","Wang, M. (TU Delft Engineering Thermodynamics)","Infante Ferreira, C.A. (promotor); Vlugt, T.J.H. (promotor); Delft University of Technology (degree granting institution)","2019","Improving efficiencies of thermal energy conversion systems is an important way to slow down global warming and mitigate climate change. Vapor absorption heat pump and refrigeration cycles are highly efficient ways of heating and cooling. These thermally activated systems also provide opportunities for the integration with a wide spectrum of low-grade and renewable heat sources, such as district heating networks, exhaust industrial heat, concentrated solar thermal energy and biomass. New fluids - ionic liquids - have been introduced into the absorption refrigeration/ heat pump field as absorbents to overcome drawbacks of traditional working fluids and to improve the energetic efficiency of systems. Some ionic liquids show high boiling points, superior thermal and chemical stabilities and strong affinities with refrigerants. Ammonia (NH3) is an environmentally friendly refrigerant with favorable thermodynamic and transport performance. Thus, studies in this thesis placed emphasis on the ammonia/ionic liquids working pairs. Studies in this thesis focus on exploring applications of ammonia/ionic liquid based vapor absorption refrigeration cycles, from a practical point of view in the refrigeration and heat pump field. By applying multi-scale evaluations covering thermodynamic and heat and mass transport aspects, it is intended to further understand the fundamentals of applying ionic liquids in heating and cooling systems. 
The highlights include: Assessments of equilibrium models applied for ammonia-ionic liquid working fluids; Prediction of properties of ammonia-ionic liquid fluids using molecular simulation; Collection and modeling of relevant thermophysical properties; Evaluation of the heat and mass transfer performance. Besides, concepts of using ionic liquids as absorbents with ammonia as the refrigerant in various thermodynamic cycles are analyzed and evaluated for applications in the built environment and industry...","Absorption cycle; ionic liquid; Ammonia; refrigeration; Heat pump; Plate heat exchanger","en","doctoral thesis","","978-94-6366-134-8","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:15e154d9-93bb-4d2a-83c5-6e6ebb0c13d8","http://resolver.tudelft.nl/uuid:15e154d9-93bb-4d2a-83c5-6e6ebb0c13d8","Investigating nickel and ceria anode electrochemistry in multifuel environments","Tabish, A.N. (TU Delft Energy Technology)","Aravind, P.V. (promotor); Boersma, B.J. (promotor); Delft University of Technology (degree granting institution)","2019","Conventional energy technologies and fossil fuels are causing irreversible damage to the environment. A transition from conventional to sustainable technologies is inevitable to address the environmental concerns. Solid oxide fuel cells (SOFCs) can play a key role in this transition because of their high efficiency and fuel flexibility – SOFCs can operate with fossil fuels as well as with renewable fuels. However, several challenges concerning cost reduction, operability, and long-term durability remain in SOFC development. A good physio-chemical and electrochemical understanding of the fuel-electrode is crucial to overcome the operability and durability limiting factors, as well as to design the new, improved, and low-cost electrodes.","Solid Oxide Fuel Cell; Electrochemistry; Pattern electrode","en","doctoral thesis","","978-94-6384-015-6","","","","","","","","","Energy Technology","","",""
"uuid:97b9eabe-159e-43e1-8b35-edc61b1aa682","http://resolver.tudelft.nl/uuid:97b9eabe-159e-43e1-8b35-edc61b1aa682","Carbonation mechanism of alkali-activated fly ash and slag materials: In view of long-term performance predictions","Nedeljković, Marija (TU Delft Materials and Environment)","van Breugel, K. (promotor); Ye, G. (promotor); Delft University of Technology (degree granting institution)","2019","As the building sector is expanding, a growing interest in technologies that can reduce the CO2 emission from concrete production has led to partial replacement of cement with by-products from various industrial processes. Besides the partial replacement of cement, development of alkali activation technology ensures full replacement of cement in concrete. Although alkali activated materials (AAMs) are one of the most sustainable alternatives to cement-based concrete, structural application of AAMs is still not viable, as their long-term performance is not sufficiently studied. For instance, no recommendations are yet given to the scientific and engineering communities as a general approach for testing carbonation of AAMs. Furthermore, there is a limited number of case studies of long-term performance of AAMs in the past to assist the predictive models of their service life.
The long-term performance (carbonation resistance) of AAMs is mainly dependent on the microstructure features of the binder (e.g. phase assemblages and pore structure), which can be modified using different constituents and materials mixture designs. Therefore, the aim of this thesis was the development of a conceptual carbonation mechanism that can be applied to analyse carbonation resistance of any alkali activated concrete mixture. For this reason, the carbonation mechanism was studied at different length scales, from paste to concrete level, while the effects of carbonation on the chemical, physical and mechanical properties were captured. The relationship between carbonation rate, pore solution chemistry and microstructure was investigated. An advanced microstructure characterization of fly ash (FA) and ground granulated blast furnace slag (GGBFS) was performed using PARC software. The combined effect of GGBFS content, curing (sealed/unsealed) and exposure conditions (natural indoor/outdoor and accelerated carbonation) on the carbonation resistance of pastes was considered. Based on the parameter studies (GGBFS content, curing, exposure conditions), recommendations for design of alkali activated concrete for engineering practice are given in view of carbonation resistance.","carbonation; alkali activated materials; curing conditions; relative humidity; pore solution composition; Na+ effective concentration; Na binding capacity; gel phases; CO2 binding capacity; microstructure deterioration; porosity, modulus of elasticity; service life predictions","en","doctoral thesis","","978-94-6384-020-0","","","","","","","","","Materials and Environment","","",""
"uuid:ffbea4e0-2e97-4d41-819d-beec42120b29","http://resolver.tudelft.nl/uuid:ffbea4e0-2e97-4d41-819d-beec42120b29","The deviatoric behaviour of peat: a route between past empiricism and future perspectives","Muraro, S. (TU Delft Geo-engineering)","Jommi, C. (promotor); Delft University of Technology (degree granting institution)","2019","The geotechnical description of peats represents one of the main challenges in the Netherlands to assure the required safety standard and performance of the flood defence infrastructure. Almost a third of the country is situated below the sea and the rivers level with about 60% to 70% of the population and economic assets concentrated in low-laying areas prone to flooding. Flood protection in the Netherlands is assured by a vast system of primary and secondary dykes, of which 14000 km are regional dykes. Design and assessment procedure of these dykes is not straightforward, especially when peats layers are encountered. Adequate geotechnical description of the behaviour of peats at the engineering scale represents one of the biggest concerns that public water authorities and geotechnical engineers are currently facing. The majority of the previous investigations have regarded the volumetric and time dependent behaviour of peat, both from the experimental and the modelling viewpoints. However, the information on the deviatoric counterpart is still scarce and contradictory. This has contributed to generate geotechnical uncertainties on the deviatoric behaviour of peats with severe overly conservative approaches in the current engineering practice and diffuse misconceptions within the research community on traditional experimental tests.","Organic soils; Peats; Field stress-test; Laboratory tests; Constitutive modelling; Numerical Modelling; Kinematic compatibility; End restraint","en","doctoral thesis","","978-94-028-1389-0","","","","","","","","","Geo-engineering","","",""
"uuid:81bfc0a9-e4eb-4133-949b-c9ae0671c610","http://resolver.tudelft.nl/uuid:81bfc0a9-e4eb-4133-949b-c9ae0671c610","Zenith of the quantum doctrine: Dissection of the uses and misuses of quantum theory in the quest for macroscopic mechanical quanta","Pereira Machado, J.D. (TU Delft QN/Blanter Group)","Blanter, Y.M. (promotor); Delft University of Technology (degree granting institution)","2019","The reflections composing this thesis examine the usage and necessity of quantum theory, with an emphasis on systems featuring mechanical resonators. The first chapter introduces the quantum formalism, reviews the historical motivation for the quantization of harmonic oscillators, and presents a derivation of the interaction between the electromagnetic field and mechanical motion in several distinct systems. The second chapter examines the nature of physical effects such as state transfer, squeezing, entanglement, and sideband asymmetry, and how they naturally emerge in non-quantum contexts. A dynamical statistical theory is introduced to aid the quantum/classical comparison, and standard measurement models are reviewed due to their strict connection to non-classicality criteria. The third chapter deals uniquely with quantum effects occurring in systems with mechanical elements, such as phonon anti bunching, parametric down conversion in electromechanical systems, creation and interference of macroscopic super positions in spin-cantilever systems, and collapse and revivals of mechanical motion and mechanical state dependent transmission in membrane-in-the-middle geometries. The fourth and last chapter discusses pervading issues with defining the classical limit, the quantum/classical comparison and definitions of non-classicality.","Cavity Optomechanics; Mechanical quantum states; Quantum- Classical comparison","en","doctoral thesis","","978.90.8593.346.5","","","","","","","","","QN/Blanter Group","","",""
"uuid:19e7a9ff-5336-419f-b49a-82bc7c644b02","http://resolver.tudelft.nl/uuid:19e7a9ff-5336-419f-b49a-82bc7c644b02","Ageing of bituminous materials: Experimental and numerical characterization","Jing, R. (TU Delft Pavement Engineering)","Scarpas, Athanasios (promotor); Liu, X. (copromotor); Varveri, Aikaterini (copromotor); Delft University of Technology (degree granting institution)","2019","Research of the ageing mechanisms in the Netherlands is crucial to deal with ageing infrastructure and the growing pressure on and lack of natural resources. The use of porous asphalt incurs the additional costs because of the shorter service life and the more expensive maintenance required. On the other hand, the Dutch road engineering community has made significant efforts in recycling of pavements after the end of their service life. Currently, 90% of the asphalt is demolished and used in new asphalt pavement. Ideally, it would be nice to keep recycling asphalt concrete indefinitely, but this should be done without loss of functionality or environmental risks.
This thesis aims to acquire an advanced understanding of the fundamental thermal ageing processes of bituminous materials, by conducting a series of experiments and developing a set of computational models. A greater understanding of the bitumen ageing mechanisms and how they change physico-chemical properties of bitumen can help to accurately predict the service life of an asphalt pavement and develop high performance bituminous products that are less sensitive to ageing...","Bitumen; Ageing","en","doctoral thesis","","9789402813777","","","","","","","","","Pavement Engineering","","",""
"uuid:770acc49-69e0-448d-8869-9dd01aca7e19","http://resolver.tudelft.nl/uuid:770acc49-69e0-448d-8869-9dd01aca7e19","Zeolite-based separation and production of branched hydrocarbons","Poursaeidesfahani, A. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Dubbeldam, D. (copromotor); Delft University of Technology (degree granting institution)","2019","Separation and selective production of branched paraffins are among the most important and still challenging processes in the oil and gas industry. Addition of branched hydrocarbons can increase the octane number of a fuel without causing additional environmental concerns. Conversion of linear hydrocarbons into branched ones also improves the performance of lubricants at low temperatures. Zeolites are commonly used for separation of branched hydrocarbons and selective conversion of linear long chain hydrocarbons into shorter branched ones...","","en","doctoral thesis","","978-94-6384-014-9","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:513e5fa7-aef5-47e8-a7f9-d99a06c8981d","http://resolver.tudelft.nl/uuid:513e5fa7-aef5-47e8-a7f9-d99a06c8981d","Fully compressible Direct Numerical Simulations of carbon dioxide close to the vapour-liquid critical point","Sengupta, U. (TU Delft Energy Technology)","Boersma, B.J. (promotor); Pecnik, Rene (promotor); Delft University of Technology (degree granting institution)","2019","The challenge of global warming caused by the emission of greenhouse gases has led to the desire for mitigating climate change by exploring the use of alternative sources of energy to reduce the use of traditional fossil fuels. In this context, supercritical fluids play an important role due to their use in various technologies and processes that promote sustainable development. These fluids possess a unique combination of gas-like and liquid-like properties enabling their usage in supercritical power cycles, which are more efficient compared to other methods of energy conversion. In this thesis, we investigate turbulent flows of supercritical CO2 near the vapourliquid critical point in a channel geometry by solving the fully compressible Navier Stokes equations. The purpose of the investigation is to gain a better understanding of the physics of turbulent supercritical fluid flows near the critical point by taking the compressibility effects into account.","","en","doctoral thesis","","","","","","","","","","","Energy Technology","","",""
"uuid:71240415-3de9-44db-819d-e8d59898cf5e","http://resolver.tudelft.nl/uuid:71240415-3de9-44db-819d-e8d59898cf5e","Computational modelling and parameter estimation in fixed beds of non-spherical pellets","Mohammadzadeh Moghaddam, E. (TU Delft Large Scale Energy Storage)","Padding, J.T. (promotor); Stankiewicz, A.I. (promotor); Delft University of Technology (degree granting institution)","2019","Fixed bed arrangements find wide applications, particularly in reaction engineering, where they are employed as multi-tubular catalytic reactors for the transformation of reactants into desired products. The importance of such complicated reactors can be realized by their extensive applications as the process workhorse in various industries, e.g. chemical, pharmaceutical and petrochemical. The design of such systems is predominantly rooted in macroscopic models, e.g. pseudo-continuum approaches, with effective parameters extracted from averaged semi-empirical correlations. However, such simplistic design procedures are inadequate for design of tubular fixed beds with low tube-to-particle diameter ratios, say dt/dp<10, where lateral heterogeneities of the tortuous structure lead to dominance of localised phenomena. These local or “pellet-scale“ effects cannot be captured nor explained by pseudocontinuum models, and call for 3D spatially-resolved simulations of flow and transport scalars. However, majority of the prevailing efforts within the context of “particle-revolved CFD simulation”, have dealt with fixed beds of spheres, because generating random packing of nonspherical pellets necessitates a cumbersome and complicated strategy to account for the orientation freedom of such pellets, specifically when collisions occur...","","en","doctoral thesis","","978-94-6375-262-6","","","","","","2020-02-22","","","Large Scale Energy Storage","","",""
"uuid:9b121e9b-bfa0-49e6-a600-5db0fbfa904e","http://resolver.tudelft.nl/uuid:9b121e9b-bfa0-49e6-a600-5db0fbfa904e","Harnessing Heterogeneity: Understanding Urban Demand to Support the Energy Transition","Voulis, N. (TU Delft System Engineering)","Brazier, F.M. (promotor); Warnier, Martijn (promotor); Delft University of Technology (degree granting institution)","2019","This thesis demonstrates that heterogeneous spatio-temporal demand profiles are required for a realistic representation of urban energy systems. This is needed to prepare them for the energy transition. Therefore, existing and future urban energy system models should be expanded with more detailed spatio-temporal local demand data that account for both household and non-household consumers, in particular for the thus far omitted service sector consumers. This thesis describes methods and approaches that allow for such detailed modelling of urban demand profiles based on the few publicly available data sources. Using the developed detailed spatio-temporal demand profiles, this thesis provides new insights in the impact of renewable energy resources in realistic, heterogeneous urban areas. The presented results can support governments, communities, and companies in their
endeavours to bring the energy transition to fruition.","Demand modeling; Demand profiles; Urban energy; Urban energy systems; Urban energy transition; Renewables; Renewable energy sources integration; Energy taxes","en","doctoral thesis","","978-94-6384-011-8","","","","","","","","","System Engineering","","",""
"uuid:8ae79702-2689-4235-89a7-202afdf5e358","http://resolver.tudelft.nl/uuid:8ae79702-2689-4235-89a7-202afdf5e358","Phosphate recovery: From Nanoparticles to Membrane Technology","Paltrinieri, L. (TU Delft OLD ChemE/Organic Materials and Interfaces)","Sudhölter, Ernst J. R. (promotor); de Smet, L.C.P.M. (copromotor); Delft University of Technology (degree granting institution)","2019","","","en","doctoral thesis","","","","","","","","","","","OLD ChemE/Organic Materials and Interfaces","","",""
"uuid:c624cd58-25e0-4bf9-bf36-025e08c46169","http://resolver.tudelft.nl/uuid:c624cd58-25e0-4bf9-bf36-025e08c46169","Dynamic Multilevel Methods for Simulation of Multiphase Flow in Heterogeneous Porous Media","Cusini, M. (TU Delft Reservoir Engineering)","Hajibeygi, H. (promotor); van Kruijsdijk, C.P.J.W. (promotor); Delft University of Technology (degree granting institution)","2019","","","en","doctoral thesis","","978-94-6366-126-3","","","","","","","","","Reservoir Engineering","","",""
"uuid:458a384f-6f8a-4fc3-8bc4-c01397b54b59","http://resolver.tudelft.nl/uuid:458a384f-6f8a-4fc3-8bc4-c01397b54b59","A Systematic and Quantitative Approach to Safety Management","Li, Y. (TU Delft Safety and Security Science)","van Gelder, P.H.A.J.M. (promotor); Guldenmund, F.W. (copromotor); Delft University of Technology (degree granting institution)","2019","Safety management systems (SMSs) have gained importance since the 1970s and changed focus from individual management activities to more systematic frameworks. The methods, techniques and tools used in them also became more and more sophisticated. However, from the perspectives of the researcher, company, auditor, government and (safety-specialised) organisation, the modelling of safety management is still in need of improvement. Modelling of safety management means developing a generic model that can cover all SMSs. This generic model (or system) will look into the common constituent parts of an SMS and details of those parts. Theoretical models have been developed extensively, however, quantifying how safety management controls risk is one of the difficulties in applying these models, especially the quantification of safety management deliveries. Therefore, this research aims to develop a quantitative approach to the modelling of safety management.","","en","doctoral thesis","","978-94-028-1364-7","","","","","","","","","Safety and Security Science","","",""
"uuid:01a602f7-59af-4ee5-a54e-40c536216f58","http://resolver.tudelft.nl/uuid:01a602f7-59af-4ee5-a54e-40c536216f58","Compiler Assisted Reliability Optimizations","Nazarian, G. (TU Delft Dataintensive Systems)","Gaydadjiev, G. (promotor); Sips, H.J. (promotor); Delft University of Technology (degree granting institution)","2019","Microprocessors are used in an expanding range of applications from small embedded system devices to supercomputers and mainframes. Moreover, embedded microprocessor based systems became essential in modern societies. Depending on the application domain, embedded systems have to satisfy different constraints. The major challenges today are cost, performance, energyconsumption, reliability, real-time (reactive-operation) and silicon area. In traditional computer systems some of these constraints can be less crucial than others, while performance, area and power-consumption will always remain valid constraints for embedded systems. However, in modern systems reliability has emerged as a new, highly important requirement. Among all above factors performance, power, reactive-operation and reliability can be addressed by software-only techniques that do not require any hardware modifications or additions. Such optimization techniques, however, may impact the performance and power characteristics of the system. The main goal of this work is to find novel software based reliability techniques with affordable power and performance overheads. For this reason the reliability optimization methods are studied in detail and a diligent categorization of existing software techniques is proposed. The strong and the weak points of each category are carefully studied. Using the information obtained from our categorization, two novel optimization techniques for fault detection and one new optimization technique for fault recovery are proposed. 
Our optimization techniques minimize the required code instrumentation points while guaranteeing equivalent reliability as compared to state of the art approaches. Moreover, a generic methodology is proposed to help with the process of identifying the minimum set of code instrumentation points. For the evaluation we select a challenging baseline that consists of the best known techniques for fault detection and fault recovery found in the public literature. The experimental results on a set of biomedical benchmarks show that using the proposed design methodology and fault detection and recovery methods, the performance and power overheads are significantly reduced while the fault coverage remains in line with previously proposed and widely used methods.","Reliability; Compiler optimizations; Control flow error detection and recovery","en","doctoral thesis","","978-94-6384-005-7","","","","","","","","","Dataintensive Systems","","",""
"uuid:5851345a-0284-4514-aad3-bed367d672f7","http://resolver.tudelft.nl/uuid:5851345a-0284-4514-aad3-bed367d672f7","Electrochemical recycling of rare earth elements from NdFeB magnet waste","Venkatesan, P. (TU Delft (OLD) MSE-6)","Yang, Y. (promotor); Sietsma, J. (promotor); Delft University of Technology (degree granting institution)","2019","Rare earth elements (REEs), along with other metals, will play a pivotal role in the transition towards a low-carbon economy. Primary mining of REEs consists of multiple steps, is energy intensive and has an adverse environmental impact. Thus, the current scenario of “use and throw” of REEs after a single use in a product is untenable. Furthermore, the REEs are classified as critical metals by the US and EU due to the monopoly of China over REE production. Thus, recycling REEs from secondary resources and end-of-life (EOL) waste products can help effectuate a circular economy by a) reducing the environmental impact of primary mining, b) reducing the dependency on imports and formulating a secure supply chain, c) avoiding landfilling and incineration...","","en","doctoral thesis","","978-94-6366-122-5","","","","","","","","","(OLD) MSE-6","","",""
"uuid:c709d2ba-8e41-4f23-a35c-ab75b34ed2c8","http://resolver.tudelft.nl/uuid:c709d2ba-8e41-4f23-a35c-ab75b34ed2c8","All-aromatic Hyperbranched Polyaryletherketone Networks","Vogel, W. (TU Delft Novel Aerospace Materials)","Dingemans, T.J. (promotor); Delft University of Technology (degree granting institution)","2019","The work presented in this thesis will open a pathway towards the design of crosslinkable hyperbranched poly(aryetherketone)s (HBPAEKs). In addition to addressing the chemistry, crosslinking characteristics and (thermo)mechanical properties we will demonstrate that crosslinked HBPAEKs are excellent candidates for membrane-based gas separation applications.","","en","doctoral thesis","","9789463234863","","","","","","2019-08-11","","","Novel Aerospace Materials","","",""
"uuid:cddcf5d0-5a9f-43f6-8903-f6d01d58b0fc","http://resolver.tudelft.nl/uuid:cddcf5d0-5a9f-43f6-8903-f6d01d58b0fc","Exploring haptics for subsea vehicles: Haptic feedback for rate controlled vehicles in subsea environments","Kuiper, R.J. (TU Delft Human-Robot Interaction)","Abbink, D.A. (promotor); van der Helm, F.C.T. (promotor); Delft University of Technology (degree granting institution)","2019","Remotely controlled subsea vehicles are frequently used for oil and gas applications. A potential future application requiring remote controlled vehicles is deep-sea mining. At the envisioned water depths beyond 1500m, rare minerals are accessible without deep excavation. However, the extreme hyperbaric conditions (i.e. high pressure), limited visibility and unpredictable soil properties pose immense challenges in controlling the excavation process. Such machines are expected to be operated manually by an operator using joysticks that manipulate the machine’s operational velocity (also known as ’rate control’). Large subsea vehicles are difficult to control due to their complexity and slow dynamic response. This thesis explores design choices for haptic feedback that can support the operator in controlling these machines. Offering haptic feedback (i.e. forces on the input device) can potentially improve task performance and operator awareness, by informing the operator of the naturally occurring (possibly scaled) interaction forces with the environment. Alternatively, artificial guidance forces based on a model or sensed environment can be used to guide or constrain the operator’s control actions. Force reflection in a rate controlled task poses a difficulty compared to a position controlled task, because the reflected forces are no longer directly related to the operator’s input position.
The goal of this thesis is to provide design guidelines for haptic feedback, by designing and evaluating several haptic feedback algorithms for a variety of remotely controlled subsea vehicles. First, the thesis presents an analysis of the general task environment of deep-sea mining, including a choice of the most likely machinery to be used in the envisioned operations. Second, the design of natural haptic feedback is explored for controlling a large, heavy backhoe dipper excavator operating in a shallow subsea environment. Third, the implementation of haptic guidance forces for rate controlled devices is studied, along with its effect on steering a deep-sea mining crawler.
1) General task environment of deep-sea mining, machines and minerals
Deep-sea mining applications require large, heavy machinery to excavate mineral-rich rock materials. Excavating rock at large water depths requires more energy than on land, due to hardening of the material under hyperbaric conditions (chapter 2). Two possible deep-sea mining approaches are compared: a large suspended grab with two clamshells, and track-driven drum cutters. The suspended grab is shown to reduce energy consumption, as the slow loading of the material reduces the hyperbaric hardening effect by allowing water to enter the affected deformed zone (chapter 2).
Using a grab is a promising excavation method for deep-sea mining due to the low loading rates and because only parts of the material are crushed, leaving most of it intact. Controlling such a machine while exerting large cutting forces onto the seabed is a challenging task. Offering haptic feedback to the operator, by means of natural force feedback combined with haptic shared control, potentially improves situational awareness and control effort (chapter 3). Further investigation into both types of support (i.e. natural and guidance feedback) is needed for these types of large subsea machines.
2) Exploring natural haptic feedback for vehicles with a slow dynamic response
Subsea vehicles are typically large and heavy, and therefore have a slow dynamic response. This requires predictive inputs from the operator for controlling the vehicle’s position. Natural haptic feedback increases the situational awareness of the operator, enabling better understanding of the state of the machine and anticipation of the required control inputs (chapter 4). It is shown that scaling of the reflected forces during position control does not affect the perception of the controlled vehicle’s response, nor the prediction of the required inputs.
Rate control offers an unlimited workspace, which is required for steering heavy machines over the seabed. Offering natural feedback in rate control is, however, not as obvious as it is for position control, where the measured forces can be reflected directly in relation to the position. Implementing stiffness feedback showed promising results for offering natural haptic feedback in rate control for operating a slow dynamic system, compared to force-based feedback and static feedback of a centering spring (chapter 5). This was tested on fundamental abstract subtasks: positioning in free space, a contact transition, and force-level tasks.
Controlling a backhoe dipper excavator on a pontoon for excavation in harbors or offshore shallow waters is a challenging task due to the machine’s complexity and slow dynamics. A high fidelity force reflecting joystick was developed to demonstrate the effect of implementing stiffness feedback for controlling an excavator, based on the measured hydraulic cylinder pressures, representing the environment interaction forces (chapter 6). A human factors case study showed that several operating effects can be clearly reflected by means of stiffness feedback, such as making contact with the seabed and cutting through sand layers.
3) Exploring haptic guidance feedback designs
Instead of informing the operator of what the machine is doing, haptic guidance feedback based on a model or a sensed environment can assist the operator in correct task execution. This thesis explores two guidance feedback designs: a repulsive force field around forbidden zones, and attractive forces towards a suggested path. The latter requires more sensed information from the environment, but showed the largest improvements for steering an abstract vehicle through a virtual maze (chapter 7).
Haptic shared control offers attractive guidance towards a suggested path, sharing control with the operator on the input device. For a deep-sea crawler maneuvering over the seabed, haptic guidance was compared to semi-automated control and manual control (chapter 8). This showed that sharing control is beneficial: it profits from automation during normal operating conditions, but also from manual control in unexpected events such as obstacle avoidance or slip conditions.
In conclusion, both natural haptic feedback and haptic guidance feedback were evaluated on abstract tasks as well as on real-life tasks simulated in virtual reality. Combining natural haptic feedback and guidance feedback is recommended for rate-controlled tasks, to inform the operator about interaction forces as well as to assist in task execution. The combined feedback can be offered to the operator by means of stiffness reflection together with guidance by haptic shared control, which shifts the neutral position of the stiffness.","Haptic feedback; Haptic shared control; Deep sea mining","en","doctoral thesis","","978-94-6323-524-2","","","","","","","","","Human-Robot Interaction","","",""
"uuid:225ba0ab-22d7-4d1e-a12c-85e0ed55c51d","http://resolver.tudelft.nl/uuid:225ba0ab-22d7-4d1e-a12c-85e0ed55c51d","Optical cavities, coherent emitters, and protocols for diamond-based quantum networks","van Dam, S.B. (TU Delft QID/Hanson Lab)","Hanson, R. (promotor); Delft University of Technology (degree granting institution)","2019","Quantum mechanics differs deeply from classical intuition and knowledge, sparking fundamental questions and radically new technology. Generating large entangled states between distant nodes of a quantum network will advance both domains. The nitrogen-vacancy (NV) centre in diamond is a promising building block for such a network.
However, extending quantum networks to more nodes and larger distances relies upon improving the entangling efficiency of these defect centres. In this thesis we present experimental and theoretical work focused on addressing this challenge through embedding NV centres in an optical cavity, taking care to preserve coherence of the NV optical transition. We further analyze protocols for efficient quantum communication over an NV-based quantum network.","","en","doctoral thesis","","978-90-8593-383-0","","","","Casimir PhD Series, Delft-Leiden 2018-52","","","","","QID/Hanson Lab","","",""
"uuid:48e17e42-5335-41b3-838a-0586523e5b78","http://resolver.tudelft.nl/uuid:48e17e42-5335-41b3-838a-0586523e5b78","Simulations of electrode & solid electrolyte materials","de Klerk, N.J.J. (TU Delft RST/Storage of Electrochemical Energy)","Wagemaker, M. (promotor); Brück, E.H. (promotor); Delft University of Technology (degree granting institution)","2019","Batteries have found widespread use, especially in mobile devices. With the inevitable energy transition the demand for batteries will rise further. Batteries for large-scale storage, transport applications and mobile devices all have different demands, requiring the development of a range of battery types for the storage of electrical energy. For these developments a better understanding of the fundamental aspects of materials and energy is necessary.","Batteries; Solid Electrolytes; Space-charge Layers; Molecular Dynamics Simulations; Phase-field Modelling","en","doctoral thesis","","","","","","","","","","","RST/Storage of Electrochemical Energy","","",""
"uuid:e3f848d9-9625-4f3c-a203-5e8e129022bf","http://resolver.tudelft.nl/uuid:e3f848d9-9625-4f3c-a203-5e8e129022bf","CO2 Capture by Metal-Organic Framework based Mixed Matrix Membranes (MMMs)","Sabetghadam, A. (TU Delft ChemE/Catalysis Engineering)","Kapteijn, F. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2019","Membrane separation is an energy efficient technology with a small physical footprint in which the membrane is the core of process. Membranes need to be further developed to be specifically applied in the field of gas separation. The most challenging target in designing membranes is to improve the permeation and selectivity, simultaneously. This goal cannot be achieved without acquiring the knowledge of material science to tune the membrane material properties. This PhD thesis focusses on designing mixed matrix membranes (MMMs) by using a new class of crystalline materials known as metal organic frameworks (MOFs) as filler. In combination with polymers as continuous phase it was expected to improve both the processability and separation performance of this composite material in comparison with the polymer only. This work has been performed in the framework of the FP7-EU project M4CO2 ('MOF-based Mixed Matrix Membranes for energy efficient CO2 capture', grant agreement n° 608490). Therefore the focus in this thesis was on, but not limited to, membranes for the separation of CO2 from N2, as a model for stack gases in coal combustion ('post-combustion separation'). To this aim, the overall concept of this thesis is divided into three parts in which the most relevant aspects of design in mixed matrix membranes are carefully studied. Part I (Chapter 2) elucidated the influence of MOF pore structure and topology on the MMMs separation performance. In part II (Chapter 3 and 4) the effect of MOF morphology and polymer free volume is studied. 
Finally, part III (Chapter 5) reports a study on free-standing and thin supported MOF nanosheet based membranes by using industrially viable methods. The summary of each Chapter in this thesis is presented as follows...","","en","doctoral thesis","","978-94-6384-007-1","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:95f37723-f452-4e05-95a3-e6693638d8d5","http://resolver.tudelft.nl/uuid:95f37723-f452-4e05-95a3-e6693638d8d5","PV Module Integrated Converter for Distributed MPPT PV Systems","Acanski, M. (TU Delft DC systems, Energy conversion & Storage)","Ferreira, Jan Abraham (promotor); Delft University of Technology (degree granting institution)","2019","Driven by constant advances and cost reductions in photovoltaic (PV) technology,
together with incentive government policies toward cleaner environment, the PV energy became one of the fastest growing market in the world. In many countries the amount of installed PV power is increasing at an exponential rate, in all sectors from large utility scale power plants to small residential PV systems.","","en","doctoral thesis","","978-94-6375-307-4","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:078e8cff-d9bb-417a-b80e-8fcac19c3b9a","http://resolver.tudelft.nl/uuid:078e8cff-d9bb-417a-b80e-8fcac19c3b9a","Navigation of guidewires and catheters during interventional procedures: A computer-based simulation","Sharei Amarghan, H. (TU Delft Medical Instruments & Bio-Inspired Technology)","Dankelman, J. (promotor); van den Dobbelsteen, J.J. (promotor); Delft University of Technology (degree granting institution)","2019","Endovascular interventions include a variety of techniques, chiefly involving guidewires and catheters, that give access to the vascular system through small incisions. It is imperative to reach the place of interest quickly and safely. By considering the fact that the composition of guidewires and catheters differ (e.g., in material, diameter, length, tip shape, stiffness, and coating), each one shows a different behavior based on its structure, and therefore the choice of instruments becomes challenging. Currently, the choice of the instruments in each procedure is often based solely on a specialist’s experience, which is not sufficient and does not always result in a successful procedure. Therefore, in this thesis, we focus on the performance of the guidewires and catheters with considering their structure.","","en","doctoral thesis","","","","","","","","2019-12-30","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:e52cc182-457c-4687-baee-d0f72af36950","http://resolver.tudelft.nl/uuid:e52cc182-457c-4687-baee-d0f72af36950","Graph-time signal processing: Filtering and sampling strategies","Isufi, E. (TU Delft Signal Processing Systems)","Leus, G.J.T. (promotor); Delft University of Technology (degree granting institution)","2019","The necessity to process signals living in non-Euclidean domains, such as signals defined on the top of a graph, has led to the extension of signal processing techniques to the graph setting. Among different approaches, graph signal processing distinguishes itself by providing a Fourier analysis of these signals. Analogously to the Fourier transform for time and image signals, the graph Fourier transform decomposes the graph signals decomposes in terms of the harmonics provided by the underlying topology. For instance, a graph signal characterized by a slow variation between adjacent nodes has a low frequency content.
Along with the graph Fourier transform, graph filters are the key tool to alter the graph frequency content of a graph signal. This thesis focuses on graph filters that are implemented distributively in the node domain, that is, each node needs to exchange information only with its neighbors to perform a given filtering operation. Similarly to classical filters, we propose ways to design and implement distributed finite impulse response and infinite impulse response graph filters.
One of the key contributions of this thesis is to bring the temporal dimension to graph signal processing and build a graph-time signal processing framework. This is done in different ways. First, we analyze the effects that temporal variations of the graph signal and graph topology have on the filtering output. Second, we introduce the notion of joint graph-time filtering. Third, we present a statistical analysis of distributed graph filtering when the graph signal and the graph topology change randomly in time. Finally, we extend the sampling framework from the reconstruction of graph signals to the observation and tracking of time-varying graph processes.
We characterize the behavior of distributed autoregressive moving average (ARMA) graph filters when the graph signal and the graph topology are time-varying. The latter analysis is exploited in two ways: (i) to quantify the limitations of graph filters in a dynamic environment, such as moving sensors processing a time-varying signal in a sensor network; and (ii) to provide ways to filter time-varying graph signals with low computation and communication complexity.
We develop the notion of distributed graph-time filtering, which is an operation that jointly processes the graph frequencies of a time-varying graph signal on the one hand and its temporal frequencies on the other hand. We propose distributed finite impulse response and infinite impulse response recursions to implement a two-dimensional graph-time filtering operation. Finally, we propose design strategies to find the filter coefficients that approximate a desired two-dimensional frequency response.
We extend the analysis of graph filters to a stochastic environment, i.e., when the graph topology and the graph signal change randomly over time. By characterizing the first- and second-order moments of the filter output, we quantify the impact of graph signal and graph topology randomness on the distributed filtering operation. The latter allows us to develop the notion of graph filtering in the mean, which is also used to ease the computational burden of classical graph filters.
Finally, we propose a sampling framework for time-varying graph signals. Particularly, when the graph signal changes over time following a state-space model, we extend graph signal sampling theory to the tasks of observing and tracking the time-varying graph signal from a few relevant nodes. The latter theory considers graph signal sampling as a particular case and shows that tools from sparse sensing and sensor selection can be used for sampling.
outer layer of the brain, and is responsible for decision making and functioning of the human body. Complementary, the white matter (WM) contains the communication pathways between the grey matter areas. Diffusion-weighted magnetic resonance imaging (dMRI) is an imaging modality that allows modelling of the brain’s white matter structures by making the MRI acquisition sensitive to diffusion processes. When brain structure is altered due to pathology, it can be assessed and quantified by analysing dMRI data...","diffusion MRI; stroke; brain; ADHD","en","doctoral thesis","","978-94-6384-009-5","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:265e1be8-6edc-4ee2-8b9c-8d01a6022448","http://resolver.tudelft.nl/uuid:265e1be8-6edc-4ee2-8b9c-8d01a6022448","Spin–orbit coupling and geometric phases at oxide interfaces","Groenendijk, D.J. (TU Delft QN/Caviglia Lab)","Caviglia, A. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2019","In this work, we investigate electronic and magnetic phenomena in thin films and heterostructures of transition metal oxides with strong spin–orbit coupling. Ultrathin films are prepared by pulsed laser deposition, a technique which enables layer-by-layer growth of complex materials on atomically flat crystal surfaces. The properties of these heterostructures, which include materials such as strontium iridate (SrIrO3) and strontium ruthenate (SrRuO3), are probed by applying electric and magnetic fields. By varying parameters such as temperature, magnetic field strength and layer thickness, we obtain information about spin and charge transport in these atomically engineered crystals. Chapter 1 provides an introduction to the field of transition metal oxides, followed by a brief overview of the materials studied in this dissertation. Chapters 2 to 5 are dedicated to SrIrO3, a material that displays unexpected physical properties owing to the strong spin–orbit coupling of Ir. Chapter 2 starts with the growth and thermodynamic stability of SrIrO3, which is essential to obtain high-quality films and study their properties in the ultrathin limit. We develop a method to grow stoichiometric films by measuring their transport characteristics as a function of the target condition. We discover that the properties of SrIrO3 are sensitive to degradation in air and develop an encapsulation procedure to protect the filmsurface. SrIrO3 displays an exotic semimetallic state due to the interplay between electronic correlations, spin–orbit coupling, and octahedral rotations. 
In Chapter 3, we combine thermoelectric and magnetotransport measurements to quantitatively determine the transport coefficients of the different conduction channels. Despite their different dispersion relationships, electrons and holes are found to have strikingly similar transport coefficients. Chapters 4 and 5 focus on the electronic and magnetic properties of SrIrO3 in the two-dimensional limit. In Chapter 4, we discover a metal–insulator transition occurring at a critical thickness of 4 unit cells and an enhancement of spin fluctuations near the transition point. We investigate the magnetic state in Chapter 5, showing that a fourfold symmetric magnetoresistance component appears above a critical magnetic field. In Chapter 6, we interface ultrathin SrIrO3 with SrRuO3, an itinerant ferromagnet with an unconventional anomalous Hall conductivity. We discover that the presence of two dissimilar interfaces results in the emergence of two spin-polarized conduction channels. Having explored the influence of epitaxial interfaces, in Chapter 7 we develop a method to detach thin films from their growth substrate using an epitaxial buffer layer. Using this approach, we prepare nanomechanical resonators of freestanding SrTiO3 and SrRuO3 films. By measuring the temperature dependence of their mechanical response, we observe signatures of structural phase transitions in the SrTiO3, which affect the strain and mechanical dissipation of the resonators. Chapter 8 summarizes the findings of the previous chapters and provides perspectives for future work. We discuss ongoing experiments regarding Berry phase engineering and the manipulation of freestanding films.","Complex oxide heterostructures & interfaces; strontium iridates & ruthenates; spin–orbit coupling; electronic correlations; low-temperature electronic transport; Berry phase; freestanding oxides","en","doctoral thesis","","978-90-8593-381-6","","","","","","","","","QN/Caviglia Lab","","",""
"uuid:db1925fd-8e1b-4414-bd04-39c256a09555","http://resolver.tudelft.nl/uuid:db1925fd-8e1b-4414-bd04-39c256a09555","Controlled perishable goods logistics: Real-time coordination for fresher products","Lin, X. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Negenborn, R.R. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis provides tools for decision support systems for perishable goods logistics. It also provides an approach for enterprises to estimate the benefit of adopting the proposed logistic systems. Perishable products are produced, transported, and consumed all over the world. Thanks to perishable goods supply chains, we can enjoy safe, fresh, and affordable products. Nevertheless, it is estimated that one third of the agricultural products produced globally for human consumption end up wasted, which amounts to 1.3 billion tonnes per year. This also means that one third of the resources and greenhouse gas emissions for producing and transporting these products are in vein. The wastage of these fresh products happens throughout the supply chains, which are often caused by the perishing nature and inefficiencies of supply chain planning. For instance, congestions at a certain location or over supply at a retailer may result in spoilage. Disruptions such as malfunctioning of cooling equipment can also contribute to wastage in supply chains. Recent technological developments provide new insights into supply chain management, allowing further waste reduction. With sensors and communication technologies, information of products such as location and freshness can be made known to supply chain planners in real-time. Thus the research question of this thesis is given real-time information of perishable goods logistics, in what ways can perishable goods supply chain players better control and coordinate logistic processes to reduce loss of perishable products? 
Traditional mathematical methods cannot capture enough details when describing perishable goods in their supply chains. Therefore, this thesis proposes a general framework, in a systems-and-control fashion, to describe and control logistic operations. This general framework consists of a quality-aware modeling method and a model predictive control strategy. The quality-aware modeling method considers the perishable goods in the supply chain as a system with quality and logistic features. The model predictive control strategy observes the system and steers it in a manner that minimizes wastage. In this thesis, the proposed method is used in case studies of supply chains with three different commodities, namely bananas, starch potatoes, and cut roses. Because each commodity is unique in its physiological nature, it perishes in a unique way, and each supply chain therefore takes care of its commodity in a way that may not suit other commodities. As a result, the logistic features of the supply chains also differ from each other. This requires the systems to be described differently, and the control architectures can also vary. Results from the case studies show that the general framework can improve the effectiveness of supply chain logistic planning and reduce wastage. The improvements are quantified to illustrate the benefit of making full use of the information on perishable goods in supply chains.","Logistics; perishable goods; quality-aware modeling; model predictive control","en","doctoral thesis","TRAIL Research School","978-90-5584-246-9","","","","TRAIL Thesis Series T2019/3, The Netherlands TRAIL Research School","","2019-01-23","","","Transport Engineering and Logistics","","",""
"uuid:9d22f175-0fc0-489e-83b7-30ffd9996b3b","http://resolver.tudelft.nl/uuid:9d22f175-0fc0-489e-83b7-30ffd9996b3b","Efficient computational methods in Magnetic Resonance Imaging: From optimal dielectric pad design to effective preconditioned imaging techniques","van Gemert, J.H.F. (TU Delft Microwave Sensing, Signals & Systems)","Remis, R.F. (promotor); Webb, A. (promotor); Delft University of Technology (degree granting institution)","2019","This dissertation describes how to design dielectric pads that can be used to increase image quality in Magnetic Resonance Imaging, and how to accelerate image reconstruction times using a preconditioner.
Image quality is limited by the signal-to-noise ratio of a scan. This ratio increases at higher static magnetic field strengths, and therefore there is great interest in high-field MRI. The wavelength of the transmitted magnetic RF field decreases at higher field strengths, and it becomes comparable to the dimensions of the human body. Consequently, RF interference patterns are encountered which can severely degrade image quality because of a low transmit efficiency or because of inhomogeneities in the field distribution. Dielectric pads can be used to improve this distribution, as the pads tailor the field by inducing a secondary magnetic field due to their high permittivity. Typically, the pads are placed tangential to the body and in the vicinity of the region of interest. The exact location, dimensions, and constitution of the pad need to be carefully determined, however, and depend on the application and the MR configuration. Normally, parametric design studies are carried out using electromagnetic field solvers to find a suitable pad, but this is a very time-consuming process which can last hours to days. In contrast with these design studies, we present methods to efficiently model and design the dielectric pads using reduced order modeling and optimization techniques. Subsequently, we have created a design tool to bridge the gap between the advanced design methods and the practical application by the MR community. Now, pads can be designed for any 7T neuroimaging and 3T body imaging application within minutes.
In the second part of the thesis a preconditioner is designed for parallel imaging (PI) and compressed sensing (CS) reconstructions. MRI acquisition times can be strongly reduced by using PI and CS techniques, acquiring less data than prescribed by the Nyquist criterion for fully reconstructing the anatomic image; this is beneficial for patient comfort and for minimizing the risk of patient movement. Although acquisition times are reduced, the reconstruction times increase significantly. The reconstruction times can be reduced when a preconditioner is used. In this thesis, we construct such a preconditioner for the frequently used iterative Split Bregman framework. We have tested its performance in a conjugate gradient framework, and show that for different coil configurations, undersampling patterns, and anatomies, a five-fold acceleration can be obtained for solving the linear system part of Split Bregman.","Maxwell Equations; Dielectric pad; High-permittivity pads; Reduced order modeling; MRI; Preconditioning; Reconstruction","en","doctoral thesis","","978-94-028-1334-0","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:1f879b34-73f1-42e1-96c0-92f85289e13e","http://resolver.tudelft.nl/uuid:1f879b34-73f1-42e1-96c0-92f85289e13e","Multi-functional LED Module Integration and Miniaturization for Solid State Lighting Applications","Liu, P. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); van Zeijl, H.W. (copromotor); Delft University of Technology (degree granting institution)","2019","Solid State Lighting (SSL) develops towards small size, high lumen output, high working temperature, and multi-functional applications. These trends are more desirable in miniaturized LED applications such as retrofit G4 LED devices. Retrofit G4 LEDs were chosen in this work as a technical carrier due to the miniaturized size challenge and high lumen requirements. The solutions for miniaturized retrofit G4 can also be extended to other applications of consumer lighting applications with similar requirements.","Solid State Lighting; Miniaturization; Wafer Level Integration","en","doctoral thesis","","978-94-6380-220-8","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:1856b9b2-fd89-4383-b28a-b35220fbeafa","http://resolver.tudelft.nl/uuid:1856b9b2-fd89-4383-b28a-b35220fbeafa","Crowds inside out: Understanding crowds from the perspective of individual crowd members' experiences","Li, J. (TU Delft Human Information Communication Design)","de Ridder, H. (promotor); Vermeeren, A.P.O.S. (copromotor); Delft University of Technology (degree granting institution)","2019","With the growth of global population, the big cities become increasingly crowded. It is not rare to see large crowds in public transportations and events with masses of visitors, such as music festivals and football matches. The question “How to deal with crowds” is receiving attention, both from academia and practical crowd management.
This thesis aims at contributing to a better understanding of crowds from the perspective of individual crowd members’ experiences, including their well-being, emotional experiences and action tendencies. In addition, we want to understand the emotional contagion effect between groups in crowds. To achieve this, we chose to go into the crowds, get in touch with the crowd members, and try to find out what factors sustain their well-being, how their emotional experiences can be measured in a playful and non-intrusive manner, what they tend to do when they have certain emotions, and how the grouping behavior reflects their experiences.","Crowd experiences; emotions in crowds; action tendencies in crowds; crowd management; self-report emotions","en","doctoral thesis","","9789065624314","","","","","","","","","Human Information Communication Design","","",""
"uuid:d95a29ff-8865-4484-864a-74fd75028292","http://resolver.tudelft.nl/uuid:d95a29ff-8865-4484-864a-74fd75028292","A theoretical basis for salinity intrusion in estuaries","Zhang, Z. (TU Delft Water Resources)","Savenije, Hubert (promotor); Wang, Zhengbing (promotor); Delft University of Technology (degree granting institution)","2019","Saltwater intrusion is a crucial issue in estuaries. The spread of salinity is described by the dispersion coefficient. A purely empirical equation which links the effective tidal average dispersion to the freshwater discharge was developed by Van der Burgh [1972]. Combining it with the salt balance equation, Savenije [1986] derived a one-dimensional model for salinity intrusion in estuaries. This Van der Burgh model has performed surprisingly well around the world. However, the physical basis of the empirical Van der Burgh coefficient (퐾) is still weak. This study provides a theoretical basis for the Van der Burgh method and presents alternative equations. MacCready [2004] presented a theoretical expression for the dispersion coefficient following a reductionist approach. Comparing the density-related parts of the equations of the dispersion coefficient developed by Savenije and MacCready, a predictive equation is obtained for the coefficient 퐾 using physical parameters. In addition, a new box-model has been developed considering the longitudinal densitydriven gravitational circulation and the lateral tide-driven horizontal circulation. The coefficient 퐾 (closely related to the Van der Burgh’s coefficient) is used as an index of the density-driven mixing mechanism while the tide-driven part is included by assuming that it is proportional to the longitudinal dispersion. This model is validated in sixteen alluvial estuaries worldwide by using calibrated 퐾 values (and the boundary conditions). 
These calibrated values correspond well with the predicted values from the theoretical derivation, revealing that K has smaller values when the tide is stronger. From a system perspective, alluvial estuaries are free to adjust dissipation processes to the energy sources that drive them. The potential energy of the river flow drives mixing by gravitational circulation. The maximum power concept assumes that the mixing takes place at the maximum power limit. To describe the complex mixing processes in estuaries holistically, different assumptions had to be made. The maximum power concept did not work satisfactorily when estuaries were assumed to be isolated systems. However, by including the accelerating moment provided by the freshwater discharge, the open estuary system could be solved in analogy with Kleidon [2016], applying the maximum power concept. A new expression for the dispersion coefficient due to gravitational circulation has been derived and solved in combination with the advection-dispersion equation. This maximum power model works well in eighteen estuaries with a large convergence length, providing an alternative equation for the dispersion. These estuaries also have larger calibrated K values by the Van der Burgh method, revealing a relation between the empirical coefficient K and the geometry. All these models: the Van der Burgh model, the box-model, and the maximum power model, can describe the longitudinal salinity profiles. The comparison between these models implies that the empirical Van der Burgh coefficient is associated with the geometry and stratification conditions. Finally, new predictive equations have been obtained by regression with physical-based parameters, which make the Van der Burgh salinity intrusion method predictive with a solid theoretical basis.","Alluvial estuary; salinity intrusion; empirical model; predictive equations; maximum power concept","en","doctoral thesis","","","","","","","","","","","Water Resources","","",""
"uuid:1fa73d67-e69c-4a2b-89a3-b56cac13c7e7","http://resolver.tudelft.nl/uuid:1fa73d67-e69c-4a2b-89a3-b56cac13c7e7","Green Bulk Terminals: a Strategic Level Approach to Solid Biomass Terminal Design","Dafnomilis, I. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Schott, D.L. (promotor); Junginger, Martin (promotor); Delft University of Technology (degree granting institution)","2019","This thesis deals with the design of solid biomass terminals from a strategic operational point of view. The design of terminals is here characterized as the total equipment selection, purchase, utilization and salvage within the terminal bounds.","","en","doctoral thesis","TRAIL Research School","978-90-5584-245-2","","","","TRAIL Thesis Series no. T2019/2, the Netherlands Research School","","","","","Transport Engineering and Logistics","","",""
"uuid:e62593a6-5118-4eb7-a638-d4091e6ce6eb","http://resolver.tudelft.nl/uuid:e62593a6-5118-4eb7-a638-d4091e6ce6eb","Microfabrication and microstructuring of hydrogel materials","Mytnyk, S. (TU Delft ChemE/Advanced Soft Matter)","van Esch, J.H. (promotor); Mendes, E. (promotor); Delft University of Technology (degree granting institution)","2019","Growing importance of hydrogels in various areas of human life has led to increasing need in controlling their properties, which is generally achieved by adjusting hydrogels shape and microstructure. Even though standard microfabrication and microstructuring techniques can be currently applied in hydrogel research, the variety of properties of hydrogel materials makes it difficult to employ any of these techniques universally. Furthermore, to produce hydrogel structures complex enough to mimic biological tissues, several structuring and microfabrication approaches on various length scales would need to be combined. The complexity and diversity of problems associated with such processes raises a whole set of multidisciplinary challenges. This doctoral dissertation explores novel approaches to structuring and fabrication of polymeric and supramolecular hydrogels by combining modern microfabrication techniques with molecular self-assembly and/or exploiting mutual incompatibility of certain hydrophilic polymers.","Hydrogel; Microfabrication; Microstructuring; Microfluidics; Aqueous two-phase systems","en","doctoral thesis","","978-94-6323-484-9","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:981edd2c-1674-4cba-8146-cf097b29c4f1","http://resolver.tudelft.nl/uuid:981edd2c-1674-4cba-8146-cf097b29c4f1","Railway wheel defect identification","Alemi, A. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Pang, Y. (copromotor); Delft University of Technology (degree granting institution)","2019","Wheels are critical components of trains, and their conditions should therefore be monitored. Wheel defects change the wheel-rail contact and cause high impact forces that are damaging for tracks and trains. Wheel defects can also cause unexpected failures that reduce the availability and reliability of the railway system. Several monitoring systems have been developed to detect and identify wheel defects. The Wheel Impact Load Detector (WILD) is commonly used to estimate the wheel condition by measuring the wheel-rail contact force. WILDs normally measure the contact force with multiple sensors at different locations to sample from different portions of the wheel circumference. The variation in the forces measured by the multiple sensors represents the condition of the wheel. Force ratio and dynamic force are two main indicators used for detecting defective wheels. Force ratio is the division of the peak force by the average force, and dynamic force is the subtraction of the average force from the peak force. Force ratio and dynamic force are influenced by axle load and train velocity. In addition, these criteria fail to identify the defect types. Furthermore, these methods are not useful for monitoring minor defects.","","en","doctoral thesis","","978-94-6384-010-1","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:0eab23c7-9ba4-4d27-91ee-58f9f140dd34","http://resolver.tudelft.nl/uuid:0eab23c7-9ba4-4d27-91ee-58f9f140dd34","Numerical and Experimental Investigation of Hygrothermal Aging in Laminated Composites","Rocha, I.B.C.M. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); van der Meer, F.P. (copromotor); Delft University of Technology (degree granting institution)","2019","Although it is a crucial step in the structural design of laminated composites, the prediction of their long-term mechanical performance remains a challenging task for which no comprehensive and reliable solution is currently available. Nevertheless, structures such as wind turbine blades, of which laminated composites constitute the main load bearing parts, must be designed to withstand 20 years of service while being subjected to a combination of fatigue loads and interaction with often extreme environmental conditions. In the end, a compromise is reached by compensating for the lack of knowledge on the complex material degradation and failure mechanisms spanning multiple spatial and time scales that determine mechanical performance by adopting higher safety factors. This in turn leads to heavier, less efficient and more expensive designs. A better understanding of these mechanisms through discerning experiments and the development of fast and accurate numerical prediction tools are therefore necessary.
This work focuses on the phenomenon of hygrothermal aging (a combination of high temperatures and moisture ingression) on unidirectional laminated composites. The complexities of the aging problem, a combination of physical and chemical degradation mechanisms that affect fibers, resin and interface differently, are investigated through a combination of experiments, microscopic observation techniques and state-of-the-art numerical modeling. The result is an efficient multiscale and multiphysics framework for the prediction of failure and hygrothermal degradation in composites.
First, an experimental campaign is conducted on unidirectional glass/epoxy composite samples and on pure epoxy specimens immersed in water at 50 °C and tested quasi-statically and in fatigue. By comparing results of unaged, partially saturated, saturated and redried samples, the contributions of reversible and irreversible hygrothermal aging mechanisms are measured. The results indicate a strong correlation of degradation with the water concentration field inside the specimens. Furthermore, significant differences in strength reduction between composites and pure resin specimens point to damage in the fiber-matrix interfaces.
In order to realistically model the diffusion process that drives degradation, an experimental/numerical study is conducted on the anisotropic diffusion behavior of laminated composites. Thin material slices extracted from a thick composite panel are immersed until saturation and the obtained anisotropic diffusivity parameters are numerically reproduced through a microscopic diffusion model with periodic concentration field. The existence of an interphase transition region around the fibers is confirmed through microscopic experiments and included in the model through a level set field.
Since both the diffusion process and the resultant material degradation are highly influenced by the microstructure of the material, a multiphysics and multiscale analysis approach becomes necessary. A numerical framework for modeling of the aging process is proposed combining a macroscopic Fickian diffusion analysis with a multiscale stress equilibrium analysis based on the FE2 method. Since the multiscale approach does not rely on any constitutive hypotheses at the macroscale, complex failure behavior combined with plasticization and differential swelling can be accurately captured.
In order to expand the framework to allow for modeling of cyclic loading and cyclic environmental exposure, a number of additional model ingredients are developed. Firstly, a new constitutive model for epoxy combining viscoelasticity, viscoplasticity and a damage formulation with rate-dependent fracture onset is presented. The model is calibrated through a series of quasi-static and fatigue experiments on pure resin specimens at multiple strain rates and both before and after hygrothermal aging. The calibrated model is able to accurately capture the observed strain rate dependency and stiffness and strength degradations after aging, as well as correctly capturing damage activation in low-cycle fatigue. Secondly, the significant computational cost associated with the use of a cyclic multiphysics/multiscale analysis with nested micromodels is alleviated through a number of acceleration techniques. Time homogenization is used to explicitly divide the loading into a nonlinear macrochronological part and a linear computationally inexpensive microchronological one. Furthermore, the size of the microscopic boundary value problem is reduced through a combination of Proper Orthogonal Decomposition (POD) and the Empirical Cubature Method (ECM), resulting in a hyper-reduced model. The resultant reduced and time homogenized micromodel allows for speed-ups higher than 1000, dramatically accelerating the solution of the problem.
The modified version of the framework is used to numerically reproduce the experimentally obtained interlaminar shear behavior of composite samples aged for different durations. Use of the multiphysics/multiscale approach allows for accurately describing the stress state in specimens with non-uniform water concentration fields. The viscoelastic/viscoplastic resin model is capable of capturing differences in stress response between the very slow conditioning phase and the much faster mechanical test. The model is completed by a cohesive-zone model for fiber-matrix interface debonding including friction, calibrated with a set of Single Fiber Fragmentation tests performed on dry and saturated samples.","Fiber-reinforced composites; Hygrothermal aging; Multiscale analysis; Multiphysics; Reduced-order modeling","en","doctoral thesis","","978-94-6323-483-2","","","","","","","","","Applied Mechanics","","",""
"uuid:eeb2da3c-83e4-4837-87fd-3e446d401736","http://resolver.tudelft.nl/uuid:eeb2da3c-83e4-4837-87fd-3e446d401736","Designing for Darkness: Urban Nighttime Lighting and Environmental Values","Stone, T.W. (TU Delft Design Aesthetics; TU Delft Ethics & Philosophy of Technology)","van den Hoven, M.J. (promotor); Vermaas, P.E. (copromotor); Delft University of Technology (degree granting institution)","2019","Artificial illumination has had profound and far-reaching impacts on the development, use, and perceptions of urban nights, and has brought with it many benefits. However, in recent years its adverse costs and effects – commonly referred to as light pollution – have emerged as a topic of concern. Nighttime lighting uses enormous amounts of energy, costs billions of dollars annually, can be detrimental to the health of humans and ecosystems, and cuts off access to a starry night sky. Addressing these impacts, and more fundamentally understanding the underlying values shaping contemporary discourse, is a complex and pressing challenge with moral, aesthetic, political, and technical dimensions. This dissertation takes up this challenge by offering a critical examination of the historical roots and normative presuppositions shaping the concept of light pollution. This critique leads to the proposal of an alternative normative framework: instead of focusing on reducing lighting, it argues for fostering darkness in urban nightscapes. A designing for darkness approach is developed on two interrelated levels. The first is conceptual, exploring the relationship between darkness, illumination, and environmental values. The second is practical, proposing first steps towards realizing darker nights via the responsible design of new and emerging technologies, namely LEDs and autonomous vehicles. 
Taken together, the chapters of this dissertation weave together a critical investigation and constructive contribution to a pressing urban challenge for the 21st century.","","en","doctoral thesis","4TU.Centre for Ethics and Technology","978-90-386-4679-4","","","","","","","","","Design Aesthetics","","",""
"uuid:9a580a83-22e5-461b-b9d4-5de9a2e6fe3f","http://resolver.tudelft.nl/uuid:9a580a83-22e5-461b-b9d4-5de9a2e6fe3f","Quantum transport at oxide interfaces","RinconVieiraLugarinhoMonteiro, A.M. (TU Delft QN/Caviglia Lab)","Caviglia, A. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2019","The realization of interfaces between different transition metal oxides has heralded a new era of materials and physics research. Notably, it enabled a uniquely diverse set of coexisting physical properties to be combined with an ever-increasing degree of experimental control. The primary focus of this thesis is the celebrated interface between the two wide band-gap insulators LaAlO3 and SrTiO3, which exhibits a variety of phenomena such as conductivity, superconductivity and spin–orbit coupling—all of which are gate-tunable, demonstrating the promise of this system for fundamental research and technological applications alike. We start by discussing the role of spin–orbit coupling in the magnetotransport properties of the system. Namely, we show how it can drive a giant in-plane magnetoresistance. On a more technically challenging perspective, we realize tunable Josephson junctions by means of lateral confinement and local side-gating. This technique, due to its simplicity, can be expanded to a broad group of interfacial systems. We then investigate LaAlO3/SrTiO3 interfaces along the (111) crystallographic direction, discovering that it condenses into a superconducting ground state and elucidating the important role played by electronic correlations. Finally, this thesis ends with a twist to the story. Moving away from epitaxial interfaces, we employ an innovative technique to obtain free-standing oxide films by etching a water-soluble sacrificial buffer layer. 
This exciting development paves the way to integrating complex oxides with van der Waals materials and engineering new phases in hybrid devices.","Complex oxide interfaces; magnetotransport; field-effect; superconductivity; electronic correlations; spin–orbit coupling; free-standing oxides","en","doctoral thesis","","978-90-8593-382-3","","","","","","","","","QN/Caviglia Lab","","",""
"uuid:12821467-3df9-40ab-a297-159497650b8c","http://resolver.tudelft.nl/uuid:12821467-3df9-40ab-a297-159497650b8c","Biomass Derived Binder: Development of the scientific basis for methodologies that enable the production of renewable sustainable cement based on ashes derived from the conversion of biomass residues as determined by qualitative mineralogical analysis","Carr, N.N. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Jonkers, H.M. (promotor); Delft University of Technology (degree granting institution)","2019","The aim of this project was the development of the scientific basis for methodologies that enable the production of renewable sustainable cement (i.e. BioCement) based on ashes derived from the conversion of biomass residues. Within this project, biomass ash and derived products were developed at the laboratory scale, and their functionality was tested with respect to sustainability and composition (relative to OPC). The ultimate goal was to prove the possibility of replacing traditional Portland cement with a renewable BioCement in typical cement-based products such as concrete. The environmental superiority of theoretical BioCement is based on assumed negligible CO2 emissions during production. Through a Life Cycle Analysis of the developed biomass ashes (and preferably fully functional BioCement), the environmental impact and the potential to replace Portland cement with theoretical BioCement were quantified. The investigations included three types of biomass (ash) utilization:","","en","doctoral thesis","","","","","","","","","","","Materials and Environment","","",""
"uuid:62b7c90b-2e4d-4965-a2dd-6dccbfc8a509","http://resolver.tudelft.nl/uuid:62b7c90b-2e4d-4965-a2dd-6dccbfc8a509","Engineering of metabolism and membrane transport in Saccharomyces cerevisiae for improved industrial performance","Bracher, J.M. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); van Maris, A.J.A. (promotor); Delft University of Technology (degree granting institution)","2019","Nearly 200 parties signed and committed to the Paris agreement in 2017, which comprises the long-term goal to keep the average global temperature increase well below 2 degrees above pre-industrial levels. Virtually all possible scenarios drafted to reach this goal include a strongly increased use of biofuels for transport by land, sea and air. Bioethanol, whose production could, in principle and in contrast to fossil fuel production, involve a closed carbon cycle, is generated by microbial fermentation of sugars from plant-derived starch or agricultural waste. This liquid transport fuel provides a readily implementable alternative to fossil fuels as it combines the advantages of sustainable fuel production and compatibility with existing combustion engine technologies, without a requirement for time-consuming and expensive changes in our current infrastructure. To date, bioethanol is the largest volume product of industrial biotechnology. 99% of this ethanol is generated via 1st generation processes, largely derived by fermentation of hydrolysed sugar cane or corn starch by bakers’ yeast (Saccharomyces cerevisiae). So-called ‘2nd generation’ bioethanol, for which the first commercial-scale plants are now starting up, is made by fermentation of sugars present in lignocellulosic biomass, typically harvested from agricultural waste streams, such as wheat straw or sugar beet pulp. Whilst such feedstocks enable a “food and fuel” scenario, their industrial implementation brings along additional challenges for yeasts and biotechnologists. 
Hydrolysis of lignocellulosic biomass, in particular the cellulose and hemicellulose fractions, releases a mixture of different sugars as well as inhibiting compounds that impair growth and viability of S. cerevisiae. Whilst glucose is the most abundant fermentable carbon source, the pentose sugar d-xylose can cover up to 30% of the total sugar content. The fraction of the pentose l-arabinose typically varies between 2 – 20%, depending on the feedstock used. Although pentose sugars cannot be fermented to ethanol by wild-type S. cerevisiae strains, international research efforts over the past two decades yielded metabolic engineering strategies to enable anaerobic conversion of d-xylose and l-arabinose to ethanol by S. cerevisiae. The expression of heterologous, d-xylose- and l-arabinose-isomerase based pathways from fungi or bacteria, together with over-expression of genes of the non-oxidative pentose phosphate pathway (PPP) and deletion of the unspecific aldose-reductase gene GRE3 within S. cerevisiae allows this yeast to aerobically metabolize both sugars. The recent advances in metabolic engineering tools, such as CRISPR-Cas9-assisted genome editing, greatly advanced the construction and characterization of metabolically engineered S. cerevisiae strains with improved yields, kinetics and robustness in 2nd generation ethanol production processes.","","en","doctoral thesis","","978-94-6380-178-2","","","","Ebook version: https://www.globalacademicpress.com/ebooks/jasmine_bracher","","","","","BT/Industriele Microbiologie","","",""
"uuid:419e4678-cb27-4c03-9725-7fb5b0fd3a12","http://resolver.tudelft.nl/uuid:419e4678-cb27-4c03-9725-7fb5b0fd3a12","Digitalization of posture-based Seat Design: Developing car interiors by involving user demands and activities","Kilincsoy, U. (TU Delft Applied Ergonomics and Design)","Vink, P. (promotor); Bubb, H (promotor); Delft University of Technology (degree granting institution)","2019","The conventional development of a new automobile starts with a first proportional model. In this model, the exterior geometry of a car can be distinguished into vehicle, power train portfolio, market requests, safety requirements, and design target. The interior design results from the proportional model with specific characteristics, such as spaciousness, control and display concept, and ergonomic requirements. The automobile emerged from the sole purpose of transportation with driver orientation into a vacation or commuter experience of all users by a broad spectrum of comfort- and infotainment features. This is sustained by emerging mobility concepts like autonomous car concepts. In order to consider this change in mobility concepts, consumer habits and mobility behavior of users, the interior design becomes more important, which creates the frame for this PhD project. An essential part of the interior is the seat. This PhD thesis focuses on designing car interiors from the inside to the outside by user involvement.","","en","doctoral thesis","","978-94-6384-006-4","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:a992351f-c72e-4b7f-9162-f625eed0dcdd","http://resolver.tudelft.nl/uuid:a992351f-c72e-4b7f-9162-f625eed0dcdd","Radar remote sensing of wind vector and turbulence intensity fields from raindrop backscattering","Oude Nijhuis, A.C.P. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Russchenberg, H.W.J. (promotor); Krasnov, O.A. (copromotor); Delft University of Technology (degree granting institution)","2019","Scanning radars are promising sensors for atmospheric remote sensing, giving potential to retrieve parameters that characterize the local air dynamics during rain. For the observation of air motion, radars rely on the backscatter of particles, which can, for example, be raindrops or insects. To measure wind vectors and turbulence intensities remotely during rain, the radar is a common choice. This is mainly because radar signals are not attenuated too much by the rain itself, as is the case for instruments operating at other frequencies, such as lidars. There is, however, a problem with measuring air dynamics from raindrops. Raindrops are not perfect tracers of the air motion. It may thus be necessary to make some corrections when air-dynamics parameters are estimated with a radar during rain, and account for the fact that raindrops are imperfect tracers of the air motion. This dissertation focuses on this problem. In addition, existing radar-based wind vector and turbulence intensity retrieval techniques are assessed for application during rain, and they have been further developed.","Radar; remote sensing; turbulence; wind vectors; rain; inertia effect","en","doctoral thesis","","978-94-6384-004-0","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:c946f596-54fd-41b1-b43b-63b8f9457ef8","http://resolver.tudelft.nl/uuid:c946f596-54fd-41b1-b43b-63b8f9457ef8","Optomechanics in a 3D microwave cavity","Cohen, M.A. (TU Delft QN/Steele Lab)","Steele, G.A. (promotor); Blanter, Y.M. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis explores certain technologies that are related to the field of cavity optomechanics; specifically, optomechanics where the cavity is a 3D microwave cavity.","cavity optomechanics; silicon nitride resonators; superconducting circuits","en","doctoral thesis","","978-94-028-1349-4","","","","Casimir PhD Series, Delft-Leiden 2018-18","","","","","QN/Steele Lab","","",""
"uuid:57be7165-2726-4a1a-b076-c5ed3988e00b","http://resolver.tudelft.nl/uuid:57be7165-2726-4a1a-b076-c5ed3988e00b","Conceptualizing inter-household energy exchanges: An anthropology-through-design approach","Singh, A. (TU Delft Design Conceptualization and Communication)","Keyson, D.V. (promotor); van Dijk, H.W. (promotor); Romero Herrera, N.A. (promotor); Delft University of Technology (degree granting institution)","2019","With the growth of decentralized, off-grid, and distributed renewable energy systems across the globe, an arena for energy exchanges between households is opening up. As compared to traditional ‘centralized’ energy supply systems, in these emerging energy systems households are imagined to acquire agency by having choice and control over inter-household energy exchanges within neighborhoods or villages. The existing literature on such scenarios of energy exchanges is mostly rooted in a techno-economic analysis built upon visions of rational choice approaches and lacks discussion on the sociocultural dimensions of energy exchanges. This research utilizes theoretical perspectives from economic anthropology to study the phenomenon of inter-household energy exchange. The methodological approach followed takes inspiration from discourses of design anthropology, research through design, and ethnography. This approach is instantiated in the form of a longitudinal multi-method study conducted at two off-grid villages in rural India. This interdisciplinary research makes a knowledge contribution to the fields of energy studies and design anthropology. This dissertation develops conceptual knowledge of inter-household energy exchanges by investigating the social and cultural embeddedness of energy exchanges in a system where householders can decide with whom to exchange locally produced energy. 
Overall, the dissertation showcases that when people get to structure energy exchanges, they do so by employing a range of social, cultural, moral and economic notions, and demonstrates that there is more to energy exchanges than what the dominant rational choice perspective describes. This work proposes a novel approach called Anthropology-through-Design (AtD), which facilitates generating anthropological knowledge about a sociocultural phenomenon through a design intervention. The AtD approach takes a strategic step in relocating 'design' from being an object of anthropology to becoming an instrument for doing anthropology.","","en","doctoral thesis","","978-94-6384-008-8","","","","","","2019-07-31","","","Design Conceptualization and Communication","","",""
"uuid:48572080-bc51-4ffe-9ba5-676ee9ab5fcc","http://resolver.tudelft.nl/uuid:48572080-bc51-4ffe-9ba5-676ee9ab5fcc","Towards closed-loop dynamical wind farm control: model development and control applications","Boersma, S. (TU Delft Team Jan-Willem van Wingerden)","van Wingerden, J.W. (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2019","Electricity consumption is increasing on a global level. In 2017, non-renewable energy sources such as crude oil, natural gas and coal provided 76% of the required energy, while 24% came from renewable sources such as hydro, wind and solar. Non-renewable sources are finite because they do not replenish rapidly enough relative to the rate at which they are being used, and harvesting these resources is environmentally costly. Since renewable energy sources replenish naturally in a relatively short period of time and are cleaner, they are suitable for the sustainable production of electricity...","Control-oriented wind farm modelling; closed-loop secondary frequency control; model predictive wind farm control","en","doctoral thesis","","","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:643ddf12-97d3-48a1-9742-b4dd22f16164","http://resolver.tudelft.nl/uuid:643ddf12-97d3-48a1-9742-b4dd22f16164","Fast Aeroelastic Analysis and Optimisation of Large Mixed Materials Wind Turbine Blades","Hegberg, T. (TU Delft Aerospace Structures & Computational Mechanics)","van Bussel, G.J.W. (promotor); De Breuker, R. (promotor); Delft University of Technology (degree granting institution)","2019","In this dissertation, structural wind turbine blade layouts are presented that are suitable for 10MW and 20MW wind turbine blades. This has been accomplished by using a medium fidelity static aeroelastic model embedded in an optimisation framework. The structural solutions are the result of a stiffness optimisation where the blade mass is minimised. To accomplish the structurally optimised blade, an aeroelastic analysis model is set up. This model consists of a nonlinear structural analysis module and an aerodynamic module. Both models are comparable in terms of the level of physical modelling and as such, it can be said that both models are of equal fidelity. This equal fidelity is favourable for the aeroelastic coupling between both models, which generates an aeroelastic solution that is accurate up to the level of physics present in the aerodynamic and the structural models...","Wind Energy; Aeroelasticity; Optimisation; Structural design","en","doctoral thesis","","978-94-6323-473-3","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:88fbd0ea-7998-4d6f-a6ae-ac65114e6d95","http://resolver.tudelft.nl/uuid:88fbd0ea-7998-4d6f-a6ae-ac65114e6d95","Information integration and intelligent control of port logistics system","Feng, F. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Pang, Y. (copromotor); Delft University of Technology (degree granting institution)","2019","Port logistics (PL) can be defined as the process of planning, implementing and controlling the flow of goods and information between the sea and inland via ports and the other way around. PL systems concern the development of functions to support activities including sea side and land side transportation, cargo storage, order processing, and distribution. Increasing demand and a highly competitive market have forced PL systems to continuously improve their performance, including their operational efficiency and reliability. A key issue is the improvement of decision-making abilities. Decision-making systems play an important role within PL systems, especially as they consider the ways that different processes, operations, and equipment can be controlled and coordinated. With the support of ICT technologies, decision-making systems have developed significantly. However, several decision-making processes lack sufficient ICT support. As a result, the benefits of integrating new ICT supports are unknown, including their benefits for inland vessel coordination and equipment reliability assessments. The goal of this thesis is to develop an ICT framework to support the decision-making processes and ultimately improve the performance of PL systems. To do so, a hierarchical ICT framework is designed, which consists of two major components: a middleware and an intelligent decision-making approach. With regard to selecting the middleware, an agent system is chosen. 
Likewise, for the intelligent decision-making approach, a meta-heuristics approach is chosen to aid collaborative planning, whereas a context-aware system is chosen for the reliability assessment. To further integrate the selected ICT technologies, a hierarchical framework is designed, which contains three layers: an agent model layer, an agent control layer, and an agent management layer. At the agent model layer, the problems are decomposed and modelled as agents. At the agent control layer, a coordinate agent is integrated with the intelligent decision-making approach to establish control and coordination. Finally, at the agent management layer, the agent communication facility is established...","","en","doctoral thesis","TRAIL Research School","978-90-5584-244-5","","","","TRAIL Thesis Series no. T2019/1, The Netherlands TRAIL Research School","","","","","Transport Engineering and Logistics","","",""
"uuid:2b9d9501-3c1a-44e2-a14e-86578f62c5b4","http://resolver.tudelft.nl/uuid:2b9d9501-3c1a-44e2-a14e-86578f62c5b4","Methods for simulation, planning, and operation of Aquifer Thermal Energy Storage under deep uncertainty","Jaxa-Rozen, M. (TU Delft Policy Analysis)","Herder, P.M. (promotor); Kwakkel, J.H. (copromotor); Delft University of Technology (degree granting institution)","2019","The building sector currently accounts for approximately one-third of the global demand for energy, and one-fifth of all energy-related greenhouse gas emissions (GHG). The development and adoption of energy-efficient technologies in this sector is therefore a key element towards efforts for the mitigation of climate change. In particular, heating is the single largest end use of energy in buildings; basic trends towards urbanization, as well as climate change, are also expected to significantly increase the demand of energy for cooling by the middle of the century. Energy technologies which can address both of these aspects are thus particularly promising. In this context, Aquifer Thermal Energy Storage (ATES) is an increasingly popular shallow geothermal energy technology. This method uses natural aquifer formations to seasonally store energy for heating and cooling, using “warm” and “cold” storage wells combined with a heat pump. This approach can reduce energy demand by more than half in larger buildings. ATES is used in nearly one-tenth of new commercial and utility buildings in the Netherlands, where suitable aquifers – combined with increasing demand for energy-efficient technologies – make the technology especially competitive. However, this growth has already...","Aquifer Thermal Energy Storage; Geothermal energy; Smart energy systems; Social-ecological systems","en","doctoral thesis","","978-94-6366-124-9","","","","","","","","","Policy Analysis","","",""
"uuid:3ef5d9cb-915d-4152-a0b4-cedb69eaa62c","http://resolver.tudelft.nl/uuid:3ef5d9cb-915d-4152-a0b4-cedb69eaa62c","Charge and Energy Transfer in Multichromophoric Arrays","Inan, D. (TU Delft ChemE/Opto-electronic Materials)","Grozema, F.C. (promotor); Jager, W.F. (copromotor); Delft University of Technology (degree granting institution)","2019","In nature, solar energy conversion occurs with a process called photosynthesis. Although the overall energy efficiency of the storage of the energy of sunlight into biomass is low, the individual photophysical processes that occur have a high quantum yield. In the latter sense, natural photosynthesis can serve as a valuable inspiration for the design of artificial light harvesting systems with the final goal of achieving efficient energy conversion. In this thesis, different approaches are explored to construct new artificial light harvesting systems with a special focus on the systematic study of the individual photophysical processes that take place...","","en","doctoral thesis","","978-94-6375-184-1","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:c1040427-364d-485b-bebd-f23a89e217aa","http://resolver.tudelft.nl/uuid:c1040427-364d-485b-bebd-f23a89e217aa","Estimating Surface Heat Fluxes Using Temperature and Wetness Information: A Particle Data Assimilation Framework","Lu, Y. (TU Delft Water Resources)","Steele-Dunne, S.C. (promotor); van de Giesen, N.C. (promotor); Delft University of Technology (degree granting institution)","2019","Surface heat fluxes (latent and sensible heat over the land surface) play a key role in the land-atmosphere interaction, and their spatial pattern as well as temporal evolution are vital to the terrestrial water cycle and surface energy balance. Ideally, we want to have accurate estimates of spatially distributed and temporally continuous fluxes. However, this cannot be achieved through interpolation of point measurements because of the limited number of flux stations and the high heterogeneity of fluxes, nor can this be done using large scale monitoring platforms such as remote sensing, since fluxes lack a unique signature that can be detected by satellites. Given the fact that surface heat fluxes are closely related to the thermal and wetness condition of the land surface, which are available from remote sensing instruments, this PhD research proposes a methodology to improve flux estimates by assimilating land surface temperature (LST) and soil wetness information into a coupled water and heat transfer model. The goal is to acquire accurate flux estimates over a large area using a simple model and a small suite of input data...","Surface Heat Fluxes; Soil Moisture; Land Surface Temperature; Brightness Temperature; Data Assimilation","en","doctoral thesis","","978-94-028-1341-8","","","","","","","","","Water Resources","","",""
"uuid:ecd8e101-3164-4227-b47b-13a04bc4b8fb","http://resolver.tudelft.nl/uuid:ecd8e101-3164-4227-b47b-13a04bc4b8fb","Solid state phase transformations in steels: a neutron and synchrotron radiation study","Fang, H. (TU Delft Novel Aerospace Materials)","van Dijk, N.H. (promotor); van der Zwaag, S. (promotor); Brück, E.H. (promotor); Delft University of Technology (degree granting institution)","2019","Solid-state phase transformations in steels cover a broad range of aspects. The underlying physics behind these phase transformations usually include nucleation, diffusion, lattice reconstruction and interactions between solutes and grain boundaries and interfaces. These features and the fact that the events take place at high temperatures, in the bulk, at time scales ranging from milliseconds to hours and at length scales ranging from atomic dimensions to millimeters make even the most widely studied phase transformation in steel, the austenite-ferrite phase transformation, only approximately understood and therefore an attractive topic for investigation. Quantitative data obtained by sophisticated physical characterization techniques in combination with supporting physical microstructural models addressing the relevant length and time scale are required to bring the field of ferrous physical metallurgy further.
This thesis focusses on two new approaches to orchestrate phase transformations in steels such that more physical insight is obtained or that new properties can be reached: (1) cyclic partial austenite-ferrite phase transformations that are designed to unravel the grain growth, and more specifically the interface mobility, by avoiding concurrent nucleation of new phases. This topic is studied by computational means and by 3D neutron depolarization studies that are capable of monitoring the ferrite grain size and fraction in situ. (2) Self healing of creep damage by site selective precipitation of supersaturated iron-based alloys. A strong preference for precipitation at free creep cavity surfaces compared to that in the bulk can result in a filling of creep cavities and a significant extension of the creep lifetime. To make this self-healing mechanism applicable for creep-resistant steels, a search for an alternative healing agent for Au in Fe is to be executed and new design recipes need to be extracted on the basis of the experimental input from advanced characterization techniques such as electron microscopy and X-ray nanotomography.","Steels; Phase Transformations; Neutron scattering; Synchrotron radiation; Tomography; Self healing","en","doctoral thesis","","978-94-028-1285-5","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:f72f61f4-4508-4552-85b8-d89abbbee90e","http://resolver.tudelft.nl/uuid:f72f61f4-4508-4552-85b8-d89abbbee90e","Nano-scale failure in steel: Interface decohesion at iron/precipitate interfaces","Elzas, A. (TU Delft (OLD) MSE-7)","Thijsse, B.J. (promotor); Delft University of Technology (degree granting institution)","2019","Multiphase alloys such as advanced high strength steels show limited ductility due to interface decohesion at internal boundaries. This interface decohesion is caused by dislocations that pile up at interfaces in the material, where they cause a stress concentration. This stress concentration in turn can lead to interface decohesion, resulting in the formation of voids, which, when they coalesce, can form a macroscopic crack. In order to understand the process of interface decohesion and the factors facilitating this, in this thesis interface decohesion at interfaces between the soft iron matrix of steel and hard precipitates is studied at the nano-scale with molecular dynamics simulations. From the nano-scale simulations cohesive laws are derived that relate the tractions at the interface to the separations at the interface. These cohesive laws can be used to describe interface decohesion in material models at the next larger length scale (micro-scale), such as discrete dislocation plasticity.","dislocations; Iron/precipitate interface; Molecular dynamics; Cohesive law; mixed mode loading","en","doctoral thesis","","978-94-6186-998-2","","","","","","","","","(OLD) MSE-7","","",""
"uuid:84fa3be6-32e8-48f8-a732-deb4445b1b23","http://resolver.tudelft.nl/uuid:84fa3be6-32e8-48f8-a732-deb4445b1b23","A single-chip micro-opto-electro-mechanical system for optical coherence tomography imaging","Jovic, A. (TU Delft EKL Processing)","Sarro, Pasqualina M (promotor); Delft University of Technology (degree granting institution)","2019","","MOEMS; system integration; OCT imaging; electrothermal actuators; Al-SiOx bimorph beams; Si microlenses; Si photonics","en","doctoral thesis","","978-94-028-1350-0","","","","","","2020-07-10","","","EKL Processing","","",""
"uuid:bedfb838-ebcb-4a68-a4b4-4d0c9a300beb","http://resolver.tudelft.nl/uuid:bedfb838-ebcb-4a68-a4b4-4d0c9a300beb","Re-use of Building Products in the Netherlands: The development of a metabolism based assessment approach","Icibaci, L.M. (TU Delft Design for Sustainability; TU Delft Spatial Planning and Strategy)","Brezet, J.C. (promotor); van Timmeren, A. (promotor); Delft University of Technology (degree granting institution)","2019","This research departs from the desire to understand the practice of reuse of building products from a systemic standpoint, representing a network of multiple factors influencing the process of reusing. Through the Industrial Ecology theoretical framework, these relations are dynamic and contextually bounded, defining the commercial feasibility of products. This holistic approach generates an overview of how dynamics in the building stock (housing stock as supply of potential reusable products) and socioeconomic and technological factors influence what is harvested for reuse in practice. The representation of these dynamic relations composes a conceptual model of the metabolism of building product reuse in the Netherlands. This “map” offers a way to improve the visualization and the understanding of how trajectories of flows of products are reused as well as the motivations, conditions and limitations behind them.","Industrial Ecology; Metabolism; Building Product reuse; Waste prevention; Supply chain; Housing stock; Circular Economy","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-119-5","","","","A+BE | Architecture and the Built Environment No 2 (2019)","","","","","Design for Sustainability","","",""
"uuid:6562195d-ae4c-4150-bfdd-c5282c21954b","http://resolver.tudelft.nl/uuid:6562195d-ae4c-4150-bfdd-c5282c21954b","Computational analysis of copy number profiles of tumors","Van Dyk, H.O. (TU Delft Pattern Recognition and Bioinformatics)","Wessels, L.F.A. (promotor); Reinders, M.J.T. (promotor); Delft University of Technology (degree granting institution)","2019","Cancer is a genetic disease. The activation, alteration or deactivation of cancer genes can stimulate undesirable cell proliferation. Cancer genes can be subdivided into oncogenes and tumor suppressors. Oncogenes, such as growth factor receptors, are altered and/or overexpressed genes that are causally linked to tumorigenesis. Tumor suppressors, by contrast, are typically underexpressed or deleted in tumors since they would otherwise serve a protective role.
There are two main genetic mechanisms that can activate or deactivate cancer genes: mutations and DNA copy number alterations. In this work, we focus on detecting novel cancer genes using somatic DNA copy number data. The philosophy is simple: if independently acquired somatic amplifications or deletions occur frequently across multiple tumor samples, they are likely to harbor oncogenes or tumor suppressors, respectively. With a single tumor DNA copy number profile, it is not possible to know which copy number alterations activate or deactivate cancer genes, since many of the alterations (referred to as passenger aberrations) occur due to genomic instability and do not necessarily provide a selective advantage for cancerous cells. However, when aggregating across many samples, we expect cancer genes to be amplified or deleted more frequently than by chance, which allows us to detect them.
This application can be regarded as a peak calling problem. We aggregate (sum) copy number profiles across many tumors and call peaks that are significantly high. To do this we define a null model that describes the behavior of an aggregate copy number profile that would arise if only passenger aberrations occurred. The null aggregate profile (also called the noise profile) exhibits high autocorrelation across the genome due to the segmented nature of copy number profiles.
We therefore developed a statistical framework for calling peaks (at varying widths) where the noise profile can exhibit strong autocorrelation. The framework allows us to detect such peaks with high statistical power while controlling the false discovery rate of detected peaks. We employ two concepts. First, we take advantage of the fact that broad peaks can be detected with much higher statistical power when smoothing the profile, and we developed techniques for adaptive smoothing. Second, we use a powerful statistic called the expected Euler characteristic that is insensitive to platform resolution, directly compatible with our smoothing methodology, and can be used directly to estimate the expected number of false positive peaks called.
This framework does not rely directly on the inherent properties of DNA copy number profiles and can therefore be applied in many more applications with suitably defined null models. Although the mathematics we develop in this framework might be taxing at times, we observe that the equations that result, and that are ultimately used in our peak calling algorithms, are simple, and their validity can easily be verified by simulating data and comparing our theoretical expectations with measured observations.","copy number profile; segmentation; recurrent aberrations; recurrent copy number breaks; oncogene; tumor suppressor; driver gene; scale space; Euler characteristic","en","doctoral thesis","","978-94-6384-003-3","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:6002c1a0-b19f-4a2b-b42d-bffcd914dd6b","http://resolver.tudelft.nl/uuid:6002c1a0-b19f-4a2b-b42d-bffcd914dd6b","Improving the Availability of Wind Turbine Generator Systems","Shipurkar, U. (TU Delft Transport Engineering and Logistics; TU Delft DC systems, Energy conversion & Storage)","Ferreira, Jan Abraham (promotor); Polinder, H. (promotor); Delft University of Technology (degree granting institution)","2019","Wind energy is becoming an important contributor to the world’s energy needs. An important trend in wind turbine design is the focus on reliability and increased availability of wind turbines, which is the aim of this study. It focusses its attention on the generator and power electronic converter due to the susceptibility of the generator system to failure. The first step is identifying the problem. This is achieved by a review of existing studies of failure rates and failure mechanisms to identify critical failures, their probabilities and their failure mechanisms. This is followed by the identification of approaches that can be used to increase the availability of wind turbine generator systems, focussing on component reliability, active control, and fault tolerance. It identifies three aspects that are analysed in detail...","","en","doctoral thesis","","978-94-6384-001-9","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:ad81b0ee-76be-4054-a7e8-bd2eeecdb156","http://resolver.tudelft.nl/uuid:ad81b0ee-76be-4054-a7e8-bd2eeecdb156","Autonomous control for adaptive ships: with hybrid propulsion and power generation","Geertsma, R.D. (TU Delft Ship Design, Production and Operations)","Negenborn, R.R. (promotor); Hopman, J.J. (promotor); Delft University of Technology (degree granting institution)","2019","Shipping plays a crucial role in modern society, but has to reduce its impact on the environment. The commercial availability of power electronic converters and lithium-ion batteries provides an opportunity to improve the performance of ships' energy systems while reducing their environmental impact. However, the degrees of freedom in control for hybrid propulsion and power generation require advanced control strategies to autonomously achieve the best trade-off between fuel consumption, emissions, radiated noise, propulsion availability, manoeuvrability and maintainability.
This PhD thesis proposes dynamic simulation models, benchmark manoeuvres and measures of performance (MOP) to quantify energy system performance. These simulation models and MOPs are used to quantify the improvements with three novel control strategies: adaptive pitch control, parallel adaptive pitch control and energy management for hybrid propulsion and power generation. Finally, a layered control strategy is proposed that can autonomously adapt to changing ship functions, using the proposed control strategies. The proposed energy systems and control strategies can thus significantly reduce the impact of shipping on the environment, while more autonomously achieving its increasingly diverse missions at sea.","ship propulsion; Non-linear control systems; marine systems; Modelling and simulation; validation; Power systems","en","doctoral thesis","","978-90-829766-0-1","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:308b0955-635a-4342-ac36-23e9fead164b","http://resolver.tudelft.nl/uuid:308b0955-635a-4342-ac36-23e9fead164b","Eye movements in manual driving","van Leeuwen, P.M. (TU Delft Biomechatronics & Human-Machine Control)","van der Helm, F.C.T. (promotor); de Winter, J.C.F. (promotor); Happee, R. (promotor); Delft University of Technology (degree granting institution)","2019","Driving simulators provide researchers with a flexible, controllable, safe, and economical tool for a range of applications. A pivotal aspect in the application of driving simulators is the development of measures aimed at describing the behavior and performance of the driver and providing knowledge about the way drivers are controlling their vehicle, which ultimately will benefit road safety. The driver performance is traditionally described by measures of (simulated) vehicle data and measures of subjective evaluations. This thesis provides additional measures of visual attention and driver physiology aimed at describing the driver behavior. Frequently, these measures are analyzed in isolation of the driver and vehicle performance. This thesis aims to derive relationships between concurrently recorded eye-movement and driver behavior variables in closed-loop driving tasks.","","en","doctoral thesis","","978-94-028-1338-8","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:e90ee9dc-0b00-4908-be32-c6a3efb425e2","http://resolver.tudelft.nl/uuid:e90ee9dc-0b00-4908-be32-c6a3efb425e2","Positron Annihilation Studies on Thin Film Solar Cells: CdSe and PbSe Quantum Dot Thin Films and Cu(In1-xGax)Se2 Layered Systems","Shi, W. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Brück, E.H. (promotor); Eijt, S.W.H. (copromotor); Delft University of Technology (degree granting institution)","2019","High efficiency, low cost, and long stability are three key factors for the wide application of photovoltaics (PV), which are currently intensively studied in order to meet the increasing global renewable energy demand. Currently, the PV market is mainly based on silicon. However, solar cells based on silicon may not be capable of meeting the long-term global energy demand due to their relatively high costs and the high energy required for the synthesis of silicon wafers, opening the door to conventional thin films (such as Cu(In1-xGax)Se2 (CIGS)) and innovative thin films (e.g. semiconductor quantum dots (QDs), based on PbS, PbSe, CdSe QDs). The advantages of QDs as solar cell materials are the low-temperature synthesis process, the tunable band gap via control of the composition and size, and the promise of physical mechanisms that may increase efficiency above the Shockley-Queisser limit, such as multiple exciton generation (MEG), in which more than one exciton is created from a single photon. However, the low efficiency, with a current laboratory record just above 10%, and the durability are still limitations on their widespread application in the PV market. It is very important to understand the surface structure and surface-ligand interactions in order to improve the efficiency and stability of QD solar cells. For CIGS solar cells, research-cell efficiencies have reached 22.6%, which is just below the efficiencies of Si-based solar cells.
In addition, various deposition approaches have been developed that can supply high-efficiency, low-cost and large-area solar cell devices. However, it is still a challenge to guarantee the long-term stability of CIGS modules. CIGS solar cells can be well protected by sealing into glass plates, but this in turn increases the manufacturing cost. Therefore, an understanding of the degradation mechanism is necessary.
Positron techniques are powerful tools to study the surface composition of QDs and to determine the types of open space deficiencies in thin film materials. For QDs, previous studies provided indications that positrons can trap and annihilate at the surfaces of semiconductor QDs and can effectively probe the surface composition and electronic structure of colloidal semiconductor QDs. For CIGS, previous depth-sensitive positron experiments indicated the sensitivity of positrons to probe the types of vacancy-related defects in CIGS.","Solar cells, Quantum Dots; Positron Annihilation Spectroscopy; Ab-initio Calculations; Surface Composition; Surface States; CIGS; ZnO; Thin Films; Degradation; Vacancy; Grain Boundaries; Diffusion","en","doctoral thesis","","978-94-028-1330-2","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:b81bf4a5-95f8-4caf-9167-7825b69a5eab","http://resolver.tudelft.nl/uuid:b81bf4a5-95f8-4caf-9167-7825b69a5eab","The Gaming of Systemic Innovations: innovating in the railway sector using gaming simulation","van den Hoogen, J. (TU Delft Organisation & Governance)","de Bruijn, J.A. (promotor); Meijer, S.A. (copromotor); Delft University of Technology (degree granting institution)","2019","In 2009 ProRail, the Dutch railway infrastructure manager, started using gaming simulation to support its innovation processes. The organization found that innovations were becoming more systemic and, railways being sociotechnical systems, increasingly involved changes to both technology and human behavior. Subsequently, the organization deemed gaming simulation a valuable addition to existing computer simulations. Such gaming simulations are experiments with models of a system, in which human players become part of the simulation. Gaming simulation would for instance allow the organization to experiment with different railway infrastructure layouts around stations and see the effects on network resilience, because in this very example human behavior, e.g. in the form of traffic controllers rerouting trains, plays a crucial role. From 2009 onwards a range of gaming simulations have been designed and employed for similar purposes in the Dutch railway sector.
Currently however, both practitioners and scholars have built up limited understanding of the use of gaming simulation for innovation processes in sociotechnical systems such as the railways. Firstly, this has to do with the main applications of the tool. Gaming simulation has historically been mostly used for training and education purposes or for policy-making exercises. Secondly, innovation processes are relatively rare in inert sociotechnical systems, especially innovations that we define as systemic: collections of a varied set of innovations that in their conjunction radically change the system. A poor understanding of both causes a problem. This is because it not only remains unknown to what extent gaming simulation can support innovation processes, but also what this support constitutes in the first place. Not knowing the desired functionality of games then renders any design of such games more of an art rather than a craft.
This thesis builds upon the assertion of Klabbers (2003; 2006) that the design of a gaming simulation needs to closely follow the design of the process in which it is embedded. Games for innovation processes will be significantly different from games for policy-making and training. Hence, studying the design of games needs to occur in conjunction with the study of the innovation process. In this thesis we therefore first studied systemic innovation processes in the railway sector independently. In studying innovation processes we adhered to the notion of Poole and Van de Ven (1989) that such processes consist of local mechanisms invoked by intentional actors and resulting emergent patterns. Subsequently, this thesis studied how gaming simulation can influence these patterns through these local mechanisms. This thesis thus answered the following main research question: “What mechanisms play a role in driving a systemic innovation process in the Dutch railway sector and in what ways is gaming simulation able to influence relevant macro-level patterns through these mechanisms?”","","en","doctoral thesis","","978-94-92679-76-5","","","","","","","","","Organisation & Governance","","",""
"uuid:90a7bc21-7ccc-4825-865e-2d02af7e72f4","http://resolver.tudelft.nl/uuid:90a7bc21-7ccc-4825-865e-2d02af7e72f4","Spatial Quality as a Decisive Criterion in Flood Risk Strategies: An integrated approach for flood risk management strategy development, with spatial quality as an ex-ante criterion","Nillesen, A.L. (TU Delft OLD Urban Compositions)","Meyer, Han (promotor); Kok, M. (promotor); Delft University of Technology (degree granting institution)","2019","The role of the designer in flood risk management strategy development is currently often restricted to the important but limited task of optimally embedding technical interventions, which are themselves derivatives of system level flood risk strategies that are developed at an earlier stage, in their local surroundings. During this thesis research, an integrated approach is developed in which spatial quality can already be included in the regional flood risk management strategy development, and thus can become a decisive ‘ex-ante’ aspect of flood risk management strategy development.
The key principle to this approach is the inclusion of a range of interchangeable (effective) flood risk reduction interventions at varying locations, so that the criterion of spatial quality can become decisive in flood risk management strategy development. As part of the methodology development, an assessment framework is developed, allowing for the assessment of the impact of the different interventions on spatial quality; research-by-design is employed to systematically evaluate different interventions at different locations. The Rijnmond-Drechtsteden area in The Netherlands is used as a case study area for this research.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-121-8","","","","A+BE | Architecture and the Built Environment No 1 (2019)","","","","","OLD Urban Compositions","","",""
"uuid:b98e7799-c0d3-42ca-8c9c-70ee64bf059a","http://resolver.tudelft.nl/uuid:b98e7799-c0d3-42ca-8c9c-70ee64bf059a","Effects of a stratified tidal flow on the morphodynamics","Meirelles, Saulo (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Reniers, A.J.H.M. (promotor); Pietrzak, J.D. (promotor); Delft University of Technology (degree granting institution)","2019","This thesis examines the effects of the stratified tidal flow on the morphodynamics of the Dutch inner shelf. The southern portion of the Dutch inner shelf is strongly influenced by the Rhine River ROFI (Region Of Freshwater Influence), which is generated by the discharge from the Rhine River through the Rotterdam waterways. Under stratified conditions, the three-dimensional structure of the tidal currents develops a strong cross-shore shear so that the bottom and surface currents become 180 degrees out of phase. The sheared flow created by stratification operates in the inner shelf and nearshore zones so that the flow asymmetries imparted by stratification are expected to impact the morphodynamics; however, the role of the stratified tidal flow on the morphodynamics along the Dutch coast has often been neglected or oversimplified. In this context, this thesis aims to provide new insights on how the stratified tidal flow dictates the morphodynamics outside the surfzone.
The Sand Engine, a 21.5 million m3 experimental mega-nourishment built in 2011, is located in the southern portion of the Dutch coast. This intervention created a discontinuity in the previously straight sandy coastline, altering the local hydrodynamics in a region that is influenced by the Rhine River ROFI. Estimates of the centrifugal acceleration directly after construction of the Sand Engine showed that its curved shape impacted the cross-shore flow, suggesting that the Sand Engine might have played a role in controlling the cross-shore exchange currents during the first three years after the completion of the nourishment. Presently, the curvature effects are minute owing to the morphodynamic evolution of the Sand Engine. Observations document the development of strong baroclinic-induced cross-shore exchange currents dictated by the intrusion of the river plume fronts as well as the classic tidal straining, which are found to extend further into the nearshore (from 12 to 6 m depth), otherwise believed to be a mixed zone.
In the inner shelf, shoaling waves are as effective in mobilizing sediment as the other co-existing flows. The influence of stratification on the hydrodynamics is translated into near-bed shear velocity in the layer immediately above the sea floor. The tide-induced bed shear stress is able to periodically agitate the bed near the peaks of flood and ebb cycles mostly during spring tides. Results from observations suggested that, under stratified conditions, relatively high values of bed shear stress are sustained for a prolonged period of time. The results also revealed that the non-tidal flow, such as the wind-induced flow, plays a role in controlling the bed mobility. However, wave-induced bed shear stress in general does not set sediment in motion during fair weather conditions and thus the stirring role of the waves is mostly important during storms.
The co-existing near-bed flows in the inner shelf are responsible for moulding the seafloor, so that the resulting types of bedforms can reveal important information on the hydrodynamic forcings that dictate the sediment mobility. Observations showed that 56% of the ripples in the Dutch inner shelf are classified as current ripples. Wave ripples occur only during storm conditions, comprising 3%. Transitional bed types compose 23% of the observations, and poorly developed ripples, found to develop mostly during neap tides, make up 15% of the observed bed types. The feedback of the different types of bedforms on the overlying boundary layer plays a fundamental role in the dynamics of the sediment load.
The morphological response of the bed to the stratified and non-stratified tidal flow leads to differentiations of the ripple migration as well as the sediment transport modes (bedload and suspended load). The bedforms at the measurement site are strongly controlled by tides so that their behavior exhibits not only a spring-neap signature, but also a distinct semi-diurnal fluctuation. Under the influence of the Rhine ROFI, the bedform mean dimensions (ripple height and wavelength) are reduced, indicating that their development is affected by the stratified tidal flow. In the absence of (ambient) stratification, the tidal current ripples are more developed, attaining relatively larger dimensions. The net alongshore bedload transport is south-directed, whereas the net alongshore suspended load is north-directed regardless of stratification. Moreover, the net alongshore bedload transport is higher during stratified conditions but the net alongshore suspended transport is smaller. Regarding the cross-shore sediment transport, the findings show that ambient stratification promotes onshore-directed bed- and suspended load net transport. The gross suspended transport rates are 10 times greater than the gross bedload transport rates.","stratification; bedforms; sediment transport","en","doctoral thesis","","","","","","","","","","","Coastal Engineering","","",""
"uuid:b749675c-edb1-4355-ba09-bf46278077d0","http://resolver.tudelft.nl/uuid:b749675c-edb1-4355-ba09-bf46278077d0","Semi-analytical approaches for the prediction of the noise produced by ducted wind turbines","Küçükosman, Y.C. (TU Delft Wind Energy)","Casalino, D. (promotor); Schram, C (copromotor); Delft University of Technology (degree granting institution)","2019","The integration of wind turbines into urban environments is a challenging task due to the reduced wind speed and high turbulence levels caused by the surface resistance, as well as limited spacing. If a specific building arrangement is explored, an improvement in wind speed can be obtained. This would be especially beneficial for tall buildings, where a wind turbine can be placed on the roof, on the side, or through a duct. However, the main problem associated with the integration of wind turbines is the acoustic annoyance. Therefore, the focus of this thesis is twofold. First, a robust, accurate, and low computational cost numerical methodology is proposed to predict the trailing edge noise for a ducted wind turbine. Second, a measurement device is developed to acquire noise emitted by a rotating machine where the duct surface cannot be altered. An investigation of the effect of the incoming flow on the noise emitted by a building-integrated wind turbine is conducted for different aerodynamic roughness lengths…","wind turbine noise; semi-analytical models; ducted wind turbines","en","doctoral thesis","","978-94-028-1421-7","","","","","","","","","Wind Energy","","",""
"uuid:cdb32aa2-9ca4-448c-a8a0-63f458c375ff","http://resolver.tudelft.nl/uuid:cdb32aa2-9ca4-448c-a8a0-63f458c375ff","Multi-Microphone Noise Reduction for Hearing Assistive Devices","Koutrouvelis, A. (TU Delft Signal Processing Systems)","Heusdens, R. (promotor); Hendriks, R.C. (copromotor); Delft University of Technology (degree granting institution)","2018","The paramount importance of good hearing in everyday life has driven an exploration into the improvement of hearing capabilities of (hearing impaired) people in acoustically challenging situations using hearing assistive devices (HADs). HADs are small portable devices, which primarily aim at improving the intelligibility of an acoustic source that has drawn the attention of the HAD user. One of the most important steps to achieve this is by filtering the sound recorded using the HAD microphones, such that ideally all unwanted acoustic sources in the acoustic scene are suppressed, while the target source is maintained undistorted. Modern HAD systems often consist of two collaborative (typically wirelessly connected) HADs, each placed on a different ear. These HAD systems are commonly referred to as binaural HAD systems. In a binaural HAD system, each HAD typically has more than one microphone, forming a small local microphone array. The two HADs merge their microphone arrays, forming a single larger microphone array. This provides more degrees of freedom for noise reduction. The multi-microphone noise reduction filters are commonly referred to as beamformers, and the beamformers designed for binaural HAD systems are commonly referred to as binaural beamformers.","","en","doctoral thesis","","978-94-6186-999-9","","","","","","","","","Signal Processing Systems","","",""
"uuid:e031ff81-6a8e-4022-bf90-49ef9fe8871e","http://resolver.tudelft.nl/uuid:e031ff81-6a8e-4022-bf90-49ef9fe8871e","Image reconstruction algorithms for optical tomography","Trull, A.K. (TU Delft ImPhys/Quantitative Imaging)","van Vliet, L.J. (promotor); Kalkman, J. (copromotor); Delft University of Technology (degree granting institution)","2018","Disease model systems, such as the zebrafish, play an important role in understanding the onset of diseases like cancer and in monitoring the efficacy of new drugs. In the past, non-invasive methods for screening, diagnostics and treatment monitoring were intrinsically performed from the outside. In the past decades, there has been a strong drive to look inside these model systems, which resulted in the development of many small animal tomographic imaging techniques. Due to the absence of ionizing radiation, its high resolution, and its cost efficiency, optical tomography is a popular imaging technique to study disease model systems such as zebrafish. The main obstacles in obtaining high-resolution imaging suitable for tissue characterization are the scattering of light in tissue and the diffraction of optical waves. Scattering of light in tissue degrades the resolution of optical tomography systems, especially for thick samples. In this thesis, transmission optical coherence tomography (OCT) is used to separate ballistic (non-scattered) light from non-ballistic (scattered) light. We demonstrate that transmission optical coherence tomography is a versatile tool to measure optical properties of liquids, solids, and particle suspensions. The developed technique is used to perform quantitative optical tomography of the refractive index and attenuation coefficient. A good agreement is observed between our measurements and literature values for group refractive index, group velocity dispersion, and attenuation coefficient.
Based on the tomographic reconstruction of transmission OCT measurements, the median attenuation coefficient, group refractive index and volumes of various organs of an adult zebrafish are segmented and quantified in optical coherence projection tomography reconstructions. In optical tomography, light is imaged by a lens onto the camera. Due to the focusing of light onto the camera, this light is collected non-uniformly along the propagation direction from the sample. Consequently, the straight-ray assumption as in standard (pre-)clinical X-ray CT reconstruction is violated. Reconstruction of optical tomography images with standard filtered back projection (FBP) causes radial blurring and tangential blurring that becomes stronger with increasing distance to the rotation axis. We present 2D and 3D tomographic reconstruction algorithms that include the point spread function (PSF) of the imaging system. For emission optical projection tomography, these methods show greatly reduced radial and tangential blurring over the entire field of view and a significantly improved signal-to-noise ratio compared to FBP. The 3D PSF-based algorithm is evaluated using different initializations. When initialized with the 2D PSF-based reconstruction result, the 3D PSF-based reconstruction gives an improved signal-to-background ratio and image quality in a useful timeframe. Besides including the physical point spread function (PSF) in the 2D tomographic reconstruction, the effect of the PSF can also be reduced by deconvolution of the FBP reconstructed image or by filtering the sinogram before FBP reconstruction. We compared the performance of these techniques based on simulations, and on the signal-to-noise ratio and sharpness of reconstructed fluorescent beads and zebrafish OPT images. We demonstrate that sinogram filtering performs poorly on data acquired with high numerical aperture optical imaging systems.
We show that the deconvolution technique performs best for highly sparse, low signal-to-noise ratio objects. The PSF-based reconstruction method is superior for non-sparse objects and data of high signal-to-noise ratio. In this thesis, we developed novel algorithms for transmission OCT signal processing and PSF-based tomographic reconstruction. Our algorithms allow for high-resolution quantitative imaging in turbid media. These techniques can be used for quantitative optical imaging of disease model systems. Potentially, this may lead to more insight into tissue development and disease onset, progression, and treatment.","transmission OCT; optical tomography; OPT; reconstruction","en","doctoral thesis","","978-94-6186-974-6","","","","","","","","","ImPhys/Quantitative Imaging","","",""
"uuid:25dc497c-b218-4d6a-9796-001e9d569975","http://resolver.tudelft.nl/uuid:25dc497c-b218-4d6a-9796-001e9d569975","The Singular Optics of Random Light: a 2D vectorial investigation","de Angelis, L. (TU Delft QN/Kuipers Lab)","Kuipers, L. (promotor); Groeblacher, S. (copromotor); Delft University of Technology (degree granting institution)","2018","In this thesis, we explore the physics of optical singularities. We investigate them in light waves propagating randomly in a planar nanophotonic chip. With a custom-built near-field microscope, we map the electromagnetic field resulting from the interference of these light waves. Our technique gives access to the full vectorial and complex nature of such an electromagnetic field, with subwavelength resolution. The resulting information allows us to precisely pinpoint and characterize the multitude of singularities that arise in the random light field. We detect phase singularities in the Cartesian components of light’s vector field, i.e., points where the phase of the field components is undetermined and circulates in a vortical flow around them (Part II). Moreover, we identify polarization singularities, e.g., C points: locations where the vector of light’s electric field describes a perfect circle in time (Part III)...","light; phase; polarization; singularities; vortices; randomness; waves; chaos; correlation; 2D","en","doctoral thesis","","978-94-6186-995-1","","","","","","","","","QN/Kuipers Lab","","",""
"uuid:1b7eded7-dbbb-4549-b892-4afce31fe949","http://resolver.tudelft.nl/uuid:1b7eded7-dbbb-4549-b892-4afce31fe949","Turbulence in traffic at motorway ramps and its impact on traffic operations and safety","van Beinum, A.S. (TU Delft Transport and Planning)","Wegman, F.C.M. (promotor); Hoogendoorn, S.P. (promotor); Farah, H. (copromotor); Delft University of Technology (degree granting institution)","2018","In the vicinity of motorway ramps, multiple manoeuvres are performed by drivers that are entering or exiting the motorway, and by drivers that anticipate or cooperate with the other entering and exiting vehicles. These manoeuvres involve lane-changes, changes in speed, and changes in headways. This results in changes in lane flow distribution, greater speed variability and changes in headway distribution on the different lanes, with presumably a greater share of small gaps on the outside lane. In the literature and in motorway design guidelines, this phenomenon is referred to as turbulence. Currently, an explicit definition of turbulence is unavailable. In this thesis, therefore, an explicit definition of turbulence is introduced: “individual changes in speed, headways, and lanes (i.e. lane-changes) in a certain road segment, regardless of the cause of change”. Turbulence is expected to be present in the traffic stream at any given time, and therefore a second definition is introduced: the level of turbulence, which is defined as: “the frequency and intensity of individual changes in speed, headways and lane-changes in a certain road segment, over a certain period of time”...","","en","doctoral thesis","SWOV Institute for Road Safety Research","978-90-73946-19-4","","","","SWOV-Dissertatiereeks SWOV – Instituut voor Wetenschappelijk Onderzoek Verkeersveiligheid","","","","","Transport and Planning","","",""
"uuid:26f3b0db-4c77-4564-8a59-a802fff39028","http://resolver.tudelft.nl/uuid:26f3b0db-4c77-4564-8a59-a802fff39028","Solid State Phase Transformations in Medium Manganese Steels","Farahani, H. (TU Delft Novel Aerospace Materials)","van der Zwaag, S. (promotor); Xu, W. (promotor); Delft University of Technology (degree granting institution)","2018","Steels are still, and will probably remain in the future, the primary choice for applications as structural materials. This is not only because of the reasonable ratios of properties over production costs, but also owing to the versatile properties realisable via a variety of microstructures, achieved by controlling the solid-state phase transformations between the austenite and ferrite phases in steels. The noticeable improvements in the properties of advanced high strength steels since their invention have led to the development of three generations of these steels. Sustained continuous improvement in developing new grades of steel requires a deeper understanding of the effect of macroscopically controllable parameters, such as overall composition and temperature variations, on the rate of nucleation and migration of interfaces during solid-state phase transformations. In this PhD thesis, experimental and modelling approaches are developed and employed to study the effect of alloying elements, such as Mn and C, on the migration behavior of interfaces during solid-state phase transformations at high and low temperatures...","Steel; Phase Transformation; Interface","en","doctoral thesis","","978-94-028-1308-1","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:fd44c046-0621-41a5-9862-7f6e932b75cf","http://resolver.tudelft.nl/uuid:fd44c046-0621-41a5-9862-7f6e932b75cf","Integrated Urban River Corridors: Spatial design for social-ecological resilience in Bucharest and beyond","Forgaci, C. (TU Delft Environmental Technology and Design)","van Timmeren, A. (promotor); van Dorst, M.J. (promotor); Delft University of Technology (degree granting institution)","2018","The issue of urban resilience concerns a multitude of urban systems and spaces. This thesis focuses on Urban River Corridors (URCs)—that is, urban spaces where the overlap between the urban systems (carrying the ‘social-’) and the river system (carrying the ‘-ecological’) is at the highest intensity—as strategic spaces with a potentially high contribution to urban resilience. The general hypothesis is that with an integrated spatial understanding, planning and design of rivers and the urban fabric surrounding them, cities could become more resilient not just to flood-related disturbances, but to general chronic stresses as well. Hence, the thesis addresses four spatial problems arising from the loss of synergy between the natural dynamics of rivers and the spatial configuration and composition of urban areas that they cross: (1) river-taming operations combined with riverside traffic corridors have weakened the relationship between fluvial geomorphology and urban morphology, transforming rivers into physical barriers; (2) flood-protection measures aiming for resistance to water dynamics have led to a latent flood risk; (3) the capacity of urban rivers to deliver ecosystem services has been diminished; and (4) rationalisations of the river system have reduced the scalar, (and implicitly) social and ecological complexity of urban rivers.
Drawing on theories of social-ecological resilience and urban form resilience, on conceptual and analytical tools from spatial morphology and landscape ecology, and on practical experience in urban river design projects, the thesis constructs a theory of social-ecologically integrated Urban River Corridors, in which it proposes a spatial-morphological definition, an assessment framework, and a set of design principles and design instruments. Framed as a transdisciplinary design study, the thesis integrates knowledge from various disciplines dealing with the problematique of urban rivers and employs a design-driven methodology that includes design explorations and design testing in the research process.
The case of Bucharest crossed by URC Dâmbovița and URC Colentina is used to contextualise the spatial-morphological definition, and to demonstrate, develop and test the proposed assessment framework, design principles, and design instruments with a distinct set of methods in each of the three parts of the thesis. In addition to a transdisciplinary literature review of URCs, and a historical review of Bucharest’s URCs, Part 1 presents a qualitative data analysis of 22 expert interviews, used to determine the current state of URC Dâmbovița and URC Colentina. Based on four key properties of URCs identified in the literature, Part 2 develops an indicator system and a method for the assessment of social-ecological integration. Informed by key problems and potentials identified by the local experts, the assessment framework is then applied to the two URCs of Bucharest. In the last part, design applications, including urban river projects carried out by the author on other rivers and a design workshop in Bucharest, are used to demonstrate and test the design principles through design instruments.
Design is seen as pivotal in the creation of products that can facilitate the recycling process. For this reason, in the past two decades there has been considerable research on design for recycling (DfR), resulting in a large number of methods and tools being developed. The aim of these methods is to assist designers in assessing the recyclability of their designs and in selecting adequate product design features that facilitate the recycling process. However, these methods do not seem to have been very effective, particularly not in the case of electronic products. This is because, despite the considerable number of methods developed thus far, and what they claim in theory, electronic products are still not being optimally disintegrated and separated in actual recycling processes. Consequently, the aim of this thesis is to uncover the various reasons for the mismatch between the theory and practice of DfR by undertaking a number of studies.","Circular Economy; Circular product design; Design for recycling; Electronics; Design; Sustainable design; Ecodesign","en","doctoral thesis","","9789065624307","","","","","","","","","Circular Product Design","","",""
"uuid:6b40384c-c139-4b7a-8080-fe883aa95628","http://resolver.tudelft.nl/uuid:6b40384c-c139-4b7a-8080-fe883aa95628","Solid phase crystallisation of hydrogenated amorphous silicon deposited by ETPCVD on glass","Westra, J.M. (TU Delft Photovoltaic Materials and Devices)","Zeman, M. (promotor); van Swaaij, R.A.C.M.M. (copromotor); Delft University of Technology (degree granting institution)","2018","Generation of electricity using renewable energy sources is becoming an integral part of the electrical energy mix. A substantial fraction of the electrical energy is foreseen to be generated by photovoltaic (PV) solar cells that directly convert light into electricity. Large-scale application of building-integrated PV is envisaged to aid the integration of renewable energy in our society, both from a practical and an aesthetic point of view. For the implementation of large-scale PV, preferably abundant and non-toxic materials are required, as well as production processes for making the solar cells in large quantities [1]. For this purpose, silicon is the most preferred element, because of its properties for application in solar cells and because of its abundance, despite the fact that refining and doping of Si are dependent on toxic materials [2].
While classically we all understand the idea of measurement in a very straightforward fashion, in quantum mechanics the concept of measurement departs from our everyday experience in physics. In fact, although the quantum measurement obeys rather simple rules, its interpretation has been a subject of discussion since the beginning of the 20th century.
Some of the physics involved in the process of a quantum measurement has no classical analogue, challenging our intuition in this way: the famous paradox of a cat in a box is a clear example of this.","","en","doctoral thesis","","978-90-8593-379-3","","","","Casimir PhD Series, Delft-Leiden 2018-50","","","","","QN/Nazarov Group","","",""
"uuid:eec6ef3b-3d9d-4b7d-8d9b-02fa5a4d9245","http://resolver.tudelft.nl/uuid:eec6ef3b-3d9d-4b7d-8d9b-02fa5a4d9245","Applying game theory for adversarial risk analysis in chemical plants","Zhang, L. (TU Delft Safety and Security Science)","Reniers, G.L.L.M.E. (promotor); Delft University of Technology (degree granting institution)","2018","Since the 9/11 attack in New York in 2001, a lot of attention has been paid to the protection of critical infrastructures. Chemical industries are without doubt critical infrastructures due to their extreme importance for society in combination with their vulnerability. They play important roles in modern-life society, from producing and providing daily necessities such as food and energy, to making modern medicine. They are thus truly essential to our modern way of living. Process plants usually store dangerous goods in large quantities, which may pose an important threat to themselves as well as to their surroundings. Moreover, due to a variety of benefits of scale, process plants tend to build their factories geographically together, potentially aggravating the danger. Therefore, the importance of protecting industrial process plants (including those in the chemical industry, the food industry, the energy industry, and others) cannot be overestimated.
Risks caused by human behaviours with the intention to cause losses are defined as security risks. For instance, thieves intentionally intruding into a plant to steal valuable materials, or terrorists maliciously setting fire to a chemical facility to cause societal fear. Initiators of security events (henceforth, attackers) would intelligently observe the defender’s defence plan and then schedule their attack accordingly. The literature has actually shown how resources can be misallocated if intelligent interactions between the defender and the attacker are not considered.
Game theory was developed in the economic domain for modelling both cooperative and competitive behaviours in systems with multiple actors. In the last 100 years, game theory has been theoretically improved and practically applied to various domains, such as evolutionary biology, computer science, etc. These studies have demonstrated the capability of game theory in modelling intelligent interactions. Several security management systems based on game theory have been developed and deployed in practice, such as the ARMOR system for the Los Angeles airport, the PROTECT system for the US Coast Guard, etc.
In this research, game theory is employed to study the protection of chemical industrial areas. Four models are proposed: i) DAMS – an agent-based modelling and simulation approach for assessing domino effects in chemical plants; ii) CPP game – a game theoretic model for single plant protection; iii) CCP game – a game theoretic model for multiple plants protection, by optimizing patrolling; and iv) PPG – a game theoretic model aiming at optimizing pipeline patrolling within or between chemical plants. These models are briefly explained hereafter.","","en","doctoral thesis","","978-94-028-1307-4","","","","","","","","","Safety and Security Science","","",""
"uuid:c536ca47-8981-4a9e-916f-396bcbca4bc5","http://resolver.tudelft.nl/uuid:c536ca47-8981-4a9e-916f-396bcbca4bc5","Microstructure evolution in pearlitic rail steel due to rail/wheel interaction","Wu, J. (TU Delft (OLD) MSE-3)","Sietsma, J. (promotor); Petrov, R.H. (promotor); Delft University of Technology (degree granting institution)","2018","The microstructural aspects of rolling contact fatigue in rails were studied. The rail track is a critical component of the railway system and its long-term performance contributes crucially to the sustainable development of the railway system. The increasing demands of trains with a higher speed/capacity impose more severe load conditions on the steel rail tracks. Steels with improved performance are needed to meet such demands because the current rail steels are reaching their limit. Moreover, the understanding of the root cause of damage in rails is still insufficient. This hinders the development of new rail steels that perform better.","","en","doctoral thesis","","","","","","","","","","","(OLD) MSE-3","","",""
"uuid:1752b8ce-631b-4127-91c9-92538e34a13b","http://resolver.tudelft.nl/uuid:1752b8ce-631b-4127-91c9-92538e34a13b","Accelerating DNA Variant Calling Algorithms on High Performance Computing Systems","Ren, S. (TU Delft Computer Engineering)","Al-Ars, Z. (promotor); Bertels, K.L.M. (promotor); Delft University of Technology (degree granting institution)","2018","Next generation sequencing (NGS) technologies have transformed the landscape of genomic research. With the significant advances in NGS technologies, DNA sequencing is more affordable and accessible than ever before. Meanwhile, many DNA sequence analysis tools have been developed to derive useful information from the raw sequencing data produced by NGS platforms. However, the massive amount of generated sequencing data poses a great computational challenge, thereby shifting the bottleneck towards the efficiency of the DNA sequence analysis tools. Due to the high computational needs, high performance systems are playing an important role in DNA sequence analysis. Moreover, dedicated hardware, including graphics processing units (GPUs) and field programmable gate arrays (FPGAs), has become an important computational resource in many high performance systems.
In this thesis, we use GPUs and FPGAs to accelerate a number of important bioinformatics algorithms. These represent the most computationally intensive algorithms of the GATK HaplotypeCaller (HC), which we accelerate to improve its performance. GATK HC is a widely used DNA sequence analysis tool. By investigating GATK HC, three computationally intensive algorithms are selected: the de Bruijn graph (DBG) construction algorithm for micro-assembly, the pair-HMMs forward algorithm and the semi-global pairwise alignment algorithm. We first propose a novel GPU-based implementation of the DBG construction algorithm for micro-assembly. Compared with the software-only implementation, it achieves a speedup of up to 3x using synthetic datasets and a speedup of up to 2.66x using human genome datasets. We then propose a systolic array design to accelerate the pair-HMMs forward algorithm on FPGAs. Experimental results show that the FPGA-based implementation is up to 67x faster than the software-only implementation. In order to fully utilize the computing resources on FPGAs, we present a model to describe the performance characteristics of the systolic array design. Based on the analysis, we propose a novel architecture to better utilize the computing resources on FPGAs. The implementation achieves up to 90% of the theoretical throughput for a real dataset. Next, we propose several GPU-based implementations of the pair-HMMs forward algorithm. Experimental results show that the GPU-based implementations of the pair-HMMs forward algorithm achieve a speedup of up to 5.47x over existing GPU-based implementations. Finally, we propose to accelerate the semi-global pairwise sequence alignment algorithm with traceback to obtain the optimal alignment on GPUs. Experimental results show that the GPU-based implementation is up to 14.14x faster than the software-only implementation.
After accelerating these algorithms on GPUs and FPGAs, we integrate two GPU-based implementations into GATK HC. We first integrate the GPU-based implementation of the pair-HMMs forward algorithm into GATK HC. In single-threaded mode, the GPU-based GATK HC implementation is 1.71x faster than the baseline GATK HC implementation. For multi-process mode, a load-balanced multi-process optimization is proposed to ensure a more equal distribution of the computation load between different processes. The GPU-based GATK HC implementation achieves a speedup of up to 2.04x in load-balanced multi-process mode over the baseline GATK HC implementation in non-load-balanced multi-process mode. Next, we additionally integrated the GPU-based implementation of the semi-global alignment algorithm into GATK HC. Experimental results show that this implementation is 2.3x faster than the baseline GATK HC implementation in single-threaded mode.","Pair-HMMs forward; sequence alignment with traceback; de Bruijn graph construction; GPU acceleration; FPGA acceleration","en","doctoral thesis","","978-94-028-1318-0","","","","","","","","","Computer Engineering","","",""
"uuid:e622ee5a-b551-4f8c-a250-9859db791d22","http://resolver.tudelft.nl/uuid:e622ee5a-b551-4f8c-a250-9859db791d22","A unified framework improving interoperability and symbiosis in the field of Systems Engineering","van Ruijven, L.C. (TU Delft Ship Design, Production and Operations)","Hopman, J.J. (promotor); Veeke, H.P.M. (copromotor); Delft University of Technology (degree granting institution)","2018","This dissertation is about improving the performance of projects delivering complex systems. Examples of such systems are ships, infrastructure systems and process plants. Most of these systems are one of a kind, so-called ‘one-offs’, and are the ‘product’ of one or more coherent projects, each executed by a consortium of enterprises. The lifecycle of these systems is characterized by a sequence of lifecycle stages (broadly: specification, creation and usage) and requires the involvement of different parties with different interests and competences, e.g. the client, (sub)contractors, end users and stakeholders, and disciplines like construction, electrical, mechanical and information technology. In actual practice, many of these kinds of projects exceed the planned budget and time and do not meet the quality and needs expected by the client, end users and/or stakeholders. This dissertation considers this problem from an overall perspective, and not from the perspective of only, e.g., the client or the contractor.
In this dissertation three issues have been identified concerning today’s creation of systems:
•Imperfections in the creation process of both systems and the project teams that create the system,
•Lack of reflection,
•Lack of semantic ability.
The objective of this dissertation is to provide a framework in which the backgrounds of these three issues are expressed, and to offer a way to overcome them. The framework can be utilized by enterprises to improve interoperability and symbiosis in the field of Systems Engineering, enabling them to improve the performance of projects in all lifecycle stages of a system.
The framework addresses interoperability barriers and integrates Systems Engineering principles, organization science, system science, complexity science and cognitive science. The framework has been visualized by means of six symmetrically connected tetrahedrons, supported by an ontology. Additional terms of reference have been drawn up for the purpose of implementation of the framework. A prototype of a collaboration tool based on a specific Semantic Web technology, as published in several papers by the author, supporting the framework, was part of the work done for this dissertation. The framework is based on years of experience of the author with complex projects and on knowledge as captured in ISO standards and fundamental theories.","","en","doctoral thesis","","978-94-6380-148-5","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:f1d2f992-873a-43d9-9077-c151c87b1b1e","http://resolver.tudelft.nl/uuid:f1d2f992-873a-43d9-9077-c151c87b1b1e","Design with forms as well as patterns","Cai, J. (TU Delft OLD Urban Design)","Bekkering, H.C. (promotor); van Dorst, M.J. (copromotor); Delft University of Technology (degree granting institution)","2018","The research investigates how the morphological approach, in combination with the pattern language approach, can assist urban designers in achieving historical continuity in urban design, both on the theoretical and the application level.
This research reviews the developments and applications of the two approaches worldwide, with a special emphasis on the Dutch school. The Dutch morphological reduction technique and the Dutch interpretation of a pattern language are used in the case study—Wuhan, a Chinese city—to study the transformation of urban form and lifestyle. The multi-scalar historical morphological analysis results in an atlas that consists of four series of analytical maps on three levels of scale, as well as 13 spatial structuring elements of the city; whereas the public life study results in a pattern book consisting of 20 individual patterns and three pattern languages. The practical implications and relevance for the design of the future of the city are discussed.
The research is set up in a systematic and symmetrical manner for comparison of and reflection on the two approaches. It concludes that:
1 The morphological approach can be used to interpret first space (perceived space) and convey its information into second space (conceived space), whereas the pattern language approach can be used to interpret third space (lived space) and convey its information into second space (conceived space).
2 The morphological approach has a tendency to work from large scale to small scale and the pattern language approach tends to be built up from small scale to large scale, whereas urban design works with multiple scales at the same time.
3 The morphological approach and the pattern language approach provide means for urban designers to systematically recognize historical layers, so as to distill the meaning in the physical and non-physical contexts respectively. Deliberately adding another layer that contains the contemporary meaning (the design intervention) to these recognized layers is the way to pass down, and simultaneously generate incremental change in, the tradition of the context. This results in historical continuity and thus in permanence in urban design.
4 The morphological approach, the pattern language approach, and urban design are processes in themselves and can be combined into one integrated process.
5 The morphological approach, the pattern language approach and urban design are characterized by reduction, abstraction, interpretation, and communication.
6 Some properties of the two approaches can be seen as counterparts, because the roles these properties play in the design process tend to be similar:
–– Individual homogeneous areas vs Individual patterns;
–– Structural homogeneous areas vs Anchoring points/ Structuring patterns;
–– Secondary connections in homogeneous areas vs Linkages between patterns;
–– ? / Typology of homogeneous areas vs Clusters of patterns.","Urban Design; Urban Morphology; a pattern language; Wuhan; China; Urban form; life style; Transformation","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-117-1","","","","A+BE | Architecture and the Built Environment No 30 (2018)","","","","","OLD Urban Design","","",""
"uuid:517fdbaf-0102-4f22-88bc-9883e66b7dca","http://resolver.tudelft.nl/uuid:517fdbaf-0102-4f22-88bc-9883e66b7dca","Enhanced Distributed Space Systems with Miniature Spacecraft: Spatial Distribution, Collision Analysis and Cooperative Communication","Sundaramoorthy, P.P. (TU Delft Electronics; TU Delft Space Systems Engineering)","Gill, E.K.A. (promotor); Verhoeven, C.J.M. (copromotor); Delft University of Technology (degree granting institution)","2018","The repertoire of words in the English language to refer to groups of animals is quite fascinating to say the least – a congregation of alligators, an army of ants, a troop of baboons, a pride of lions, a train of camels, a destruction of cats, an intrusion of cockroaches, a mob of emus, a plague of insects, a drift of pigs, and so on. The intention of using such a wide range of terms is to associate an underlying emotion or meaning with the different kinds of groups. Therefore, without knowing much about choughs or goldfinches, one is more likely to appreciate a charm of goldfinches rather than a clattering of choughs. This thesis is about groups of small spacecraft – characterizing them and enhancing them. The aim of this thesis is to enable charms of CubeSats and prides of PocketQubes.","Distributed Space Systems; Small Satellites; Miniaturization; Phase Synchronization; Enhanced Communication","en","doctoral thesis","","978-94-028-1316-6","","","","","","","","","Electronics","","",""
"uuid:85c8bd91-18cb-404c-ba4f-64b595b0af38","http://resolver.tudelft.nl/uuid:85c8bd91-18cb-404c-ba4f-64b595b0af38","Arsenic removal in rapid sand filters","Gude, J.C.J. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); van Halem, D. (copromotor); Delft University of Technology (degree granting institution)","2018","Arsenic (As) mobility in water has been studied worldwide since its toxicity was proven in 1888. Intake of As can lead to skin disease, cancer, kidney and heart failure, diabetes and paralysis. In the Netherlands, groundwater used for drinking water production contains As in the range of 0 – 70 μg/L. Currently, all groundwater treatment plants reduce As in drinking water below the WHO standard of 10 μg/L. However, to ensure no adverse health effects occur by the intake of drinking water, Dutch drinking water companies investigate the implications of distributing water with As concentrations below 1 μg/L. The new target value causes 58% of the treatment plants with measurable As in the raw water (19% of all groundwater treatment plants) to need some sort of adjustment to their treatment scheme to comply with the new As target value...","","en","doctoral thesis","","978-94-6323-393-4","","","","","","","","","Sanitary Engineering","","",""
"uuid:51dbba63-ba7c-4958-ab7d-d6dc182ec5b5","http://resolver.tudelft.nl/uuid:51dbba63-ba7c-4958-ab7d-d6dc182ec5b5","Auditory feedback for automated driving","Bazilinskyy, P. (TU Delft Human-Robot Interaction)","de Winter, J.C.F. (promotor); van der Helm, F.C.T. (promotor); Delft University of Technology (degree granting institution)","2018","Automated driving may be a key to solving a number of problems that humanity faces today: large numbers of fatalities in traffic, traffic congestion, and increased gas emissions. However, unless the car drives itself fully automatically (such a car would not need to have a steering wheel, nor accelerator and brake pedals), the driver needs to receive information from the vehicle. Such information can be delivered by sound, visual displays, vibrotactile feedback, or a combination of two or three kinds of signals. Sound may be a particularly promising feedback modality, as sound can attract a driver’s attention irrespective of his/her momentary visual attention. Although ample research exists on warning systems and other types of auditory displays, what is less well known is how to design warning systems for automated driving specifically. Taking over control from an automated car is a spatially demanding task that may involve a high level of urgency, and warning signals (also called ‘takeover requests’, TORs) need to be designed so that the driver reacts as quickly and safely as possible. Furthermore, little knowledge is available on how to support the situation awareness and mode awareness of drivers of automated cars. The goal of this thesis is to discover how the auditory modality should be used during automated driving and to contribute towards the development of design guidelines.","","en","doctoral thesis","","","","","","","","","","","Human-Robot Interaction","","",""
"uuid:d9b3f849-087e-46a9-96d2-f15d1b573a50","http://resolver.tudelft.nl/uuid:d9b3f849-087e-46a9-96d2-f15d1b573a50","Bones don’t lie: What does bone shape tell us about skeletal diseases?","Tümer, N. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Weinans, Harrie (promotor); Tuijthof, G.J.M. (copromotor); Delft University of Technology (degree granting institution)","2018","","","en","doctoral thesis","","","","","","","","2019-12-13","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:0e0da51a-e2c9-4aa0-80cc-d930b685fc53","http://resolver.tudelft.nl/uuid:0e0da51a-e2c9-4aa0-80cc-d930b685fc53","Modelling Uncertainty: Developing and Using Simulation Models for Exploring the Consequences of Deep Uncertainty in Complex Problems","Auping, Willem L. (TU Delft Policy Analysis)","Thissen, W.A.H. (promotor); Pruyt, E. (copromotor); Delft University of Technology (degree granting institution)","2018","Simulation models are increasingly used for exploring the consequences of deep uncertainty in complex societal issues. The complexity of societal grand challenges, often characterised by the interrelatedness of different elements in the systems underlying these challenges, often renders mental simulation impossible, necessitating the use of simulation models to assist human reasoning. In addition, these grand challenges are typically also subject to deep uncertainty, making it, for example, impossible to come to a shared understanding of parts of the system and exogenous inputs to it, or even a shared problem definition.
Under deep uncertainty, simulation models can be used to explore the consequences of different combinations of assumptions about uncertain factors or attributes of the problem situation and the underlying system. This type of simulation model use was introduced in 1993 as Exploratory Modelling and Analysis (EMA). In more recent years, this approach has become a major underpinning of the Decision Making under Deep Uncertainty (DMDU) field.
The treatment of deep uncertainty in much DMDU research can be improved, however. In most DMDU research to date, pre-existing models are used. These models were generally developed for ‘consolidative’ use: the modellers tried to unify existing knowledge to come to a single, ‘best’ model. While most modellers will agree that these models are not perfect representations of reality, and often agree that they as such cannot be validated in the strict sense of the word, these modellers and their models do not acknowledge deep uncertainty. The use of consolidative models is arguably problematic if one agrees that the issue at hand is characterized by deep uncertainty. Therefore, models are needed that are explicitly developed for ‘exploratory’ use: models that explicitly incorporate deep uncertainty potentially relevant for the research question or questions at hand. However, little experience and guidance exists regarding the development and use of specifically exploratory models.
In this dissertation, a first attempt is made to identify, and provide guidance for, the critical choices made during the development and use of exploratory models.","Policy Analysis; Deep Uncertainty; Complexity; Grand challenges; Exploratory Modelling & Analysis; System Dynamics; Scenario Discovery; Robust Decision Making","en","doctoral thesis","","978-94-6332-444-1","","","","","","","","","Policy Analysis","","",""
"uuid:a189ad9b-6c6e-4539-bde7-7dc6f1748a21","http://resolver.tudelft.nl/uuid:a189ad9b-6c6e-4539-bde7-7dc6f1748a21","Spline-based wavefront reconstruction for Shack-Hartmann measurements","Brunner, A.E. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); de Visser, C.C. (copromotor); Delft University of Technology (degree granting institution)","2018","In the coming decade, a new generation of extremely large-scale ground-based astronomical telescopes will see first light. It is well known that increasing the size of the telescope aperture is only beneficial if the adaptive optics (AO) system, which compensates for turbulence-induced wavefront aberrations, scales accordingly. For the extreme-AO (XAO) system of the future European Extremely Large Telescope (E-ELT), on the order of 10^4–10^5 unknown phase points have to be estimated at kHz-range frequencies to update the actuator commands of the corrective device, consisting of a deformable mirror (DM).
The work on fast algorithms for wavefront reconstruction (WFR) for real-time application has therefore been extensive. Conventional WFR algorithms estimate the unknown wavefront from wavefront sensor (WFS) measurements. They are generally based on a linear relationship between the unknown wavefront and the sensor readout, and assume one of the two following principles. Zonal methods represent the wavefront as discrete phase points in terms of which the sensor model is formulated, leading to an inherently local phase-measurement relationship. The second group, modal methods, expands the wavefront with a set of globally defined polynomials, which results in a sensor model that acts on the entire sensor domain.","adaptive optics; atmospheric correction; wavefront sensing","en","doctoral thesis","","978-94-6323-422-1","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:57f725e1-b3f3-455c-83ce-9156b2123c88","http://resolver.tudelft.nl/uuid:57f725e1-b3f3-455c-83ce-9156b2123c88","MEMS Micropropulsion: Design, Modeling and Control of Vaporizing Liquid Microthrusters","de Athayde Costa e Silva, M. (TU Delft Space Systems Engineering)","Gill, E.K.A. (promotor); Cervone, A. (copromotor); Delft University of Technology (degree granting institution)","2018","In recent years, there has been an increase in the number of small multi-mission platforms such as CubeSats, in an attempt to reduce the costs of space missions. CubeSats have been used for different purposes including Earth observation, research and technology demonstration.
However, a key technology that is still under development is the micropropulsion system that has the potential to significantly increase the capabilities of CubeSat missions. Micropropulsion has been recognized as one of the key development areas for the next generation of highly miniaturized spacecraft such as CubeSats and PocketQubes. It will extend the range of applications of this class of satellites to include missions that require, for example, orbital maneuvering or drag compensation.
An interesting option for CubeSats and PocketQubes is the Vaporizing Liquid Microthruster (VLM), which has received increasing attention due to its ability to provide high thrust levels with relatively low power consumption. The thruster uses the vapor generated in the vaporization of the propellant to produce thrust through a nozzle. The vaporization is usually done by applying power to resistive heaters that can be integrated into the device or externally attached to it. The nozzle is usually a convergent-divergent nozzle that can accelerate the propellant to supersonic velocities.
This thesis aims to develop modeling and control concepts for micropropulsion systems to allow the spacecraft to perform maneuvers of position and attitude control. The Vaporizing Liquid Microthruster has been selected due to its characteristics that suit the needs of very small spacecraft.
The first part of the research is dedicated to an in-depth literature study of the currently available micropropulsion systems. Those that are manufactured with silicon and MEMS (Micro Electro-Mechanical Systems) technologies have been analyzed and compared in terms of their thrust, specific impulse, and power. A classification in terms of complexity is introduced in an attempt to identify the suitability of the devices for the current trend towards simplifying architectures. The analysis of development levels of different types of micropropulsion systems revealed that although the actual thrusters are significantly developed, the interfacing and integration to other components of the system are still to be further developed.
The second part of the research focuses on the characterization and modeling of VLM systems. This is an extremely important step in the development of such systems since a proper model, i.e., one that sufficiently represents the dynamics of the system, is required during the design phase to help, for example, in designing controllers, and also during the operational phase to help reproduce the events happening when the satellite is in orbit. A comprehensive model has been developed using theoretical and empirical relations.
The third part of the research addresses the problem of controlling multiple redundant devices while allowing for failures to occur. This is very important to guarantee the successful operation of VLM systems with many thrusters while performing combined attitude-position maneuvers. A fuzzy control system was developed, introducing an automatic rule generation algorithm that allows the fuzzy controller to solve control allocation problems.
Finally, the last part of the research investigates the possible applications of VLM systems. An example scenario is considered to analyze the performance required to execute different maneuvers and missions.
The key contributions of the work presented in this thesis are related to the modeling and control of Vaporizing Liquid Microthrusters. A comprehensive model of the complete system has been proposed and used to develop control algorithms for individual thrusters and for a set of thrusters. A fuzzy control system has been developed to solve the problem of controlling multiple devices with redundant outputs. Finally, an in-depth literature study and an analysis of the possible applications made it possible to put VLM systems into perspective, offering a glimpse into the future development of such systems.","Vaporizing Liquid Microthruster (VLM); Micro Electro-Mechanical Systems; spacecraft; control","en","doctoral thesis","","978-94-028-1311-1","","","","","","","","","Space Systems Engineering","","",""
"uuid:0b91c68f-4da7-4745-8d08-c39c0bb00e81","http://resolver.tudelft.nl/uuid:0b91c68f-4da7-4745-8d08-c39c0bb00e81","Advanced Factorization Models for Recommender Systems","Loni, B. (TU Delft Multimedia Computing)","Hanjalic, A. (promotor); Larson, M.A. (promotor); Delft University of Technology (degree granting institution)","2018","Recommender Systems have become a crucial tool to serve personalized content and to promote online products and media, but also to recommend restaurants, events, news and dating profiles. The underlying algorithms have a significant impact on the quality of recommendations and have been the subject of many studies in the last two decades. In this thesis we focus on factorization models, a class of recommender system algorithms that learn user preferences based on a method called factorization. This method is a common approach in Collaborative Filtering (CF), the most successful and widely-used technique in recommender systems, where user preferences are learnt based on the preferences of similar users.
We study factorization models from an algorithmic perspective to be able to extend their applications to a wider range of problems and to improve their effectiveness. The majority of the techniques that are proposed in this thesis are based on state-of-the-art factorization models known as Factorization Machines (FMs).","Factorization Models; Collaborative Filtering; Recommender Systems","en","doctoral thesis","","978-94-6375-232-9","","","","","","","","","Multimedia Computing","","",""
"uuid:ce13d0b5-e00d-4d91-81f8-c7bd7d273345","http://resolver.tudelft.nl/uuid:ce13d0b5-e00d-4d91-81f8-c7bd7d273345","Healing water: using pure water jets to perform bone debridement treatments in orthopedic surgery","den Dunnen, S. (TU Delft Medical Instruments & Bio-Inspired Technology)","Dankelman, J. (promotor); Kerkhoffs, Gino M.M.J. (promotor); Tuijthof, G.J.M. (copromotor); Delft University of Technology (degree granting institution)","2018","Orthopedic surgery is a surgical discipline that is concerned with the treatment of the musculoskeletal system. Many orthopedic treatments involve cutting or drilling in bones by using rigid drills or oscillating saws. Using waterjets instead of conventional instruments can be beneficial due to the absence of thermal damage and a consistent sharp cut. Additionally, waterjet technology allows the development of flexible instruments that facilitate maneuvering through complex or narrow joint spaces. Therefore, the aim of this thesis is to develop a compliant or flexible arthroscopic surgical instrument, based on water jet technology, that is able to drill in bone tissue.","","en","doctoral thesis","","","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:7372f079-4bb5-46eb-b203-315afb8781c8","http://resolver.tudelft.nl/uuid:7372f079-4bb5-46eb-b203-315afb8781c8","Process intensification of microwave assisted methane dry reforming","Gangurde, L.S. (TU Delft Intensified Reaction and Separation Systems)","Stankiewicz, A.I. (promotor); Stefanidis, G. (promotor); Delft University of Technology (degree granting institution)","2018","Resource- and energy-efficient methane (CH4) transformation to fuels and chemicals is a research topic with societal, environmental and industrial relevance owing to the great variety of methane sources, including existing gas networks, small natural gas fields, shale gas, coal beds, agricultural biogas, deep-sea methane hydrates and the pressing issue of methane flaring in remote locations. In addition, CH4 and carbon dioxide (CO2) are the two greenhouse gases contributing most to global warming, and their effect is expected to increase in years to come due to the continuously increasing energy demand worldwide. In this frame, CH4 reforming by CO2 (dry methane reforming) by means of different catalytic materials and technologies has been investigated over the years as a potential route for valorisation of the two molecules.","","en","doctoral thesis","","978-94-6375-229-9","","","","","","","","","Intensified Reaction and Separation Systems","","",""
"uuid:1a7bbcd9-e0d8-489d-9c0f-67b8595fb945","http://resolver.tudelft.nl/uuid:1a7bbcd9-e0d8-489d-9c0f-67b8595fb945","Technical Performance of EHV Power Transmission Systems with Long Underground Cables","Khalilnezhad, H. (TU Delft Intelligent Electrical Power Grids)","van der Sluis, L. (promotor); Popov, M. (promotor); Delft University of Technology (degree granting institution)","2018","Extra high voltage (EHV) power transmission systems have been traditionally constructed by using overhead lines (OHL) to transfer power over long distances. During the last decade, the opposition against the construction of new OHLs has significantly increased due to societal and environmental concerns, which has caused major obstacles for grid development. Under these circumstances, system operators have been urged to find solutions and alternatives for future grid developments. A promising solution to this challenge is to underground the transmission grids (fully or partially) by means of EHV AC underground cables. In this regard, future transmission grids will be composed of OHLs in non-sensitive areas and underground cables in sensitive areas such as populated neighbourhoods and environmentally sensitive locations, which implies a large-scale utilization of long cables in future EHV grids. These grids are known as hybrid OHL-Cable grids. Although this is very encouraging from the societal and environmental points of view, new challenges arise, mainly from the technical perspective and the high capital cost. Regarding the technical perspective, the large-scale application of long cables in transmission grids is not yet a well-practiced technique for system operators. In fact, cables have been widely used in low and medium voltage distribution grids, but not in EHV transmission grids. The electrical, thermal, and mechanical characteristics of cables and OHLs are significantly different.
These differences can cause various technical problems in the grid, which in turn may increase the chance of damage to system components and reduce the reliability of the power supply. Therefore, a decision for the large-scale utilization of EHV cables will be very risky without complete knowledge of and insight into the expected hazards and their countermeasures. This was the main driving force for system operators and manufacturers to carry out research and investigation on the technical performance of EHV grids with long cables. So far, much research has been performed to investigate the design and operation of long EHV cables in transmission grids. These studies have answered many questions and resolved many unknowns, but there are still several important scientific gaps that have to be tackled. As a result, the Dutch transmission system operator, TenneT, began an extensive ten-year cable research program together with the Technical Universities of Delft and Eindhoven to investigate the technical possibilities of utilizing long EHV underground cables in future transmission projects. This thesis, as the last part of the Dutch cable research program, provides robust and comprehensive answers to the most crucial scientific gaps and addresses the required techniques for the reliable operation of cable projects. These techniques can be used in practice by system operators since they are based on realistic assumptions and reliable simulations on an accurate model of an actual power transmission system. This thesis focuses on crucial phenomena related to the steady-state operation, harmonic behaviour, and transient operation of hybrid OHL-Cable systems. A hypothetical future project in the Dutch 380 kV grid with 80 km transmission length was selected as the case study, for which all phenomena were studied according to the most recent standards and grid code.
The main scientific contribution of this thesis is the rigorous and comprehensive analysis of a hybrid OHL-Cable system to identify the impact of long cables, system parameters and topology on the system operation. The thesis proposes a methodology for optimal compensation of the cable reactive power in order to enhance the system performance. Moreover, the significance of energization overvoltages is investigated by a robust statistical analysis, which is the first of its kind for hybrid OHL-Cable grids. Last but not least, two new countermeasures for the zero-missing phenomenon have been developed and several other countermeasures have also been investigated. The main conclusion of the thesis is that the large-scale application of underground cables in transmission systems is technically possible under the condition that all technical phenomena and issues are properly addressed in the planning and design phases of each project. A case-by-case study for cable projects is a “must” as each project has its own electrical and geographical characteristics. System parameters and topology are different in different areas and consequently the severity of phenomena and challenges will be different. Several countermeasures are available for each technical issue, from which the optimal one should be selected by conducting an in-depth technical analysis. The decision to choose the right countermeasure is highly dependent on the project specifications. Finally, for each cable project, it is always recommended to perform a step-by-step study similar to the approach presented in this thesis, in which all the relevant phenomena from the steady-state operation to the electromagnetic transient behaviour are investigated.
The study should follow the guidelines, grid code, manufacturer requirements, and standards in order to guarantee that all requirements for a reliable system operation are met accordingly.","Hybrid OHL-Cable grids; EHV underground cables; Power system transients; Power system planning and design","en","doctoral thesis","","978-94-6375-217-6","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:8205cc34-30df-45f0-b6eb-8081bdb765b8","http://resolver.tudelft.nl/uuid:8205cc34-30df-45f0-b6eb-8081bdb765b8","Quantum Control Architecture: Bridging the Gap between Quantum Software and Hardware","Fu, X. (TU Delft FTQC/Bertels Lab; TU Delft Computer Engineering)","Bertels, K.L.M. (promotor); DiCarlo, L. (promotor); Delft University of Technology (degree granting institution)","2018","Quantum computers can accelerate solving some problems which are inefficiently solved by classical computers, such as quantum chemistry simulation. To date, quantum computer engineering has focused primarily on opposite ends of the required system stack: devising high-level programming languages and compilers to describe and optimize quantum algorithms, and building reliable low-level quantum hardware. Relatively little attention has been given to using the compiler output to fully control the operations on current experimental quantum processors.
Bridging this gap, we propose and build a prototype of a flexible control microarchitecture, named QuMA, supporting quantum-classical mixed code for a superconducting quantum processor. The microarchitecture is based on three core elements: (i) a codeword-based event control scheme, (ii) queue-based precise event timing control, and (iii) a flexible multilevel instruction decoding mechanism for control.","Quantum Instruction Set Architecture; Quantum Control Microarchitecture; Quantum Architecture Simulator","en","doctoral thesis","","978-94-028-1305-0","","","","","","","","","FTQC/Bertels Lab","","",""
"uuid:6cdf5170-69f6-48c5-b953-a790bc611ac8","http://resolver.tudelft.nl/uuid:6cdf5170-69f6-48c5-b953-a790bc611ac8","Life on N2O: On the ecophysiology on nitrous oxide reduction & its potential as a greenhouse gas sink in wastewater treatment","Conthe Calvo, M. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Kleerebezem, R. (copromotor); Delft University of Technology (degree granting institution)","2018","With its rapidly rising concentration in the atmosphere and its high global warming potential, N2O is arguably the greenhouse gas of the 21st century. The research carried out within the Nitrous Oxide Research Alliance (NORA) – of which this thesis forms part – focused on the microbial conversions of N2O within the nitrogen cycle, the ultimate aim being to develop N2O mitigation strategies for natural and managed ecosystems such as agricultural soils and wastewater treatment plants. A variety of pathways in the nitrogen cycle produce N2O, but respiratory N2O reduction to N2 by microorganisms harboring an N2O reductase enzyme (encoded by the gene nosZ) is the only known microbial conversion that consumes N2O. N2O-respiring microorganisms may thus be key in this endeavour. Studies in literature reporting the cultivation of denitrifying bacteria with N2O as a sole electron acceptor date back to the 1950s, and in recent years there have been important discoveries of novel groups of denitrifying and non-denitrifying N2O-reducing bacteria and archaea, and of their importance for N2O reduction in the environment. Nevertheless, essential aspects of N2O reduction remain unclear, and the aim of this thesis was to fill in some of the existing knowledge gaps regarding N2O-reducer ecophysiology, using wastewater treatment as a frame of reference.
Our main approach was to study simplified, naturally selected, N2O reducing bacterial communities in chemostat enrichment cultures fed with N2O as the sole electron acceptor and acetate as electron donor. Continuous cultivation, which selects for a fairly simple community, is ideal for ecophysiology studies as it bridges the gap between ecosystem studies and pure culture work. Furthermore, it allows for cultivation under constant and limiting conditions...","","en","doctoral thesis","","978-94-6375-222-0","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:96870b88-e07b-4ec2-8bd4-ef2cd3713568","http://resolver.tudelft.nl/uuid:96870b88-e07b-4ec2-8bd4-ef2cd3713568","An experimental investigation of sloshing impact physics in membrane LNG tanks on floating structures","Bogaert, H (TU Delft Ship Hydromechanics and Structures)","Huijsmans, R.H.M. (promotor); Kaminski, M.L. (promotor); Delft University of Technology (degree granting institution)","2018","","","en","doctoral thesis","","978-94-6332-434-2","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:5b3a2578-f3c3-43f0-b226-ebbde8821670","http://resolver.tudelft.nl/uuid:5b3a2578-f3c3-43f0-b226-ebbde8821670","An Inhabitable Infrastructure: Rethinking the architecture of the bazaar","Sanaan Bensi, N. (TU Delft OLD Public Building; TU Delft Teachers of Practice)","Riedijk, M. (promotor); Avermaete, T.L.P. (promotor); Schoonderbeek, M.G.H. (copromotor); Delft University of Technology (degree granting institution)","2018","The aim of this thesis is twofold. First, it offers a ‘theoretical reading’ of a historically important architectural entity – namely the bazaar – in order to propose a synthetic understanding of its complexity and to explore the multiplicity of forces and regimes involved in the bazaar’s [historical] formation. Second, by conceptualizing the architecture of the bazaar, this thesis explores the relation between architecture and territory, inhabitation and infrastructure in the context of the Iranian Plateau. In doing so, this thesis contributes to the production of architectural knowledge which encourages the contextual study of complex spatial regimes and mechanisms as prime forces for intervention.
The notion of the bazaar is complex. Not only does it have implications in diverse disciplines, but it also carries various definitions. Depending on the context in which it is used, the bazaar can be depicted as a place, a form of economy, a social class or a way of life, and thus it can embody the notion of a city or a territory, or it can even be expanded to the region known as the Middle East or the Islamic world. Within this wide spectrum of possible meanings, the bazaar has been the topic of discourse in architecture and urban history, as well as anthropology, sociology, economics and political science.
The inherent complexity of the notion of the bazaar is attributable to its intermediate position, i.e. its relation to the territory and various ways of life, its spatial complexity, i.e. a space of movement and a place of the public and the collective, and the superposition of different scales between architecture and the city. This implies that research on the bazaar needs to deviate from purely typological or urban morphological studies. Rather, it needs to devote simultaneous attention to people as well as to the numerous spatial interrelations involved in its formation. This means that an architecture is possible which gives form to the accumulation of complex cultural, social, economic and administrative relations. While it enables connection and integration, it provides scope for confrontation and encounter.
The first chapter provides an overview of various conceptions, definitions and perceptions of the bazaar. This chapter will demonstrate that a proper discursive framework that allows us to grasp the spatial complexity of the bazaar is, in fact, missing. While architecture and urban studies have focused mainly on describing and classifying the bazaar’s structural and morphological presence, other disciplines have hardly recognized its physical importance in the process of forming various interrelations. This chapter concludes that the bazaar is not simply an architectural object but rather a territorial entity. This means that the bazaar’s formation has been closely related to the ways in which the territory has been managed and inhabited.
Subsequently, this research conceptualizes the architecture of the bazaar by revisiting its ‘whereness’ and ‘whatness’, using the ‘territory’ as a theoretical framework. While ‘whereness’ addresses the characteristics of ‘where’ the bazaar is historically located, ‘whatness’ is concerned with what the bazaar is and what it does. In this process, it is important to note that ‘whereness’ and ‘whatness’ are closely linked to each other, and they are both simultaneously a precondition and product.
The second part of the thesis – which includes chapters three and four – presents an understanding of the ‘whereness’. This part seeks means and lenses to open a discussion on territory both as a precondition and product. These two chapters discuss the geographical condition – what I call the geopolitics of the in-between – through which two kinds of territorialities take form: i.e. the extensive territoriality of the nomadic, spatialized through distribution and movement, and the intensive territoriality of the sedentary, spatialized through the managerial knowledge of the dehqan to inhabit a land. The coexistence, encounter and assimilation of these territorialities have had an impact on the state-form and the social and economic system on the Iranian Plateau in general, and on the spatial formation of the bazaar as an intermediate.
The third part of this thesis focuses on the issue of ‘whatness’. This part – chapters five and six – re-examines the established knowledge on the bazaar as a physical and spatial entity by experimenting within the two kinds of territorialities proposed in the previous chapters. In other words, the bazaar is seen as an assemblage of various territorial regimes rooted in the extensive nomadic territoriality and the intensive sedentary territoriality. This pertains not only to the relation between movement and inhabitation, space and place in the bazaar’s physical structure, but also to its social and legal organization, topology and logistical system. Thus, the bazaar goes beyond mere circulation space; rather, it is perceived as an infra+structure which is situated within the city and operates as the city’s main [public] place.
The present thesis examines the possibility of constructing a discursive platform for studying the bazaar as a complex architectural entity. It posits a critical reading of the bazaar’s primary spatial idea, suggesting that a territorial reading of the bazaar can provide a valuable alternative lens for looking beyond mere preservation concerns or the purely formal imitations that are normally applied when examining the current condition of the bazaar in Iranian cities. It can help to redefine the intermediate position of the bazaar as a way of discovering new orders and hierarchies within and without the city.","Architecture; Territory; Bazaar; Inhabitable Infrastructure","en","doctoral thesis","","","","","","","","2020-12-18","","","OLD Public Buiding","","",""
"uuid:32765560-5fde-4c86-a778-decdc3eb5294","http://resolver.tudelft.nl/uuid:32765560-5fde-4c86-a778-decdc3eb5294","An Empirical Approach to Reinforcement Learning for Micro Aerial Vehicles","Junell, J. (TU Delft Control & Simulation)","Mulder, Max (promotor); Chu, Q. P. (promotor); Delft University of Technology (degree granting institution)","2018","The use of Micro Aerial Vehicles (MAVs) in practical applications, to solve real-world problems, is growing in demand as the technology becomes more widely known and accessible. Proposed applications already span a wide berth of fields like military, search and rescue, ecology, artificial pollinators, and more. As compared to larger Unmanned Aerial Systems (UAS), MAVs are specifically desirable for applications which take advantage of their small size or light weight – whether that means being discreet, having insect-like maneuverability, operating in small spaces, or being more inherently safe with respect to injury towards people. In some cases, MAVs work under conditions where autonomy is needed. The small size of MAVs and the desire for autonomy combine to create a demanding set of challenges for the guidance, navigation, and control (GNC) of these systems. Limitations of on-board sensors, difficulties in modeling their complex and often time varying dynamics, and limited on-board computational resources, are just a few examples of the challenges facing MAV autonomy...","Reinforcement Learning; Micro Aerial Vehicle; Quadrotor; Policy Iteration; Hierarchical Reinforcement Learning; State Abstraction; Transfer learning","en","doctoral thesis","","978-94-6186-965-4","","","","","","","","","Control & Simulation","","",""
"uuid:f50c2129-6771-468b-aa3c-7c1fdac4e425","http://resolver.tudelft.nl/uuid:f50c2129-6771-468b-aa3c-7c1fdac4e425","Testing and Diagnosis of High Voltage and Extra High Voltage Power Cables with Damped AC Voltages","Cichecki, P (TU Delft OLD High-Voltage Technology and Management)","Smit, J.J. (promotor); Delft University of Technology (degree granting institution)","2018","The thesis focuses on on-site testing and diagnosis of transmission power cables circuits. Based on the application of methods for on-site voltage generation and its use for advanced diagnosis, comprehensive testing and diagnostic procedures have been investigated in this thesis.","","en","doctoral thesis","","978-83-952726-0-8","","","","","","","","","OLD High-Voltage Technology and Management","","",""
"uuid:653d520b-07c0-4fd8-bd40-4f4ee74d8bf5","http://resolver.tudelft.nl/uuid:653d520b-07c0-4fd8-bd40-4f4ee74d8bf5","Image formation for future radio telescopes","Naghibzadeh, S. (TU Delft Signal Processing Systems)","van der Veen, A.J. (promotor); Delft University of Technology (degree granting institution)","2018","Fundamental scientific questions such as how the first stars were formed or how the universe came into existence and evolved to its present state drive us to observe weak radio signals impinging on the earth from the early days of the universe. During the last century, radio astronomy has been vastly advancing. Important discoveries on the formation of various celestial objects such as pulsars, neutron stars, black holes, radio galaxies and quasars are the result of radio astronomical observations. To study celestial objects and the astrophysical processes that are responsible for their radio emissions, images must be formed. This is done with the help of large radio telescope arrays. Next generation radio telescopes such as the Low Frequency Array Radio Telescope (LOFAR) [1] and the Square Kilometer Array (SKA) [2], bring about increasingly more observational evidence for the study of the radio sky by generating very high resolution and high fidelity images. In this dissertation, we study radio astronomical imaging as the problem of estimating the sky spatial intensity distribution over the field of view of the radio telescope array from the incomplete and noisy array data. The increased sensitivity, resolution and sky coverage of the new instruments pose additional challenges to the current radio astronomical imaging pipeline. Namely, the large amount of data captured by the radio telescopes cannot be stored and needs to be processed quasi-real time. Many pixel-based imaging algorithms, such as the widely-used CLEAN [3] algorithm, are not scalable to the size of the required images and perform very slow in high resolution scenarios. 
Therefore, there is an urgent need for new efficient imaging algorithms. Moreover, regardless of the amount of collected data, there is an inherent loss of information in the measurement process due to physical limitations. Therefore, to recover physically meaningful images, additional information in the form of constraints and regularizing assumptions is necessary. The central objective of the current dissertation is to introduce advanced algebraic techniques together with custom-made regularization schemes to speed up the image formation pipeline of the next generation radio telescopes.","radio interferometry; inverse problems; image formation","en","doctoral thesis","","978-94-6366-101-0","","","","","","","","","Signal Processing Systems","","",""
"uuid:60ef07b2-00db-418b-9495-5a9baf6105df","http://resolver.tudelft.nl/uuid:60ef07b2-00db-418b-9495-5a9baf6105df","BiGlobal Stability of Shear Flows: Spanwise & Streamwise Analyses","Groot, K.J. (TU Delft Aerodynamics)","van Oudheusden, B.W. (promotor); Kotsonis, M. (copromotor); Schuttelaars, H.M. (copromotor); Delft University of Technology (degree granting institution)","2018","Laminar-turbulent transition dictates an increase in skin friction. The resulting turbulent skin friction contributes to approximately 40% of the total drag of commercial aircraft. Reducing the turbulent flow region by postponing transition can therefore significantly reduce the carbon footprint and costs of flying. Transition prediction is required in order to do so, which depends on a detailed understanding of the transition process.","Flow instability; measured base flows; Micro-ramp; Swept-wing boundary layer; Crossflow instability; Streamwise BiGlobal problem","en","doctoral thesis","","978-94-6366-115-7","","","","","","","","","Aerodynamics","","",""
"uuid:cc96a7c7-1ec7-449a-84b0-2f9a342a5be5","http://resolver.tudelft.nl/uuid:cc96a7c7-1ec7-449a-84b0-2f9a342a5be5","A new method to assess the climate effect of mitigation strategies for road traffic: The fast chemistry-climate response model TransClim","Rieger, V.S. (TU Delft Aircraft Noise and Climate Effects; Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR))","Grewe, V. (promotor); Delft University of Technology (degree granting institution)","2018","Emissions of road traffic crucially influence Earth’s climate. The vehicle fleet emits not only carbon dioxide (CO2), but also nitrogen oxides (NOx), volatile organic compounds (VOC) and carbon monoxide (CO) which produce ozone (O3) and destroy methane (CH4) in the troposphere. As the demand of mobility is expected to further increase in future, a reduction of the climate effect from road traffic emissions is indispensable. Therefore, it is essential to assess the climate impact of emission changes caused by technological trends and mitigation strategies for road traffic. Several studies have already quantified the impact of road traffic emissions on climate. But climate simulations with complex chemistry climate models are still computational expensive hampering the assessment of many road traffic emission scenarios. Consequently, an efficient method for quantifying the climate impact and contribution of mitigation options is required. Within the scope of this thesis, a unique chemistry-climate response model called TransClim (Modelling the effect of surface Transportation on Climate) was developed. Using an efficient interpolation algorithm, it assesses the impact and the contribution of road traffic emission scenarios on O3 and CH4 concentration as well as their corresponding radiative forcings. Comparing the results delivered by TransClim with simulations of the complex global chemistry climate model EMAC reveals very low deviations (0.02 – 6 %). 
To determine not only the impact but also the contribution of road traffic emissions to O3, OH and CH4 in TransClim, a so-called tagging method is applied. It attributes the concentrations of trace gases to emission sources such as road traffic. This thesis presents an improved tagging method for the short-lived species OH and HO2 as well as a new method for CH4. Within the scope of this thesis, TransClim was used to assess the climate effect in two scientific questions: first, the effect of three prospective mitigation options for German road traffic, and second, two scenarios in which European vehicles use fuel blends containing a low and a high proportion of biofuels, respectively. Summing up, TransClim offers a new method to quickly assess the climate impact and the contribution of mitigation strategies for road traffic in a sufficiently accurate manner. As TransClim simulates about 6000 times faster than a complex chemistry climate model, it makes it possible to quantify the effect of many emission scenarios in different regions.","Climate effect of road traffic; tagging method; response model; climate effect of biofuels","en","doctoral thesis","","","","","","","","","","","Aircraft Noise and Climate Effects","","",""
"uuid:7482b78d-9daf-4760-b114-ec1ad338e66b","http://resolver.tudelft.nl/uuid:7482b78d-9daf-4760-b114-ec1ad338e66b","Time-dependent flows over textured or compliant surfaces: Turbulent drag reduction & compliant wall deformation","Benschop, H.O.G. (TU Delft Fluid Mechanics)","Westerweel, J. (promotor); Breugem, W.P. (promotor); Delft University of Technology (degree granting institution)","2018","A significant part of the fuel used for transportation results from the drag in turbulent flows. Techniques for turbulent drag reduction yield associated reductions of the fuel consumption and greenhouse gas emissions, which is desirable from both economic and environmental perspectives (cf. chapter 1). This thesis investigates two passive techniques that could be exploited for the reduction of frictional drag in turbulent flows, namely textured and compliant surfaces. Correspondingly, the aim of the thesis is twofold, namely to explore the drag-reducing potential of riblet-textured surfaces, and to characterize the interaction between time-dependent (possibly turbulent) flows and a compliant wall. The work presented in this thesis was performed as part of the European project SEAFRONT, which aimed at the development of environmentally benign antifouling and dragreducing technologies for the maritime sector…","","en","doctoral thesis","","978-94-6366-100-3","","","","","","","","","Fluid Mechanics","","",""
"uuid:f77acd29-5115-4aea-a036-78e8631a268a","http://resolver.tudelft.nl/uuid:f77acd29-5115-4aea-a036-78e8631a268a","Assessment of Capacity and Risk: A Framework for Vessel Traffic in Ports","Bellsola Olba, X. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Vellinga, T. (promotor); Daamen, W. (copromotor); Delft University of Technology (degree granting institution)","2018","Vessel traffic in ports is a key issue due to the high increase in vessel flows that lead to busier waterways. This dissertation presents novel methodologies to assess vessel traffic in ports based on capacity and risk independently and jointly. These methodologies have been applied to case studies using simulation models and AIS data. They provide a framework to support decision makers when assessing new infrastructure designs, expansions or changes in the vessel traffic management
strategies.","","en","doctoral thesis","TRAIL Research School","978-90-5584-241-4","","","","TRAIL Thesis Series no. T2018/11, the Netherlands TRAIL Research School","","","","","Transport and Planning","","",""
"uuid:cda9dd80-0a51-436e-9c70-ea174505692a","http://resolver.tudelft.nl/uuid:cda9dd80-0a51-436e-9c70-ea174505692a","How humans use preview information in manual control","van der El, Kasper (TU Delft Control & Simulation)","Mulder, Max (promotor); Pool, D.M. (copromotor); Delft University of Technology (degree granting institution)","2018","The introduction of ever-advancing automatic control systems is rapidly changing traditional manual control tasks such as piloting of aircraft and steering of cars. In order to predict how human controllers will interact with new technology, a thorough understanding of the human’s adaptive manual control capabilities and limitations is essential. This thesis investigates human manual control behavior in control tasks with preview, where information is available about the trajectory to follow in the future; an example is the road that is visible ahead while driving. Human control behavior is measured in tasks that range from basic display tracking (yellow grid on this cover) to realistic car curve driving. A unifying control theoretic model is developed, which captures the measured human control behavior in all preview tasks. The proposed theoretical advancements do not only improve our understanding of manual preview control, but also pave the way for an objective model-based approach to optimize the design of tomorrow’s intelligent automation technology.","Preview control; Manual control; Human behavior analysis; Tracking tasks; Driver steering; System identification; Parameter estimation","en","doctoral thesis","","978-94-6186-967-8","","","","","","","","","Control & Simulation","","",""
"uuid:7e85a2eb-2bb5-4ba6-a0ed-59d92560597b","http://resolver.tudelft.nl/uuid:7e85a2eb-2bb5-4ba6-a0ed-59d92560597b","Place Branding in Megacity Regions in China: coping with ambiguous national environmental policies","Lu, H. (TU Delft Organisation & Governance)","de Jong, W.M. (promotor); ten Heuvelhof, E.F. (promotor); Delft University of Technology (degree granting institution)","2018","In the past four decades, China has experienced unprecedented rates of urban growth. This remarkable urbanization has also created challenges for China’s environment. To protect the environment, the Chinese national government has issued several policies while still maintaining high economic growth, such as Scientific Approach to Development in 2003 and Ecological Civilization in 2007. Furthermore, ecological principles, such as intensive, smart, green, and low-carbon, were further confirmed in the urbanization process in the National New Urbanization Plan in 2014 (State Council, 2014). However, the policy scopes are broad, and goals are also ambiguous in the corresponding policy documents. Facing devastating environmental problems, Chinese regions and cities respond by trying to attract economic activity with higher economic value and lower environmental cost. The proliferation of environmental concerns in place brands reveals influence from national government. These place identities and labels should go beyond mere intentions. The regional and municipal governments have an obligation to promote sustainable development initiatives listed in their policy plans. In the process of urban expansion, new towns are archetypes of urban projects to flesh out these sustainable development initiatives in China. This dissertation studies regional and city branding in China from two angles, i.e., place branding and the intergovernmental context. 
First, the place branding process focuses on the development stages of regional and city brands, which uncovers brand identities and labels in planning documents, as well as city images created around urban projects. Second, the intergovernmental context further addresses the interactions among different levels of government in the decision-making regarding brand identities and labels, as well as private actors in urban projects...","","en","doctoral thesis","","978-94-6366-108-9","","","","","","","","","Organisation & Governance","","",""
"uuid:4c3ba42f-091e-4b9a-b2bb-68e018a3d4db","http://resolver.tudelft.nl/uuid:4c3ba42f-091e-4b9a-b2bb-68e018a3d4db","Reconstruction and reduction of uncertainties in aeroelastic systems","Sarma, R. (TU Delft Aerodynamics)","Bijl, H. (promotor); Dwight, R.P. (copromotor); Delft University of Technology (degree granting institution)","2018","The growing demand for energy worldwide has resulted in the exploration and
development of sustainable forms of energy, such as wind energy. Wind turbines
are typically used to extract power from the wind through the rotational motion
of blades, which are aeroelastic structures. Among other practical examples, aircraft wings are also aeroelastic in nature. Aeroelastic structures suffer from inherent instabilities and fatigue, and hence their design process requires characterisation of safe operating regimes in order to prevent failure. In this dissertation, we present a methodology for predicting dynamic aeroelastic behaviour, additionally employing data from experiments to improve predictions. The methodology is demonstrated on three test-cases: a 2-DoF airfoil, the Goland wing and an experimental downwind wind turbine. The presented method is generic in its applicability to any aeroelastic problem; however, considering the engineering and societal relevance, the wind turbine problem is extensively investigated. The dissertation contributes to three broad scientific domains - aeroelasticity, reduced order modelling and uncertainty quantification.","Aeroelasticity; ROMs; Uncertainty quantification","en","doctoral thesis","","978-94-6366-107-2","","","","","","","","","Aerodynamics","","",""
"uuid:8f61321c-24c8-4d00-aa9c-738a125e6c98","http://resolver.tudelft.nl/uuid:8f61321c-24c8-4d00-aa9c-738a125e6c98","Capturing human behaviour through wearables by computational analysis of social dynamics","Gedik, E. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Hung, H.S. (copromotor); Delft University of Technology (degree granting institution)","2018","Understanding human behaviour has sparked the minds of many throughout centuries. One intriguing aspect of human behaviour is the social part; how humans react to each other and their environment. Scientifically studying such behaviour is hampered because of the need for manual annotations, so that social scientists limited themselves to observing only short time intervals in limited settings. With the growing processing power of computers and increasing possibilities of robust, continuous, and mobile sensing, collecting and analysing large amounts of real-life behaviour data has become possible. Moreover, computational methods make it possible to go beyond traditional approaches for social understanding, since they detect patterns that are not easily distinguishable for humans. However, even with powerful computational models, investigating human behaviour is quite challenging as behaviour is personal and contextual, resulting in huge variations. This thesis proposes novel computational solutions for analysing human social behaviour. It focusses on data collected from people with wearable accelerometers in crowded events where people freely mingle with each other. It provides solutions to robustly detect actions and interactions, as well as how to use the detected information to derive higher level social understanding. The thesis starts by introducing novel ways of detecting social actions and interactions. To deal with intra personal variations, we show how general action predictors can be adapted to become personalized models using the transfer learning methodology. 
Further, we show that conversing groups can be detected from interaction dynamics, instead of from the commonly preferred modality of proximity. Large variations of interaction patterns that might arise in unrestricted scenarios are addressed by a novel method that considers the sizes of the groups, both in the training and detection phases. The thesis continues with a proof-of-concept study that shows how detected action and interaction patterns of people can be used to infer an individual’s psychological construct. We show that it is possible to detect the construct of personality in a real-life event by estimating two behavioural cues (speaking and movement) from one digital modality (acceleration). Additionally, we describe a detailed investigation of how social context moderates an individual’s evaluation of a live performance. Through a novel approach, we infer audience members’ evaluations from informative parts of the event, identified by the linkage of body accelerations. Taken together, with this thesis we show that with the increased sensing and computing power, the understanding of human social behaviour in more dynamic social situations is within reach.","","en","doctoral thesis","","978-94-6380-143-0","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:2a3aa826-be9d-4663-add9-884a42c23a21","http://resolver.tudelft.nl/uuid:2a3aa826-be9d-4663-add9-884a42c23a21","The effects of using mobile phones and navigation systems during driving","Knapper, A.S. (TU Delft Transport and Planning)","Hagenzieker, Marjan (promotor); Brookhuis, K.A. (promotor); Delft University of Technology (degree granting institution)","2018","The effects of using mobile phones and navigation systems during driving Driving might be the most complex task that many engage in on a daily basis. Drivers need to pay attention to other vehicles, cyclists and pedestrians, while keeping the car safely between the road markings and at an appropriate distance from any vehicle in front. Several factors relating to human behaviour affect the likelihood of someone being involved in a crash. The WHO (2015) distinguishes speed, drink driving, motorcycle helmets, seatbelts and child restraints, and distracted driving as the key risk factors. Many countries have put distraction as one of their policy priorities for the coming years. The precise impact of distracted driving on crash likelihood is not known yet. Estimates of road user distraction being a contributory factor in accidents range from 10 to 30% (TRL, TNO, & RappTrans, 2015). This thesis focuses on drivers being distracted from mobile phones and navigation systems, and how their driving performance is affected. Mobile phones are predominantly smartphones nowadays, with touchscreens, downloadable apps and e-mail. Most drivers in Western countries own a mobile phone. Navigation systems may help the driver navigate, providing both efficient routes and comfort. 
Navigation systems are widely used; in the Netherlands, for instance, two-thirds of all households owned a portable navigation system in 2015 (KiM, 2015).","","en","doctoral thesis","TRAIL Research School","978-90-73946-18-7","","","","TRAIL Thesis Series T2018/10, the Netherlands TRAIL Research School","","","","","Transport and Planning","","",""
"uuid:7afc2128-e023-41af-8a9c-f1df8c257fc4","http://resolver.tudelft.nl/uuid:7afc2128-e023-41af-8a9c-f1df8c257fc4","Geospatial Data on the Web","van den Brink, L.E. (TU Delft Urban Data Science)","Stoter, J.E. (promotor); Delft University of Technology (degree granting institution)","2018","Geospatial data is an increasingly important information asset for decisionmaking, from simple every day decisions like where to park your car, to national and international policy on topics like infrastructure and environment.
Because of the location aspect, geospatial data is often the linking pin between different datasets and therefore important for data integration. A lot of geospatial data is created, for example, as part of governmental processes and is nowadays also disseminated as open data, traditionally through ""Spatial data infrastructures"" (SDIs).
There is a lot of potential for reusing this data in domains and use cases other than those for which it was originally created. My main research question was: ""How to reuse geospatial data, from different, heterogeneous sources, via the web across communities?"" Several aspects of data dissemination must be addressed before open data is actually in a good position to be reused. These aspects have been coined the ""FAIR principles"": findability, accessibility, interoperability, and reusability.","3D Geo-Information; Linked Open Data; Web; Semantic harmonisation","en","doctoral thesis","","","","","","","","","","","Urban Data Science","","",""
"uuid:7923c257-e81f-4e29-adf7-bd6014d9da6a","http://resolver.tudelft.nl/uuid:7923c257-e81f-4e29-adf7-bd6014d9da6a","Safer reinforcement learning for robotics","Koryakovskiy, I. (TU Delft Biomechatronics & Human-Machine Control)","Vallery, H. (promotor); Babuska, R. (promotor); Delft University of Technology (degree granting institution)","2018","Reinforcement learning is an active research area in the fields of artificial intelligence and machine learning, with applications in control. The most important feature of reinforcement learning is its ability to learn without prior knowledge about the system. However, in the real world, reinforcement learning actions may lead to serious damage of a controlled robot or its surroundings in the absence of any prior knowledge. Safety — an often neglected factor in the reinforcement learning community — requires greater attention from researchers.
Prior knowledge can increase safety during learning. At the same time, it can severely limit the set of possible solutions and hamper learning performance. This thesis discusses the influence of different forms of prior knowledge on learning performance and the risk of robot damage, where prior knowledge ranges from physics-based assumptions, such as the robot construction and material properties, to knowledge of the task curriculum, or an approximate model possibly coupled with a nominal controller.","Reinforcement Learning; Humanoid Robots; Optimal Control; Learning and Adaptive Systems; Nonlinear Model Predictive Control; Parametric Uncertainties; Structural Uncertainties; Bipedal Robots","en","doctoral thesis","","978-5-00058-959-5","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:2b8835de-5f43-4c41-b940-80e566f5554d","http://resolver.tudelft.nl/uuid:2b8835de-5f43-4c41-b940-80e566f5554d","The Effectiveness of Risk Communication to Raise Awareness of Natural Hazards","Charriere, M.K.M. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); Bogaard, T.A. (promotor); Mostert, E. (copromotor); Delft University of Technology (degree granting institution)","2018","This doctoral thesis studies the effectiveness of real-life risk communication efforts that include visuals and aim to increase the awareness of populations at risk of natural hazards. Several methods are used. To obtain a picture of the current state of research and practice, a qualitative approach is followed, including a literature review of risk communication concerning floods and interviews with designers of Smartphone Apps on avalanche danger. To measure the effectiveness of a real risk communication effort, a quantitative approach is followed, including statistical analysis of survey responses and Radio-Frequency Identification technology. The studied risk communication effort is the ‘Alerte’ exhibition, held in the French Alps, which was designed with the local stakeholders following an action-oriented approach.","Risk communication; Risk awareness; Natural hazards; Action research","en","doctoral thesis","","978-94-028-1295-4","","","","","","","","","Water Resources","","",""
"uuid:141eaf11-7a89-4d8a-a6ab-174bb4d4e686","http://resolver.tudelft.nl/uuid:141eaf11-7a89-4d8a-a6ab-174bb4d4e686","Driver Behaviour during Control Transitions between Adaptive Cruise Control and Manual Driving: Empirics and Models","Varotto, S.F. (TU Delft Transport and Planning)","van Arem, B. (promotor); Hoogendoorn, S.P. (promotor); Farah, H. (copromotor); Delft University of Technology (degree granting institution)","2018","Adaptive Cruise Control (ACC) and automated vehicles can contribute to reduce traffic congestion and accidents. Field operational Tests have shown that drivers may prefer to deactivate ACC systems that are inactive at low speeds in dense traffic conditions and before changing lanes. These transitions between automated and manual driving are called control transitions. Notwithstanding the potential effects on traffic operations, most car-following and lane-changing models currently used to evaluate the impact of ACC do not describe control transitions. The main objectives of this thesis were to gain empirical insights into driving behaviour during control transitions from full-range ACC to manual driving and to model driver decisions to resume manual control in full-range ACC. To achieve these objectives, empirical data were collected in driver simulator and on-road experiments. Findings in these experiments showed that control transitions influence significantly the driver behaviour characteristics for a few seconds after manual control is resumed. Based on the empirical findings and on the Risk Allostasis Theory (RAT), this thesis developed a modelling framework describing the underlying decision-making process of drivers with full-range ACC at an operational level. This continuous-discrete choice model addresses interdependencies across driver decisions to resume manual control and to regulate the ACC target speed in terms of causality, unobserved driver characteristics, and state dependency. 
The results reveal that driver decisions with full-range ACC can be interpreted based on the RAT. The choice model can be used to forecast driver response to a driving assistance system that adapts its settings to prevent control transitions while guaranteeing safety and comfort. The model can also be implemented into a microscopic traffic flow simulation to evaluate the impact of ACC on traffic flow efficiency and safety, accounting for control transitions and target speed regulations.","Control transitions; Adaptive Cruise Control; On-road experiment; Driver simulator experiment; Driver behaviour; Continuous-discrete choice model","en","doctoral thesis","TRAIL Research School","978-90-5584-240-7","","","","TRAIL Thesis Series no. T2018/9, the Netherlands Research School TRAIL","","2018-12-09","","","Transport and Planning","","",""
"uuid:8a4a116e-b9c3-49b2-b6a7-d462113cf443","http://resolver.tudelft.nl/uuid:8a4a116e-b9c3-49b2-b6a7-d462113cf443","Expansion Governance of the Integrated North Seas Offshore Grid","Gorenstein Dedecca, J. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Hakvoort, R.A. (promotor); Delft University of Technology (degree granting institution); Comillas Pontifical University (degree granting institution); KTH Royal Institute of Technology (degree granting institution)","2018","The expansion of offshore power transmission and generation in the North Seas of Europe is accelerating rapidly. This is due to several drivers, including the decarbonization and reform of the European power system, and innovations in offshore wind and high-voltage direct current transmission. So far, this European North Seas offshore grid is composed of conventional transmission lines, which perform the interconnection of onshore power systems and the wind farm connection functions separately. An integrated offshore grid is an innovative concept where some of the transmission lines perform simultaneously both the interconnection and connection functions. Earlier research leveraging optimization approaches already demonstrated that such an integrated offshore grid can provide socio-economical, technical and environmental benefits...","Energy Union; expansion planning; governance; HVDC; myopic optimization; North Seas; offshore grid; offshore wind; simulation","en","doctoral thesis","","978-94-6186-962-3","","","","","","","","","Energie and Industrie","","",""
"uuid:a5efc9a0-0f9a-4ced-88be-51da26607ec0","http://resolver.tudelft.nl/uuid:a5efc9a0-0f9a-4ced-88be-51da26607ec0","Advancements in automated design methods for NICFD turbomachinery","Vitale, S. (TU Delft Flight Performance and Propulsion)","Colonna, Piero (promotor); Pini, M. (copromotor); Delft University of Technology (degree granting institution)","2018","The transition towards a more affordable, reliable, and sustainable energy provision paradigm is one of the main 21st century challenges that humanity must overcome to protect the planet from the harmful effect caused by climate change. The concentration of CO2 in the atmosphere has been dramatically increasing since the pre-industrial era. If the increase of green-house gasses emissions continues unabated, this will bring dramatic consequences for planet Earth, compromising eventually the existence of many species, including the human race. To avoid a climate change catastrophe, the share of primary energy coming from renewable energy resources must increase from around 15% in 2015 to 65% in 2050. This energy transition can not rely solely on few successful technologies (i.e., solar photovoltaic, and wind energy), but it must count on a larger variety of technical solutions that are suitable for a wider range of renewable sources and diversity of circumstances. For instance, renewable thermal energy sources for power generation (i.e., geothermal reservoir, biomass fuel, and concentrated solar radiation), can provide a large portion of the world electricity demand in the future. However, the exploitation of a good portion of these sources strongly depends on the market success of technologies such as the Organic Rankine Cycle (ORC) power system. One of the key aspects to make ORC systems economically competitive, especially at the smaller sizes (⇡ 1 − 50 kW), is the realization of highly efficient turbomachinery components. 
The fluid-dynamic design of ORC turbomachinery significantly differs from the design of traditional machines (i.e., steam and gas turbines), mainly due to the different thermo-physical properties and gas dynamic behavior of the organic working fluids. This means that design methods devised for standard steam and gas turbomachinery cannot be used for turbomachinery operating in the non-ideal compressible fluid dynamics (NICFD) region. Furthermore, no experimental campaigns have ever been carried out to create a body of empirical knowledge to support the design of highly efficient ORC turbomachinery. As a consequence, the entire design process of ORC turbomachinery relies only on the use of advanced CFD software. The current trend is to couple CFD tools with numerical optimization techniques in order to automatically obtain optimal flow passage geometries. In particular, adjoint-based methods have clearly been demonstrated to be the only optimization technique capable of tackling the multi-stage turbomachinery design problem, in which thousands of design variables must be concurrently optimized. Therefore, the research documented in this PhD dissertation aimed at extending the adjoint method in order to perform the fully-turbulent fluid-dynamic shape optimization of 3D multi-stage ORC turbomachinery. This document contains an extensive introduction, three main chapters, each documenting a building block towards the accomplishment of the main goal of this PhD project, and a final concluding chapter that summarizes all the research outcomes of this work and proposes future steps for research in this field. The first part of the thesis describes the extension of the RANS equations, the convective numerical schemes, and the viscous numerical schemes to the use of complex thermo-physical laws, so as to simulate turbulent flows in components working in the NICFD thermodynamic region.
The second part documents the derivation of the adjoint solver used to solve shape-optimization problems for a 2D single row of ORC turbomachinery. Finally, the last part reports the extension of the adjoint method to 3D multi-stage turbomachinery design.","","en","doctoral thesis","","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:60bd730e-fbcb-429c-9e44-a3f2de82ff73","http://resolver.tudelft.nl/uuid:60bd730e-fbcb-429c-9e44-a3f2de82ff73","Three-dimensional model for estuarine turbidity maxima in tidally dominated estuaries: An idealized modeling approach","Kumar, M. (TU Delft Mathematical Physics)","Schuttelaars, H.M. (promotor); Roos, Pieter C. (promotor); Delft University of Technology (degree granting institution)","2018","","estuary; modeling; residual motion; sediment transport; turbidity maximum; Ems estuary","en","doctoral thesis","","978-94-6186-993-7","","","","","","","","","Mathematical Physics","","",""
"uuid:eb604971-30b7-4668-ace0-4c4b60cd61bd","http://resolver.tudelft.nl/uuid:eb604971-30b7-4668-ace0-4c4b60cd61bd","On early-stage design of vital distribution systems on board ships","de Vos, P. (TU Delft Ship Design, Production and Operations)","Stapersma, D. (promotor); van Oers, B.J. (copromotor); Delft University of Technology (degree granting institution)","2018","This research aims to help in solving problems experienced with system integration in early-stage ship and system design by enabling automated design space exploration for on-board energy distribution systems. An Automatic Topology Generation (ATG) tool is developed and tested to do so. The ATG tool supports system designers in making trade-off analyses between system robustness and opposing design objectives for vital energy distribution systems on board of naval vessels. These systems include, amongst others, the electric power generation and distribution systems, chilled water distribution systems and propulsion systems. The ultimate goal of this line of research is to be able to better assess warship survivability in early-stage ship design in order to increase the chances of survival for ship and crew in hostile conditions. The research presented in this dissertation brings this goal closer.","On-board energy distribution systems; Design space exploration; System robustness and vulnerability; Automatic topology generation; Early-stage ship and system design","en","doctoral thesis","","978-94-6380-063-1","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:f6d25575-d312-4c26-8560-2b9c2572d135","http://resolver.tudelft.nl/uuid:f6d25575-d312-4c26-8560-2b9c2572d135","Geometry does matter2","Janbaz, S. (TU Delft Biomaterials & Tissue Biomechanics)","Zadpoor, A.A. (promotor); Delft University of Technology (degree granting institution)","2018","Nature is full of materials that exhibit astonishing properties that are not available in engineering materials. The study of the underlying structure of such materials has revealed that geometry plays an important role in achieving such properties. Unusual physical and mechanical properties such as structural coloring in butterfly wings and shock absorption in woodpecker skull are examples of how geometry could be used for functionalization of materials. At the same time, recent advancements in (additive) manufacturing techniques have enabled us to fabricate engineering materials whose ultrastructure is geometrically very complex. It is therefore now possible to design engineering materials with unusual properties. In this dissertation, two types of geometrical designs are used for development of mechanical metamaterials with unusual properties. That includes 1. Cellular structures working on the basis of mechanical instability, and 2. Origami-based designs. The dissertation has been organized in two parts each covering one of the above-mentioned design types...","","en","doctoral thesis","","978-94-6323-425-2","","","","","","2019-11-16","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:4977c439-9925-4907-8350-6b3fd50e72fa","http://resolver.tudelft.nl/uuid:4977c439-9925-4907-8350-6b3fd50e72fa","Electrochemical recovery of rare earth metals in molten salts","Abbasalizadeh, A. (TU Delft (OLD) MSE-3)","Yang, Y. (promotor); Sietsma, J. (promotor); Delft University of Technology (degree granting institution)","2018","Electrochemical metal extraction in molten salts is the dominant industrial method for production of Rare Earth (RE) metals from their oxides. Two major challenges pertaining to RE metals extraction using this technology are a) low solubility of RE oxides in molten salts and b) carbon monoxide or carbon dioxide generation and possibility of fluorocarbon gas generation. The primary objective of this thesis is to find new methods to overcome the problem of low solubility of RE oxides in molten fluorides in order to increase the RE metal extraction yield from RE oxides. Another objective is to study novel routes in order to prevent CO, CO2 and halogen gas generation in the RE metal production from RE oxide and RE magnet scrap in molten salt electrolysis process. In view of this, a treatment route is suggested for the conversion of RE oxide to RE chloride/fluoride using strong chemical agents. Chapters 2, 3 and 4 investigate the conversion routes for RE oxides as well as RE magnet scrap in both chlorides and fluorides molten salts. Chapter 5 investigates the electrolysis step in which iron as a reactive anode is used, preventing generation of fluorocarbon, CO and CO2 gas in the extraction process. In Chapter 6 a thermodynamic modelling of the fluoride salt using CALPHAD approach is carried out. The phase equilibria and thermodynamics of molten fluorides system can be used for optimal design of RE extraction processes.","","en","doctoral thesis","","978-94-6186-987-6","","","","","","","","","(OLD) MSE-3","","",""
"uuid:e107c39b-e0ed-4318-9780-207bae9df24d","http://resolver.tudelft.nl/uuid:e107c39b-e0ed-4318-9780-207bae9df24d","Coherence and nonlinearity in mechanical and josephson superconducting devices","Yanai, S. (TU Delft QN/Steele Lab)","Steele, G.A. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2018","In this thesis, the microwave detection of mechanically compliant objects is investigated. This starts with a system of a suspended metal drum capacitively coupled to a high impedance microstrip resonator. The mechanical non-linear dissipation of the drums is studied. Next, a suspended nanowire coupled to a CPW resonator is studied. With an electrostatic drive at twice the mechanical resonance frequency, there occurs a parametric excitation of either the mechanical signal or the coupled microwave resonance frequency of the cavity. Then the microwave loss in flux-tunable resonators is investigated for future experiments. One of the goals of this project was to couple a suspended nanowire with a SQUID loop of a flux tunable cavity. Here, the dielectric loss in flux tunable resonators is studied in order to optimize the design of future devices.","mechanical oscillators; parametric excitation; Josephson junctions; SQUIDs; TLSs; cavity optomechanics; nanotechnology","en","doctoral thesis","","978.90.8593.376.2","","","","","","","","","QN/Steele Lab","","",""
"uuid:a6fe2afd-1f34-4b03-9687-1a5b627b64c2","http://resolver.tudelft.nl/uuid:a6fe2afd-1f34-4b03-9687-1a5b627b64c2","Riverbank filtration in highly turbid rivers","Gutiérrez, Juan Pablo (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); van Halem, D. (copromotor); Delft University of Technology (degree granting institution)","2018","Riverbank filtration (RBF) is a surface water filtration method for drinking water through the banks and bed of a river, using extraction wells located near the water body in order to ensure direct aquifer recharge. As the surface water travels through the sediments, contaminants, such as suspended and colloidal solids and pathogenic microorganisms, are removed. Apart from water quality improvement, RBF has the advantage of reducing peak concentrations which commonly pass through a river. RBF has been widely used in Europe, USA and, nowadays, in some Asian countries (e.g., South Korea, India, China). Latin-American and specifically Colombian river basins, have been suffering a continuous deterioration, leading to high suspended sediment loads being transported by the rivers. The RBF technology has not been proven yet in highly turbid waters, in which the excessive transport of suspended sediments threatens sustainable operation. Clogging of both the riverbed and deeper aquifer may increase flow resistance, reducing water revenues over the course of time. To assess the feasibility of RBF for highly turbid river waters in Colombia, a combination of field and laboratory research was conducted – both in the Netherlands and Colombia. In Colombia, the studies were done at the Cinara institute's Research and Technology Transfer (R&TT) Station for drinking water and at the Fluid Mechanics lab. The station is located at the Northeast of Cali, Colombia, and was built at the premises of the main water treatment plant of Cali, Puerto Mallarino. 
In the Netherlands, the laboratory work was done at the Delft University of Technology, running infiltration column experiments at the Sanitary Engineering lab and the flume experiments at the Fluid Mechanics lab...","","en","doctoral thesis","","978-94-6186-991-3","","","","","","","","","Sanitary Engineering","","",""
"uuid:5d45d298-9c1d-4454-9206-495faf4109fe","http://resolver.tudelft.nl/uuid:5d45d298-9c1d-4454-9206-495faf4109fe","Advancing Methods For Evaluating Flood Risk Reduction Measures","Lendering, K.T. (TU Delft Hydraulic Structures and Flood Risk)","Kok, M. (promotor); Jonkman, Sebastiaan N. (promotor); Delft University of Technology (degree granting institution)","2018","Flood risk reduction systems are applied worldwide to reduce flood risk and provide protection of flood prone areas. More and more, decision makers use risk-based approach for the design and implementation of interventions in these systems. This dissertation advances existing risk-based approaches for specific interventions within a flood risk reduction system. Highlights of this dissertation include i) quantifying the reliability of canal levees, ii) evaluating the effectiveness of emergency measures, iii) optimizing portfolios (of combinations) of risk reduction strategies: flood defences and/or land fills and iv) assessing the performance of innovative interventions for flood risk reduction.","","en","doctoral thesis","","978-94-638-0061-7","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:dd37171a-f008-4874-9616-fb50367b8089","http://resolver.tudelft.nl/uuid:dd37171a-f008-4874-9616-fb50367b8089","Electroporation of biomimetic vesicles","Perrier, D.L. (TU Delft Reservoir Engineering)","Kreutzer, M.T. (promotor); Boukany, P. (copromotor); Delft University of Technology (degree granting institution)","2018","Electroporation is a popular technique to permeabilize the membrane for different purposes such as medical treatments, food processing and biomass processing. In this thesis, we use the bottom-up approach to unravel the role of specific cellular components in the electroporation of cellular membranes. We have studied the role of the gel-phase domains in the membrane and the contribution of the actin-cortex during electroporation. In order to do so, we have prepared binary-phase vesicles, containing fluid- and gelphase lipids, and actin-cortex encapsulated vesicles. Consequently, the electroporation mechanisms of these two samples provide systematic insight in the electroporation mechanism of a single cell.","electroporation; electropermeabilization; electric field, lipid vesicles; giant unilamellar vesicles; GUVs; gel-phase lipids; fluid-phase lipids; binary-phase; actin network","en","doctoral thesis","","978.90.8593.371.7","","","","","","","","","Reservoir Engineering","","",""
"uuid:b561da67-ced6-40b9-8f97-ed109439ea4c","http://resolver.tudelft.nl/uuid:b561da67-ced6-40b9-8f97-ed109439ea4c","Vision Concepts for Small- and Medium-Sized Enterprises: Developing a Design-Led Futures Technique to Boost Innovation","Mejia Sarmiento, J.R. (TU Delft Design Conceptualization and Communication)","Stappers, P.J. (promotor); Hultink, H.J. (promotor); Pasman, G.J. (copromotor); Delft University of Technology (degree granting institution)","2018","Concept cars have long been successfully applied in the automotive industry as a design-led way to envisioning the future. While automotive corporations use this futures technique as a driver for innovation, small- and medium-sized enterprises (SMEs) in other industries have not had the benefit of such explorations, largely because concept cars are too resource-intensive and poorly suited to the SMEs’ needs and idiosyncrasies. To democratize this design practice and help SMEs, which are essential to social and economic prosperity, we have developed DIVE: Design, Innovation, Vision, and Exploration. It is a design-led futures technique that assists designers in making and using concept cars –as experimental artefacts that act as visions which embody ideas about the future– as ‘vehicles’ for innovation in SMEs, no longer confined to the automotive sector. Its development began with an inquiry into concept cars in the automotive industry and concept products and services in other industries. We then combined the insights derived from these design practices with elements of the existing techniques of critical design and design fiction into the creation of DIVE’s preliminary first version. This was then applied and evaluated in seven iterations with SMEs, resulting in DIVE’s alpha version. 
All iterations of DIVE in context show that SMEs can make and use concept cars, tailored to their own domain, to receive some of the benefits of exploring the future using design within the front-end of their innovation strategy. These companies can make concept cars to identify opportunities and threats and to give a sense of direction when they face a significant change. DIVE begins with setting a vision, embedded in an artifact, and then working backward to map a path of ideas, connecting the future to the present. Although the results of these activities might be less flashy than concept cars, these simple prototypes and videos help SMEs internalize and share a clear and concrete image of a preferable future for employees, allies, and investors. Concept cars, prototypes of the future, can also be used at the start of a new product’s design process to combine all the results of investigations on product, market, and technology. Subsequently, it is used to define a design brief and as a criterion to select the most promising ideas.","","en","doctoral thesis","","978-94-6186-994-4","","","","","","","","","Design Conceptualization and Communication","","",""
"uuid:ecb7ff22-bfce-4378-8784-6738aa99d545","http://resolver.tudelft.nl/uuid:ecb7ff22-bfce-4378-8784-6738aa99d545","Tuning flavor-active components","Saffarionpour, S. (TU Delft BT/Bioprocess Engineering)","Ottens, M. (promotor); van der Wielen, L.A.M. (promotor); Delft University of Technology (degree granting institution)","2018","Flavor-active components are key contributors to the profile of the final produced beer product. Their preservation and control during different stages of processing is crucial, since they might be lost during processing due to their volatile nature. In order to produce a final beer product with balanced flavor profile, which is acceptable by the consumer, the level of these components in the beer matrix should be adjusted and controlled. Various techniques can be applied for flavor control and recovery, such as distillation/stripping, pervaporation, supercritical extraction, and adsorption. Chapter two of this thesis discusses the recent advances in various techniques, which are applied for flavor recovery among which adsorption is a technique, which showed potential for selective removal and recovery of flavor/non-flavor-active components. This technique can be combined with heat processing, distillation/stripping, or can be used as a standalone technique. The focus of the work presented in chapter three of this thesis is on method development for selective removal and recovery of flavoractive volatile components mainly belonging to the group of esters, higher alcohols, and diketones, through adsorption technique. In order to investigate the single and competitive adsorption behavior of flavor-active components and their synergistic effects, high throughput experimentation technique is applied, improved for volatile components and isotherms are obtained using batch uptake experimentation. 
The competitive adsorption behavior of flavor-active components is investigated on various food-grade hydrophobic adsorbents, in order to study the influence of physical and chemical nature of the components and adsorbent properties on selectivity for each tested adsorbate over ethanol. Based on the results obtained from thermodynamic studies through various isotherm models, the appropriate adsorbent material is selected for further studies in the design stage. In the next step, deeper study is conducted on flavor-active esters, presented in chapters four, five, and six, which contribute to beer with a fruity taste and aroma. With adjusting their level in the final beer product and their fractionation, various products can be produced with fruity taste. Further investigation is performed on their competitive adsorption behavior both through batch uptake experimentation, and dynamic breakthrough analysis tests, discussed in chapters four and six respectively. Since ester components are present at low concentration level together with ethanol, which is present at higher concentration in comparison in various process streams, the influence of ethanol and temperature on their competitive adsorption is further investigated, discussed in chapter four. Physical properties such as isosteric heat, entropy, and Gibbs energy of adsorption, are calculated from performed thermodynamic studies, which contribute to our deeper understanding of the adsorption phenomena on the selected adsorbents. Considering the time-consuming steps, which are required to be followed for constructing the adsorption isotherms through batch uptake experimentation, the application of predictive models developed based on adsorbed solution theory, is evaluated in chapter five, for prediction of multicomponent adsorption isotherms for flavor-active esters from single-component adsorption isotherms, when experimental data for multicomponent behavior is not available. 
The predictive model developed based on IAST, was capable to predict the multicomponent adsorption behavior for the tested condition with accuracy and can be used as a tool for prediction of isotherms, when data on multicomponent adsorption is not available. Possibility for separation of flavor-active esters is further investigated in a fixed-bed column in lab-scale, discussed in chapter six, to study their breakthrough behavior and their separation under various process conditions (ethanol concentration and temperature). The results of the experimental tests obtained through breakthrough analysis and fractionation, are used for validation of simulations. Based on the results obtained in lab-scale, separation of flavor-active components is further investigated in a large-scale column, through simulation of various scenarios for separation. The performed experiments in the lab-scale and results of the simulations at large-scale give an insight on competitive adsorption behavior of these components, when present in a mixture. For a more detailed prediction of the adsorption behavior, future outlooks are discussed in chapter seven, to study the optimized condition considering the process conditions, and integration of the adsorption with other alternatives such as distillation/stripping.","","en","doctoral thesis","","978-94-6186-985-2","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:84d9d425-1b2c-46c3-b256-feeeb8fa3351","http://resolver.tudelft.nl/uuid:84d9d425-1b2c-46c3-b256-feeeb8fa3351","Surface Acoustic Mode Aluminum Nitride Transducer for micro-size liquid sensing applications","Bui, T.H. (TU Delft Electronic Components, Technology and Materials)","Sarro, Pasqualina M (promotor); Chu Duc, T (promotor); Delft University of Technology (degree granting institution)","2018","The thesis focuses on the investigation of thin-film surface acoustic wave (SAW) devices for liquid sensing applications. The piezoelectric material is a thin film of Aluminum Nitride (AlN), a CMOS compatible material, deposited by pulse DC reactive sputtering technique. A CMOS compatible process is developed and employed to fabricate the AlN/Si surface acoustic wave (SAW) devices which operate in a liquid medium. The applicability of the SAW device in sensing liquid is proved by numerical analysis, simulations and experimental results.
In the first chapter, the development of liquid sensors based on MEMS fabrication is introduced together with the wide range of applications for these devices. Also, the motivation to investigate the SAW device based on thin film AlN for liquid sensing is presented. In chapter 2, sensing mechanisms in general and applicable mechanisms of SAW sensors for liquid are presented. To determine the most suitable design of the SAW devices, three-dimension (3D) modeling based on the finite element method (FEM) is performed and analyzed.
Chapter 3 reports on the effect of a micro-size droplet shape, specifically the liquid contact angle, radius (area) and wettability of the contact surface on the SAW response. The numerical analysis and experimental results explain the interaction mechanism between the attenuated SAW beam and micro-droplets. The beam, which is emitted into the droplet, is expressed by the fraction coefficient. The change in contact radius influences the fraction coefficient more than the change in contact angle, especially on hydrophilic and super-hydrophilic surfaces.
In chapter 4, a first application of the SAW sensor is demonstrated: identifying the kind of liquid present on the propagation path. The sensing mechanism is based on physical properties (liquid density, sound speed in the liquid and evaporation rate) and mass loading (concentration of stagnant liquid molecules). This also suggests a potential method to identify liquid samples of microliter volumes in microfluidic biosensors based on this SAW device.
In chapter 5, a SAW device equipped with an embedded microhole is proposed for the control and monitoring of the contact area between the piezoelectric material and the liquid medium. The device is miniaturized to be integrated on a printed circuit board (PCB). The device response to changes in density and pressure, as well as to the evaporation of the liquid inside the microhole, is studied. These initial indirect experimental results show the applicability of the SAW device for monitoring the state of the liquid flow inside the microhole.
In chapter 6, some optimized structures of the SAW device are proposed. The simulation and experimental results showed that SAW devices with circular-shaped FIDTs have better performance and provide a good method to detect micro-size droplets, due to the better concentration of the energy traveling through the propagation path. Also in this chapter, a mixing IDT structure for SAW devices, which includes two layers of input IDTs, is proposed to reduce the longitudinal component in SAWs and to generate novel mixed acoustic waves by combining surface waves and plate waves on the piezoelectric material.
Finally, in chapter 7 concluding remarks and recommendations for future work are given.
The thesis discusses the suitability of solar cooling technologies in terms of their potential for façade integration, exploring current possibilities and identifying the main constraints for the development of solar cooling integrated architectural products. The potential for façade integration is assessed considering both the architectural requirements for the integration of building services in the façade development process, and the potential climate feasibility of self-sufficient integrated concepts, matching current technical possibilities with cooling requirements from several climates.
Although interesting prospects were identified in this dissertation, important technical constraints need to be solved to conceive fail-tested façade components. Furthermore, several barriers related to the façade design and development process need to be tackled in order to introduce architectural products such as these into the market. The identification and discussion of these barriers, along with the definition of technology-driven development paths and recommendations for the generation of distinct architectural products, are regarded as the main outcomes of this dissertation, serving as a compass to guide further explorations of the topic under an overall environmentally conscious design approach.","Solar cooling; integrated facades; Facade design; Renewable Energy; Barriers; Cooling; Energy Efficiency","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-098-3","","","","A+BE | Architecture and the Built Environment No 29 (2018)","","","","","Design of Construction","","",""
"uuid:118b4d3e-2d06-4ce7-b5a8-bcc934f0468a","http://resolver.tudelft.nl/uuid:118b4d3e-2d06-4ce7-b5a8-bcc934f0468a","Dynamics of interacting graphene membranes","Dolleman, R.J. (TU Delft QN/Steeneken Lab)","Steeneken, P.G. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2018","Micro and nanomechanical sensors are indispensable in modern consumer electronics, automotive and medical industries. Gas pressure sensors are currently the most widespread membrane-based micromechanical sensors. By reducing their size, their unit costs and energy consumption drops, making them more attractive for integration in new applications. Reducing the size requires the membrane to be as thin as possible, but also very strong. Graphene is the perfect material for such a membrane since it is only one atom thick but also the strongest material ever measured. This dissertation investigates the dynamics of suspended graphene membranes for sensing applications. These sensing applications are not restricted to pressure sensors alone, but the dynamics of graphene can also be used as a sensor for other physical properties. Thus, the topic of this thesis goes into the broader subject of the dynamics of interacting graphene membranes.","graphene; two-dimensional materials; molybdenum disulfide; nanomechanics; pressure sensors; gas sensors; NEMS; nonlinear dynamics; Fabry-Perot interferometer; thermal characterization; parametric resonance; stochastic switching; squeeze-film effect; selective permeation; osmosis","en","doctoral thesis","","978-90-8593-369-4","","","","Casimir PhD Series, Delft-Leiden 2018-39","","2019-11-19","","","QN/Steeneken Lab","","",""
"uuid:084a28c2-dacc-4c4d-9f61-8c9444a3dd4a","http://resolver.tudelft.nl/uuid:084a28c2-dacc-4c4d-9f61-8c9444a3dd4a","Modal methods for rehomogenization of nodal cross sections in nuclear reactor core analysis","Gamarino, M. (TU Delft RST/Reactor Physics and Nuclear Materials)","Kloosterman, J.L. (promotor); Lathouwers, D. (copromotor); Delft University of Technology (degree granting institution)","2018","This thesis develops novel first-principle methods to correct homogenization errors in nodal cross sections and discontinuity factors. Its aim is to improve the accuracy of nodal diffusion simulations of heterogeneous core configurations. This research builds upon previous work conducted at Framatome (Paris, France). It is based on a modal reconstruction of variations in the neutron flux distribution (in space and energy) between the core environment and the infinite-medium approximation, which is typically used in lattice transport calculations for few-group constant generation. Focus is given to the correction of the nodal cross sections.","Neutron diffusion; Nodal methods; Homogenization; Core environment; Neutron leakage; Spectral and spatial effects; Cross-section model","en","doctoral thesis","","978-94-6186-961-6","","","","","","","","","RST/Reactor Physics and Nuclear Materials","","",""
"uuid:12720d56-8a35-4b36-b287-2e301ae69bd0","http://resolver.tudelft.nl/uuid:12720d56-8a35-4b36-b287-2e301ae69bd0","Towards Practical Active Learning for Classification","Yang, Y. (TU Delft Pattern Recognition and Bioinformatics)","Loog, M. (promotor); Reinders, M.J.T. (promotor); Delft University of Technology (degree granting institution)","2018","In recent decades, the availability of large amounts of data has propelled the field of machine learning enormously. Machine learning, however, relies heavily on the availability of annotated data, typically labels indicating to which class a data instance belongs. With such huge amounts of data, this raises the question of how to annotate data efficiently, certainly when resources are limited. This thesis addresses the particular challenge of using as few annotations as possible while, at the same time, maintaining good learning performance. For that we utilize active learning, which iteratively chooses the most valuable instances so as to obtain their labels from an oracle (e.g. a human expert). Though many studies have demonstrated that active learning can reduce the annotation cost, several issues still limit its practical use. This thesis takes a further step towards making active learning more practical for real-world applications.
We first provide a benchmark and comparison of six different categories of active learning algorithms built on logistic regression. This work provides a better understanding of the underlying characteristics of various active learners and illustrates the potential benefits of using such techniques, but it also reveals many cases in which active learning fails to outperform passive learning (i.e. randomly selecting instances for labeling). Those failed cases motivate us to propose two novel active learning methods that show a clear advantage over passive learning. The first weights the so-called retraining-based criteria with an uncertainty score measured by the estimated posterior probability. The second measures the usefulness of unlabeled instances according to the variance of the predictive probability. This method takes an additional step towards practical active learning, clearly outperforming the current state of the art on binary and multi-class classification tasks.
We further consider two realistic issues when applying active learning to real-world problems. One is how to find an initial set that contains at least one instance per class to start the active labeling cycle. The other is how to deal with the absence of human annotators in the interactive labeling loop. We propose new approaches to tackle these problems and observe good performance compared to existing methods. This thesis concludes with an analysis of the contributions and limitations of our work, as well as research directions that deserve further study.
We hope that this thesis also inspires others to make active learning more suitable for real-world applications.","","en","doctoral thesis","","978-94-6380-102-7","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:e2890c30-2bc2-4865-ad11-7b65ed38145e","http://resolver.tudelft.nl/uuid:e2890c30-2bc2-4865-ad11-7b65ed38145e","Flexibility in project management: Towards improving project performance","Jalali Sohi, A. (TU Delft Integral Design & Management)","Hertogh, M.J.C.M. (promotor); Bosch-Rekveldt, M.G.C. (copromotor); Delft University of Technology (degree granting institution)","2018","Increasingly it is argued that nowadays a pure project management approach (the conventional project management approach) is no longer effective, as it underestimates the influence of the dynamic environment. An approach is needed that recognises the complexities of a project and provides tools to cope with these; an approach that is aimed at increasing flexibility.
Therefore, this research investigated the effect of project management flexibility and project complexity on project performance. Using a mixed-methods approach, the research was divided into several steps, including a literature study, case studies, a Q-study and survey studies.
Under this reality, geoscientists are continually making efforts to improve the mathematical models, while being inherently constrained by uncertainty, and to find more efficient ways to solve these models computationally.
Closed-loop Reservoir Management (CLRM) is a workflow that allows the continuous update of the subsurface models based on production data from different sources. It relies on computationally demanding optimization algorithms (for the assimilation of production data and control optimization) which require multiple simulations of the subsurface model. One important aspect for the successful application of the CLRM workflow is the definition of a model that can both be run multiple times in a reasonable timespan and still reasonably represent the underlying physics.
Multiscale (MS) methods, a reservoir simulation technique that solves a coarser simulation model (thus increasing computational speed) while still utilizing the fine-scale representation of the reservoir, emerge as an accurate and efficient simulation strategy.
This thesis focuses on the development of efficient algorithms for subsurface model optimization that take advantage of multiscale simulation strategies. It presents (1) multiscale analytical derivative computation strategies to efficiently and accurately address the optimization algorithms employed in the CLRM workflow and (2) novel strategies to handle the mathematical modeling of subsurface management studies from a multiscale perspective. On the latter, we specifically address a more fundamental multiscale aspect of data assimilation studies: the assimilation of observations from a spatial representation distinct from the simulation model scale.
As a result, this thesis discusses in detail the development of mathematical models and algorithms for the derivative computation of subsurface model responses and their application in gradient-based optimization algorithms employed in the data assimilation and life-cycle optimization steps of CLRM. The advantages are improved computational efficiency while maintaining accuracy, and the ability to address subsurface management from a multiscale viewpoint, not only from the forward simulation perspective but also from the inverse modeling side.","multiscale simulation; analytical derivative computation; adjoint method; life-cycle optimization; data assimilation","en","doctoral thesis","","978-94-6186-990-6","","","","","","","","","Reservoir Engineering","","",""
"uuid:2b9ee3f5-010f-4dbe-b57f-1bb19eeb593e","http://resolver.tudelft.nl/uuid:2b9ee3f5-010f-4dbe-b57f-1bb19eeb593e","Hydrodynamics of vegetated compound channels: Model representations of estuarine mangrove squeeze in the Mekong Delta","Truong Hong, S. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Uijttewaal, W.S.J. (promotor); Delft University of Technology (degree granting institution)","2018","Mangroves are an interesting species of vegetation, surviving and thriving at the interface of land and water, in the intertidal brackish coastal waters between the mean sea level and mean high water. Mangroves are a highly productive and complex ecosystem, providing numerous services and goods to people and the marine environment. Mangroves are home to a large variety of underwater animals. Mangrove wood is valuable because it is resistant to rot and insects and can be harvested for pulp or charcoal production. Most importantly, the complex roots, stems and canopies of mangroves provide effective protective means for coastal and estuarine regions. Waves and tidal flows are significantly slowed down as they make their way into and through the roots, stems and canopies of the mangrove forest. Nutrients and sediments can be deposited, providing the necessary conditions for a sustainable development of the mangrove ecological system in particular and a stable coastal area in general. However, despite the important role of mangroves along the Mekong delta estuary, a large part of the mangrove forests has been converted into fish farms. In many regions, only a narrow strip of mangroves remained and, as a consequence, mangrove forests degraded and banks and shorelines experienced severe erosion. Although numerous attempts were made to restore the mangroves and to enhance river bank stability, these were not really successful.
A possible explanation may be that the hydrodynamics and exchange processes in and around the mangrove forest vegetation area have not yet been rigorously researched. In order to understand the dynamics of mangroves, the remaining width of mangrove forests and the erosion (accretion) rate were observed, collected and analysed from a morphological perspective. It is found that river bank erosion appears to relate to the width of the mangrove forest: the larger the width of the mangroves, the less erosion of the river bank, and vice versa. In this context, the concept “Squeeze Phenomenon,” explaining the degradation of mangroves together with the erosion of the river bank, is introduced. Based on a schematised numerical model, changes in hydrodynamics and exchange processes caused by the limited width of the forest are proposed to be the fundamental reason for the “Squeeze Phenomenon.”…","Estuarine mangroves; Compound vegetated channels; Large coherent structures","en","doctoral thesis","","978-94-6186-992-0","","","","","","","","","Coastal Engineering","","",""
"uuid:4d27776a-9fe3-409e-8066-33b861792ba2","http://resolver.tudelft.nl/uuid:4d27776a-9fe3-409e-8066-33b861792ba2","A lattice model for prediction of ice failure in interaction with sloping structures","van Vliet, R. (TU Delft Dynamics of Structures)","Metrikine, A. (promotor); Delft University of Technology (degree granting institution)","2018","To study interaction between ice and sloping structures, numerical models are required that can predict failure of the ice based on physical ice properties, deformations, and structural shape and size. An important element of these models is the set of failure criteria that is applied, as failure limits the loading of the ice on the structure. Failure in interaction with a sloping structure can occur in multi-directional tension, compression, bending, splitting or a combination of these, making it important to capture and combine all failure conditions in a single model. In this thesis, a lattice model is developed to simulate an ice plate, and failure criteria are derived for the model, linked to field measurements and failure envelopes. Fracture patterns generated with the lattice model compare well with those observed in basin tests. Lattice models have been successfully used in modelling fracture of brittle materials. To date, most multi-dimensional (2D and 3D) lattice models describe either in-plane or three-dimensional mechanics of the materials. Only a few lattice models are available in the literature for the description of the out-of-plane mechanics of plates. However, the parameters of those lattice models have not been linked to those of the classical plate models, such as Mindlin-Reissner plate theory, which is based on classical continuum theory.
To be able to simulate out-of-plane deformations of a plate, thereby enabling physically correct simulation of ice-structure interaction, a 2-dimensional lattice model is developed that reproduces the out-of-plane dynamics of a shear-deformable plate in the low frequency band. The developed model is composed of masses and springs whose morphology and properties were derived to match the out-of-plane deformations of thick plates as described by the Mindlin-Reissner theory. Bending, shear and torsion are taken into account. The eigenfrequencies and the steady-state response of the model to a sinusoidal-in-time point load are computed and compared to those of a corresponding continuum plate. It is proven that the developed lattice model predicts the same dynamic behaviour as the corresponding continuum plate at relatively low frequencies, which are dominant in ice-structure interaction processes. At higher frequencies deviations occur. These are discussed in terms of the dispersion, anisotropy and specific boundary effects of the lattice model. A lattice model for in-plane vibrations of a plate in plane strain conditions is described in the literature and is adjusted in the current work to plane stress conditions and to reproduce the Poisson effect. Combined with the lattice model for out-of-plane deformations, a single model is formed which captures in- and out-of-plane deformation of a shear-deformable plate under the assumption of small deformations. Failure criteria were developed for the lattice model which are linked to field measurement data of ice. The deformation and failure criteria in the lattice model are based on first principles, enabling physically sound simulation of deformation and fracture processes of the ice plate material. Failure in compression, tension, bending and splitting is simulated with the lattice model, providing a complete set of failure scenarios relevant to study interaction between ice and sloping structures.
It is shown that the criteria have minimal dependence on cell size and orientation and are applicable for multi-directional loading conditions. By means of numerical assessment, a multitude of structures and loading conditions can be analysed and structural shapes can be optimized for interaction with ice at reasonable costs. Validation of the model against field measurements for ice in complex loading conditions of combined bending, shear and torsion, as well as against basin tests, shows that fracture location and fracture patterns can be predicted well with the lattice model. A 3-dimensional model, which would reduce computational efficiency, is required to accurately simulate out-of-plane shear and spalling failure of the ice; in engineering applications, however, these are often not governing over bending and tensile failure. Inclusion of ice rubbling and clearance processes, as well as improving the simulation of the contact with a structure, would further improve the model and ice load predictions.","lattice; ice; sloping structure; fracture","en","doctoral thesis","","978-94-6366-092-1","","","","","","","","","Dynamics of Structures","","",""
"uuid:935b7fbf-3b06-468b-92cc-722943f8b3ba","http://resolver.tudelft.nl/uuid:935b7fbf-3b06-468b-92cc-722943f8b3ba","The application of Ag/AgCl electrodes as chloride sensors in cementitious materials","Pargar, F. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Koleva, D.A. (copromotor); Delft University of Technology (degree granting institution)","2018","Determination of the chloride content in a reinforced concrete structure is important for evaluation of the risk of chloride-induced corrosion of reinforcement. The traditional techniques for chloride determination in concrete are laborious and time-consuming, and cannot be used for continuous monitoring of the chloride content. Investigation of the use of Ag/AgCl electrodes as chloride sensors in cement-based materials dates back to the 1990s. Interpretation of the sensor’s response in cementitious materials requires knowledge of the chloride sensor’s characteristics and the interaction between the sensor and the surrounding medium. Hence, the stability of the chloride sensor’s response in cementitious materials depends on the properties of the Ag/AgCl interface, the AgCl/cement paste interface and the pore solution composition of cementitious materials. The influence of these factors on the stability of the sensor’s response was studied in this thesis. In Chapter 1 the background and motivation for the thesis are presented. In Chapter 2 the advantages and drawbacks of available test methods for determination of the chloride content in cementitious materials are explained.","Ag/AgCl electrode; chloride sensor; anodization; open circuit potential; stability; alkalinity; interference; hydration product; corrosion of steel","en","doctoral thesis","","978-94-6186-972-2","","","","","","","","","Materials and Environment","","",""
"uuid:f75d3713-8ef2-4f92-884f-06664b040f47","http://resolver.tudelft.nl/uuid:f75d3713-8ef2-4f92-884f-06664b040f47","Phosphate recovery from wastewater via reversible adsorption","Suresh Kumar, P. (TU Delft BT/Environmental Biotechnology)","Witkamp, G.J. (promotor); van Loosdrecht, Mark C.M. (promotor); Delft University of Technology (degree granting institution)","2018","Eutrophication and the resulting formation of harmful algal blooms cause huge economic and environmental damage. Phosphorus (P) has been identified as a major limiting nutrient for eutrophication. A phosphorus concentration greater than 100 µg P/L is usually considered high enough to cause eutrophication. The strictest regulations, however, aim to restrict the concentration below 10 µg P/L. Orthophosphate (or phosphate) is the bioavailable form of phosphorus. Adsorption is often suggested as a technology to reduce phosphate to concentrations below 100 and even 10 µg P/L, with the advantages of a low footprint, minimal waste generation and the option to recover the phosphate.
In this thesis, the optimum properties of a phosphate adsorbent are identified and studied. Limitations of porous adsorbents with high surface areas are discussed and the optimum pore size distribution for phosphate adsorbents is determined. The role and mechanism of biogenic iron oxides in phosphate removal are discussed. Optimum methods for regenerating the adsorbent and improving its reusability are studied. An economic assessment of phosphate adsorption is performed, highlighting the main cost factors and identifying the research gaps that need to be filled to better understand these factors. A scenario analysis comparing the economics of low-cost single-use adsorbents versus more expensive reusable adsorbents is made. A cost comparison of different technologies, and the conditions under which adsorption is favorable, is presented. Suggestions for future studies are made based on the current findings.
In this thesis, we aimed to answer the question of how FtsH can degrade not only soluble proteins but also insoluble ones, in a complex mechanism in which soluble or insoluble proteins are unfolded while passing through an ATPase domain and into a protease domain for degradation. Equally intriguing is how this protein can hydrolyse ATP and coordinate that hydrolysis with the proteolytic process.
To answer these questions, this thesis presents a series of purification protocols for E. coli FtsH (Chapter 2) and for an orthologue of FtsH from the thermophile Aquifex aeolicus (Chapter 3). In this thesis, we show that the movements the ATPase domain can undergo relative to the membrane are larger than previously described in the literature (Chapter 4). The assembly of this protein into dodecamers in the solubilized form showed that the intermembrane loops are more flexible than previously thought. A kinetic characterization of the ATPase and protease activity is also presented, showing that both forms are equally functional. Finally, in Chapter 5, we explore the use of cryo-electron microscopy and tomography to perform an exhaustive single-particle study. Although the cryo-TEM results shown in this chapter are preliminary 2D class averages, it is possible to observe the six-fold symmetric structure of this protein, which is an incentive to pursue studies with this technique. The same is true for the cryo-tomography performed on FtsH in proteoliposomes, which will provide further insights into the protein's insertion into the membrane and allow studying how substrates can access the ATPase domain loops.
This thesis describes the efforts made to optimize the FtsH purification protocol in order to obtain a sample as pure and stable as possible. It shows that FtsH undergoes much larger conformational changes than previously thought and challenges the currently accepted model for substrate access to the FtsH active site.
In the future, cryo-electron microscopy of single particles and cryo-tomography of proteoliposomes must be explored to deepen our knowledge of the full-length FtsH structure and, more generally, of proteolytic mechanisms in cells.
This research explores the intermediary role of the third sector in providing support, developing skills and building capacity among communities to improve the maintenance and administration of their properties. Based on international and local case study analyses, the research proposes a set of management approaches and strategies for Chilean third sector intermediaries to support low-income homeowners in condominium management. Findings show the relevance of multidimensional approaches and strategies to tackle the interrelated challenges, by helping to enhance the community’s capacities and improve built environment conditions. Findings also show the need for partnerships between third sector organisations and municipalities to address complex areas, and the relevance of fostering collaboration and specialisation among third sector organisations.","Housing management; Condominiums; Chile","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-095-2","","","","A+BE | Architecture and the Built Environment No 28 (2018)","","","","","Housing Management","","",""
"uuid:35e33109-fb68-4949-a829-dd16e5e57e4a","http://resolver.tudelft.nl/uuid:35e33109-fb68-4949-a829-dd16e5e57e4a","Methods for improving pan-European flood risk mapping","Paprotny, D. (TU Delft Hydraulic Structures and Flood Risk)","Jonkman, Sebastiaan N. (promotor); Morales Napoles, O. (copromotor); Delft University of Technology (degree granting institution)","2018","In the past decade, there has been growing interest in analysing flood hazard and risk on European scale. Such studies allow assessment of climate change impacts, can be used at EU-level policymaking and provide information on countries where local flood maps are not available. In this thesis, some innovative methodologies that contribute to improvement of pan-European flood mapping are explored. The topics covered include: (1) using Bayesian statistics to reduce time needed to map river flood hazard compared to rainfall-runoff models; (2) computing extreme sea levels and coastal flood hazard zones under present and future climate; (3) adjusting historical flood losses for changes in exposure to reveal true long-term trends in flood losses in Europe; (4) utilizing dependency modelling for assessing the hazard of compound flood occurrence.","","en","doctoral thesis","","978-94-6186-970-8","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:436c4e79-7b47-40b7-ade0-fc24b8a5e5c2","http://resolver.tudelft.nl/uuid:436c4e79-7b47-40b7-ade0-fc24b8a5e5c2","Novel crystallization techniques for separation in multi-component systems","Li, W. (TU Delft Intensified Reaction and Separation Systems)","Stankiewicz, A.I. (promotor); ter Horst, J.H. (promotor); Kramer, H.J.M. (copromotor); Delft University of Technology (degree granting institution)","2018","Crystallization-based chiral resolution techniques were born when Pasteur discovered the chirality of tartaric acid and manually separated its two enantiopure crystals with his tweezers. Ever since then, these techniques have been in constant development, mainly driven by needs in the pharmaceutical and food industries. In the past decades, techniques such as preferential crystallization have been studied and some have already been applied at pilot scale. Deracemization is of more recent date and is the process that has stirred up attention in chiral resolution over the last few years. These techniques draw more and more attention from both academia and industry owing to one common feature: the potential to recover the desired enantiomer with unrivaled product purity in a single process step.","","en","doctoral thesis","","","","","","","","2020-08-14","","","Intensified Reaction and Separation Systems","","",""
"uuid:f744c1af-505e-440c-bc49-2a1d95d0591d","http://resolver.tudelft.nl/uuid:f744c1af-505e-440c-bc49-2a1d95d0591d","On Leveraging Vertical Proximity in 3D Memory Hierarchies","Lefter, M. (TU Delft Computer Engineering)","Cotofana, S.D. (promotor); Wong, J.S.S.M. (copromotor); Delft University of Technology (degree granting institution)","2018","Within the past half century, Integrated Circuits (ICs) experienced aggressive, performance-driven technology feature size scaling. As the technology scaled into the deep nanometer range, physical and quantum mechanical effects that were previously irrelevant became influential, or even dominant, resulting in, e.g., no longer negligible leakage currents. When attempting to pattern such small-geometry dimensions, the variability of technological parameters considerably gained importance. Furthermore, it became more difficult to reliably handle and integrate such a huge number of tiny transistors into large-scale ICs, considering also that a substantial increase in power density had to be taken into account. Scaling-induced performance gains were no longer sufficient to deliver the expected improvements, which led to a paradigm switch from uniprocessor to multiprocessor micro-architectures. At the same time, since for certain application domains, such as big data and the Internet of things, the amount of data to be processed increases substantially, computing system designers have become more concerned with ensuring data availability than with reducing functional unit latency. As a result, state-of-the-art computing systems employ complex memory hierarchies, consisting of up to four cache levels with multiple sharing scenarios, making memory a dominant design element that considerably influences overall system performance and correct behavior.
In this context, 3D Stacked Integrated Circuit (3D SIC) technology emerges as a promising avenue enabling new design opportunities, since it provides the means to interconnect devices with short vertical wires. In this thesis we address the above-mentioned memory challenges by investigating the utilization of 3D SIC technology in memory designs, as follows. First, we propose a novel banked multi-port polyhedral memory that provides an enriched access mechanism set with a very low bank conflict rate, and we evaluate its potential in shared caches. Second, we propose a low-power hybrid memory in which 3D technology allows for the smooth co-integration of: (i) short-circuit-current-free Nano-Electro-Mechanical Field Effect Transistor (NEMFET) based inverters for data storage, and (ii) CMOS-based logic for read/write operations and data preservation. Third, we propose a memory repair framework that exploits 3D vertical proximity for inter-die sharing of redundant resources. Finally, we propose novel schemes for performing user-transparent multi-error correction and detection, with the same or even lower redundancy than that required by state-of-the-art extended Hamming single error correction schemes.","3D stacked integrated circuits; nems; nemfet; zero-energy; memory hierarchy; reliability","en","doctoral thesis","","978-94-6186-983-8","","","","","","","","","Computer Engineering","","",""
"uuid:8da7150b-eec8-4148-b277-538b6bfc1384","http://resolver.tudelft.nl/uuid:8da7150b-eec8-4148-b277-538b6bfc1384","Cement paste degradation under external sulfate attack: An experimental and numerical research","Ma, X. (TU Delft Materials and Environment)","Schlangen, E. (promotor); Copuroglu, Oguzhan (copromotor); Delft University of Technology (degree granting institution)","2018","Chemical degradation of cementitious materials is a serious threat to the durability and performance of concrete structures. External sulfate attack is one of the situations that may cause gradual but severe damage. Sulfate ions present in seawater, rivers, groundwater and industrial effluent can penetrate into hardened concrete and react with cement hydration products to form ettringite, as well as gypsum crystals if higher sulfate concentrations are present. Such formations result in a solid volume increase and cause local expansive pressure within the pore network. Although the solid volume increase may initially reduce the porosity of the cement paste, it will cause cracking at a later stage, as the generated expansive pressure exceeds the tensile strength of the cement paste. This, in turn, eventually leads to total strength loss and increased permeability of the concrete.
External sulfate attack in a saturated situation is a complex issue in which ionic transport, expansive reactions and mechanical damage interact with each other. These phenomena may be accompanied by significant macroscopic expansion and severe mechanical damage. However, the theories concerning the exact origin of the expansive pressure are still under debate. In recent years, the crystallization pressure theory has become the most widely cited hypothesis, and ettringite formation from monosulfate is also generally considered the major cause, but more evidence for this mechanism is still needed. Moreover, the magnitude of the expansive pressure at different scales is still unknown, since direct measurement of the expansive pressure on the walls of nanopores is highly challenging. Complete experimental data on the expansion behavior of larger-scale specimens are also lacking in the literature. Furthermore, the process of crack initiation and propagation is seldom discussed. The development of the pressure gradient has been largely neglected in the current literature.
In this thesis, an attempt is made to increase the body of knowledge related to cement paste expansion and degradation due to external sulfate attack. Laboratory experiments and numerical simulations were used during the study.
External sulfate attack under continuous immersion conditions is a slow diffusion process. Even though high water/cement ratios and high sulfate ion concentrations have been adopted as acceleration methods, research shows that the attack depth remains shallow even after several months. Therefore, specimens with a small thickness along the diffusion direction are preferred for experimental research in order to ensure a faster exposure of the entire cross-section. In this study, small cement paste pipes with a wall thickness of 2.5 mm were prepared and immersed in sodium sulfate solutions with SO₄²⁻ ion concentrations of 1.5 g/L and 30 g/L. Three types of longitudinal restraint were applied to the specimens before exposure, created by a spring, a thin stainless steel bar or a thicker stainless steel bar centered in the hollow specimens, in order to facilitate the non-, low- or high-restraint condition, respectively. Strain gauges were used for the measurements of restrained expansions and generated stresses, with the purpose of increasing the measurement accuracy and obtaining continuous experimental results. The free expansion up to 420 days of immersion was measured periodically. The restrained expansion and corresponding generated stress up to 810 days of immersion were quantified continuously.
The pore size distribution, sulfur distribution and crack pattern were also analyzed periodically. According to the MIP measurements, the pores with diameters between 10 nm and 70 nm were continuously filled during the immersion tests, and the strong sulfate solution led to faster filling, which supports the crystallization pressure theory. The sulfur distributions after 0, 21, 70, 105, 133 and 189 days of immersion in the strong and weak sulfate solutions were acquired by SEM-EDS microanalysis; these reflect the gradient of the local expansive pressure distribution at the corresponding immersion times. The corresponding expansions under the three types of restraint were also obtained. The specimen immersed in the strong sulfate solution under the high-restraint condition (7 mm - 30 g/L) was close to failure after 565 days of exposure, with the largest generated stress of 13.4 MPa. Several vertical cracks were found after 581 days of immersion, based on image analysis of the CT scanning results. The specimen immersed in the strong sulfate solution under the low-restraint condition (3 mm - 30 g/L) was close to failure after 628 days of exposure, with the largest generated stress of 11.2 MPa. One main vertical crack was found after 765 days of immersion. The crack development of the unrestrained specimen immersed in the strong sulfate solution (30 g/L) was also studied by CT scanning after 189, 294, 343, 420 and 469 days of exposure. A combination of horizontal cracks, which started some distance away from the exposed surface, and vertical cracks, which started from the exposed surface, was observed. However, no visually noticeable cracks were observed for the specimens immersed in the weak sulfate solution (1.5 g/L) up to 807 days of immersion. The generated stresses of the specimens under the low-restraint (3 mm - 1.5 g/L) and high-restraint (7 mm - 1.5 g/L) conditions after 807 days of immersion were 8.8 MPa and 11.1 MPa, respectively.
The complex process of crack initiation and propagation during material degradation at the microscopic scale was studied by SEM-EDS microanalysis. The damage evolution of unrestrained specimens immersed in the strong sulfate solution (30 g/L) was investigated experimentally. Specimens before sulfate exposure and after 70, 105 and 133 days of immersion were studied using image analysis. The localization process of the subparallel cracks near the exposed surface, at a depth of about 250 μm, was studied, and progressive precipitation of gypsum crystals inside the localized cracks was observed. The change of the sulfur gradient with exposure time was analyzed, and based on that, the change of the expansive pressure gradient with exposure time was discussed.
Numerical models can help in understanding such complex problems. The Delft lattice model was used in this thesis. In order to obtain realistic mechanical properties for the lattice elements, experimental and numerical studies on the mechanical properties of cement paste pipes after 90 days of curing were carried out prior to further simulations. Two types of specimens, unnotched and single-notched, were subjected to uniaxial tensile loading. Two main results were obtained from the experiments: the Young’s modulus and tensile strength of the unnotched specimens, and the complete stress-strain curves of the single-notched specimens. A 3D lattice model with a mesh resolution of 0.25 mm/voxel was constructed to simulate the two types of specimens under uniaxial tensile loading. After fitting to the experimental results, the local mechanical properties of the cement paste lattice elements were obtained. Afterwards, a numerical study of the expansion and degradation processes of the specimen immersed in the strong sulfate solution under the high-restraint condition (7 mm - 30 g/L) was performed. By comparison with the experimental results of the previous chapters, the magnitude of the local expansive pressure caused by external sulfate attack was discussed.
The experimental setup and techniques (such as the strain gauge measurement system, SEM-EDS analysis and X-ray computed tomography) employed in this thesis can be used in the same or a similar way for further studies on external sulfate attack or other degradation problems. The experimental results presented in this research can also be used for further numerical studies.","External sulfate attack; Cement paste; Thin-wall pipe; Longitudinal restraint; Expansion; Stress; Crack initiation and propagation; Lattice fracture model","en","doctoral thesis","","978-94-6186-982-1","","","","","","","","","Materials and Environment","","",""
"uuid:3d65f306-0e41-4d5c-b53d-a536b845851b","http://resolver.tudelft.nl/uuid:3d65f306-0e41-4d5c-b53d-a536b845851b","Molecular interactomes: Network-guided cancer prognosis prediction & multi-way chromatin interaction analysis","Allahyar, A. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); de Ridder, J. (copromotor); Delft University of Technology (degree granting institution)","2018","In the last two decades, our understanding of the molecular mechanisms within the cell has witnessed a great leap forward. For the most part this is due to the rapid innovation of genomic measurement technologies and the widespread use of computational methods that enable knowledge extraction from the massive datasets produced by these measurements. A notable example of a field that has substantially benefitted from this progress is cancer patient outcome prediction, in which the aim is to predict patient prognosis from common clinical variables such as tumor size, age or histological parameters. With the application of machine learning methods to gene expression profiles of the tumor, a major improvement in prediction accuracy could be realized. These models were later succeeded by Network-based Outcome Predictors (NOPs), which consider the cellular wiring diagram of the cell in the model to identify stable and relevant markers that can accurately estimate the outcome of patients. Problematically, after a decade of research in this area, NOPs have not found extensive application compared to the classical models, due to contradicting reports in the literature regarding their performance, stability and marker relevance. In this thesis, we introduce a new NOP - called FERAL - that alleviates several fundamental issues in state-of-the-art NOPs which prevented these models from reaching optimal prediction performance, stability and marker relevance. 
We furthermore demonstrate that generic biological networks do not contain sufficiently informative interactions to truly aid NOPs. We therefore infer a phenotype-specific network called SyNet, which connects pairs of genes that together achieve patient outcome prediction performance beyond what is attainable by individual genes. We show that a NOP that uses identical gene expression datasets yields superior performance merely by considering the groups of genes suggested by SyNet. Moreover, we show that model performance is severely reduced if the nodes in SyNet are shuffled, which confirms that the links in SyNet are also relevant to outcome prediction. An important limitation of current biological networks is that they are restricted to pairwise interactions. We show that higher-order interactions between functional elements in the cell are relevant to outcome prediction. We then introduce a novel genomics method called Multi-Contact 4C (MC-4C) to measure and investigate multi-way interactions between functional elements. In contrast to existing methods, MC-4C exploits long-read third-generation sequencing technologies and detects higher-order interactions that occur in a region of interest at the level of a single allele. We further devise a well-founded statistical model required for significance estimation of the observed interactions. Using MC-4C, we experimentally confirm a 26-year-old hypothesis regarding the looping and co-localization of enhancers in the β-globin region in the mouse genome. Additionally, we provide the first experimental explanation for the “vermicelli” phenomenon that was observed through microscopic inspection of cells depleted of WAPL (the element responsible for unwinding of loops in mammalian cells). 
Therefore, targeted multi-way conformation analysis methods like MC-4C promise to uncover how the multitude of regulatory sequences and genes coordinate their activity in the spatial context of the genome.","Bioinformatics; breast cancer outcome prediction; 3D organization of the genome","en","doctoral thesis","","","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:b1be0112-b5ff-4530-a730-4c8c1f176a91","http://resolver.tudelft.nl/uuid:b1be0112-b5ff-4530-a730-4c8c1f176a91","Consistent estimates of sea level and vertical land motion based on satellite radar altimetry","Kleinherenbrink, M. (TU Delft Physical and Space Geodesy)","Klees, R. (promotor); Riva, R.E.M. (copromotor); Delft University of Technology (degree granting institution)","2018","Satellite radar altimetry is often considered to be the most successful spaceborne remote sensing technique ever. Satellite radar altimeters were designed for static geodetic and ocean dynamics applications. The goal of the geodetic mission phases, which have a dense ground-track spacing, is primarily to acquire information about the marine gravity field. This enables the estimation of mean dynamic topography (geographical sea surface height patterns due to ocean currents) and deep-ocean bathymetry. The primary goal of the oceanographic mission phases is to gain information about time-varying currents and ocean dynamics. TOPEX/Poseidon was the first altimetry mission to reveal sea surface height variations related to ocean dynamics such as the El Niño Southern Oscillation (ENSO). During the mission it became clear that secular changes in sea level could also be monitored. Already in 1995, Nerem (1995) computed a Global Mean Sea Level (GMSL) time series from the TOPEX/Poseidon data. Currently, the GMSL record spans 26 years, in which the TOPEX/Poseidon time series is extended with the Jason-1, -2 and -3 observations. The estimated secular trend of GMSL over the altimetry era is approximately 3 mm yr−1. The success of the TOPEX/Poseidon mission spawned the Argo project, with the deployment of the first floats in the year 2000. It was argued that Argo would support the future Jason missions in separating sea level changes into their two components, density and mass. 
The Argo project aims to estimate temperature and salinity over a depth of 2000 meters using floats, which enables the estimation of density, or steric, sea level changes. By subtracting the steric signal from the absolute sea level measured by Jason (steric-corrected altimetry), the second component of sea level change, mass, is estimated. The launch of the Gravity Recovery And Climate Experiment (GRACE) satellites in 2002 made it possible to independently validate oceanic mass variations. If the sum of the mass and steric components equals total sea level within the uncertainties, the sea level budget is said to be closed. Besides these two oceanic components, ocean bottom deformation or Vertical Land Motion (VLM) also affects the sea level observed by altimeters. Over the open ocean, VLM signals are generally small after a correction for Glacial Isostatic Adjustment (GIA), but near large mass variations they might become significant. Additionally, tide-gauge records are affected by VLM, because tide gauges are connected to land. They therefore measure sea level relative to the sea floor, while satellite altimeters observe absolute variations. To bring tide gauges into the same reference frame as the altimeters, corrections for VLM have to be applied, which is usually done with nearby Global Navigation Satellite System (GNSS) data...","sea-level change; sea-level budget; satellite radar altimetry; vertical land motion","en","doctoral thesis","","978-94-6186-986-9","","","","","","2019-05-12","","","Physical and Space Geodesy","","",""
"uuid:a3231ea9-1380-44f4-9a93-dbbd9a26f1d6","http://resolver.tudelft.nl/uuid:a3231ea9-1380-44f4-9a93-dbbd9a26f1d6","Microphone arrays for imaging of aerospace noise sources","Merino Martinez, R. (TU Delft Aircraft Noise and Climate Effects)","Simons, D.G. (promotor); Snellen, M. (copromotor); Delft University of Technology (degree granting institution)","2018","With the continuous growth in demand for air traffic and wind turbines, the noise emissions they generate are becoming an increasingly important issue. To reduce their noise levels, it is essential to obtain accurate information about all the sound sources present. Phased microphone arrays and acoustic imaging methods allow for the estimation of the location and strength of sound sources. Experiments with these devices are one of the main approaches in the current research in aeroacoustics, along with computational simulations or noise prediction models. This thesis presents a detailed literature review on the most common aerospace noise sources, challenges in aeroacoustic measurements, and the acoustic imaging methods typically used to overcome them. Practical recommendations are provided for selecting the appropriate imaging technique depending on the type of experiment. New integration techniques for distributed sound sources, such as leading– or trailing–edge noise, are proposed in this thesis and are proven to provide the best performance in retrieving the source levels, compared to other well–known methods. In addition, the high–resolution version of the deconvolution method CLEAN–SC, HR–CLEAN–SC, is explained and applied to wind–tunnel measurements. It is confirmed that this method can resolve sound sources at half the frequency associated with the Rayleigh resolution limit, while keeping the inherent advantages of CLEAN–SC. 
The most appropriate acoustic imaging methods (according to the recommendations from the literature study) were applied to aeroacoustic experiments and compared with other approaches where possible. Since the landing gear is considered the dominant airframe noise source in commercial aircraft, this source was analyzed using four different approaches: aircraft flyover measurements under operational conditions, full–scale wind–tunnel experiments, computational simulations and noise prediction models. Strong tonal noise at certain frequencies was observed, suggesting the presence of open cavities. Noise prediction models do not account for this behavior and seem to provide erroneous estimates. Eliminating the contribution of the cavity would reduce the noise levels considerably. Trailing–edge noise is considered to be the dominant noise source for modern wind turbines. The performance of the two most promising noise reduction measures was investigated in wind–tunnel experiments. First, trailing–edge serrations featuring different geometries were studied and showed noise reductions of more than 10 dB. When a serration–flow misalignment angle occurs, the performance of the serrations decreases, and they even cause a noise increase above a crossover frequency. Similar results were found with computational simulations. Secondly, trailing–edge porous inserts showed noise reductions of approximately 10 dB at low frequencies and a noise increase above a crossover frequency. It is argued that the reasons for these phenomena were, respectively, the cross–flow between the pressure and suction sides of the airfoil and the increased roughness of the porous material with respect to the solid case. Lastly, the issue of the variability in aircraft noise levels was considered, since it is not properly taken into account by current best-practice noise prediction models and hinders the enforcement of environmental laws. 
It was observed that variations in the fan rotational speed explain a large part of this variability. Two different approaches were proposed for estimating the fan rotational speed of aircraft flyovers based on audio recordings. Implementing these more accurate estimates in the noise prediction model (rather than the usual default values) considerably reduces the errors made and provides more accurate aircraft noise estimates. In conclusion, phased microphone arrays have confirmed their importance for aeroacoustic studies, such as measuring aircraft noise emissions under operational conditions and assessing the performance of noise reduction measures.","Aeroacoustics; Aircraft noise; Beamforming; Microphone arrays; Wind turbine noise; Acoustic imaging","en","doctoral thesis","","978-94-028-1301-2","","","","","","","","","Aircraft Noise and Climate Effects","","",""
"uuid:bccca021-5e2e-49f2-9e51-2ede359203fd","http://resolver.tudelft.nl/uuid:bccca021-5e2e-49f2-9e51-2ede359203fd","Alternative pool water treatment: and the influence of swimmers on pool water quality","Keuten, M.G.A. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); van Loosdrecht, Mark C.M. (promotor); Delft University of Technology (degree granting institution)","2018","As swimmers enter a swimming pool, they release micro-organisms, particles and soluble substances. While chlorine is often used to inactivate micro-organisms, a side-effect of chlorination is the formation of unwanted disinfection by-products. In order to reduce these by-products, more knowledge is needed on the release of pollutants by bathers and on the ability of treatment steps to remove them. Alternative disinfection can also be used to avoid the formation of these by-products.
In this thesis, a shower cabin was used to investigate the release of pollutants by bathers. Even after showering, swimmers still release pollutants, through so-called submerged sweating. Experiments with standardised submerged exercises were done to determine this submerged sweating. It was found that for competition swimmers, 40% of the pollutants are released during swimming, 30% are due to not taking a pre-swim shower, and another 30% are due to not using the toilets.
UV-disinfection was chosen as an alternative disinfection method for swimming pools. The UV-treatment was combined with ultrafiltration, for enhanced removal of particles and micro-organisms, and biological filtration, for removal of dissolved substances. The experiments show that both biofilm formation and microbial water quality were controlled with this alternative treatment, remaining close to the biofilm formation and microbial water quality of chlorinated pool water. Biological filtration improved the removal of urea and the formation of nitrate in a chlorinated system, so it can be used to reduce the formation of unwanted disinfection by-products.
consumption of all OECD countries and consequently the EU and the Netherlands. Therefore, the national targets for CO2 reduction should include provisions for a more energy-efficient building stock in all EU member states. National and European policies of the past decades have improved the quality of the building stock by setting stricter standards for the external envelope of newly built buildings, the efficiency of the mechanical and heating components, and renovation practices, and by establishing an energy labelling system. Energy-related occupancy behavior is a significant, and relatively uncharted, part of buildings’ energy consumption. This thesis aims to contribute to the understanding of the role of the occupant in the energy consumption of residential buildings by means of simulations and experimental data obtained from an extensive measurement campaign.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-096-9","","","","A+BE | Architecture and the Built Environment No 27 (2018)","","","","","OLD Housing Quality and Process Innovation","","",""
"uuid:fd7e7b74-5719-49f5-856a-a5ee3784c2ea","http://resolver.tudelft.nl/uuid:fd7e7b74-5719-49f5-856a-a5ee3784c2ea","Structures of Physical and Visual Light Fields: Measurement, Comparison and Visualization","Kartashova, T. (TU Delft Human Information Communication Design)","Pont, S.C. (promotor); de Ridder, H. (promotor); te Pas, Susan (promotor); Delft University of Technology (degree granting institution)","2018","It is impossible to see the light in an empty space; we can only observe light as emission from a light source or as reflections from objects. Yet, human observers can estimate the illumination in empty parts of an observed scene, based on the appearance of surrounding objects. This dissertation presents studies on human sensitivity to the light field structure in empty spaces and describes the development of a light visualization tool that implements our knowledge of light fields, lighting design and perception.
In our perceptual studies, we reconstructed and compared physical and visual light fields. Physical measurements of the illuminance were made in real and modelled scenes using Cuttle's cubic measurement approach. The measurement device was a cube (a simulated one for the modelled scenes) with a small sensor on each side. The device was positioned over a grid of points in the scenes, yielding regular measurements. For each position, the six measurements were translated into light properties (intensity, vector direction, diffuseness) with Cuttle's formulas. The resulting data was then interpolated in order to obtain a full light field. In the psychophysical experiments we used a probe proposed by Koenderink et al.: a white matte sphere on which the illumination could be controlled by an observer. The task was to make the probe visually fit a scene or an object. Placing the probe over grids of positions, we obtained user data that proved robust enough to reconstruct the global structure of the visual light field. We also found that the visual light field is simplified with respect to the physical truth; in particular, it does not reflect subtle variations of the physical light field. In studies on scenes with complex light field structures (i.e. light zones: neighboring light fields with contrasting differences in one or more light properties), we found that observers are quite sensitive to differences in light properties between the light zones. However, they showed idiosyncratic behavior, especially for light zones with differences in the depth of a scene (front-back), rather than in the picture plane (left-right).
The second goal of this thesis was to develop a tool that incorporates our knowledge of the measurement and perception of light in its visualization. Modern light visualizations often focus on surfaces, or show light in a sophisticated manner understandable only to experts. We augment existing approaches with a tool that visualizes light in 3D volumes and in a perceptually relevant manner. The measurement approach was the same as for the physical measurements used in the perceptual studies above. The measurement cubes could be implemented physically, for real scenes, or virtually, for modelled scenes. The resulting measurements were translated into light properties - mean illuminance, vector direction and diffuseness of light - and represented via variation of shapes' proportions. We tested the performance of our visualizations against image renderings and found that the visualizations led to task performance at least as good as that with renderings. Moreover, we developed a web-based tool that anyone can use to visualize cubic measurements, and we described applications of this tool for architectural lighting design.
Our findings expand our knowledge of the structure of the visual light field and help us to understand it better. This can contribute to applied areas such as computer graphics and architectural lighting design. Moreover, our visualization tool can immediately be used by lighting artists and architectural lighting designers to increase their work efficiency by providing a quick and quantitative representation of the light conditions.
The format of teaching and learning how to design is a dialogue that takes place in a design studio. The exchange is conducted by teacher and student while focusing on a design project, because the design studio is a practical educational setting where students learn by doing, that is, by designing under the supervision of a design teacher.
The title of the thesis — design conversations — is the term we propose to describe the several instances of one-on-one dialogue between a teacher and a student while working, presenting, or reviewing a design project. A design conversation adopts a particular language that we call the language of design or design language (the fundamentals of which have been laid out by Schön [1983, 1985]). Design language is an expression of the design process, that is, it communicates aspects of designing as it unfolds; since learning how to design is the central objective of design education it follows that by analysing the language we should uncover (part of) the educational process.
The research firstly describes the educational context that frames the conversations between teacher and student. Secondly, the research centres on the observation and analysis of conversations between teacher and student in real-context design studios. At this stage, we adopt design language as the primary analysis framework.","","en","doctoral thesis","","978-94-028-1273-2","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:d7bd8ec2-296c-45a1-b1c7-0ee9aac035eb","http://resolver.tudelft.nl/uuid:d7bd8ec2-296c-45a1-b1c7-0ee9aac035eb","Band-Edge Energetics Control for Solar Hydrogen Production","Digdaya, I.A. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Smith, W.A. (promotor); Dam, B. (promotor); Delft University of Technology (degree granting institution)","2018","The global transition from fossil-based resources to renewable energy is critically important to address the sharply increasing threat of global climate change and to ensure long-term energy security. One attractive candidate to substitute for conventional fossil fuels is hydrogen. Hydrogen is an excellent energy carrier that can be directly converted into electricity via fuel cells, or be combined with carbon dioxide (CO2) or carbon monoxide (CO) to form high energy density synthetic fuels. Most current industrial methods for hydrogen production, however, rely on steam reforming of natural gas, which releases CO2 as a by-product, making them environmentally unsustainable. Photoelectrochemical (PEC) water splitting, on the other hand, is a carbon-neutral approach that enables the conversion and storage of abundant solar energy into hydrogen using only renewable and clean resources. This process uses semiconductors to capture and convert sunlight into photogenerated charge carriers (i.e., electrons and holes), and electrocatalysts to facilitate the multi-charge transfer process for the oxidation and reduction of water to oxygen and hydrogen, respectively.","","en","doctoral thesis","","978-94-6186-978-4","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:865dce30-2133-4e1a-add9-f0cb4ba4b3c4","http://resolver.tudelft.nl/uuid:865dce30-2133-4e1a-add9-f0cb4ba4b3c4","Compliant transmission mechanisms","Farhadi Machekposhti, D. (TU Delft Mechatronic Systems Design)","Herder, J.L. (promotor); Tolou, N. (copromotor); Delft University of Technology (degree granting institution)","2018","Classical human-engineered transmission mechanisms such as linkages, gears, and couplings are well-established designs made of multiple parts and rigid hinges. However, these mechanisms suffer from disadvantages such as wear, friction, low precision and reliability, the need for assembly and lubrication, and difficulty of miniaturization. By employing the natural elasticity of materials, rather than using rigid hinges and connections, sophisticated motions can be realized with minimal mechanical complexity. This relatively new paradigm in engineering design, called compliant mechanisms, enables the creation of monolithic mechanisms that are strong, compliant, precise, scalable, and cost-effective. The kinematics of classical gears and couplings impose several limitations on design","Compliant Mechanism; MEMS; Transmission; Frequency Multiplier; Coupling","en","doctoral thesis","","978-94-6323-376-7","","","","","","2020-11-09","","","Mechatronic Systems Design","","",""
"uuid:23c338a1-8b34-40a6-89e9-997adbdafd75","http://resolver.tudelft.nl/uuid:23c338a1-8b34-40a6-89e9-997adbdafd75","Incremental Control of Hybrid Micro Air Vehicles","Smeur, E.J.J. (TU Delft Control & Simulation)","Hoekstra, J.M. (promotor); de Croon, G.C.H.E. (copromotor); Delft University of Technology (degree granting institution)","2018","Micro Air Vehicles (MAVs) can perform many useful tasks, such as mapping and delivery. For these tasks, either rotorcraft are used, which can hover but are not very efficient, or fixed-wing vehicles, which are efficient but cannot hover. Hybrid MAVs combine the hovering capability of a rotorcraft with the efficiency of a fixed wing. The reason these vehicles are not yet widely adopted is that they are very difficult to control.
This thesis addresses the use of Incremental Nonlinear Dynamic Inversion (INDI) for the control of the attitude and velocity of hybrid MAVs. This control method had not been applied in a real-world application prior to this thesis, which is why the thesis first covers its application to a quadrotor, which is easier to control than a hybrid MAV.
First, an INDI structure is proposed for the control of the angular accelerations of a quadrotor. I show that the delay that filtering of the angular acceleration produces should also be applied to the measurement of the actuator state. If this is done, the filtering does not appear in the transfer function from virtual control to angular acceleration, which turns out to be equal to the actuator dynamics. It is also shown that a disturbance, or unmodeled dynamics, is compensated with the transfer function of the actuator dynamics multiplied with the applied filter and a unit delay. Finally, it is shown how the effects of propeller inertia, which can be very significant in the yaw axis, can be dealt with and how the control effectiveness can be made adaptive. All these findings are validated with experiments on a Bebop quadrotor.
Second, this thesis incorporates a Weighted Least Squares (WLS) control allocation algorithm with priority management into the INDI controller. This means that for vehicles with coupled control effectors, certain control objectives can be given priority upon actuator saturation. This is very important for vehicles with controlled axes that are not very important for the stability of the vehicle, such as the yaw axis of a quadrotor. It is shown that for a quadrotor performing a 50-degree yaw change, stability is greatly improved when the yaw axis is given very low priority.
Third, this thesis introduces the control of linear accelerations in all three axes with INDI. The controller does not need a complex model, but instead relies on a measurement of the acceleration. It is shown through a wind tunnel experiment that the disturbance rejection properties demonstrated for the inner loop carry over to the control of linear accelerations. It is also shown that the method can be applied outdoors with an off-the-shelf GPS receiver. Finally, a nonlinear method of calculating the input increment is derived, which provides only a slight improvement in the tracking of aggressive acceleration commands.
These three things are combined for the INDI control of hybrid MAVs. The result is a single, continuous INDI controller for the attitude, and a single, continuous INDI controller for the velocity of the vehicle. This is achieved by incorporating partial derivatives of the lift vector in the control effectiveness of the pitch and roll angles. Though no transition maneuver is explicitly defined, the transition follows implicitly from the increments in attitude and thrust that are calculated from desired acceleration changes. Further, as the control effectiveness of a hybrid MAV changes dramatically over the flight envelope, the control effectiveness is scheduled as a function of airspeed. When the airspeed is too low to measure, the pitch angle is used for this purpose. To prevent sideslip, a sideslip controller is included, where an estimate of the sideslip angle is obtained from the accelerometer.
Test flights show that the INDI inner and outer loop controllers are indeed capable of controlling the attitude and the linear acceleration of the vehicle throughout the flight envelope, within the physical limitations of the vehicle. It is shown that the tracking of accelerations can make the vehicle naturally transition to forward flight and back, and fly in the stall region as necessary. Because of the abstraction that the INDI acceleration control provides, it is straightforward to follow a velocity vector field, for example one that guides the aircraft along a line.
The developed controller can be applied to different tailsitter MAVs with relative ease, as the model dependency is low. The algorithm may even be applied to different types of hybrid vehicles, such as quadplanes or tilt-wing aircraft, with minor adjustments.
The costs of production flaws in the CFRP manufacturing process are normally hidden in the cost structure of the end product. To address this, the research also investigated the financial impact of rework and rejection of products in a CFRP manufacturing process and the estimated financial benefits of implementing an NDE process monitoring system. Overall, this research shows that in-situ flaw detection, insight into the impact of rework and rejection, and the financial feasibility of implementing a novel NDE process monitoring system can increase the efficiency and effectiveness of the CFRP manufacturing process.","","en","doctoral thesis","","978-94-6380-032-7","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:70a1e180-ef0c-4226-9af3-7e9dc3938c7f","http://resolver.tudelft.nl/uuid:70a1e180-ef0c-4226-9af3-7e9dc3938c7f","Sensor-based sorting opportunities for hydrothermal ore deposits: Raw material beneficiation in mining","Dalm, M. (TU Delft Resource Engineering)","Jansen, J.D. (promotor); Buxton, M.W.N. (copromotor); Delft University of Technology (degree granting institution)","2018","Sensor-based particle-by-particle sorting is a technique in which singular particles are mechanically separated on certain physical and/or chemical properties after determining these properties with a sensor. Sensor-based sorting machines can be incorporated into mineral processing operations in order to remove waste or sub-economic ore prior to conventional treatment. This has potential to reduce the consumption of energy and water during mineral processing and thereby decrease processing costs. Furthermore, sensor-based sorting can be used to separate different ore types in order to enhance control of the feed to mineral processing facilities and improve processing efficiency. For most ore types no sensors are known that can be used to detect the grade of ore particles. This is because many ores are polyminerallic rocks in which the economically important minerals occur in relatively low concentrations and in small grain sizes. However, the deposition of ore minerals during the formation of hydrothermal ore deposits is often related to specific hydrothermal alteration zones. This means that it might be possible to characterise the grade of such an ore by using sensors that are capable of detecting differences in hydrothermal alteration mineralogy. Sensors can be applied throughout the entire mining value chain to collect information on the characteristics of the mined ore in real-time. The information that sensors provide can be used to improve deposit models, improve ore quality control and optimise mineral processing. 
However, the applicability of real-time sensor technologies has not yet been assessed for many types of ore deposits. The aim of the study was to explore the opportunities and potential benefits of using sensors for real-time raw material characterisation in mining and investigate the opportunities for sensor-based particle-by-particle sorting at hydrothermal ore deposits. Investigating sorting opportunities was aimed at researching the applicability of real-time sensors to segment waste particles from ore particles and to distinguish between ore particles that represent different ore types. This is based on samples taken from the Los Bronces porphyry copper-molybdenum deposit, the Lagunas Norte epithermal gold-silver deposit, and the Cortez Hills Carlin-style gold deposit. For all the deposits included in the study, a fraction of the waste could be segmented by using a Visible to Near-InfraRed (VNIR) and Short-Wavelength InfraRed (SWIR) spectral sensor to detect the hydrothermal alteration mineralogy. For Lagunas Norte and Cortez Hills, this sensor could also be used to distinguish between different ore types. The ability to segment waste was based on indirect relationships between certain alteration mineral assemblages and the copper or gold grade. Since these relationships correspond to the alteration-mineralisation relationships that generally occur at each deposit type, there is potential that sensors can also be used to segment waste at other porphyry, epithermal or Carlin-style deposits. For all three deposits additional research is required to investigate whether it is economically feasible to use the discrimination capabilities of the VNIR-SWIR spectral sensor for sensor-based particle-by-particle sorting.
The feasibility may be limited by surface contaminations of the ore particles feeding the sorter, the influence of water on the discrimination capabilities of the VNIR-SWIR sensor, and the sorting efficiency resulting from misclassification.","","en","doctoral thesis","","978-94-6186-946-3","","","","","","","","","Resource Engineering","","",""
"uuid:00fadafa-0e24-4176-b533-d4b908c91a73","http://resolver.tudelft.nl/uuid:00fadafa-0e24-4176-b533-d4b908c91a73","Identification of manual control behaviour to assess rotorcraft handling qualities","Yilmaz, D. (TU Delft Control & Simulation)","Mulder, Max (promotor); Pavel, M.D. (copromotor); Pool, D.M. (copromotor); Delft University of Technology (degree granting institution)","2018","Flight safety has been a fundamental aspect of aircraft, and the future demand for wider usage of aerial operations leads to more focus on the flight safety. Particularly rotorcraft require high standards of flight safety due to their inherent features, such as complicated rotary mechanisms, close-to-ground operations, and complex aerodynamic environment. Consequently, rotorcraft pilots need to exert relatively high workload to safely operate these vehicles. An understanding of the interaction between the rotorcraft and the pilot is essential for improving flight safety. This interaction is elaborated by the Handling Qualities (HQ) discipline, which aims to identify and, if possible predict any deficiency in HQ that could potentially jeopardize safe flight. A typical (and potentially catastrophic) example of a HQ deficiency are the Aircraft / Rotorcraft Pilot Couplings (A/RPC), formerly referred to as Pilot Induced Oscillations (PIO). A/RPC is defined as the involuntary and adverse interaction between the pilot and the vehicle under control. Generally for rotorcraft, the ‘vehicle’ part of this interaction is evaluated by objective HQ criteria and online Rotorcraft Pilot Coupling (RPC) detection tools, whereas the ‘pilot’ part is assessed with subjective pilot ratings. Using subjective ratings has several disadvantages, such as being used at very late stages of the design when a prototype vehicle is already built. 
Addressing a serious HQ deficiency after this late design stage then requires immense effort to re-design the vehicle systems and repeat the flight tests...","Rotorcraft; Handling Qualities; Adverse Rotorcraft Pilot Couplings; Manual Control Behaviour Identification; Pilot Modeling; Pilot Induced Oscillations","en","doctoral thesis","","978-94-6186-966-1","","","","","","","","","Control & Simulation","","",""
"uuid:c3cd9297-6d49-4ed6-8979-22419b98622f","http://resolver.tudelft.nl/uuid:c3cd9297-6d49-4ed6-8979-22419b98622f","Non-saturated Chloride Diffusion in Sustainable Cementitious Materials","Zhang, Y. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Ye, G. (promotor); Delft University of Technology (degree granting institution)","2018","Chloride-induced reinforcement corrosion, caused by chloride diffusion in the unsaturated concrete cover, is a major durability problem of concrete structures. Current concepts for concrete mixture design and for service life prediction are generally based on the understanding of the chloride diffusion coefficient of saturated concrete. This will introduce uncertainties and give rise to misjudgement of the actual serviceability of concrete structures, especially when supplementary cementitious materials (SCMs) are added in the concrete mixture.
This thesis developed a numerical tool for predicting the chloride diffusion coefficient in cementitious materials at different degrees of water saturation. The tool accounts for the microstructure and moisture distribution in cementitious materials. The tool provided a basis for service life design based on unsaturated chloride diffusion. The results of the thesis emphasize the importance of looking at the chloride diffusion coefficient at unsaturated state, rather than at saturated state, in order to more effectively utilize the SCMs in concrete mixture design.","Supplementary cementitious materials; Pore structure; Degree of water saturation; Relative humidity; Chloride diffusion; Service life","en","doctoral thesis","","978-94-6366-097-6","","","","","","","","","Materials and Environment","","",""
"uuid:b2592fae-0a96-4aba-90a8-12c18e849a4c","http://resolver.tudelft.nl/uuid:b2592fae-0a96-4aba-90a8-12c18e849a4c","Plasma synthetic jet actuators for active flow control","Zong, H. (TU Delft Aerodynamics)","Scarano, F. (promotor); Kotsonis, M. (copromotor); Delft University of Technology (degree granting institution)","2018","In the last few decades, active flow control (AFC) technology has been developed to minimize the aerodynamic drag of transportation vehicles and maximize the propulsion efficiency of thermodynamic engines. The key of this technology is the actuators. Among all the actuators that have been proposed (i.e. fluidic, moving object, or plasma-based), plasma synthetic jet actuators (PSJAs) exhibit the unique capability of producing high velocity pulsed jets at high-frequency, thus promising to be applied in high-Reynolds number practical flows (e.g. aircraft wings, inlets, helicopter blades). The main objective of this thesis is to provide a deep understanding of the operation characteristics and flow control mechanisms of PSJAs, by virtue of advanced flow diagnostics and simplified theoretical analysis.","plasma; synthetic jet; flow control","en","doctoral thesis","","978-94-6186-954-8","","","","","","","","","Aerodynamics","","",""
"uuid:d68d443a-d9de-4dae-bf88-dae9f08eb8e5","http://resolver.tudelft.nl/uuid:d68d443a-d9de-4dae-bf88-dae9f08eb8e5","Engineering sucrose metabolism in Saccharomyces cerevisiae for improved ATP yield","Marques, W.L. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Gombert, A.K. (promotor); van Maris, A.J.A. (promotor); Delft University of Technology (degree granting institution)","2018","Contemporary society heavily depends on fossil sources. The energy and materials derived from fossil reserves were major contributors to the acceleration and intensification of agriculture and industry over the past 100 years. Such reserves are finite, hence, after expanding geographically, our economy is now consuming natural reserves that should not just support our generation but also those of the future. This unsustainable scenario becomes even more concerning when environmental impacts are taken into account. Even in the most ― and probably unrealistic ― optimistic climate scenarios, which assume no further increase in CO2 emissions in the next decades, the global temperature would still raise by 2 °C at the end of this century with respect to pre-industrial era, which could already have a negative impact on, for instance, food security.","","en","doctoral thesis","","978-94-6186-981-4","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:80f3d825-cb17-4783-b43e-9aa1156d847d","http://resolver.tudelft.nl/uuid:80f3d825-cb17-4783-b43e-9aa1156d847d","Armchair travelling the innovation journey: Building a narrative repertoire of the experiences of innovation project leaders","Enninga, T.L. (TU Delft OLD Management and Organisation)","Hultink, H.J. (promotor); van der Lugt, R. (copromotor); van den Hende, E.A. (copromotor); Delft University of Technology (degree granting institution)","2018","The title of this dissertation is Armchair travelling the innovation journey. ‘Armchair travelling’ is an expression for travelling to another place, in the comfort of one’s own place. ‘The innovation journey’ is the metaphor Van de Ven and colleagues (1999) have used for travelling the uncharted river of innovation, the highly unpredictable and uncontrollable process of innovation. This research study began with a brief remark from an innovation project leader who sighed after a long and rough journey: ‘had I known this ahead of time…’. From wondering ‘what could he have known ahead of time?’ the immediate question arose: how do such innovation journeys develop? How do other innovation project leaders lead the innovation journey? And could I find examples of studies about these experiences from an innovation project leader’s perspective that could have helped the sighing innovation project leader to have known at least some of the challenges ahead of time? This dissertation is the result of that quest, as we do know relatively little how this process of the innovation project leader unfolds over time. The aim of this study is to increase our understanding of how innovation project leaders lead their innovation journeys over time, and to capture those experiences that could be a source for others to learn from and to be better prepared. This research project takes a process approach. Such an approach is different from a variance study. 
Process thinking takes into account how and why things – people, organizations, strategies, environments – change, act and evolve over time, expressed by Andrew Pettigrew (1992, p.10) as catching “reality in flight”.","","en","doctoral thesis","","978-90-9031144-9","","","","","","2018-10-31","","","OLD Management and Organisation","","",""
"uuid:3d3eb1f4-c067-44f6-9c42-2acc988f6a63","http://resolver.tudelft.nl/uuid:3d3eb1f4-c067-44f6-9c42-2acc988f6a63","Large-scale atom manipulation on an ionic surface and its prospects","Kalff, F.E. (TU Delft QN/Otte Lab)","Otte, A. F. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2018","In this thesis, a technique is developed to manipulate individual atoms on an ionic surface, with great precision and at a large scale, to study the quantum mechanical properties of atomic assemblies on the nanoscale.
We use the needle of a scanning tunnelling microscope (STM) to approach missing atoms - vacancies - in a chlorine monolayer on a copper crystal, inducing a neighbouring Cl atom to jump to the vacancy position by ramping up the tunnel current. This procedure is automated - with sometimes up to 99% reliability - to construct a 1 kB memory where each bit is represented by an atom-vacancy pair. The data storage is stable at low temperatures and can be rewritten automatically, leading to an information density of 502 terabits per square inch, or 0.778 bits/nm^2.
Atom manipulation is then used to build other one- and two-dimensional structures with varying sizes and atom densities. In artificial crystals made of vacancies, standing wave patterns are observed at certain energies, suggesting that it is possible to tune electronic properties of the material, such as the dispersion, by controlling the local geometry with atomic assembly.
In the rest of the thesis, more structures were built by atom manipulation in order to investigate the coupling between assemblies of vacancies that form 'artificial molecules'. Resonances in scanning tunnelling spectroscopy measurements indicate the existence of quantum dots on the apex of the STM tip, of which the properties are explored. The chlorine terminated copper surface is also investigated for its use as a decoupling layer suitable for magnetic adatoms.
Various PIV-based methods for instantaneous pressure determination are capable of reconstructing the main features of instantaneous pressure fields, including methods that reconstruct pressure fields from a single velocity snapshot. Highly accurate pressure fields can be obtained by tracking individual particles in combination with advanced processing techniques. In view of this outcome, it is recommended to let the choice for a specific technique be guided by the desired accuracy, resolution and dimensionality of the pressure results, while taking into account practical considerations, in particular limitations in the capabilities of available measurement equipment and the complexity of the measurement system. Without such intent, the potential difficulties and complexity of data acquisition were demonstrated with the use of a 12-camera/2-laser PIV system.
For instantaneous pressure reconstruction through pseudo-tracking, new insights were obtained on its spatio-temporal filtering behaviour and the propagation of velocity measurement errors. A cut-off peak-response is specified as a function of the temporal track length and spatial resolution. Novel approaches are suggested to determine suitable temporal track lengths on the basis of the variation in material acceleration with track length and on the basis of pressure power spectra. Such spectra were also used to estimate the local error margin of reconstructed pressure values. For the implementation of pseudo-tracking, it is recommended to first construct tracks by a combination of a second-order integration method and linear interpolation, using an integration time step that is sufficiently small to meet the Courant–Friedrichs–Lewy condition. The material acceleration may subsequently be estimated from the tracks by means of least-squares fitting of a first-order polynomial or central differencing, depending on the type of input data.
When calculating mean pressure fields with the Reynolds-averaging approach, it is recommended to only include the terms that are associated with the mean flow and Reynolds stresses. The impact of neglecting spatial and temporal density variations may be estimated as the difference between pressure solutions calculated with and without density-gradient terms. After validation, the approach was employed to study the effects of an exhaust plume and nozzle length on transonic and supersonic axisymmetric base flows. Among other findings, the results showed that depending on the nozzle length the presence of a plume may cause a decrease in base pressure in the transonic flow cases and an increase in base pressure in the supersonic flow cases, indicating the effects of entrainment and displacement, respectively. The results furthermore highlight the need to consider during vehicle design that a longer nozzle, in which a plume expands further, not only corresponds to a lower exit pressure in the plume, but also to a different ambient pressure near the nozzle exit.","pseudo-tracking; PIV measurements; Pressure; material acceleration; base flow","en","doctoral thesis","","978-94-6366-082-2","","","","","","","","","Aerodynamics","","",""
"uuid:0bc06cd7-086d-4e37-bafb-aa0e30d07a54","http://resolver.tudelft.nl/uuid:0bc06cd7-086d-4e37-bafb-aa0e30d07a54","Model-based process development for biopharmaceuticals","Pirrung, S.M. (TU Delft BT/Bioprocess Engineering)","Ottens, M. (promotor); van der Wielen, L.A.M. (promotor); Delft University of Technology (degree granting institution)","2018","","","en","doctoral thesis","","978-94-6186-959-3","","","","","","2019-03-12","","","BT/Bioprocess Engineering","","",""
"uuid:97657389-0ee5-4de7-82b6-6647470160a5","http://resolver.tudelft.nl/uuid:97657389-0ee5-4de7-82b6-6647470160a5","Analysis and planning of power grids: A network perspective","Çetinay Iyicil, H. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Kuipers, F.A. (promotor); Delft University of Technology (degree granting institution)","2018","Electric power has become an essential part of daily life: we plug our electronic devices in, switch our lights on, and expect to have power. As the availability of power is usually taken for granted in modern societies, we mostly feel annoyed at its absence and perceive the importance of power during outages which have severe effects on the public order. Blackouts have had disastrous consequences for many countries and they continue to occur frequently. Such examples demonstrate the necessity for careful analysis and planning of power grids, to ultimately increase the reliability of power grids.
Power grids have evolved due to economic, environmental and human-caused factors. Nowadays, in addition to contingency analysis, the operation and planning of power grids face many other challenges (such as demand growth, targeted attacks, cascading failures, and renewable energy integration). Thus, many questions arise, including: which buses (nodes) to connect with a new line (link)? What are the impacts of malicious attacks on power grids? How may an initial failure result in a cascade of failures? How to prepare for the integration of renewable energy? Answering such questions requires developing new concepts and tools for the analysis and planning of power grids.
Power grids are among the largest and most complex man-made systems on earth. The complex nature of power grids and their underlying structure make it possible to analyse power grids through network science. Applications of network science to power grids have shown promising potential to capture the interdependencies between components and to understand the collective emergent behaviour of complex power grids. This thesis is motivated by the increasing need for reliable power grids and the merits of network science in the investigation of power grids. In this context, relying on network science, we model and analyse the power grid and its near-future challenges in terms of line removals/additions, malicious attacks, cascading failures, and renewable integration.","network science; power grids; cascading failures; wind power; sensitivity analyses; targeted attacks; centrality metrics","en","doctoral thesis","","978-94-6186-969-2","","","","","","","","","Network Architectures and Services","","",""
"uuid:a43c0c07-5908-4890-99b8-1744194f815b","http://resolver.tudelft.nl/uuid:a43c0c07-5908-4890-99b8-1744194f815b","Effective robot arm motions: stability and efficiency through natural dynamics","Wolfslag, W.J. (TU Delft Learning & Autonomous Control)","Wisse, M. (promotor); Babuska, R. (promotor); Delft University of Technology (degree granting institution)","2018","While progress in many fields of robotics has been swift, robot arm movement
in scenarios without contact has changed little in the last decades. This lack of
change is not due a lack of potential for improvement. After all, the human arms
that these robot emulate move in ways that are more robust, energy efficient and
adaptable. This thesis is inspired by human movement skill in these three aspects
to improve the movement of robotic arms in non-contact situations. The six main
contributions of this thesis are divided over those three aspects. The aspects are
studied for two motions, the reaching motion, that is, move from the initial position
to a pre-specified target position and back, and the pick-and-place motion, which
adds picking and placing an object at the initial and target positions respectively.
The first aspect, robustness, is studied from a stability standpoint. The first aspect
is the topic of the first part of this thesis, which contains four of its six main
contributions. Stability, as understood in this thesis, is the property of returning
to a fixed (desired) motion after an initial disturbance two motions. This form
of stability is a minimal requirement for successful task completion. Most current
robot arms rely on fast sensory feedback for their stability. This contrasts with
humans, who rely on skill at choosing motions that are intrinsically stable. Such
self-stable motions have been used by earlier robotics researchers to make robots
that juggle or walk without the need for sensory feedback. However, these robots
perform tasks involving impacts, which can have a large stabilizing effect. No
such impacts are available in reaching motions. A self-stabilizing reaching motion
instead depends on ingenious use potential energy and centrifugal or Coriolis
effects.","","en","doctoral thesis","","978-94-028-1221-3","","","","","","","","","Learning & Autonomous Control","","",""
"uuid:20ba0c91-198a-4334-bb6d-a7d99d76d32b","http://resolver.tudelft.nl/uuid:20ba0c91-198a-4334-bb6d-a7d99d76d32b","Free standing interconnects for stretchable electronics","Joshi, S. (TU Delft Electronic Components, Technology and Materials)","Dekker, R. (promotor); Delft University of Technology (degree granting institution)","2018","Advancements in stretchable electronic systems have changed the way modern electronics interact with their target systems, by their conformability to more complex shapes as compared to conventional rigid or flexible electronics. By utilizing this, limitless applications in the field of healthcare can be realized, such as wearable and implantable electronics. Medical devices that can be stretched/conformed to a certain limit, will reduce the effort by physicians and improve the user experience by providing enhanced dynamic shaping and matching mechanical properties to that of the human body. In literature, many methods for the realization of stretchable electronic systems are presented. In this Thesis, the design and micro fabrication of freestanding stretchable interconnect technologies for both large and small area devices is presented. Free-standing interconnects have the freedom to bend out of plane during stretching, thus enabling an increase in stretchability. This Thesis presents a reliable microfabrication technology for stretchable electronic circuits with high density interconnects, that can be considerably stretched even in densely packed/high fill-factor circuits. To fabricate and study the free standing interconnects, a demonstrator patch with a sparse horse-shoe shaped interconnect design is presented in the first part of the Thesis. To render such large structures free standing, several technology modules needed to be developed. After the first proof of principle showing free standing polyimidemeander structures, the poor adhesion of polyimide (PI) and polydimethylsiloxane (PDMS) led to failure of the devices. 
Therefore, two methods involving surface modification of polyimide, and using an intermediate adhesion layer for improving the adhesion between PI and PDMS, were tested and assessed. Finally, butyl rubber as an intermediate layer was selected and implemented in the final fabrication process. The adhesive bond initiated by the butyl rubber (BR), apart from being extremely strong, is also chemically resistant and mechanically stable. For the final fabrication flow of these structures with metal interconnects, technological modules like PDMS pillars to prevent drooping of the large horse-shoe shaped interconnects and PI-PDMS “stitches” to ensure a reliable adhesion of the pillars to the interconnects were developed and implemented. A demonstrator patch with reversible stretchability of 80% is presented. However, it was observed that the testing of such large free-standing structures on a patch is not straightforward. In the second part of the thesis, testing was made an integral part of the design of the device. A device with a high fill factor, i.e. densely packed rigid islands, allows only for a very small footprint of the interconnects. Therefore, a sub-micron interconnect design that can be realized with standard fine-pitch photolithography based IC techniques was developed, and an interconnect pattern based on a design presented by S. Shafqat et al. was implemented. In the second part of this Thesis, a test device for the micro tensile testing of these micron sized free standing structures is designed and fabricated for their easy, damage-free handling and mounting in a test setup. The device is fabricated as a single chip that can be separated into two movable parts after fixing it on the micro tensile test stage. The test device successfully demonstrated the tensile testing of the micron sized free standing structures that show reversible stretchability up to 2000%, while simultaneously measuring the resistance. 
Moreover, the generic design of the device allows the implementation and testing of free-standing structures of different sizes and shapes. After the fabrication of the micron sized free standing structures, several “fur-like” residues were observed after the oxygen plasma etching of polyimide using aluminum as a hard etch mask. Therefore, different methods for the residue-free etching of the polyimide were explored and a “fur-free” procedure for the etching of PI using a one-step reactive ion etch of the metal hard-etch mask is presented. In conclusion, the results and technological advances presented in this PhD Thesis have led to an increased understanding of the technologies for the reliable fabrication of free standing interconnect structures and have resulted in an improved stretchability in conformal electronic devices.","stretchable electronics; free-standing; microfabrication; body patch; interconnects; PI-PDMS adhesion; PI residues","en","doctoral thesis","","978-94-91909-52-8","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:c4ca2469-1bf2-458d-b944-c76c7d061abe","http://resolver.tudelft.nl/uuid:c4ca2469-1bf2-458d-b944-c76c7d061abe","Porous organic framework (POF) membranes for CO2 separation","Shan, M. (TU Delft ChemE/Transport Phenomena)","Kapteijn, F. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2018","Membrane-based separation has become a promising alternative to traditional separation processes to capture CO2 owing to the great features such as energy efficiency and environmental friendliness. Polymers are easy to process and have been commercialized. However, most commercial polymer membranes suffer from a trade-off relation between gas permeability and selectivity, expressed as the Robeson upper bound1. Porous organic frameworks (POFs) are an emerging class of microporous polymers, which may have high CO2 permeability and selectivity when being processed into membranes due to their intrinsic porosity and strong CO2 adsorption ability. However, using POFs as membranes are still at the infancy stage due to their insolubility in most common solvents.
Thus, this thesis focuses on the development of porous organic framework (POF) membranes for various CO2 separation applications, including biogas upgrading (Chapter 2), post-combustion CO2 capture (Chapters 3 and 4) and pre-combustion capture (Chapter 5). The fully organic nature together with the excellent thermal and chemical stabilities make POFs promising membrane materials for CO2 separation.
loaded by waves. Therefore, regular inspection is needed in order to confirm adequate structural integrity throughout the entire service life of the structure. Detected fatigue cracks that are too long for safe operation need to be repaired. Detected cracks of acceptable length need to be at least inspected more frequently. These inspections are costly, time consuming, and hazardous, so additional inspections on top of the periodical class approval surveys are to be avoided if possible.","Fatigue; crack monitoring; ship and offshore structures; Self Magnetic Flux Leakage; metal magnetic memory method","en","doctoral thesis","","978-94-92679-49-9","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:d823ad12-60ea-49d0-b8c0-0ccd1449387f","http://resolver.tudelft.nl/uuid:d823ad12-60ea-49d0-b8c0-0ccd1449387f","Drift-Diffusion-Reaction Model for Time-Domain Analysis of Charging Phenomena in Electron-Beam Irradiated Insulators","Raftari Tangabi, Behrouz (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); Budko, N.V. (copromotor); Delft University of Technology (degree granting institution)","2018","Electron microscopes use a beam of electrons to illuminate a specimen and extract the needed information from the interaction of the particles with matter in order to produce a high-resolution image. The main research question of the present study arose from the fact that this resolution is degraded when a specimen contains insulating materials. In the electron microscopy of insulators, the effect behind this degradation of image resolution is known as the charging effect. The charging effect needs to be studied and understood, in particular since biological specimens are either insulators or contain insulating parts.","","en","doctoral thesis","","978-94-6186-960-9","","","","","","","","","Numerical Analysis","","",""
"uuid:9d79cf6d-19a5-4f0f-a01e-6573f8e1b2ce","http://resolver.tudelft.nl/uuid:9d79cf6d-19a5-4f0f-a01e-6573f8e1b2ce","Linearity Research of A CMOS Image Sensor","Wang, F. (TU Delft Electronic Instrumentation)","Theuwissen, A.J.P.A.M. (promotor); Delft University of Technology (degree granting institution)","2018","This thesis provides a thorough analysis of the linearity characteristics of a CMOS image sensor. Firstly, this thesis analyzes the factors that cause the nonlinearity of the image sensors. These factors are then verified by simulation results of a proposed behavioral model and by measurements of a prototype chip. Secondly, different techniques are presented to improve the linearity of the whole imaging system, and the effectiveness of these techniques is further confirmed by measurement results of several test chips.","","en","doctoral thesis","","978-94-028-1233-6","","","","","","","","","Electronic Instrumentation","","",""
"uuid:d51d8117-7b1f-4aca-a488-c3bc57f39167","http://resolver.tudelft.nl/uuid:d51d8117-7b1f-4aca-a488-c3bc57f39167","Redox behaviour of model systems for spent nuclear fuel surfaces","Cakir, P. (TU Delft RST/Reactor Physics and Nuclear Materials)","Konings, R. (promotor); Gouder, T (copromotor); Delft University of Technology (degree granting institution)","2018","Safety assessments are the main pillars of the analysis of the impact of storage of spent nuclear fuel. There are many scenarios to describe what might happen during the storage and disposal time of the nuclear waste. Even though the main component of the spent nuclear fuel is UO2, the matrix contains transuranium elements and fission products, which have different chemical behaviour and lead to an altered physical state after irradiation. Thus, the complex nature of spent nuclear fuels requires understanding of several mechanisms through investigation of individual parameters and their effect on one another. This is achieved by single-effect studies, starting from simple systems and progressing to gradually more complex systems. In this thesis, thin films have been used as model systems to simulate the spent fuel in a systematic manner. The main focus was given to the actinide (mixed) oxides (Th, U, Np, Pu, and Ce as surrogate for Pu and as fission product). Throughout this thesis, the suitability of the use of thin films instead of bulk material has been demonstrated, and the investigation of redox properties of model systems for spent fuels using different methods is described.","Thin Films; Actinide Oxides; Redox; Photoelectron Spectroscopy","en","doctoral thesis","","978-94-6380-048-8","","","","","","","","","RST/Reactor Physics and Nuclear Materials","","",""
"uuid:287a608e-85af-47d3-877c-cfc97e3b9939","http://resolver.tudelft.nl/uuid:287a608e-85af-47d3-877c-cfc97e3b9939","Materializing Technologies: Surfacing Focal Things and Practices with Design","Robbins, H.V. (TU Delft Ethics & Philosophy of Technology; TU Delft Human Information Communication Design)","Giaccardi, Elisa (promotor); Karana, E. (copromotor); Delft University of Technology (degree granting institution)","2018","Today, the world is populated with what we colloquially refer to as “black boxes.” These are technologies that perform sophisticated operations but obfuscate these complex operations, providing us with little context to what they do, how they work, and the role they play in our lives. In simple terms, this thesis addresses the following broadly framed questions: what parts of these black boxes should be made legible to the layperson? And regarding these parts: how can design be harnessed to reframe them as legible?","","en","doctoral thesis","","978-94-6186-964-7","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:28ce86f1-ab43-44fa-a1a6-e8a2f917c9ce","http://resolver.tudelft.nl/uuid:28ce86f1-ab43-44fa-a1a6-e8a2f917c9ce","The key role of crevasse splays in prograding river systems: Analysis of evolving floodplain accommodation and its implications for architecture and reservoir potential","van Toorenenburg, K.A. (TU Delft Applied Geology)","Donselaar, M.E. (promotor); Weltje, Gert Jan (promotor); Delft University of Technology (degree granting institution)","2018","A generic life cycle applies to crevasse splays in non-degradational fluvial systems, typically ending in healing and abandonment. Crevasse-splay channels adjust to a graded equilibrium profile through proximal erosion and distal deposition, with their distal termini acting as a (prograding) local base level. When proximal incision advances to below the maximum flooding level, a reflux of floodwater occurs during the waning stage of flooding. The resultant decrease in gradient ultimately leads to the backfilling and abandonment of a crevasse splay, provided that the elevation at its distal fringe remains higher than that of the trunk channel floor. Consecutive crevasse splays form an alluvial ridge through lateral amalgamation and subsequent vertical stacking, perching the active river above the surrounding floodplain. Superelevation of the channel thalweg above the distal termini of a prograding crevasse splay leads to avulsion.
A high-resolution morphological reconstruction of both the active (and recently abandoned) river(s) and the surrounding floodplain has been established to test the proposed life cycle of crevasse splays and evaluate its role in autogenic avulsion and organisation of the fluvial system. An avulsion can only occur when an overbank path of steepest descent (which may partially reuse the existing channel or remnant channel depressions) reaches the system base level in a shorter distance than the along-channel distance to its terminus. Crevasse splays prograde along this overbank flow path and capture an increasing portion of the total discharge, accelerating their development. When the crevasse apex incises down to or below its trunk channel thalweg, the avulsion is complete. The overbank path of steepest descent (i.e., avulsion path) is governed by floodplain topography, which is largely formed of abandoned alluvial ridges. This leads to compensational stacking of successive prograding channel belts, resulting in a fan of amalgamated ridges.","Crevasse splays; Prograding river systems; Floodplain evolution; Low net-to-gross stratigraphy; Reservoir potential","en","doctoral thesis","","978-94-6366-084-6","","","","","","","","","Applied Geology","","",""
"uuid:15818216-0eb5-4545-bd2e-27eaa38262ba","http://resolver.tudelft.nl/uuid:15818216-0eb5-4545-bd2e-27eaa38262ba","Design and Application of Scalable Evolutionary Algorithms in Electricity Distribution Network Expansion Planning","Luong, N.H. (TU Delft Intelligent Electrical Power Grids)","la Poutré, J.A. (promotor); Bosman, P.A.N. (promotor); Delft University of Technology (degree granting institution)","2018","In our modern daily life, many activities require electricity, for example, the usage of domestic appliances, manufacturing, communication, and transportation. It is therefore essential to maintain a reliable supply of electricity to ensure the operation of such activities. The electricity supply, in a large part, depends on the underlying electrical networks that transfer electricity from power plants to meet the demand of end users. In the past, electricity consumption has grown over time and, at some point, the electricity demand will exceed the current capacity of certain network assets, causing overloads on parts of the networks. Functioning under overload conditions reduces the reliability of the networks and also damages network assets. Network reinforcement is thus required. This incurs substantial investment costs and time-consuming activities, such as acquisitions of new assets, constructions of substations, and installations of suitable cables and other electrical devices. Network operator companies, therefore, need to properly predict the growth of electricity demand and make suitable expansion plans to enhance the capacity of their networks. In addition, the recent emergence of renewable energy sources and smart grid technologies changes electricity consumption behaviors of users, the growth of electricity demand in general, and also the directions of network flows (due to local generation). This poses additional challenges that need to be addressed by the network operators. 
In this dissertation, we are interested in medium-voltage distribution networks, which are electrical networks that deliver electricity from high-voltage transmission networks to low-voltage distribution networks. Medium-voltage distribution networks typically have more complicated structures than low-voltage networks and require more frequent reinforcement activities than high-voltage transmission networks. We aim to develop robust computational methods to assist distribution network operators (DNOs) in tackling network expansion planning problems.","evolutionary algorithms; multi-objective optimization; power systems; distribution networks; expansion planning","en","doctoral thesis","","978-94-028-1098-1","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:f613079c-90a1-47dc-afcb-f6833646ca5a","http://resolver.tudelft.nl/uuid:f613079c-90a1-47dc-afcb-f6833646ca5a","LQG and Gaussian process techniques: For fixed-structure wind turbine control","Bijl, H.J. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); van Wingerden, J.W. (promotor); Delft University of Technology (degree granting institution)","2018","Wind turbines are growing bigger to become more cost-efficient. This increases the severity of the vibrations present in the turbine blades, both due to predictable effects like wind shear and tower shadow, and due to less predictable effects like turbulence and flutter. If wind turbines are to become bigger and more cost-efficient, these vibrations need to be reduced. This can be done by installing trailing-edge flaps on the blades. The variety of circumstances in which the turbine must operate results in large uncertainties. As such, we need methods that can take stochastic effects into account. Preferably, we develop an algorithm that can learn from online data how the flaps affect the wind turbine and how to optimally control them. A simple prior analysis can be done using a linearized version of the system. In this case it is important to know not only the expected cost (damage) that will be incurred by the wind turbine in various situations, but also the spread of this cost. This can, for instance, be done by looking at the variance of the cost function. Various expressions are available to analytically calculate this variance. Alternatively, we can prescribe a degree of stability for the system. Due to the limitations of linear approximations of systems, it is more effective to apply nonlinear regression methods. A promising one is Gaussian process (GP) regression. Given a training set (X, y), it can predict function values f(x*) for test points x*. 
It has its basis in Bayesian probability theory, which allows it not only to make this prediction, but also to give information (the variance) about its accuracy. The usual way in which GP regression is applied has a few important limitations. Most importantly, it is computationally intensive, especially when applied to constantly growing data sets. In addition, it has difficulties dealing with noise present in the training input points x. There are methods to solve either of these issues, but these tricks generally do not work well together, or their combination requires many computational resources. However, by making the right approximations, like Taylor expansions and at times even linearizations, Gaussian process regression can be applied efficiently, in an online way, to data sets with noisy input points. This enables GP regression to be used for system identification problems like online non-linear black-box modeling. Another limitation is that it can be difficult to find the optimum of a Gaussian process. The reason is that the optimum of a Gaussian process is not a fixed point but a random variable. The distribution of this optimum cannot be calculated analytically, but we can use particle methods to approximate it. We can subsequently use this principle to efficiently explore an unknown nonlinear function, trying to locate its optimum. To do so, we sample a point x from the optimum distribution, measure the function value f(x) at this point, update the Gaussian process approximation of the function, update the optimum distribution, and repeat this process until the distribution has converged. Finding the optimum of a function in this way has been shown to have competitive performance at keeping the cumulative regret low, compared to similar algorithms. In addition, it allows wind turbines to tune the gains of a fixed-structure controller so as to optimize a nonlinear cost function like the damage equivalent load. 
All these improvements are a step forward in the application of Gaussian process regression to wind turbine applications. But as is always the case with research, there are still many things left to improve further.","Gaussian processes; regression; machine learning; optimization; system identification; automatic control; wind energy; smart rotor","en","doctoral thesis","","978-94-6299-501-7","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:e6df2051-7db1-4316-9867-c800be313c13","http://resolver.tudelft.nl/uuid:e6df2051-7db1-4316-9867-c800be313c13","Assembly of Membrane-deforming Objects in Tubular and Vesicular Membranes: Theory and Simulations","Vahid Belarghou, A. (TU Delft ChemE/Product and Process Engineering)","Dogterom, A.M. (promotor); Idema, T. (copromotor); Delft University of Technology (degree granting institution)","2018","Biological membranes are selective soft barriers that compartmentalize the internal structure of a cell into organelles and separate them as a whole from the external environment. Due to their innate ability to undergo constant reshaping, cellular membranes attain diverse shapes, ranging from simple spherical vesicles to more peculiar structures like the interconnected network of tubes found in the endoplasmic reticulum. Membranes are not only composed of lipids, but also host an enormous number of inclusions such as proteins. Recent studies of biological membranes have revealed that such inclusions play a key role in diverse biological processes by either sensing or inducing perturbations to the membrane shape. In this dissertation, we studied the interplay between the shape of the membrane and the spatial organization of attached curvature-inducing objects, using mathematical tools and numerical simulations in highly curved spherical and cylindrical geometries.
First, we investigated the interaction between inclusions of different shapes embedded in or adhered to tubular membranes. Our combined theoretical analysis and numerical simulation results evinced that tubular membranes, in contrast to their planar counterparts, transmit an attractive force between inclusions, stemming from their closed and curved geometry. We then elucidated that collective interaction between proteins results in the formation of line-like and ring-like clusters, depending on their intrinsic shape (Chapters 2–4). We further showed how curvature-sensing crescent-like proteins at high densities can constrict tubular membranes and facilitate their splitting, demonstrating that the curvature-sensing and curvature-inducing properties of proteins are two sides of the same coin. Moreover, we used our simulation results to explain how mitochondrial machinery triggers, facilitates and drives membrane fission in its tubular network to avoid entanglements (Chapter 3).
Next, we examined the interaction of spherical proteins adhered to closed vesicles. Our simulation results – supported by recent experimental evidence – revealed membrane curvature as a common physical origin for interactions between any membrane deforming objects, from nanometre-sized proteins to micrometre-sized particles (Chapter 5). Our further simulations unraveled how introducing curvature variation on the surface of a closed vesicle can be exploited by inanimate particles to regulate their pattern formation (Chapter 6).
Finally, through theoretical calculations, we analyzed the interplay between the shape of a cell and the rearrangement of attached microtubules (Chapter 7). Our results particularly suggested that the commonly reported parallel structure and bundling of microtubules can be induced by membrane-mediated interactions.","","en","doctoral thesis","","978-90-8593-365-6","","","","Casimir PhD series: 2018-36","","","","","ChemE/Product and Process Engineering","","",""
"uuid:e7b3930d-106c-4254-925c-a5b134ca32a6","http://resolver.tudelft.nl/uuid:e7b3930d-106c-4254-925c-a5b134ca32a6","Health Indexing for High Voltage Gas-Insulated Switchgear (HVAC GIS)","Al-Suhaily, M. (TU Delft DC systems, Energy conversion & Storage)","Smit, J.J. (promotor); Delft University of Technology (degree granting institution)","2018","","","en","doctoral thesis","","978-94-6380-040-2","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:dfed6452-10ae-42d2-8e00-b5c0c6873ef8","http://resolver.tudelft.nl/uuid:dfed6452-10ae-42d2-8e00-b5c0c6873ef8","Towards a Method of Participatory Planning in an Emerging Metropolitan Delta in the Context of Climate Change: The Case of Lower Paraná Delta, Argentina","Zagare, V.M.E. (TU Delft OLD Urban Compositions)","Meyer, Han (promotor); Sepulveda Carmona, D.A. (copromotor); Delft University of Technology (degree granting institution)","2018","The Paraná River is the third largest river in the Americas, after the Mississippi and the Amazon. Instead of flowing directly to the sea, it flows into the Rio de la Plata (located between Argentina and Uruguay) through a complex delta system. This delta is a large and heterogeneous territory that spreads over three provinces of Argentina and is characterized by different dichotomies along its extension. On the one hand, the islands of the delta are young alluvial lands in constant transformation due to processes of sedimentation, and are subjected to pulses of floods influenced by the Paraná River streamflow, droughts, precipitation, and strong southeastern winds coming from the Atlantic Ocean. Although these alluvial territories seem pristine, they have been moderately altered as a result of the development of economic activities. On the other hand, along the edges of the delta lie the older territories of the mainland, created in the Pleistocene and less dynamic. Here, a network of cities of dissimilar sizes forms the wealthiest corridor of the country. Conurbations such as Rosario (located in the province of Santa Fe) and the Metropolitan Area of Buenos Aires (located in the province of the same name) exert different pressures on the territory, generating an increasing impact on the delta system. In other words, this delta shows a contrast between the wild and dynamic condition of the islands and the more stable but strongly urbanized edges. 
Nevertheless, this dichotomy is not the only one that can be found in the delta. On the contrary, there are other oppositions regarding economic, policy and social realms, expressed through a polarized, unsustainable and unplanned land use, which turns the area into a vulnerable place, given the uncertain context of climate change...","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-090-7","","","","A+BE | Architecture and the Built Environment No 25 (2018)","","","","","OLD Urban Compositions","","",""
"uuid:8f52bab6-c097-4c4c-8291-3e76b0285f55","http://resolver.tudelft.nl/uuid:8f52bab6-c097-4c4c-8291-3e76b0285f55","Integrative modeling of inhibitor response in breast cancer cells","Thijssen, B. (TU Delft Pattern Recognition and Bioinformatics)","Wessels, L.F.A. (promotor); Delft University of Technology (degree granting institution)","2018","Cancer patients often respond very differently to any given drug. Some patients respond very well, while others do not respond at all, leaving the cancer to grow unimpeded. If we have a good understanding of how this variability in response arises, we will be better able to choose the optimal treatment strategy for each patient. The variability in drug response observed in patients is also seen in cancer cell lines when they are cultured in vitro. Detailed cell-biological studies have revealed many different mechanisms which affect the response of cancer cells to anticancer drugs. Certain mutations can render cells sensitive to a given drug, while other mutations, or changes in gene expression, can cause resistance. However, since any combination of these drug sensitivity mechanisms can be operating in a particular cell line, it is difficult to predict whether it will be sensitive or resistant to a particular drug. Computational modeling can be used to better understand this complexity. In this dissertation, we developed a novel method, which we call Inference of Signaling Activity, that can be used to infer the contributions of different drug-sensitivity and drug-resistance mechanisms. We used the available knowledge of signal transduction in cells, and integrated multiple data types including mutations, gene amplifications and deletions, gene expression levels, protein phosphorylation, growth rates and drug response data to infer the signaling activities in each cell line. 
After an extensive characterization of thirty different breast cancer cell lines, we developed a model that can explain a large part of the variability in the response of these cell lines to seven different kinase inhibitors. At the same time, the response of some cell lines was not recapitulated exactly. Using further data-driven analysis, we found a novel determinant of mTOR inhibitor sensitivity. Overexpression of 4EBP1 in breast cancer cells renders them more sensitive to these inhibitors. This modeling approach can now be further developed to determine whether it can also be used to explain and predict the response of cancer patients. Initially this modeling framework did not permit the inclusion of feedback signaling mechanisms, even though we know feedback control to be an important feature of cellular signaling networks. We therefore subsequently extended our framework such that feedback could be included, and with this extension we were able to delineate signaling activities in regulatory networks with multiple, interrelated feedback loops, again taking into account different datasets. An important consideration in this dissertation was the quantification of uncertainty in model parameters, for which we used Bayesian statistics. If the uncertainty in parameter estimates is not taken into account, we can be lulled into a false sense of security and misinterpret which elements of the model are important. We developed a software package with efficient, multi-threaded implementations of various Monte Carlo sampling algorithms, which allowed the inference to be done in workable amounts of time. We further showed in a different biological system – cell cycle regulation in yeast – that the integration of different types of measurements can increase the identifiability of parameters. Finally, we investigated whether Bayesian inference with multiple datasets can be done sequentially using intermediate posterior approximations. 
Each of these contributions to Bayesian inference with multiple datasets may be used more broadly in modeling different biological systems. Although further development and validation of the drug response models is needed, the use of integrative computational modeling appears to be a promising approach for enabling precision medicine for cancer patients in the future.","","en","doctoral thesis","","978-94-6186-958-6","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:1e20169a-6179-47f8-b70d-452cd3e11460","http://resolver.tudelft.nl/uuid:1e20169a-6179-47f8-b70d-452cd3e11460","Towards fundamental understanding of interlaminar ply delamination growth under mode II and mixed-mode loading","Amaral, L. (TU Delft Structural Integrity & Composites)","Alderliesten, R.C. (promotor); Benedictus, R. (promotor); Delft University of Technology (degree granting institution)","2018","In a context of many studies addressing delamination growth in laminated composites, this thesis provides understanding of the underlying physics of this phenomenon. The models currently used to assess delamination growth are phenomenological in nature and rely almost solely on curve fitting and experimental data. These empirical models are used to predict delamination growth rather than aid in understanding it. This lack of knowledge on the physics of delaminations causes problems for both academia and industry. From the perspective of academia, science needs to be built upon fundamental understanding. However, this is currently not the case for delamination growth. Phenomenological trends, for which the reasons are not yet clear, are assumed as fact, and science tries to advance by building on these trends. Meanwhile, from the perspective of industry, engineers compensate for the lack of fundamental understanding with conservatism, overdesign and a large number of tests, yielding extra costs. Therefore, the present thesis seeks to understand the fundamentals of delamination growth by physically characterising it. This characterisation is performed by relating the strain energy dissipated in delamination growth to the delamination growth rate and the damage mechanisms encountered on the fracture surfaces. To this end, carbon-epoxy unidirectional laminated specimens were manufactured and tested under mode II and mixed-mode static and fatigue loading. 
Fracture surfaces were analysed with a scanning electron microscope, and the damage mechanisms were identified and correlated to the strain energy dissipated...","","en","doctoral thesis","","","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:d0a8f1b0-d829-4a34-be5a-1ff7aa8679ca","http://resolver.tudelft.nl/uuid:d0a8f1b0-d829-4a34-be5a-1ff7aa8679ca","Visual Quality of Experience: A Metric-Driven Perspective","Siahaan, E. (TU Delft Multimedia Computing)","Hanjalic, A. (promotor); Redi, J.A. (copromotor); Delft University of Technology (degree granting institution)","2018","Multimedia systems are typically optimized in a way that maximizes users’ satisfaction of using the systems/services. This user satisfaction is what is commonly referred to as Quality of Experience (QoE). For visual media, such as images and videos, the optimization of QoE has meant reducing the visibility of artifacts (e.g. noise or other disturbing factors) in the visual media. This is based on the assumption that the mere appearance of artifacts would disrupt the whole visual experience, in a world where media were mostly consumed passively, and in well-defined contexts (e.g., TV broadcasts). Nowadays, the way users experience visual media has changed, thanks to the diffusion of mobile, interactive, immersive, and on-demand technology. Media are now consumed in many different contexts, for example, in the interactive and customizable contexts of social media, or in the immersive contexts of virtual and augmented reality. As a consequence of these developments, a user’s visual QoE is no longer determined solely by the appearance of artifacts, but also by factors relevant to the viewing context. This thesis brings new insights into modeling and automatically assessing users’ visual QoE in view of the developments above. The thesis starts with looking into subjective methodologies for QoE assessment, and continues with developing objective quality metrics that incorporate QoE influencing factors to improve state-of-the-art metrics. Developing reliable and accurate objective metrics to automatically assess users’ visual QoE requires subjective data that are reliable as well. 
This thesis argues that existing methodologies for collecting subjective data might not be reliable when used to evaluate QoE factors that are highly subjective, or that are new to the research community. Highly subjective quantities may yield different conclusions across experiments. As for new types of media, they often bring uncertainty about how to evaluate them. Two studies are then presented in this regard. The first study considers the assessment of image aesthetic appeal, as one example of a highly subjective quantity. A large-scale study was conducted to compare the use of different subjective methodologies to collect aesthetic appeal data, and some ways to measure the data reliably were proposed. The second study considers the assessment of point cloud quality, as one example of a new type of media (i.e. immersive media). The study explores quantitative and qualitative approaches to understand the way users judge point cloud images. Following the studies on subjective QoE assessment, two studies on objective QoE metrics are presented in this thesis. Despite existing efforts to model the influence of different factors on visual QoE, little work has proposed to incorporate these factors into existing objective quality metrics to improve on the state of the art. The first study on objective QoE metrics in this thesis investigates the influence of image content/semantic categories (i.e. scene and object categories) on visual QoE, and proposes to include semantic category features in objective image quality metrics. The proposed approach shows improvement over the state of the art in predicting image quality. The next study on objective quality metrics investigates new QoE influencing factors for point cloud images, and proposes to incorporate these into an objective quality metric for point cloud images. 
The results of the studies presented in this thesis show how existing subjective methodologies could yield reliable aesthetic appeal data, and explore point cloud QoE influencing factors. Moreover, the results show that incorporating new QoE influencing factors into objective image quality metrics could improve state-of-the-art performance in predicting users’ QoE. At the end of this thesis, some recommendations are given for future research following up the findings in this thesis.","Quality of Experience (QoE); image quality metrics; subjective methodologies","en","doctoral thesis","","978-94-028-1188-9","","","","","","","","","Multimedia Computing","","",""
"uuid:177e9f4c-f847-436d-9fd4-9ed97ba709d9","http://resolver.tudelft.nl/uuid:177e9f4c-f847-436d-9fd4-9ed97ba709d9","Metabolic trade-offs arising from increased free energy conservation in Saccharomyces cerevisiae","Schumacher, R. (TU Delft OLD BT/Cell Systems Engineering)","Heijnen, J.J. (promotor); Wahl, S.A. (copromotor); Delft University of Technology (degree granting institution)","2018","This thesis deals with increasing the free energy conservation in
chemotrophic microorganisms, with emphasis on S. cerevisiae, and investigates
a number of different aspects related to industrial fermentation processes.","","en","doctoral thesis","","978-94-6375-156-8","","","","","","","","","OLD BT/Cell Systems Engineering","","",""
"uuid:addc45be-225e-4a52-acb6-59b3c967deb1","http://resolver.tudelft.nl/uuid:addc45be-225e-4a52-acb6-59b3c967deb1","Advancing single-molecule instrumentation through nanoscale optics, fabrication, and surface functionalization","Ha, S. (TU Delft BN/Nynke Dekker Lab)","Dekker, N.H. (promotor); Delft University of Technology (degree granting institution)","2018","This thesis describes developments in single-molecule instrumentation, in particular the optical torque wrench and DNA nanocurtains, with the goal of employing these techniques in studies of biomolecules and biomotors. Importantly, the use of single-crystal rutile titanium dioxide nanocylinders is suggested for the optical torque wrench, enabling access to a larger torque-speed space and improved spatiotemporal resolution.","biophysics; single-molecule; nano-optics; nanofabrication; surface functionalization; optical tweezers; optical torque wrench; DNA nanocurtain; fluorescence microscopy","en","doctoral thesis","","978-90-8593-367-0","","","","Casimir PhD Series 2018-37","","2019-10-15","","","BN/Nynke Dekker Lab","","",""
"uuid:6abb7296-8c34-45a8-af62-0b419bbe1e75","http://resolver.tudelft.nl/uuid:6abb7296-8c34-45a8-af62-0b419bbe1e75","Transactions; or Architecture as a System of Research Programs","Mejia Hernandez, J.A. (TU Delft OLD Methods & Analysis)","Avermaete, T.L.P. (promotor); Delft University of Technology (degree granting institution)","2018","This study of the historiography of architecture and the built environment develops the thesis that well-known modernist histories of architecture, such as those written by Reyner Banham, remain unable to appraise the many nuances and complexities that characterize modern architecture. It is argued here that, among other reasons, they are unable to do so because they follow a fundamentally hermeneutic trajectory, on the one hand, and because they are strongly reliant on elements of historicism, as defined by Karl Popper, on the other.
In order to confront the inabilities that stem from these two causes, the study reflects on Karl Popper’s investigations into knowledge, science, and society; and, more specifically, revisits the architectural historian Stanford Anderson’s attempts to use the work of Popper and Imre Lakatos (one of Popper's critics and collaborators) for the appraisal of architecture.
Key among this work is Imre Lakatos’s formulation of a methodology of scientific research programs, of which Anderson tried to produce a qualified version for the appraisal of architectural design. This study evaluates that qualified version, paying special attention to the examples utilized to present it at work.
Subsequently, a tripartite counter-example is advanced as a development of the examples used by Anderson to present his qualified version at work. Together, the study of Anderson’s approach to the work of Popper and Lakatos, and the description of three architectures understood as parts of an architectural research program, confront the hermeneutic trajectory and the elements of historicism identified in modernist architectural historiography, and provide new elements for the appraisal of modern architecture.
As part of this research, many functions, flows, areas and actors in the urban landscape system of Rotterdam have been studied. This research focuses on the development, design and testing of new approaches to strengthen existing urban qualities and to tackle problems in such a way that positive effects for other functions (synergies) arise at the same time in order to improve the quality of life in cities.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-079-2","","","","A+BE | Architecture and the Built Environment No 24 (2018)","","","","","Landscape Architecture","","",""
"uuid:4609c703-4f96-4932-9b1e-6285e8862721","http://resolver.tudelft.nl/uuid:4609c703-4f96-4932-9b1e-6285e8862721","Affordable Condominium Housing: A comparative analysis of low-income homeownership in Colombia and Ecuador","Donoso-gomez, R.E. (TU Delft OLD Housing Systems)","Elsinga, M.G. (promotor); Boelhouwer, P.J. (copromotor); Delft University of Technology (degree granting institution)","2018","As cities grow and denser communities are built, the meaning of homeownership changes. In a highly urbanized future, it will be critical to know how to make high-density housing in condominium ownership sustainable and resilient. A sector of social housing policies in Latin America subsidizes the provision of affordable housing for low- and middle-income homeownership. A network of professionals from both the private and public sector is involved in this process. In the context of Bogota, Colombia and Quito, Ecuador, dwellings for homeownership are built in multifamily and collective arrangements of land and architecture. The property system involved in these urban housing solutions is the condominium regime. The problem is that affordable condominiums, particularly those subsidized by national housing policy, deteriorate over time. The common property elements of housing complexes or buildings suffer from a serious lack of maintenance. Why are low-income homeowners not taking care of their properties? How can we better understand the lack of maintenance in affordable condominiums? Tenure forms are among the most important institutions in housing policy and research. This comparative housing research looks at condominiums as a private common property resource and applies Ostrom's institutional framework (Ostrom, 1990, 2005) to understand both the formal and informal institutions involved in the management and governance of affordable condominiums.
In condominium housing, owning a home of one’s own implies a more complex configuration of rights and obligations than just the possession of a single unit. The institutions of condominium housing studied in this thesis make a significant contribution to theory and housing policy and positions Latin American social housing policy in a global perspective.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-076-1","","","","A+BE | Architecture and the Built Environment No 23 (2018)","","","","","OLD Housing Systems","","",""
"uuid:811ba745-18e1-4dca-8321-249ba000a142","http://resolver.tudelft.nl/uuid:811ba745-18e1-4dca-8321-249ba000a142","Automatic analysis of human social behavior in - the - wild using multimodal streams","Cabrera Quiros, L.C. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Hung, H.S. (copromotor); Delft University of Technology (degree granting institution)","2018","The automated analysis of human non-verbal behavior during crowded mingle scenarios is part of the newly emerged domain of Social Signal Processing (SSP). This specific line of research aims to develop computational methods to automatically understand social interactions in-the-wild, while facing the many challenges inherent with the noisy nature of mingle scenarios.
While most work on the analysis of social interactions focuses on structured and task-driven setups such as small group meetings, mingle scenarios consist of free-standing conversational groups that dynamically form, merge, and split according to the participants’ intentions and desires.
Data collected in structured scenarios is rather clean, whereas mingle scenarios have frequent and heavy subject cross-contamination as well as missing data due to the inherent crowded and dynamic nature of the events, with people mingling freely. The goal of this thesis is to leverage multiple modalities for the analysis of social interactions during crowded mingle scenarios, to overcome these challenges.
The approach taken in this thesis is to record mingling events with overhead cameras and wearable sensors measuring body acceleration and proximity, in order to be minimally intrusive and to scale easily to larger numbers of people. We focused on different tasks for the understanding of social interactions, such as automatic association of multiple modalities, detection of social hand gestures, personality estimation, and group enjoyment.
We show that the use of multiple modalities improves the performance of our classification tasks and the understanding of social interactions, compared to unimodal approaches.
This was particularly important when data in one of the modalities was noisy or completely missing.","","en","doctoral thesis","","978-94-6366-072-3","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:3a0da462-b912-4a60-9ff3-6f66b2cd0884","http://resolver.tudelft.nl/uuid:3a0da462-b912-4a60-9ff3-6f66b2cd0884","Modeling Electrode Materials: Bridging Nanoscale to Mesoscale","Vasileiadis, A. (TU Delft RST/Storage of Electrochemical Energy)","Brück, E.H. (promotor); Wagemaker, M. (promotor); Delft University of Technology (degree granting institution)","2018","Computational modeling is shaping the fundamental understanding of key thermodynamic and kinetic properties in batteries, the importance of which is undeniable for the implementation of next-generation batteries, mobile and large-scale applications (chapter 1). In the present thesis, we employ density functional theory (DFT) at the nanoscale and phase field modeling at the mesoscale (chapter 2) to study both state-of-the-art and novel battery chemistries...","","en","doctoral thesis","","978-94-93019-51-5","","","","","","","","","RST/Storage of Electrochemical Energy","","",""
"uuid:c7127c0f-0a4d-4857-a0c2-d99ac839a342","http://resolver.tudelft.nl/uuid:c7127c0f-0a4d-4857-a0c2-d99ac839a342","Dissecting the Nucleosome: Single-Molecule Studies of Subnucleosomal Structure and Dynamics","Ordu, O. (TU Delft BN/Nynke Dekker Lab)","Dekker, N.H. (promotor); Delft University of Technology (degree granting institution)","2018","The entire blueprint of all living things is encoded in their genomes, which consist of DNA strands. The genome of a complex organism like ourselves can be several meters long. One of the miracles of nature is that such DNA molecules can be stored in the micron-sized nucleus of eukaryotic cells. For this purpose, the relatively large genome of eukaryotes has to be tightly packed while still remaining accessible for vital cellular processes such as replication, transcription, and repair. This is achieved by the organization of the eukaryotic genome into a hierarchical nucleoprotein assembly termed chromatin. Its fundamental unit is the nucleosome, which comprises a short piece of DNA wrapped around a disk-shaped core of eight histone proteins in a left-handed superhelix. As such, nucleosomes constitute the first level of DNA compaction and are assigned a key role in the regulation of the genome to maintain the proper functioning and viability of eukaryotic cells. Hence, detailed knowledge of this fascinating complex is crucial for understanding fundamental processes of life. This thesis deals with investigations of the structure and dynamics of a nucleosomal substructure called tetrasome at the single-molecule level.","Single-Molecule Techniques; Magnetic Tweezers; Chromatin; Nucleosomes; Tetrasomes","en","doctoral thesis","","978-90-8593-362-5","","","","","","","","","BN/Nynke Dekker Lab","","",""
"uuid:d52fe8b2-8076-4a80-84ad-9030633a39fd","http://resolver.tudelft.nl/uuid:d52fe8b2-8076-4a80-84ad-9030633a39fd","Biogenic self-healing mortar: Material development and experimental evaluation","Tziviloglou, E. (TU Delft Materials and Environment)","Schlangen, E. (promotor); Jonkers, H.M. (promotor); Delft University of Technology (degree granting institution)","2018","In concrete structures, it is always a preferable idea to prevent the damage before it happens rather than to repair it afterwards, since it is usually less costly and in some cases the damage detection is impossible. Temperature and humidity fluctuations and/or external loading can trigger micro-cracking on a concrete structure, which in turn can open a pathway for harmful liquids and gasses. Those substances can degrade either the cement matrix or the embedded reinforcement and can cause an extended and irreversible damage. Prevention of damage or instant repair are not always achievable. Therefore, the idea to develop a cementitious material, which can sense the damage and repair it itself in order to mitigate the loss of durability, has gained ground in the last two decades.","","en","doctoral thesis","","978-94-6366-064-8","","","","","","","","","Materials and Environment","","",""
"uuid:f56a09c9-e381-49c8-8313-14c347f33ce7","http://resolver.tudelft.nl/uuid:f56a09c9-e381-49c8-8313-14c347f33ce7","Miniaturized generator – collector electrochemical sensors","Zafarani, H. (TU Delft OLD ChemE/Organic Materials and Interfaces)","Sudhölter, Ernst J. R. (promotor); Delft University of Technology (degree granting institution)","2018","Electrochemical sensing is considered one of the most powerful analytical detection techniques. Electrochemical methods have fast response times, high sensitivity and selectivity, and can be performed at low cost. Their inherent ease of miniaturization has made them popular in recent years. Hence, electrochemical sensors have diverse applications, including pathological, clinical, and environmental analyses. Miniaturization of analytical devices plays an important role in sensor development. Miniaturized electrochemical sensors open up opportunities toward faster, more sensitive, more user-friendly, and portable systems compared to traditional cumbersome bulky electrochemical cells. Thanks to recent advances in nano/microfabrication techniques, scaling down the electrode size to micro and even nano dimensions and developing “lab on a chip” technology is achievable and is considered a hot topic in electrochemistry. Traditional electrochemical cells are composed of three electrodes: a working electrode, a reference electrode, and a counter electrode. In this thesis, however, the main focus is on dual-electrode systems, where two closely spaced working electrodes are placed next to each other, so that the events at each electrode can be affected by the other. These two electrodes can be biased independently, and the current of each can be detected separately.
Biasing one of the electrodes at an oxidizing potential (chosen according to the desired redox-active analyte) and the other at a reducing potential results in repeated, successive oxidation and reduction of the analyte species on the two electrode surfaces. Accordingly, the current at each electrode is amplified, which leads to higher sensitivity. Reducing the gap size between the electrodes can further enhance the sensitivity and amplification factor (the ratio between the limiting current in dual-electrode mode and the current in single-electrode mode) of the device.","","en","doctoral thesis","","","","","","","","","","OLD ChemE/Organic Materials and Interfaces","","",""
"uuid:2d432d11-cce4-40de-b951-e89dfebbef27","http://resolver.tudelft.nl/uuid:2d432d11-cce4-40de-b951-e89dfebbef27","Drift-flux modeling of hyper-concentrated solid-liquid flows in dredging applications","Goeree, J.C. (TU Delft Offshore and Dredging Engineering)","van Rhee, C. (promotor); Keetels, G.H. (copromotor); Delft University of Technology (degree granting institution)","2018","In dredging and mining, transporting large amounts of sand is mostly done hydraulically. This method of sand transport is efficient and is used in land reclamation projects and the extraction of oil from tar sands. Large pieces of equipment, such as pumps, pipeline systems, and dredging vessels, are used to transport the sand-water mixtures hydraulically. Therefore, a good understanding of the hydrodynamic behavior of sand-water mixtures is essential in order to further improve these kinds of systems.
In this thesis a numerical model has been developed that describes the hydraulic behavior of sediment-fluid mixtures. In the model, the volume concentration of solids varies from 0.0 to 0.6. Moreover, the model is able to describe mixtures consisting of multiple sand particle sizes.","hydraulic transport; solids; numerical; dredging; drift-flux","en","doctoral thesis","","","","","","","","","","Offshore and Dredging Engineering","","",""
"uuid:a65910c1-d291-455b-841b-01820bd58d21","http://resolver.tudelft.nl/uuid:a65910c1-d291-455b-841b-01820bd58d21","Novel routes to polymer-based self-healing systems for cementitious materials","Lu, L. (TU Delft Materials and Environment)","Schlangen, E. (promotor); Han, N (promotor); Delft University of Technology (degree granting institution)","2018","","","en","doctoral thesis","","978-94-6366-078-5","","","","","","","","","Materials and Environment","","",""
"uuid:5c991eac-9fca-4b85-af77-0de4d1e4a8ac","http://resolver.tudelft.nl/uuid:5c991eac-9fca-4b85-af77-0de4d1e4a8ac","Scattering control of optical nano-antennas with designed excitations","Wei, L. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); Bhattacharya, N. (copromotor); Delft University of Technology (degree granting institution)","2018","","Nano-antenna; Mie theory; vector vortex beam; non-radiating current source; directional scattering","en","doctoral thesis","","978-94-028-1148-3","","","","","","","","","ImPhys/Optics","","",""
"uuid:214e1e9a-c53e-47c7-a12c-b1eb3ec8293b","http://resolver.tudelft.nl/uuid:214e1e9a-c53e-47c7-a12c-b1eb3ec8293b","Aerodynamic and Aeroacoustic Interaction Effects for Tip-Mounted Propellers: An Experimental Study","Sinnige, T. (TU Delft Flight Performance and Propulsion)","Veldhuis, L.L.M. (promotor); Eitelberg, G. (promotor); Delft University of Technology (degree granting institution)","2018","Propellers can enable a significant reduction in energy use of future aircraft by offering a higher propulsive efficiency than turbofan engines. This is especially relevant for a new generation of (hybrid-)electric aircraft. However, the integration of propellers with the airframe remains a challenge, and leads to performance and noise penalties. Yet, by optimally integrating the propellers with the airframe, these penalties can be minimized or even converted into significant performance benefits. A key example of a potentially beneficial integration approach is the tip-mounted propeller. This thesis provides an experimental analysis of the aerodynamic and aeroacoustic interactions and potential performance-enhancement strategies for such propellers. The unique experimental results highlight that tip-mounted propellers provide a significant efficiency benefit due to tip-vortex attenuation and swirl recovery. For the tractor-propeller configuration, this led to a measured 15% reduction in drag at typical cruise conditions when compared to a conventional propeller–wing configuration. For a vehicle with co-rotating propellers, i.e. propellers with equal rotation direction on both sides of the aircraft, the tip-vortex interaction would cause asymmetric aerodynamic loading. This was alleviated by installing swirl-recovery vanes, which reduce the swirl in the propeller slipstream before its interaction with the downstream aerodynamic surface. 
Besides the time-averaged effects, unfavorable unsteady loads occur on the downstream surface immersed in the propeller slipstream, possibly leading to structure-borne noise. These unsteady loads were shown to be dominated by the periodic impingement of the propeller-blade tip vortices, and were reduced by installing a flow-permeable leading edge. For pusher-propeller configurations, the inflow to the propeller is nonuniform due to the momentum deficit in the wake of the support pylon or wing positioned upstream of the propeller. The resulting wake encounter causes unsteady propeller-blade loads, which resulted in a noise penalty of up to 24 dB. The deficit in the wake was reduced by using a blowing system, installed in the trailing edge of a pylon model. Measurements showed that this alleviated the effects due to the wake encounter, resulting in noise levels comparable to those emitted by the isolated propeller. The results presented in this thesis emphasize the sensitivity of the aerodynamic and aeroacoustic performance of installed tip-mounted propeller propulsion systems to interactions between the propeller and the airframe. It is shown that significant integration benefits can be obtained by exploiting the beneficial interactions, while both active and passive control techniques are available to mitigate the adverse interactions. The knowledge gained from the research study discussed in this thesis can be used to advantage in the design of future highly efficient aircraft.
The first experiment was a demonstration of qubit control by selective broadcasting, aimed at reducing the scaling of expensive electronics with the number of qubits for individual single-qubit control. We demonstrated that we can bring two transmon qubits to the same frequency (combining fabrication accuracy and in-situ fine-tuning) and use the same hardware to control both, routing the pulses with a nanosecond-timescale vector switch matrix. Despite the compromises required by this technique, we show a scalable path to single-qubit control beyond the threshold required for quantum error correction. In benchmarking, we take into account gate leakage due to the fact that transmons are fundamentally multi-level systems.
In the second experiment we establish entanglement between two transmon qubits on different chips. We use an entanglement by measurement scheme and demonstrate that we can overcome minor fabrication imperfections by shaping our measurement pulses. Ultimately, performance is mainly limited by photon loss between the chips and up to the amplification chain. This entanglement mediated by traveling photons could be used to make a distributed transmon processor where computations are spread across several chip modules. This modularity could enable connectivities that cannot be realized on chip and ease fabrication requirements, as modules could be individually fabricated and selected.
Thus, both of these experiments fit into the larger effort to converge on the hardware, control equipment and architecture of a future large-scale transmon quantum computer. Other experiments I contributed to are summarized in the conclusion chapter to show the diverse physics that can be studied in cQED experiments.
The main contribution of this dissertation is the design of a technique to decompose a large-scale scenario program into small-scale distributed scenario programs for each agent. Building on existing results in the literature, we provide novel guarantees to quantify the robustness of the resulting solutions in a distributed framework. In this setting, each agent needs to exchange some information with its neighboring agents, which is necessary due to the statistical learning features of the proposed setup. However, this inter-agent communication scheme might raise concerns about the agents' private information. We therefore present a novel privatized distributed framework, based on the so-called differential privacy concept, such that each agent can share the requested information while preserving its privacy. In addition, a soft communication scheme based on a set parameterization technique, along with the notion of a probabilistically reliable set, is introduced to reduce the required communication burden. This reliability measure is incorporated into the feasibility guarantees of agent decisions in a probabilistic sense. The theoretical guarantees of the proposed distributed scenario-based decision-making framework coincide with those of the centralized counterpart; however, the scaling of the results with the number of agents remains an issue.
reinhardtii subject to external periodic flows: A model system for synchronization in biology","Quaranta, G. (TU Delft Fluid Mechanics)","Westerweel, J. (promotor); Tam, D.S.W. (copromotor); Delft University of Technology (degree granting institution)","2018","Synchronization of oscillators is a ubiquitous phenomenon that involves mechanical systems, like pendulum clocks, but also biological systems, like pacemaker cells in the heart or neural activity in the brain. If we consider biological systems at the microscale, namely at the scale of cells, we find that processes like locomotion and fluid transport often exploit the synchronization of mechanical oscillators called flagella or cilia. These oscillators are whip-like structures extending from the cell body. They are present in a number of micro-organisms like sperm cells, Paramecium, or the alga C. reinhardtii. In humans, cilia are found in the lungs, the respiratory tract, and the middle ear. Cilia are activated in a coordinated way to effectively carry out their function, such as draining mucus. The mechanism behind this ciliary coordination is still debated. It is not clear how very simple organisms lacking any feedback system have developed complex oscillatory patterns involving coordination among a multitude of cilia or flagella. There is great interest in understanding the fundamental principles governing ciliary dynamics, since these would impact medical and engineering applications. The purpose of this thesis is to investigate the mechanisms regulating the synchronization of cilia and flagella.","Chlamydomonas; synchronization; flagella; hydrodynamic forces","en","doctoral thesis","","978-94-6186-943-2","","","","","","","","","Fluid Mechanics","","",""
"uuid:bcc08af2-9849-4f7d-9dcb-e0ace169c510","http://resolver.tudelft.nl/uuid:bcc08af2-9849-4f7d-9dcb-e0ace169c510","Young People’s Housing Opportunity in Post-reform China","Deng, W. (TU Delft OLD Housing Systems)","Elsinga, M.G. (promotor); Hoekstra, J.S.C.M. (promotor); Delft University of Technology (degree granting institution)","2018","The inquiry that has culminated in this thesis was inspired by the challenges that many young Chinese people were facing when trying to gain access to affordable housing at the time of study, the early 2010s. By then, more than thirty years of housing reforms had completely changed how housing was being provided in China. The resulting structure had led young people to access housing in ways that were very different from those of their parents’ generation (Deng, Hoekstra & Elsinga, 2017). These observations prompted the following research question: What are the key factors defining young people’s opportunity to access housing, and how do these factors relate to China’s institutional changes during and after the market reform? The ensuing research has demonstrated that parental resources and intergenerational reciprocity are indispensable to the housing opportunity of young people, as home ownership has come to mediate the exchange of resources between...","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-066-2","","","","A+BE | Architecture and the Built Environment No 22 (2018)","","","","","OLD Housing Systems","","",""
"uuid:55bf79cf-2921-4976-9668-c07f30db8d07","http://resolver.tudelft.nl/uuid:55bf79cf-2921-4976-9668-c07f30db8d07","Trajectories of neighborhood change","Zwiers, M.D. (TU Delft OLD Urban Renewal and Housing)","van Ham, M. (promotor); Kleinhans, R.J. (promotor); Manley, D.J. (copromotor); Delft University of Technology (degree granting institution)","2018","Neighborhoods represent a scale at which inequalities are reflected in the unequal spatial distribution of ethnic and income groups across urban space. However, neighborhoods are not static entities and spatial patterns of socioeconomic and ethnic inequality shift over time as a result of processes of neighborhood change. This dissertation has adopted a longitudinal approach to analyze patterns of neighborhood change on a relatively low spatial scale. This dissertation illustrates that neighborhoods remain relatively stable over time in their socioeconomic and ethnic status and that change takes several decades to take effect. This dissertation finds that neighborhoods exhibit a strong degree of path-dependency and demonstrates how the housing stock influences neighborhood trajectories. In addition, it shows how large-scale changes to the housing stock in the context of urban restructuring affect residential mobility and neighborhood upgrading. This dissertation also reveals the ways in which different population dynamics interact to inhibit or generate neighborhood change to reproduce socio-spatial inequalities. Moreover, the innovative methods that are explored in this dissertation contribute to broadening the scope of statistical methods for the longitudinal analysis of neighborhood change.","neighborhood change; segregation; selective mobility; urban restructuring; ethnicity","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-068-6","","","","A+BE | Architecture and the Built Environment No 21 (2018)","","","","","OLD Urban Renewal and Housing","","",""
"uuid:f8cb0602-d19d-48ac-bc5a-46e3c48ea4c1","http://resolver.tudelft.nl/uuid:f8cb0602-d19d-48ac-bc5a-46e3c48ea4c1","Information engineering for supporting situation awareness of nautical traffic management operators","van Doorn, E.C. (TU Delft Cyber-Physical Systems)","Horvath, I. (promotor); Rusak, Z. (copromotor); Delft University of Technology (degree granting institution)","2018","The main challenge for operators working with complex information systems is to gain sufficient situation awareness (SA). In this research project, we developed an analysis scheme to support a holistic study of SA. This analysis scheme was complemented with a method to process cognitive task analysis and observational research data to identify deficiencies of systems in supporting SA. This approach was applied in nautical traffic management practice. In total, we identified 30 deficiencies, of which 23 were related to how the system interfaces support human information processing. The commonly applied user-centered design method was insufficient to overcome the identified deficiencies. User-centered design was therefore complemented with information engineering (IE) methods for analyzing relationships between information elements and for specifying UI design. Application of different IE methods resulted in the development of three UI concepts: a coherent, an integrated, and a context-dependent adaptable UI. The generated UI concepts were tested using a nautical traffic management workplace simulator. Usability testing showed that the proposed IE approach had a positive effect on effectiveness, efficiency, and user satisfaction. Evaluation of the impact of the UI prototypes showed that the application of a graph-theory-based IE approach had positive effects on operators’ speed of gaining SA, their speed of communication with priority stakeholders, and the likelihood that operators executed the necessary actions in the required order.
Application of semantic networks resulted in a UI that better supported answering skippers' questions about future states of the traffic management environment.","situation awareness; traffic management; Information engineering; User Interfaces and Human Computer Interaction; Research Through Design","en","doctoral thesis","","978-94-6186-929-6","","","","","","","","","Cyber-Physical Systems","","",""
"uuid:04ccfed8-ab44-43c6-8223-3da5e5bb7c4a","http://resolver.tudelft.nl/uuid:04ccfed8-ab44-43c6-8223-3da5e5bb7c4a","Maintenance optimization for railway infrastructure networks","Su, Z. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Baldi, S. (copromotor); Delft University of Technology (degree granting institution)","2018","Maintenance is crucial for the proper functioning and lifetime extension of a railway infrastructure network, which is composed of various infrastructures with different functions. In this thesis we develop robust and tractable model-based approaches for the maintenance optimization of railway infrastructure networks. In addition, we develop a compact formulation for a variant of the multiple Traveling Salesman Problem (TSP) that can be applied to the optimal scheduling of maintenance crews for a railway network, and a systematic numerical solution method for reverse Stackelberg games with incomplete information, which can be viewed as a framework for optimal maintenance contract design.","maintenance optimization; railway infrastructures; travelling salesman problem; model predictive control; reverse Stackelberg games","en","doctoral thesis","","978-90-5584-238-4","","","","","","","","","Team Bart De Schutter","","",""
"uuid:c9b50bd1-2db6-4003-827b-883f11c18742","http://resolver.tudelft.nl/uuid:c9b50bd1-2db6-4003-827b-883f11c18742","Responsive organocatalysis in soft materials","Trausel, F. (TU Delft ChemE/Advanced Soft Matter)","van Esch, J.H. (promotor); Eelkema, R. (promotor); Delft University of Technology (degree granting institution)","2018","Cells react to the environment by changing the activity of enzymes. Catalysts, such as enzymes, speed up reaction rates by lowering the activation energy of the reaction. Changing reaction rates by altering enzyme activity is used to temporarily increase the production of, for instance, a hormone or to change the mechanical properties of a cell. Control over enzyme activity is achieved in two different ways: by covalent modifications (e.g. phosphorylation) and by non-covalent interactions (allosteric enzymes). In this thesis we describe how we designed signal-responsive catalysts and used them to introduce signal response in artificial materials. Inspired by nature we developed a covalent and a noncovalent method to design catalysts that can react to signals from their environment. To design covalently protected catalysts we used self-immolative chemistry. A self-immolative molecule contains a signal-labile functional group. When this group reacts with the signal, the molecule fragments and releases a molecule of interest, in our case a catalyst.","","en","doctoral thesis","","978-94-6186-947-0","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:d8f88824-40cc-4358-b7a0-a2d932eb65f5","http://resolver.tudelft.nl/uuid:d8f88824-40cc-4358-b7a0-a2d932eb65f5","Physical and Computational Approaches to Aberration Correction In Fluorescence Microscopy","Wilding, D. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); Van de Plas, Raf (copromotor); Delft University of Technology (degree granting institution)","2018","The goal of this thesis, called Physical and Computational Approaches to Aberration Correction In Fluorescence Microscopy, concerns itself with the development of new techniques to control adaptive fluorescence microscopes, so that they can adapt and image with increased resolution, contrast and speed inside complex three-dimensional biological samples.","adaptive optics; microscopy; fluorescence; deconvolution; optics; aberrations","en","doctoral thesis","","978-94-6233-996-5","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:7ce26659-91fa-45a0-bfdf-7223375fed69","http://resolver.tudelft.nl/uuid:7ce26659-91fa-45a0-bfdf-7223375fed69","Ultrasound Matrix Transducers for High Frame Rate 3D Medical Imaging","Shabanimotlagh, M. (TU Delft ImPhys/Acoustical Wavefield Imaging)","de Jong, N. (promotor); Verweij, M.D. (promotor); Delft University of Technology (degree granting institution)","2018","","","en","doctoral thesis","","978-94-6375-077-6","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:d7132920-346e-47c6-b754-00dc5672b437","http://resolver.tudelft.nl/uuid:d7132920-346e-47c6-b754-00dc5672b437","The Elements of Deformation Analysis: Blending Geodetic Observations and Deformation Hypotheses","Velsink, H. (TU Delft Mathematical Geodesy and Positioning)","Hanssen, R.F. (promotor); Niemeier, Wolfgang (promotor); Versendaal, Johan (promotor); Delft University of Technology (degree granting institution)","2018","The subject of this study is deformation analysis of the earth's surface (or part of it) and spatial objects on, above or below it. Such analyses are needed in many domains of society. Geodetic deformation analysis uses various types of geodetic measurements to substantiate statements about changes in geometric positions.
Professional practice, e.g. in the Netherlands, regularly applies methods for geodetic deformation analysis that have shortcomings, e.g. because the methods apply substandard analysis models or defective testing methods. These shortcomings hamper communication about the results of deformation analyses with the various parties involved. To improve communication, solid analysis models and a common language have to be used, which requires standardisation.
Operational demands for geodetic deformation analysis motivate the formulation in this study of seven characteristic elements that a solid analysis model needs to possess. Such a model can handle time series of several epochs. It analyses only size and form, not position and orientation of the reference system; and datum points may be under the influence of deformation. The geodetic and physical models are combined in one adjustment model. Full use is made of available stochastic information. Statistical testing and computation of minimal detectable deformations is incorporated. Solution methods can handle rank deficient matrices (both model matrix and cofactor matrix). And, finally, a search for the best hypothesis/model is implemented. Because a geodetic deformation analysis model with all seven elements does not exist, this study develops such a model.
For effective standardisation geodetic deformation analysis models need: practical key performance indicators; a clear procedure for using the model; and the possibility to graphically visualise the estimated deformations.
This study shows that key performance indicators can be derived from the method of hypothesis formulation and testing, and from rejection criteria. They can also stem from the description of the test quality by means of minimal detectable deformations. A clear procedure is possible if an unambiguous way is provided to distinguish the observation noise, the deformation signal with zero mean in time, and the deformation trend from each other. The graphical visualisation, finally, demands clearly defined quantities that are sensitive only to the deformations of the object at hand and not to changes in, e.g., the reference system.
In this study I propose a geodetic deformation analysis model, which is built around a least-squares adjustment model. Two adjustment models are developed in this study: one model uses geodetic measurements in the observation vector. In the other model this vector holds pre-computed coordinates, which follow from separate adjustments per epoch.
The parameter vector holds, for both models, the final coordinates. Both models yield the same adjustment results. The choice of which one to use depends on the professional context in which the model is used.
The developed geodetic deformation analysis model is shown to be effective in several use cases. These use cases are geodetic networks in 1D, 2D and 3D that have been measured in several epochs, and which are analysed with one of the two adjustment models, mentioned above.
Moreover, the proposed analysis model not only possesses the seven necessary elements, mentioned before, it also has some additional advantageous characteristics. First, it is possible to define the S-basis of the geodetic network, used for deformation analysis, with points that are under the influence of deformation. Secondly, there is no need for a separate analysis of reference and object points; they are analysed simultaneously. Thirdly, the deformation estimates of moving points are relative to all the other points of the same network (moving or not), not relative to an S-basis. These estimates are invariant for a change of S-basis, i.e. for an S-transformation. Finally, biases in geodetic measurements and deformation hypotheses can be tested simultaneously.
The availability of key performance indicators, based on the analysis model and its characteristic elements as described in this study, and the definition of a statistically significant deformation, provided in this study, make a standardised procedure for geodetic deformation analysis possible. Thus a tool is available for the improvement of communication about geodetic deformation analysis.
From a numerical point of view, a reservoir simulator’s operation entails the solution of a series of linear systems, as dictated by the spatial and temporal discretization of the governing equations. The difficulty lies in the properties of these systems, which are large, ill-conditioned and often have an irregular sparsity pattern. Therefore, a brute-force approach, where the solutions are directly computed at the original fine-scale resolution, is often an impractically expensive venture, despite recent advances in parallel computing hardware. On the other hand, switching to a coarser resolution to obtain faster results runs the risk of omitting important features of the flow, which is especially true in the case of fractured porous media.
This thesis describes an algebraic multiscale approach for fractured reservoir simulation. Its purpose is to offer a middle-ground, by delivering results at the
original resolution, while solving the equations on the coarse scale. This is made possible by the so-called basis functions – a set of locally-supported cross-scale interpolators, conforming to the heterogeneities in the domain. The novelty of the work lies in the extension of these methods to capture the effect of fractures. Importantly, this is done in a fully algebraic fashion, i.e. without making any assumptions regarding geometry or conductivity properties.
In order to demonstrate the generality of the proposed approach, a series of sensitivity studies are conducted on a proof-of-concept implementation. The results, which include both CPU times and convergence behaviour, are discussed and compared to those obtained using an industrial-grade AMG package. They serve as benchmarks, recommending the inclusion of multiscale methods in next-generation commercial reservoir simulators.","algebraic multiscale methods; naturally fractured porous media; conductivity contrasts; compressible flow; multiphase transport","en","doctoral thesis","","978-94-6186-956-2","","","","","","","","","Reservoir Engineering","","",""
"uuid:0c8cfe5b-214b-4fb6-873b-37c58f6186cd","http://resolver.tudelft.nl/uuid:0c8cfe5b-214b-4fb6-873b-37c58f6186cd","Changing Values on Water in Delta Cities","Tai, Y. (TU Delft OLD Urban Compositions)","Meyer, Han (promotor); Qu, L. (copromotor); Delft University of Technology (degree granting institution)","2018","Delta cities worldwide are confronted with great challenges concerning flood risks, environmental pressures and other water-related urban issues. The complexity in both physical and social dimensions lies in diverse (and in many cases conflicting) values held by a wide variety of actors in spatial development. These values are shaped by the long-term impacts of natural forces, political powers, development ideologies, economic models, social structures, and local cultures. Defining the central role of “water” in structuring delta cities, this research applies the value concept as a particular lens to study how water is valued in each society through history. It argues that the recognition of diverse water values can help bridge the interplay between physical and societal systems within the delta, which can play a central role in developing urban planning and design strategies towards sustainable and liveable urban water environments.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-071-6","","","","A+BE | Architecture and the Built Environment No 20 (2018)","","","","","OLD Urban Compositions","","",""
"uuid:f63d6020-55ff-4953-8974-247eca7cf4e0","http://resolver.tudelft.nl/uuid:f63d6020-55ff-4953-8974-247eca7cf4e0","Developing an impact-based combined drought index for monitoring crop yield anomalies in the Upper Blue Nile Basin, Ethiopia","Bayissa, Y.A. (TU Delft Water Resources)","Solomatine, D.P. (promotor); Andel, Schalk Jan Van (copromotor); Delft University of Technology (degree granting institution)","2018","Drought is a silent and pervasive disaster that impacts a large area and propagates slowly. Unlike for other natural disasters such as floods, tornados etc., impacts of droughts do not manifest immediately. This makes it more difficult to monitor drought and mitigate adverse effects by early warning. Several drought indices exist to monitor drought. Individually, however, they are unable to provide an integral concise information to characterize and indicate the occurrence of meteorological, agricultural and hydrological droughts. A combined drought index (CDI) using several meteorological, agricultural and hydrological drought indices can indicate the occurrence of all drought types, and can provide information that facilitates the drought management decision-making process. Moreover, development of a CDI can be an impact-based, e.g. by optimizing for monitoring drought-related crop yield reduction. The economic growth in many developing countries relies on the agricultural products, hence developing crop yield monitoring and prediction methods is vital to enhance the economic growth.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-367-02451-2","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Water Resources","","",""
"uuid:cfcf1f65-4190-46cb-8688-54389a682c57","http://resolver.tudelft.nl/uuid:cfcf1f65-4190-46cb-8688-54389a682c57","The low-pressure micro-resistojet: Modelling and optimization for future nano- and pico-satellites","Cordeiro Guerrieri, D. (TU Delft Space Systems Egineering)","Gill, E.K.A. (promotor); Cervone, A. (copromotor); Delft University of Technology (degree granting institution)","2018","The aerospace industry is recently experiencing growing interest in very small spacecraft like nano- and pico-satellites. However, these very small satellites are still being developed in most cases without a dedicated propulsion system limiting their capabilities. The micro-resistojet has been recognized as a suitable propulsion system for these classes of satellites due to its scalability and performance. Additionally, it can be classified as a ""green"" propulsion system since it can use naturally any kind of propellant, including ""green"" propellants. The Low-Pressure Micro-Resistojet (LPM) is a type of micro-resistojet concept that works under very low pressure. This PhD thesis is focussed on the development of this propulsion system concept with the goal to enable very small satellites to perform manoeuvres. This improvement allows, for instance, to increase the spacecraft lifetime by active orbit keeping. Furthermore it can enable orbit change manoeuvres and formation flight.","LPM; Micro-Resistojet; Micro-Thruster; Micro-Propulsion System; ""Green"" Propellant; Water propellant","en","doctoral thesis","","978-94-028-1157-5","","","","","","","","","Space Systems Egineering","","",""
"uuid:80c62d6f-0ae0-4e96-9554-841ddcd506c0","http://resolver.tudelft.nl/uuid:80c62d6f-0ae0-4e96-9554-841ddcd506c0","Global Mapping of Atmospheric Composition from Space: Retrieving Aerosol Height and Tropospheric NO2 from OMI","Chimot, J.J. (TU Delft Atmospheric Remote Sensing)","Levelt, Pieternel Felicitas (promotor); Veefkind, j. Pepijn (copromotor); Delft University of Technology (degree granting institution)","2018","The main objective of this thesis is to design a new aerosol layer height retrieval in order to improve the operational NO2 retrieval, both in the troposphere, from space-borne instruments for highly polluted events and under cloud-free conditions. This thesis focuses on the exploitation of the OMI satellite measurements acquired in the visible wavelength range (405-490 nm). In addition, we develop numerical methods and tools (e.g. machine learning) in order to support the operational processing of big data amounts from the forthcoming new-generation satellite instruments for air quality and climate research.","trace gas; aerosol; cloud; air quality; climate; satellite remote sensing; atmospheric retrieval; spectral signature; radiation scattering; absorption","en","doctoral thesis","","978-94-6366-010-5","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:8f1cea0c-d12c-4c97-adc2-f48c34c94a25","http://resolver.tudelft.nl/uuid:8f1cea0c-d12c-4c97-adc2-f48c34c94a25","Framework for Military Aircraft Fleet Retirement Decisions","Newcamp, Jeffrey (TU Delft Air Transport & Operations)","Curran, R. (promotor); Verhagen, W.J.C. (promotor); Delft University of Technology (degree granting institution)","2018","The purpose of this work is as follows. Military aircraft are enormous investments for a nation. The systems lifecycle for aircraft spans decades wherein aging effects increase maintenance and operations costs over time. At some point, the deterioration of a fleet of aircraft erodes the capability of those assets below an acceptable threshold, thus triggering retirement planning by a military. Questions arise about how to retire a fleet, including how many aircraft should be retired, when those aircraft should be retired and which aircraft should be chosen. There are few military aircraft fleets that are retired each year, and even fewer managers who understand the aircraft retirement puzzle. This work addresses these questions. The purpose was to provide fleet managers with a comprehensive framework to guide decision-making, as well as to build tools and a standard guidance framework for fleet managers to implement.
In terms of methodology, in the absence of directly applicable existing research in this field, fleet management concepts and modelling approaches were studied in related fields and then applied to the military fleet retirement problem. The vital first approach to the problem required the baselining of military aircraft fleets given structural loading data and utilization histories. Database analysis and trending algorithms were written to draw correlations between existing data and structural fatigue effects. This work then implemented a greedy algorithm model to solve the individual aircraft retirement scheme. That led to a mixed-integer linear programming approach to optimize a fleet utilization and rotation model. Combined, these methods provided concrete steps for the fleet retirement decision framework, which followed established methods for designing a decision support framework. Throughout the work, a consistent case study fleet (United States Air Force’s A-10 Thunderbolt II) was utilized to provide validation of the methods, while secondary case studies and validation techniques were employed to test applicability of the methods to other military aircraft fleets and other capital asset types.
In terms of concrete research results from the work carried out, this dissertation discovered that a framework for military aircraft fleet retirement decisions was a needed contribution to the field. In the process of building that framework, other valuable results were obtained. It was found that aircraft utilization information could be correlated to cyclic loading data on an individual aircraft level. This revealed patterns in aircraft fleets showing which mission types and basing locations either increased or decreased structural degradation. Using that information led to the result that a fleet manager could determine which aircraft to retire prior to others while optimizing an objective function related to fleet cost, fleet utility or the ratio thereof. It was also found that a fleet manager could selectively utilize individual aircraft at particular bases flying particular missions to prolong or hasten the structural degradation of those aircraft. This led to the result that a fleet manager could therefore forecast retirement dates for an entire fleet, subpopulations within that fleet or individual assets.
From the research carried out, it is concluded that a fleet manager beginning with only aircraft usage data can actively manage a fleet of aircraft to extract residual value from the fleet prior to retirement. This work showed that resource allocation could be improved by utilizing a mixed integer linear program to schedule asset retirements. Further, this work illustrated how a management strategy could impact future usage levels in a way that extends useful lifetime. With a capital asset as critical to national defense and as expensive to acquire, operate and retire as military aircraft, focusing on the end-of-life phase of the systems lifecycle not only promotes forward thinking but also provides potential cost savings. This work’s limitations included its focus on military aircraft instead of all capital assets and that the methods were not implemented in an actual fleet environment. This dissertation demonstrated that a flexible framework with core modelling elements is a tool capable of solving the problem of aircraft fleet retirement decisions. Fleet managers both military and otherwise should investigate the applicability of the methods and findings in this dissertation to their own challenges. Future research must include application of the methods to an actual operating fleet. Also, the methods should be applied to other capital asset classes including military equipment and commercial equipment.
In this thesis research, methods for the auralization of environmental acoustical sceneries are established. The sceneries are represented by a virtual environment containing virtual sound sources that are arranged in space and time, and within which sound waves propagate to a virtual observer. To that end, sophisticated calculation models for the synthesis and reproduction of road traffic, railway and wind turbine noise are developed. This requires investigating the relevance of the involved acoustical phenomena for perceived realism. On that basis, calculation models (i.e. synthesizer structures) are proposed that adequately reproduce source characteristics, sound propagation effects, and spatial impression. The models are accompanied by methods to derive the necessary input parameters from dedicated measurements and data analysis.
The presented calculation models are parametric and thus allow for great versatility with respect to scenarios and sound reproduction. Because each of the three considered environmental noise sources features specific acoustical peculiarities, source-specific models are proposed. These source-specific models have in common that the sound radiated by a source is artificially generated using digital sound synthesis. For wind turbine and road traffic noise, a combination of additive and subtractive synthesis, denoted as spectral modeling synthesis, is applied. A unique feature of the wind turbine synthesizer is its ability to reproduce and control different types of characteristic amplitude modulation. The synthesizer for road vehicles separately produces tire noise and propulsion noise. The generated propulsion sounds depend on the engine type, the instantaneous engine condition (engine speed and load), and the emission angle. An additional special feature of the propulsion sound synthesis is that, besides amplitude and frequency, the phase of the engine harmonics has to be, and is, considered.
For railway rolling and impact noise, in contrast, a physically-based synthesis approach has been developed that describes the mechanical excitation and the vibration of the dynamic wheel/rail system. The corresponding model considers the microstructure of the wheels and rails, as well as structural resonances of the wheel/rail system to elicit the typical metallic sound character of railway noise. In all models, sound propagation effects, such as geometrical divergence, Doppler effect, atmospheric absorption, ground effect and amplitude fluctuations due to atmospheric turbulence, from a virtual point source to a virtual observer location are simulated by processing the synthetic source signals with time-variant filters in the time domain.
Auralizations created with the presented models feature a high audio quality and are judged as plausible and realistic by expert listeners. To achieve this realism in the auralizations, it was found that variation with respect to time, frequency, space, and orientation is crucial.
The presented models extend today's body of auralization models and allow for new applications.","Acoustics; Environmental pollution; Simulation; Noise; Auralization","en","doctoral thesis","","","","","","","","","","","Aircraft Noise and Climate Effects","","",""
"uuid:06aaa3f6-cf9d-4c5e-8502-3950810dadc5","http://resolver.tudelft.nl/uuid:06aaa3f6-cf9d-4c5e-8502-3950810dadc5","Sustainable High-rises: Design Strategies for Energy-efficient and Comfortable Tall Office Buildings in Various Climates","Raji, B. (TU Delft Climate Design and Sustainability)","van den Dobbelsteen, A.A.J.F. (promotor); Tenpierik, M.J. (copromotor); Delft University of Technology (degree granting institution)","2018","With the aim to limit the number of ineffective designs, this dissertation has investigated the impact of architectural design strategies on improving the energy performance of and thermal comfort in high-rise office buildings in temperate, sub-tropical and tropical climates. As the starting-point of this research, a comparative study between twelve high-rise office buildings in three climate groups was conducted. For each climate group, three sustainable high-rises were selected and one typical high-rise design as a reference. The effectiveness of architectural design strategies was compared between the two categories of buildings (high-performance versus low-performance) concerning their potential impact on heating, cooling, lighting and ventilation loads. Certain architectural design strategies were found to be major determinants of energy performance in high-rise buildings. These can be classified under the categories of geometric factors, envelope strategies, natural ventilation strategies, and greenery systems. To quantify the extent to which these architectural design strategies affect energy use and thermal comfort of tall office buildings, simulation studies were carried out.
To quantify the impact of geometric factors on the energy efficiency of high-rise office buildings, performance-based simulations were carried out for 12 plan shapes, 7 plan depths, 4 building orientations and discrete values for the window-to-wall ratio (WWR). The results of the total annual energy consumption (and different energy end-uses) were used to define the most and least efficient solutions. The optimal design solution is the one that minimises, on an annual basis, the sum of the energy use for heating, cooling, electric lighting and fans. The percentage difference in total energy use between the most and least efficient design options showed the extent to which geometric factors can affect the energy use of the building. It was found that geometric factors could influence the energy use by up to 32%. Furthermore, the recommended design options were classified according to their degree of energy performance for each of the climates.
The second group of strategies is related to the envelope design. To quantify their degree of influence, an existing tall office building was selected as a typical high-rise design for each of the climates and the energy use prior to and after refurbishment was compared through computer simulations with DesignBuilder. The 21-storey EWI building in Delft, the Netherlands, was selected as the representative for the temperate climate and the 65-storey KOMTAR tower in George Town, Malaysia, for the tropical climate. As part of a sensitivity analysis, energy performance simulations identified the façade parameters with the highest impact on building energy consumption. A large number of computer simulations were run to evaluate the energy-saving potential of various envelope measures, as well as their combinations. The results showed which set of envelope measures suits each climate type best. Furthermore, it was found that the right combination of envelope strategies could reduce the total energy use of a conventional tall office building by around 42% in temperate climates and around 36% in tropical climates.
One other important difference between conventional and sustainable tall buildings is related to the application of natural ventilation. In this regard, the potential use of different natural ventilation strategies to reduce the energy demand for cooling and mechanical ventilation in high-rise buildings was investigated by using the same validated base models. The results showed that for a naturally ventilated tall office building in the temperate climate, a supplementary air-conditioning system might be needed to provide thermal comfort during summer for on average only 4% of the occupancy hours. For the tropical climate, the average percentage of discomfort hours (when air-conditioning is required to keep the indoor air temperature within the comfort limits) was around 16% of the occupancy hours during one year. In both climates, natural ventilation strategies could meet the minimum fresh air requirements needed for an office space for almost the entire period of occupancy hours: 96% in temperate climates and 98% in tropical climates.
The last important strategy that is becoming an integrated part of sustainable tall buildings is the use of greenery systems. The effects of greenery systems on the energy efficiency, thermal comfort and indoor air quality of buildings were investigated by conducting a thorough literature review on five greenery concepts, including the green roof (GR), green wall (GW), green balcony (GB), sky garden (SG) and indoor sky garden (ISG). It was found that greenery systems have a limited impact on reducing the energy use of high-performance buildings. The maximum efficiency of greenery systems was reported during summer, for places with higher solar radiation, and when integrated into buildings that have no solar control systems. However, other large-scale benefits for the urban environment (mitigation of CO2 concentration) and building residents (increased productivity and higher well-being) could justify the application of greenery systems as an essential sustainability feature for the design of tall office buildings.
To sum up, architectural design is a determining contributor to the performance of buildings and the comfort of occupants. The findings of this research were used to point out climate-specific design strategies for tall office buildings in temperate and tropical climates. At the end of the dissertation, a proposed model of an energy-efficient and comfortable high-rise office building for each of the investigated climates is illustrated. It is expected that the discussions and recommendations provided in this dissertation could form an acceptable starting point for improvements to tall building design and could be of assistance in making energy-wise decisions during the design process.
In the first part of this thesis, we have extended the state-of-the-art results using extended notions of dwell time and of average dwell time: mode-dependent dwell time and mode-dependent average dwell time, respectively. This gives rise to less conservative switching signals. To address the cases in which the next subsystem to be switched on is known, we propose a new time-dependent switching scheme: mode-mode-dependent dwell time, which not only exploits the information of the current subsystem, but also of the next subsystem. Subsequently, an adaptive law for uncertain switched linear systems has been introduced, which fills the theoretical gaps between adaptive control of non-switched linear systems and of switched linear systems. The proposed adaptive law and switching law based on dwell time guarantee asymptotic convergence of the tracking error to zero and, with a persistently exciting reference input, asymptotic convergence of parameter estimates to nominal parameters. To conclude the first part of this thesis, the adaptive law for switched linear systems has been modified using the ideas of parameter projection and the leakage method, depending on the available a priori information: when the bounds of uncertain parameters are known, parameter projection is adopted; otherwise, the leakage method is used. The resulting adaptive closed-loop system is shown to be globally uniformly ultimately bounded in the presence of external disturbances.
In the second part of this thesis, adaptive and robust stabilization of switched linear systems have been investigated. Based on the stability conditions, adaptive stabilization of uncertain asynchronously switched systems is studied. Furthermore, in the presence of discontinuous time-varying delays, neither Krasovskii nor Razumikhin techniques can be successfully applied to adaptive stabilization of uncertain switched time-delay systems. A new adaptive control scheme for switched time-delay systems is developed that can handle impulsive behavior in the states and time-varying delays with discontinuities. At the core of the proposed scheme is a Lyapunov function with a dynamically time-varying coefficient, which allows the Lyapunov function to be non-increasing at the switching instants. The control scheme substantially enlarges the class of uncertain switched systems for which the adaptive stabilization problem can be solved. Furthermore, in the presence of switching delays between a mode change and the activation of its corresponding controller, enhanced stability criteria are investigated, whose novelty consists in continuity of the Lyapunov function at the switching instants and discontinuity when the system modes and controller modes are matched. The proposed Lyapunov function can be used to guarantee a finite non-weighted L2 gain for asynchronously switched systems, for which methods proposed in the literature are inconclusive.","switched linear systems; parametric uncertainties; adaptive control; robust control; time delays","en","doctoral thesis","","978-94-6186-937-1","","","","","","","","","Team Bart De Schutter","","",""
"uuid:7f301bbd-c8a1-4614-8bb0-2c17f9e39dd6","http://resolver.tudelft.nl/uuid:7f301bbd-c8a1-4614-8bb0-2c17f9e39dd6","Acoustically effective facade","Krimm, J. (TU Delft Design of Construction)","Knaack, U. (promotor); Techen, Holger (promotor); Klein, T. (promotor); Delft University of Technology (degree granting institution)","2018","Today’s city centres in European metropolitan areas are composed of facades made of steel, glass and stone. These hard, reflective facades amplify the perception of noise sources by human ears in their vicinity. Up to now, this effect has been neglected in building design. Thus the number of people harmed by noise is increasing, as street noise levels rise with the growing number of hard, reflective facades. To gain control over urban acoustic spaces, the focus of architects and engineers must be shifted to acoustic parameters. Several case studies in the course of this research demonstrate the possibility of controlling the impact of noise sources on an urban space with modified facades. The experience and results of the case studies were merged into a process chart for implementing the acoustical point of view in the building design process. Laboratory methods, e.g. scale-model measurements and impedance measurements, were modified in order to be feasible in a building or facade design process. Since a sound reduction of up to 8 dB in specific frequency bands is feasible with modified reflection properties of facade surfaces, the building of quieter cities is the responsibility of architects and engineers.","Urban acoustic; facades; building design; Noise control; Design Process","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-052-5","","","","A+BE | Architecture and the Built Environment No. 16 (2018)","","","","","Design of Construction","","",""
"uuid:6e96db66-1df0-4ed1-b343-92939d58d864","http://resolver.tudelft.nl/uuid:6e96db66-1df0-4ed1-b343-92939d58d864","Flocculation and consolidation of cohesive sediments under the influence of coagulant and flocculant","Ibanez Sanz, M.E. (TU Delft Environmental Fluid Mechanics)","Chassagne, C. (promotor); Winterwerp, J.C. (promotor); Delft University of Technology (degree granting institution)","2018","This thesis focuses on the coagulation and flocculation processes of cohesive sediments under the influence of polyelectrolytes. For this study, clay particles and anionic and cationic flocculants were used. The study of the influence of shear stresses on flocculation demonstrated that shear stress is the parameter that most strongly influences the break-up and re-growth of the aggregates. Fine sediment management requires an understanding of the interaction between clay particles and flocculant, as it affects their flocculation, settling, and the formation of the bed. A good-quality soil requires good de-watering, meaning a quick removal of water, and it should have sufficient strength at the end of consolidation. This knowledge is of importance for dredging management; end-users such as the mining, sanitary engineering, and dredging industries, which need to re-use the slurry from their processes, will benefit from it.","","en","doctoral thesis","","978 94 6233 988 0","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:dd56840f-050e-419c-9ceb-8eca3be414bd","http://resolver.tudelft.nl/uuid:dd56840f-050e-419c-9ceb-8eca3be414bd","Sampling-based Motion Planning in Configuration and State Spaces: Using supervised learning tools","Bharatheesha, M. (TU Delft Robot Dynamics)","Wisse, M. (copromotor); Delft University of Technology (degree granting institution)","2018","Robotic systems are the workhorses in practically all automated applications. Manufacturing industries, warehouses, elderly care, disaster rescue and (unfortunately) warfare are example applications where human life has benefited from robotics. By precisely planning and controlling their motions via computer programs, real world tasks can be performed with high levels of accuracy and repeatability. Devising methods and algorithms that generate such motions by a) correctly finding and reporting the desired motion if it exists and b) doing so as fast as possible has constituted the field of robot motion planning research over the last four decades.
In recent years, the Industry 4.0 initiative has provided a promising avenue for further advances in industrial automation. Modular, quickly reconfigurable and versatile robotic systems that safely collaborate with humans hold the key to future industrial automation. This is a challenging endeavor from an industrial and an academic perspective and inspires the work in this thesis. In alignment with these perspectives, this thesis is presented in two parts.
In the first part, we propose methods and frameworks to effectively utilize open source implementations of configuration space planners to realize flexible and robust solutions for bin picking. To this end, three results are presented: a tool to automatically tune parameters of path planning algorithm implementations, a world championship winning solution for industrial bin picking and a reactive collision avoidance framework for collaborative robotic applications.
Configuration space planners are extremely popular due to their solution speeds of about a tenth of a second for planning problems in 7-8 dimensions. However, a primary limitation of configuration space planners is that their planning solutions do not account for the physical laws governing the movement of robots. Consequently, the possibilities of generating versatile and dynamically feasible motions are highly curtailed. This limitation can be addressed by planning in the state space. However, sampling-based planning in state space is computationally intensive and challenging to realize in practice.
This challenge inspires the second part of this thesis. Here, the goal is to answer the question: Is it possible to achieve planning speeds in state space that are comparable to planning speeds in the configuration space? We pursue this goal by considering the Rapidly exploring Random Tree (RRT) planner in state space to plan a swing-up motion for a simple pendulum. Here, we propose two contributions that alleviate the computational demands of two critical steps in the RRT planner. We present a framework to approximate the distance (pseudo) metric and the steering function in state space using supervised learning tools. Together, a speed up of about 4 orders of magnitude is achieved relative to numerically solving for these two critical steps. However, reaching planning times equivalent to or better than what is achievable in configuration space still remains an elusive goal. Nevertheless, the achieved results serve as encouraging signs to pursue further research in this direction.
For multi-component structures featuring many interface degrees of freedom, standard substructuring dynamics can be combined with interface reduction techniques to obtain compact reduced order models. Chapter 2 summarizes a variety of interface reduction techniques for the well-known Craig-Bampton substructuring method. These approaches are reviewed and compared in terms of both computational cost and accuracy. A multilevel interface reduction method is presented as a more generalized approach, in which a secondary Craig-Bampton reduction is performed when the subsystems are assembled within localized subsets. The multilevel interface reduction method provides an accurate representation of the full linear model at significantly lower computational cost.
In Chapter 3, we extend the Craig-Bampton method to geometrically nonlinear problems by augmenting the system-level interface modes and internal vibration modes of each substructure with their corresponding modal derivatives. The modal derivatives are capable of describing the bending-stretching coupling effects exhibited by geometrically nonlinear structures. Once the reduced order model is constructed by Galerkin projection, the next challenge is the computation of the reduced nonlinear internal force vectors and tangent matrices during the time integration. The evaluation of these objects scales with the size of the full order model and is therefore expensive, as it needs to be repeated multiple times within every time step of the time integration. To address this problem, we directly express the reduced nonlinear vectors and matrices as polynomial functions of the modal coordinates, using substructure-level higher-order tensors of much smaller size. This enhanced Craig-Bampton method offers flexibility in the construction of the reduced modal basis, as modal derivatives need to be computed only for substructures actually featuring geometric nonlinearities, and no prior knowledge of the nonlinear response of the full system under training load cases is required.
For flexible multibody systems, each body undergoes both overall rigid body motion and flexible deformation. To describe the dynamic behavior of each body accurately, the floating frame of reference is commonly applied. In Chapter 4, the enhanced Craig-Bampton method proposed in Chapter 3 is embedded in the floating frame of reference. We consider here structures modeled with von Kármán beam elements. Interface reduction methods are unnecessary in this context, since adjacent bodies are connected through a single node. The proposed reduction method constitutes a natural and effective extension of the classical linear modal reduction in the floating frame.
For more complex geometries, such as wind turbine blades, extremely simplified beam models cannot capture the complexity of the real three-dimensional structure, and therefore the dynamic behavior might not be accurately modeled. In Chapter 5, we present an enhanced Rubin substructuring method for three-dimensional nonlinear multibody systems. The standard Rubin reduction basis is augmented with the modal derivatives of both the free-interface vibration modes and the attachment modes to include the bending-stretching coupling effects triggered by the nonlinear vibrations. Compared to the enhanced Craig-Bampton method proposed in Chapter 4, the enhanced Rubin method better reproduces the geometrical nonlinearities occurring at the interface, and, as a consequence, higher accuracy can be achieved.
In Chapter 6, the overall conclusions are drawn and recommendations for further study are provided.
later. Moreover, the collision of the satellites Cosmos 2251 and Iridium 33 in 2009 highlighted the threat posed by space debris, since it signaled a trend that the future space environment will be dominated by fragmentation debris generated by similar collisions, rather than by explosions of rocket upper stages, which had formed the majority of space debris objects in the past. To mitigate the risk of collision and stabilize the space environment, active debris removal (ADR) is of great relevance. According to an analysis by NASA, five space debris objects need to be removed each year, starting from the year 2020, to stabilize the space environment.
The objective of this research is to investigate the net capturing method for active space debris removal. To remove a debris object from its orbit, many capturing and removal methods have been proposed, such as using a robotic arm, a tethered space robot, or a harpoon system. Among the existing ADR methods, net capturing is regarded as one of the most promising capturing methods due to its multiple advantages. For example, it allows a large distance between a chaser satellite and a target, so that close rendezvous and docking are not mandatory. It is furthermore compatible with different sizes, shapes
and orbits of space debris. Additionally, it is flexible, lightweight and cost efficient. Even though some research on net capturing has been performed, the dynamics of net deployment and debris capturing and the feasibility and reliability of capturing a tumbling target using a net are not fully understood. Based on the relevance of this problem and a review of the state-of-the-art of the scientific literature, the following research questions were formulated. These research questions are answered in this thesis.
RQ1. Which levels of non-cooperativeness of space debris exist? What are their associated capturing and/or removal methods, and what role does the net capturing method play among all those methods?
RQ2. What are the dynamic characteristics of the net capturing method?
RQ3. How can a tumbling, non-cooperative debris object be reliably captured using the net capturing method?
To characterize the net capturing method among existing ADR methods and to address its strengths and weaknesses, matrices with the advantages and drawbacks of the most relevant capturing and removal methods are developed. Space debris objects are divided into three main categories based on their properties, namely, non-operational satellites, rocket upper stages, and fragments from collisions or explosions. A tailored capturing and removal method for each category of space debris objects is provided to facilitate decision-making among these ADR methods. A comparison of the most relevant ADR methods concludes that net capturing
is a promising method due to its multiple advantages. It is also found that capturing a tumbling space debris object with unknown physical properties still faces many technological challenges. Therefore, the capture of tumbling targets using a net needs to be further investigated. The net capture mechanism consists of four flying weights, one in each corner of the net. The flying weights, named ""bullets"", are shot by a spring system, named ""net gun"". These four bullets expand the net, which wraps the target; the target is then transported by the tether connecting the chaser and the
net. This thesis starts with the analysis of the deployment dynamics of a net. The deployment dynamic characteristics of a net folded in a pattern proposed in this research, called the ""inwards-folding scheme"", are investigated based on the mass-spring model and the absolute nodal coordinates formulation (ANCF) model. The deployment dynamics of a net based on the ANCF model are, for the first time, modeled, analysed and discussed in depth. In addition, four critical parameters describing the deployment dynamic characteristics of the net, namely, the maximum area, the deployment time, the travelling distance and the effective period, are defined. A sensitivity analysis of the initial input parameters, such as the initial bullet velocity, the shooting angle and the bullet mass, with respect to the four critical parameters is performed. Simulations based on the ANCF model are performed and compared with the conventional mass-spring model.
The results from both methods show a good agreement on changes of the four critical parameters. Furthermore, the ANCF model is more capable of describing the flexibility of the net with fewer nodes than the conventional mass-spring model. However, it is more computationally expensive. To investigate the contact dynamics between a net and a target, two contact modeling
methods, the penalty-based and the impulse-based method, are compared and analyzed. The theoretical solutions of the single-contact and the multiple-contact dynamics based on the impulse-based method are derived. To our knowledge, the impulse-based method is used in a net capturing scenario for the first time. Numerical simulations of targets with basic shapes, i.e., a cube, a ball and a cylinder, are performed to cross-verify the two contact models. It is concluded that the impulse-based method is superior to the penalty-based method with respect to penetration avoidance and
computational robustness. Moreover, the modeling of the flexibility of a net is addressed and discussed for the first time. To investigate the influence of the flexibility modeling on the net dynamics, simulations of the capturing of a ball- and a cube-shaped target using the mass-spring model and the ANCF model are performed and compared, respectively. However, it is found that the modeling of the flexibility of a net for capturing a space debris object has little influence on net deployment and contact dynamics. The dynamics of the net deployment and contact with the target have to be experimentally validated. A parabolic flight experiment performed under ESA contract allows comparison of the experimental results with the simulations of the net deployment and the capturing phase. In the net deployment phase, simulation results based on both net modelling methods, the mass-spring model and the ANCF model, are compared with the
experimental results. From the analysis of the absolute and the average relative residuals between the simulations and the results of the parabolic flight experiment, it is concluded that both models are able to describe the motion of the bullets and the net along the traveling direction with an average relative residual error of up to 15%. In the net capturing phase, both contact models, the penalty-based method and the impulse-based method, are validated by the parabolic flight experiment of the capturing of an Envisat mockup. The comparison shows that the average difference between the two models is limited to 7% when compared with the travelling distance of the net. With the validated net deployment and contact dynamic models, net capturing of free-floating targets and tumbling targets is investigated for the first time. The net’s ability to handle different sizes and shapes of targets is demonstrated by simulation results of the capturing of three types of targets varying in size and shape, namely, a
3-unit Cubesat without appendages, the simplified representation of the second upper stage of the Zenit-2 rocket and the Envisat satellite. Simulation results show that for free-floating targets the net is able to capture and surround the targets without pushing them away. For tumbling targets, the net without a closing mechanism is able to capture the targets when their tumbling rates are within a certain range: 0-1.5 rad/s for the Cubesat and 0-0.7 rad/s for the rocket upper stage. Simulations of the tumbling Envisat, which has appendages such as a solar panel and a radar antenna, indicate that the net capturing method is more robust to irregularly shaped targets than to regularly shaped targets. Finally, a novel concept of a closing mechanism is designed, and its effectiveness in ensuring a successful capture of targets even with higher tumbling rates is demonstrated.","space debris; net capturing method; deployment dynamics; contact dynamics; parabolic flight experiment; tumbling targets capturing; net closing mechanism.","en","doctoral thesis","","978-94-6295-985-9","","","","","","","","","Space Systems Engineering","","",""
"uuid:c3be6373-f4f2-4865-b3f0-750bfb17871e","http://resolver.tudelft.nl/uuid:c3be6373-f4f2-4865-b3f0-750bfb17871e","Targeting static and dynamic workloads with a reconfigurable VLIW processor","Hoozemans, J.J. (TU Delft Computer Engineering)","Bertels, K.L.M. (promotor); Wong, J.S.S.M. (copromotor); Delft University of Technology (degree granting institution)","2018","Embedded systems range from very simple devices, such as a digital watch, to highly complex systems such as smartphones. In these complex devices, an increasing number of applications need to be executed on a computing platform. Moreover, the number of applications (or programs) usually exceeds the number of processors found on such platforms. This creates the need for scheduling. Furthermore, each program exhibits different characteristics, and their interaction with the (real-life) environment leads to real-time requirements. Consequently, the set of programs, called the workload, exhibits highly dynamic behavior. Workloads can be dynamic in intensity (i.e., the number of concurrent tasks), characteristics (amount and type of parallelism), and requirements (real-time constraints, power budgets, performance). We argue that dynamic workloads require a dynamic computing platform and propose to use one that comprises the ρ-VEX reconfigurable VLIW processor. It can dynamically adapt to the workload while it is running. Adaptations can be triggered by a user, programmer, compiler, or an operating system. The latter two methods can operate fully automatically, and exploring these is one of the goals of this work. Besides dynamic workloads, a number of new classes of embedded devices are running application programs that are very static, but require very high throughput. Examples are the latest generations of mobile telecommunications hardware and vision-based applications (automation, surveillance, automated driving).
In this case, adapting to the workload at run-time is not advantageous because there are no changes to adapt to. Optimizing for these applications is possible, but must be done before the hardware platform is manufactured (during the design phase) or by making use of Field-Programmable Gate Arrays (FPGAs). This thesis explores the use of the proposed reconfigurable processor to target the full spectrum of embedded workloads. First, design-time reconfigurability is employed to optimize a hardware platform for a static, streaming image processing workload. Second, we explore the run-time reconfigurable processor for dynamic workloads. This is achieved by adapting to a single program to optimize energy efficiency, followed by adapting to a generated set of programs optimizing for throughput. Third, the real-time characteristics of the processor are evaluated and it is shown to have better schedulability compared to static processors. The VLIW architecture results in good timing-predictability, which allows finding tight bounds on the worst-case execution time. Last, we show that the processor is able to assign more parallel execution resources to a static program that is added into the workload, while still guaranteeing time-safety for critical tasks.","Computer architecture; VLIW processor; dynamically reconfigurable; polymorphic; embedded computing; FPGA; streaming","en","doctoral thesis","","978-94-6366-049-5","","","","","","","","","Computer Engineering","","",""
"uuid:a88f215a-3185-47ed-b3e4-bd309c4c8b34","http://resolver.tudelft.nl/uuid:a88f215a-3185-47ed-b3e4-bd309c4c8b34","Towards Geometrically Consistent Aerostructural Optimisation of Composite Aircraft Wings","Gillebaart, E. (TU Delft Aerospace Structures & Computational Mechanics)","Bisagni, C. (promotor); De Breuker, R. (copromotor); Delft University of Technology (degree granting institution)","2018","","Aeroelasticity; Isogeometric analysis; Aerostructural optimization","en","doctoral thesis","","978-94-028-1081-3","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:671712d5-4517-4709-a999-0698a85d5f6c","http://resolver.tudelft.nl/uuid:671712d5-4517-4709-a999-0698a85d5f6c","Stedebouwkundig(e) ontwerpen in woorden: Honderd jaar stedebouwkundige begrippen","Hoekstra, M.J. (TU Delft OLD Urban Compositions)","Meyer, Han (promotor); van Oostendorp, M. (promotor); Delft University of Technology (degree granting institution)","2018","The realisation of an urban design involves many players, such as designers, critics, politicians, journalists, public relations specialists and residents. In order to consult with each other they make use of drawings, but also frequently of words, to be able to interpret the drawn objects in the plans. But are these urbanistic notions used by everyone in the same way? And have there been noticeable changes in their usage or meaning over time? This urbanistic and linguistic doctoral research addresses these questions.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-041-9","","","","A+BE | Architecture and the Built Environment No 15 (2018)","","","","","OLD Urban Compositions","","",""
"uuid:f5a8559e-75ae-4961-a466-71e48dc0c8d2","http://resolver.tudelft.nl/uuid:f5a8559e-75ae-4961-a466-71e48dc0c8d2","On the Paradoxical Nature of Innovation: Evidence from Social Networks in Fryslân","Celik, S (TU Delft Design for Sustainability)","Brezet, J.C. (promotor); van Engelen, J.M.L. (promotor); Joore, Peter (copromotor); Delft University of Technology (degree granting institution)","2018","With the ever-growing development of technology, being able to innovate is the ultimate goal to create an advantage for organizations, regions and countries. Innovation enables the prosperous growth of communities all over the world, but not all regions are able to keep up. This thesis focuses on regions that have not benefitted fully from this innovative development. Fryslân, a northern province of the Netherlands, is an example of such a region. This research aims to explore the social constructs that block progress and help regions to enhance their innovative output.
To provide a holistic understanding of regional innovation systems, this research identifies two consecutive paradoxes. Paradox 1 relates to the interdependency of social and technical processes within innovation systems. A comprehensive approach to innovation must consider social processes as a part of innovation, which makes understanding social constructs crucial. Social relationships form a significant part of social constructs, which are the enablers of the knowledge exchange that will lead towards an innovation ecosystem. Paradox 2 focuses on the contradictory set of relationships between actors. Social relationships among a group of individuals are commonly described as networks and, therefore, social network analysis (SNA) is an appropriate tool to study the social constructs that are relevant for innovation systems.
The analysis showed that the networks in Fryslân are compact and not open enough to external knowledge. In addition, the actors involved are weakly connected to each other. Although strong friendship bonds contribute to the province's closed social structure, the friendship network is the least problematic network in Fryslân and should therefore be utilized for the purpose of innovation. Friendship does not have a direct link to innovation, but the power of existing networks and the local dynamics make the friendship network the best path towards innovative progress.","","en","doctoral thesis","","978-94-028-1091-2","","","","","","","","","Design for Sustainability","","",""
"uuid:6ae2a876-310b-4401-84b7-da9e1ed49079","http://resolver.tudelft.nl/uuid:6ae2a876-310b-4401-84b7-da9e1ed49079","Power system stability and frequency control for transient performance improvement","Xi, K. (TU Delft Mathematical Physics)","Lin, H.X. (promotor); van Schuppen, J.H. (promotor); Dubbeldam, J.L.A. (copromotor); Delft University of Technology (degree granting institution)","2018","The electrical power grid is a fundamental infrastructure in today’s society. The synchronization of the frequency to nominal frequency over all the network is essential for the proper functioning of the power grid. The current transition to a more distributed generation by weather dependent renewable power sources, which are inherently more prone to fluctuations, poses great challenges to the functioning of the power grid. Among these fluctuations, the frequency fluctuations negatively affect the power supply and stability of the power grid. In this thesis, we focus on load frequency control laws that can effectively suppress the frequency fluctuations, and methods that can improve the synchronization stability...","load frequency control; economic power dispatch; transient performance; centralized control; distributed control; multi-level control; transient stability; energy barrier; equilibria","en","doctoral thesis","","978-94-6186-931-9","","","","","","","","","Mathematical Physics","","",""
"uuid:6cacd8a2-b573-4644-922f-cffe4a13344f","http://resolver.tudelft.nl/uuid:6cacd8a2-b573-4644-922f-cffe4a13344f","Energy performance progress of the Dutch non-profit housing stock: a longitudinal assessment","Filippidou, F. (TU Delft OLD Housing Quality and Process Innovation)","Visscher, H.J. (promotor); Nieboer, N.E.T. (copromotor); Delft University of Technology (degree granting institution)","2018","Worldwide, buildings consume a large part of the total energy delivered. Among all end-use sectors in the EU (European Union), buildings represent the largest, with 39% of total final energy consumption, followed by transport. Policy targets and regulations are in force at the EU level to ensure the energy efficiency improvement of the building stock. This research seeks to provide insight into the energy performance progress of the existing non-profit housing stock in the Netherlands through the application of energy renovations. The non-profit housing stock comprises 30% of the housing market in the Netherlands, and a large part of the policies towards a more efficient housing stock relies on the non-profit housing sector. To that end, we determine the energy renovation rate of the stock and the impact of the applied renovations on both the predicted and actual energy consumption. The difference between predicted and actual energy savings is analysed through longitudinal statistical modelling in renovated and non-renovated dwellings. Based on the knowledge gained on the renovation rates of the non-profit housing stock, we compare and evaluate future renovation rates through dynamic building stock modelling and empirical data validation. In essence, we examine the effect that the improvement of the thermo-physical characteristics of dwellings has on efforts to make the existing housing stock almost emission-neutral by 2050, as advocated by the European Commission since 2011.
The renovation activity is expected to be greater than the construction and demolition activity in the future and as such we need to bring awareness to the actual impact and effectiveness of energy renovations.","Energy Efficiency; Energy renovation; energy savings; Building energy epidemiology","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-047-1","","","","A+BE | Architecture and the Built Environment No 14 (2018)","","","","","OLD Housing Quality and Process Innovation","","",""
"uuid:d3cfe1fc-5782-4315-9cd1-5c6209814595","http://resolver.tudelft.nl/uuid:d3cfe1fc-5782-4315-9cd1-5c6209814595","Magnetic adatoms as building blocks for quantum magnetism","Toskovic, R. (TU Delft QN/Otte Lab)","Otte, A. F. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2018","Physics at the level of an atom is dominated by the laws of quantum mechanics. Often, this goes hand in hand with highly complex behavior of the systems at that length scale. Unravelling the properties of a material at the atomic level is, therefore, a challenging task that easily exceeds current computational capabilities. A route to circumvent this problem is found in the physical realization of simpler quantum systems that are representative of the complex quantum systems one is interested in. These simpler physical systems, unlike their more complex counterparts, can actually be measured, and information about the complex system, otherwise inaccessible, can thus be gained. This thesis describes experimental work focusing mainly on the property of magnetism in spin chains. To mimic these complex systems, we employ a scanning tunneling microscope (STM) to build atomic chains on solid state surfaces and probe their magnetic properties. The intrinsic strength of STM in building and testing structures with single-atom precision makes STM a great candidate for the simulation of complex quantum systems. In addition to STM's role as a quantum simulator, I present work supporting STM as a control device determining the very existence of the magnetic excitations of the atom it measures. Finally, I present experimental findings that suggest we are able to probe the magnetic excitations of the atom with subatomic resolution. 
In summary, this thesis work presents STM as a powerful probing and control tool for studies on quantum magnetism at the level of a single atom.","atomic magnetism; scanning tunneling microscopy; inelastic electron tunneling spectroscopy","en","doctoral thesis","","978-90-8593-347-2","","","","","","","","","QN/Otte Lab","","",""
"uuid:53b30dab-04d8-4904-9e08-4d7d2a2997d2","http://resolver.tudelft.nl/uuid:53b30dab-04d8-4904-9e08-4d7d2a2997d2","Plug-and-Play Optical Waveguide Sensor Systems for Chemical and Biomedical Sensing","Xin, Y. (TU Delft Electronic Instrumentation)","French, P.J. (promotor); Delft University of Technology (degree granting institution)","2018","Outbreaks of bacteria have caused many problems over the last few years and are a major public health concern. Bacteria now affect our lives in many ways and to an increasingly severe extent, from contaminated food in markets to polluted water. Some devices are available to detect bacteria; however, all of them can only be used under laboratory conditions and are very expensive. This research is aimed at the development of a biomedical sensor capable of monitoring bacteria, especially focusing on diagnosing colorectal anastomotic leakage (AL) in patients at an early stage by detecting the presence of E. coli in the drain fluid. The occurrence of AL in patients after colon surgery is high and is cause for concern, as it can lead to severe consequences such as morbidity or even mortality. Therefore, there is a vital need for an efficient, on-line bedside tool to monitor the bacteria in the leakage: a diagnostic on-line device that is accurate, cost-effective, and ideally operates in an easy plug-and-play fashion, which is beneficial for practical application.","optical waveguide; evanescent wave; sensing; plug-and-play","en","doctoral thesis","","","","","","","","","","","Electronic Instrumentation","","",""
"uuid:e045dc8c-ebaa-4fae-b43a-5e3cc6028d2e","http://resolver.tudelft.nl/uuid:e045dc8c-ebaa-4fae-b43a-5e3cc6028d2e","Insights into the nature of iron based Fischer-Tropsch catalysts","Janbroers, S. (TU Delft QN/High Resolution Electron Microscopy)","Zandbergen, H.W. (promotor); Kooyman, P.J. (promotor); Delft University of Technology (degree granting institution)","2018","This thesis focuses on iron-based Fischer-Tropsch catalysts. It deals with different iron phases that might play a role in the catalytic process. As the activation step takes place inside the reactor (in-situ at high temperature and pressure), studying the catalyst activation process is not trivial. In addition, since the working catalysts are air sensitive, post-analysis of activated and used catalysts is also challenging. As a result, the identity of the real active phase is still unknown.
To gain more insight into Fischer-Tropsch catalysts, we used different techniques such as (in-situ) TEM, TEM-EELS, ED and PXRD. TEM and ED are very suitable techniques since they provide very high spatial resolutions (in the Å range). Special TEM grids and TEM holders were designed for this research in order to mimic the activation conditions and/or avoid any exposure to air.
Our results indeed showed that it is essential to avoid any exposure to air prior to analysis. In fact, we can question the results of some older publications in which catalysts were exposed. In contrast to literature data published so far, we found no carbon deposits on the outer rim of the iron carbides at high temperature conditions. Overall, we showed that carbon surface layers can change, or even form, during exposure to air. We also found evidence for what in the literature has been designated as “hypothetical θ-Fe2C”. We proposed a structure model for which the carbon content is higher than in the pure χ-Fe5C2 carbide. More research is required to elucidate the exact nature of working iron-based Fischer-Tropsch catalysts.","","en","doctoral thesis","","","","","","","","","","","QN/High Resolution Electron Microscopy","","",""
"uuid:421bbd8f-bd40-4491-b3ae-d4812085c934","http://resolver.tudelft.nl/uuid:421bbd8f-bd40-4491-b3ae-d4812085c934","Lithospheric reflection imaging by multidimensional deconvolution of earthquake scattering coda","Hartstra, I.E. (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Delft University of Technology (degree granting institution)","2018","Seismic interferometry (SI) for body waves offers the opportunity to utilize high-frequency scattering coda from local earthquakes to obtain a detailed reflectivity image of the lithosphere. In this thesis it is demonstrated that classical SI methods are seriously affected by circumstances that are typical of field data and that multiple scattering poses a complex trade-off for SI performance. Therefore, we propose an alternative method by multidimensional deconvolution (MDD) that proves to be more resilient under realistic circumstances and properly utilizes the scattering coda: full-field MDD. The main advantage of this method over classical MDD methods is that the kernel of its governing equation is exact, which allows for an optimal use of the multiple scattering coda to obtain virtual primary reflections of the lithosphere.","","en","doctoral thesis","","978-94-6295-992-7","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:f8e21539-bd26-4694-b170-6d0641e4c31a","http://resolver.tudelft.nl/uuid:f8e21539-bd26-4694-b170-6d0641e4c31a","Revealing the Fate of Photo-Generated Charges in Metal Halide Perovskites","Hutter, E.M. (TU Delft ChemE/Opto-electronic Materials)","Siebbeles, L.D.A. (promotor); Savenije, T.J. (copromotor); Delft University of Technology (degree granting institution)","2018","In this thesis, we have investigated the optoelectronic properties of metal halide perovskites with a special focus on their application in solar cells. In less than a decade of development, metal halide perovskites have yielded solar cells with efficiencies comparable to commercialized technologies. However, there has been limited knowledge about the fundamental properties of these materials. As mentioned in the introduction, the efficiency of perovskite-based solar cells is still not at its theoretical limit. In order to rationally design solar cells with maximized efficiencies, we need to understand which factors are currently limiting the performance of perovskite-based solar cells. In general, one of the first important processes in a solar cell is the absorption of light. For metal halide perovskites based on lead iodide, a thickness of 0.3 micrometer is already sufficient to absorb a substantial amount of visible (sun-)light, which makes these materials very suitable for solar cells. Furthermore, it is crucial that this absorbed light is converted into a current of moving charges, also known as electricity. Semiconductor materials such as silicon or metal halide perovskites have the ideal properties to generate a current of charges from light. In order to use this current however, the charges need to be collected. The efficiency with which charges are collected in a solar cell is closely related to its power conversion efficiency.","","en","doctoral thesis","","978-94-6295-964-4","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:7c4f0ab1-87cc-45f8-9147-8572cbed8c4a","http://resolver.tudelft.nl/uuid:7c4f0ab1-87cc-45f8-9147-8572cbed8c4a","Open for business: Project-specific value capture strategies of architectural firms","Bos-de Vos, M. (TU Delft OLD Management and Organisation; TU Delft Design & Construction Management)","Wamelink, J.W.F. (promotor); Lauche, Kristina (promotor); Volker, L. (copromotor); Delft University of Technology (degree granting institution)","2018","Architectural firms can be regarded as creative professional service firms. As such, architects need to navigate creative, professional and commercial goals, while simultaneously attempting to fulfil client, user and societal needs. This complex process is becoming increasingly difficult, as the historically established role of architects has become more blurred, contested and heterogeneous. While attempting to reclaim their role or to take on new roles in collaborations with other actors, architectural firms are challenged to develop business models that are financially viable and professionally satisfactory. These business models need to facilitate firms in capturing both financial and professional value in co-creation processes, and they must also suit the project-based structure of the firm. This research contributes insights into how firms might capture multiple dimensions of value in project-based work. 
It generates new perspectives on processes of organizational value capture and business model design, and provides concrete, practical insights into the difficulties and opportunities involved in value capture by creative professional service firms.","Architectural firms; business model design; construction projects; professional identity; professional roles; strategy; value creation; value capture; value slippage","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-040-2","","","","A+BE | Architecture and the Built Environment 13 (2018)","","","","","OLD Management and Organisation","","",""
"uuid:ce7b3290-9e0f-406b-93ee-7bfb7c9a8430","http://resolver.tudelft.nl/uuid:ce7b3290-9e0f-406b-93ee-7bfb7c9a8430","Reliability Modeling and Mitigation for Embedded Memories","Agbo, I.O. (TU Delft Computer Engineering)","Hamdioui, S. (promotor); Delft University of Technology (degree granting institution)","2018","Complementary Metal Oxide Semiconductor (CMOS) technology scaling enhances performance, transistor density, and functionality, and reduces cost and power consumption. However, scaling causes significant reliability challenges from both a manufacturing and an operational point of view. Obtaining reliable memories requires an accurate understanding of the impact of aging (such as Bias temperature instability (BTI)) on individual memory components and how they interact with each other. In this dissertation, two types of challenges are addressed, which are related to BTI aging and partially to mitigation schemes: one related to the aging of the sense amplifier and another to the aging of the read path and write path. Analysis of aging impact on different memory sense amplifiers - The analysis of BTI impact on various memory sense amplifier (SA) designs was performed, while taking into account two BTI models (i.e., the Atomistic and RD models), different technology nodes (i.e., 90, 65, 45, 32, 22, and 16 nm), and different workloads. First, the impact of the RD and Atomistic models on the SA was analysed and compared. The results show that the atomistic trap-based BTI model is more accurate than the RD model. Second, the impact of BTI on the drain-input latch type SA was investigated for various technology nodes and supply voltages. The results show that as technology scales down, the impact of BTI on sensing delay increases while the sensing voltage decreases, resulting in a less robust and reliable memory sense amplifier. The results also show that an increase in supply voltage compensates for the BTI degradation. 
Third, an accurate technique was proposed and characterized for the integral impact of BTI and voltage and temperature variation on the memory standard latch type SA for various technology nodes and workloads. The results show that the degradation is strongly dependent on workload and temperature. Fourth, in addition to the latter, the impact of process variation at time zero was incorporated and analyzed. The results show that the SA sensing delay degradation is more significant at lower nodes and could lead to read failures at lower power supply. This reveals that there must be a tradeoff between performance and reliability. Fifth, an accurate methodology was proposed to quantify the impact of variability on the memory SA offset voltage for both time-zero and time-dependent variability. The results show that the impact on the offset voltage specification is significant for aging time-dependent variability. Sixth, on top of the latter, the sensitivity of the SA and its failure rate were analyzed for five process corners (i.e., Nominal, Fast-Fast, Fast-Slow, Slow-Fast, and Slow-Slow). The results show that balanced workloads result in a significantly lower offset voltage specification. Finally, the impact of aging was analyzed and compared while considering different supply voltages, temperatures, and SA designs. The results show that the High Performance SA degrades faster than the other SA types, irrespective of the workload, supply voltage, and temperature. Investigation of read path aging - Adequate techniques were proposed to estimate and mitigate the impact of aging on the read path of a high-performance SRAM memory. The mitigation techniques are based on the re-sizing of the pull-down transistors of the cell’s and the SA’s designs. The results show that SA mitigation is more effective for the SRAM read path (i.e., the SA) than cell mitigation. 
Investigation of write path aging - The analysis of BTI impact on the SRAM write driver was performed for various supply voltages, temperatures, and technology nodes. The results show that the impact of BTI increases the write delay and widens its distribution as the technology scales down.","Memory reliability; Aging; Bias temperature instability (BTI); sense amplifier (SA)","en","doctoral thesis","","978-94-6366-053-2","","","","","","","","","Computer Engineering","","",""
"uuid:51df13ed-6ba0-49ba-99d7-1c14f8fd022e","http://resolver.tudelft.nl/uuid:51df13ed-6ba0-49ba-99d7-1c14f8fd022e","Assessing Liquefaction Flow Slides: Beyond Empiricism","de Jager, R.R. (TU Delft Geo-engineering)","Hicks, M.A. (promotor); Molenkamp, F. (promotor); Delft University of Technology (degree granting institution)","2018","The liquefaction flow slide is an important failure mechanism for underwater slopes composed of sand.
The failure of the sand body is the result of liquefaction of loosely packed sand, which suddenly loses a large part of its strength and starts behaving as a fluid. Liquefaction flow slides occur unexpectedly and develop at a very high rate, resulting in considerable damage. As a consequence, we have little detailed information on this phenomenon; the actual nature of the failure can only be established afterwards. The lack of detailed information constrains the engineer who needs to assess the slope stability. The available methods are strongly simplified, as a basis for more advanced techniques is lacking.
This thesis is aimed at the improvement of the assessment of liquefaction flow slides. The first part contains a theoretical treatment of the underlying physics, which form the basis of an advanced calculation model. In addition, a large scale experiment has been developed, the Liquefaction Tank. We managed to reproduce laboratory liquefaction flow slides by gradually tilting the sand bed. The result is surprising; at a very gentle slope the sand bed suddenly and seemingly spontaneously liquefies. The occurrence of these experimental liquefaction flow slides depends on the density of the sand and the rate of tilting. The measurements provide valuable new insights that can be used for the further development of new models.","Liquefaction; Flow Slides; Conservation Equations; Finite Element Method; Scale Model Test","en","doctoral thesis","","978-94-6375-013-4","","","","","","","","","Geo-engineering","","",""
"uuid:c761007e-031d-44ba-9b07-f94a977b8c9a","http://resolver.tudelft.nl/uuid:c761007e-031d-44ba-9b07-f94a977b8c9a","Smart Energy Dissipation: Damped Outriggers for Tall Buildings under Strong Earthquakes","Morales Beltran, M.G. (TU Delft OLD Structural Design)","Nijsse, R. (promotor); Turan, Gursoy (copromotor); Delft University of Technology (degree granting institution)","2018","The use of outriggers in tall buildings is a common practice to reduce response under dynamic loading. Viscous dampers have been implemented between the outrigger and the perimeter columns to reduce vibrations produced by strong winds. However, their behaviour under strong earthquakes has not yet been properly investigated. Strong earthquakes introduce a larger amount of energy into the building’s structure compared to moderate earthquakes or strong winds. In tall buildings, such seismic energy is dissipated by several mechanisms, including bending deformation of the core, friction between structural and non-structural components, and eventually, damage. This research focuses on the capability of tall buildings equipped with damped outriggers to undergo large deformations without damage. In other words, when ground motion increases due to strong earthquakes, the dampers can be assumed to be the main source of energy dissipation whilst the host structure displays an elastic behaviour. These investigations are based on the assessment of both the energy demands due to large-earthquake induced motion and the energy capacity of the system, i.e. the energy capacity of the main components, namely core, outriggers, perimeter columns and dampers. The objective of this research is to determine if the energy dissipated by hysteresis can be fully replaced by energy dissipated through the action of passive dampers. The results show that the use of a set of outriggers equipped with oil viscous dampers increases the damping ratio of tall buildings by about 6-10%, depending on the loading conditions. 
As the ground motion becomes stronger, viscous dampers effectively reduce the potential for damage in the structure compared to conventional outriggers. However, the use of dampers cannot entirely prevent damage under critical excitations. Combining a damped outrigger at 0.5 of the total building’s height (h) with a conventional outrigger at 0.7 h is more effective in reducing hysteretic energy ratios and more economically viable than a single damped outrigger solution.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-042-6","","","","A+BE | Architecture and the Built Environment No 12 (2018)","","","","","OLD Structural Design","","",""
"uuid:3f2b2c52-7774-4384-a2fd-7201688237af","http://resolver.tudelft.nl/uuid:3f2b2c52-7774-4384-a2fd-7201688237af","Design for Managing Obsolescence: A Design Methodology for Preserving Product Integrity in a Circular Economy","den Hollander, M.C. (TU Delft Circular Product Design)","Bakker, C.A. (promotor); Hultink, H.J. (promotor); Delft University of Technology (degree granting institution)","2018","This thesis argues that in order to increase the likelihood that product lifetime extension in a circular economy will be successful from both an environmental and an economic perspective, industrial designers need to be able to control not only the spatial dimension (materialization and geometry) of products, but also their temporal dimension. This temporal dimension is related to the number and duration of product use cycles and the duration of the total product lifetime. To enable industrial designers to capture this temporal dimension, the thesis presents:
• a new design methodology: design for managing obsolescence;
• five new design methods and two typologies in support of managing obsolescence;
• insight into (the factors determining) how and when to best apply these methods;
• insight into where and in collaboration with whom to apply these methods in the product innovation process.","design; circular economy; methodology; managing obsolescence; circular business model; Sustainability; preserving product integrity","en","doctoral thesis","","9789082873603","","","","","","","","","Circular Product Design","","",""
"uuid:0816cbe5-4e42-4fd3-a328-4775c5ccb633","http://resolver.tudelft.nl/uuid:0816cbe5-4e42-4fd3-a328-4775c5ccb633","Impact of sand nourishments on hydrodynamics and swimmer safety","Radermacher, M. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); Reniers, A.J.H.M. (promotor); de Schipper, M.A. (copromotor); Delft University of Technology (degree granting institution)","2018","Artificial sand nourishments are a common measure to mitigate coastal erosion problems. Such nourishments can have an impact on currents and waves near the beach, especially when the nourishment has a large size. As nourished beaches often have a recreational function, these altered wave and current patterns may pose a threat to swimmers. This study has investigated the impact of nourishments on currents, waves and swimmer safety. The Sand Motor, an experimental large-scale nourishment at the Dutch coastline south of The Hague, served as a central case study. Using a combination of current measurements at sea and computer models, this study has revealed several interesting flow patterns around the Sand Motor, among them the presence of large eddies in the tidal flow. To determine the impact of such flow patterns on swimmer safety, the presence and spatial spreading of beach users at the Sand Motor was monitored with a set of cameras. Although the tidal eddies have a clear influence on currents and sand transport around the Sand Motor, their impact on swimmer safety remains limited. At the part of the Sand Motor where hazardous currents due to tidal eddies may occur, hardly any beach users are present due to the large distance from beach entrances, parking lots and restaurants. The most significant hazard is formed by tidal currents in the artificial lagoon, which was incorporated in the initial design of the Sand Motor. 
Especially in the first years after construction of the nourishment, currents in the channel connecting the lagoon to the North Sea were quite strong, while that part of the Sand Motor can be crowded on nice summer days. The findings of this study enable engineers to incorporate swimmer safety considerations in the design of future nourishments. Furthermore, more fundamental insights into waves and currents around the Sand Motor contribute to the understanding of sediment transport, coastal erosion and eventually prevention of coastal flooding.","swimmer safety; sand nourishments; coastal processes; Sand Motor","en","doctoral thesis","","978-94-028-1065-3","","","","","","","","","Coastal Engineering","","",""
"uuid:05c19509-e310-4bac-a71d-670675b27ae0","http://resolver.tudelft.nl/uuid:05c19509-e310-4bac-a71d-670675b27ae0","Residents’ Perceptions of Impending Forced Relocation in Urban China: A case study of state-led urban redevelopment in Shenyang","Li, X. (TU Delft OLD Urban Renewal and Housing)","van Ham, M. (promotor); Kleinhans, R.J. (promotor); Delft University of Technology (degree granting institution)","2018","Since 1978, urban redevelopment in China has resulted in large-scale neighbourhood demolition and forced residential relocation, which can severely disrupt established people-place interactions in the demolished neighbourhoods. Urban redevelopment in China has also been criticized by the public and scholars, because the position of the residents in decision-making processes of urban redevelopment is often marginalized. Conflicts have arisen between the residents, local governments and developers, against the backdrop of the uneven redistribution of capital accumulated via urban space reproduction, such as the replacement of declining neighbourhoods in which low-income residents reside with newly-built high-rise dwellings for middle- or high-income residents (Qian and He 2012, Weinstein and Ren 2009). The aim of the thesis is to gain a deeper understanding of the influence of urban redevelopment and its induced forced relocation on residents, by investigating their behavioural and emotional responses to the state-led urban redevelopment in Shenyang, a Chinese city. In particular, it highlights the agency of the affected residents, through exploring their interactions with other stakeholders and through displaying the ambivalence embedded in their neighbourhood experiences.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-038-9","","","","A+BE | Architecture and the Built Environment No 11 (2018)","","","","","OLD Urban Renewal and Housing","","",""
"uuid:36544e23-af5e-45cc-b7ee-b69117df2be6","http://resolver.tudelft.nl/uuid:36544e23-af5e-45cc-b7ee-b69117df2be6","Optimizing ethanol yield in Saccharomyces cerevisiae fermentations by engineering redox metabolism","Papapetridis, I. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); van Maris, A.J.A. (promotor); Delft University of Technology (degree granting institution)","2018","Mankind’s energy requirements, which are currently mainly covered by the combustion of fossil fuels, have been steadily increasing in the past half century. While fossil fuels have a high energy content, their use results in significant emissions of greenhouse gases (mainly CO2, methane and nitrous oxide). As the industrialization of developing nations continues, the requirement for a paradigm shift is becoming increasingly evident. Microbial fermentation can provide an alternative by enabling the sustainable production of transport fuels that combine a lower carbon footprint with compatibility with current internal combustion engine technology. Bioethanol is, by volume, the biofuel with the highest annual production (ca. 100 billion liters in 2016). Current ‘first generation’ industrial bioethanol production processes are mainly based on fermentation of hydrolysed corn starch or sugar-cane sucrose by the budding yeast Saccharomyces cerevisiae and capitalize on the naturally high sugar-uptake rates and ethanol yield of this microorganism. The first full-scale ‘second generation’ ethanol production plants that are now coming on line use lignocellulosic hydrolysates, derived from agricultural ‘waste’ such as corn stover or wheat straw, as feedstocks. Second-generation bioethanol production can have a smaller carbon footprint than first-generation processes. Moreover, it uses feedstocks that are not a part of the human food chain. However, yeast-based second-generation bioethanol production poses multiple challenges for scientists. 
Lignocellulosic hydrolysates contain significant amounts of pentose sugars (mainly D-xylose and L-arabinose) which are not naturally fermentable by S. cerevisiae. Further, during biomass pretreatment, inhibitors of yeast performance (phenolics, aldehydes and organic acids) are released into the hydrolysates. To mitigate the negative effects of these inhibitors, yeast strains used in second-generation bioethanol production processes need to maintain high rates of sugar fermentation, both for hexoses and for pentoses. In both first- and second-generation bioethanol production, the price of the hydrolysed feedstock represents the single largest factor in production costs. Therefore, in an industry that generally operates at low profit margins, maximization of the ethanol yield on fermentable sugars is of paramount importance…","","en","doctoral thesis","","","","","","","","","","","BT/Industriele Microbiologie","","",""
"uuid:14eac2bb-63ee-47e4-8218-1ba3830a97b4","http://resolver.tudelft.nl/uuid:14eac2bb-63ee-47e4-8218-1ba3830a97b4","On the slamming of ships: Development of an approximate slamming prediction method","Kapsenberg, G.K. (TU Delft Ship Hydromechanics and Structures)","Huijsmans, R.H.M. (promotor); Delft University of Technology (degree granting institution)","2018","Slamming of ships is a phenomenon characterized by a high wave load of short duration. Usually the ship’s structure responds to this load in a vibratory manner; the response can be either a local or a global vibration mode, or it can be in both modes together. These short-duration loads are caused by large-amplitude motions, even to the point that the fore body of the ship emerges from the water and slams upon re-entry, or they are caused by very steep waves that impact against the hull. The global elastic vibratory response of the structure is called whipping. It is characterized by a very low damping, so it takes many oscillations before it is extinguished. This dynamic response of the structure increases both the maximum load and the number of load cycles relevant for fatigue damage due to seakeeping loads. Local and global responses can result in high local stresses leading to plastic deformation. Slamming loads can lead to catastrophic damage, as illustrated by the accidents with the ferry Estonia and the container ship Napoli. Slamming loads are known to be a major reason for operators to change course and/or reduce speed, and therefore have a large effect on the economy of a ship. These aspects are the motivation for carrying out this study.","","en","doctoral thesis","","978-94-92679-47-5","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:1b5bffc3-b7f9-49a1-91ea-ebbf25b1f73f","http://resolver.tudelft.nl/uuid:1b5bffc3-b7f9-49a1-91ea-ebbf25b1f73f","High-resolution deep-tissue quantitative optical tomography","van der Horst, J. (TU Delft ImPhys/Quantitative Imaging)","Kalkman, J. (promotor); van Vliet, L.J. (promotor); Delft University of Technology (degree granting institution)","2018","Optical imaging is one of the primary tools in biological and medical research. Over the years many different optical imaging modalities have been developed that have driven the imaging performance in terms of image resolution, contrast, imaging time, and maximum allowed sample size. The goal of this work is to develop techniques for 3D optical imaging of turbid media that provide high resolution and high contrast images deep in tissue.","Optical tomography; OCPT; OCT; turbid media","en","doctoral thesis","","978-94-6299-950-3","","","","","","","","","ImPhys/Quantitative Imaging","","",""
"uuid:fff15717-71ec-402d-96e6-773884659f2c","http://resolver.tudelft.nl/uuid:fff15717-71ec-402d-96e6-773884659f2c","Models for supervised learning in sequence data","Pei, W. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Tax, D.M.J. (copromotor); Delft University of Technology (degree granting institution)","2018","Much of the observational data that we see around us is ordered in space or time, for instance video data, audio data or text data. This ordered data, called sequence data, calls for automatic analysis using supervised learning. Traditional single-observation supervised learning is challenged by sequence data, because (1) the length of sequence examples is often variable; (2) the sequence data may contain irrelevant segments which have a negative impact on the learning performance; and (3) there exist temporal dependencies between consecutive observations in a sequence that need to be exploited by supervised learning on sequence data. This thesis introduces new models for supervised learning on sequence data that specifically address these challenges.
We first propose a sequence classification model which is a graphical model using hidden variables to model the latent structure in the sequence data. It advances the state of the art by using the same number of hidden variables to model much more complex decision boundaries. Subsequently, we present a sequence classification model which is able to deal with unsegmented sequences. The proposed model integrates ideas from attention models and gated recurrent neural networks. It not only discerns the salient segments and filters out the irrelevant ones, but also measures the relevance of each time step of the sequence data to the final task. Finally, we propose an end-to-end model for age estimation from facial expression videos that performs feature learning and supervised learning for the final task jointly.
Next, we consider supervised learning on paired sequences, in which we want to predict whether two sequences are similar. We combine ideas from sequence modeling and metric learning and propose Siamese Recurrent Networks to learn a good similarity measure between two sequences. Our model is superior to current techniques that are based on handcrafted similarity measures or models using unsupervised learning. Finally, we present a model that predicts the preference of users for items in a recommendation system. In this case, two input sequences represent a pair of historic user and item data, each with their own properties. The dependencies between the two sequences are modeled using an attention scheme.","","en","doctoral thesis","","978-94-6186-930-2","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:2c127d9b-17ab-449e-8e16-f93b10f55158","http://resolver.tudelft.nl/uuid:2c127d9b-17ab-449e-8e16-f93b10f55158","Biodegradable magnesium matrix composites for bone fixation devices","Naddaf Dezfuli, S. (TU Delft Biomaterials & Tissue Biomechanics)","van der Helm, F.C.T. (promotor); Zhou, J. (promotor); Delft University of Technology (degree granting institution)","2018","When a bone is fractured, it loses its structural integrity which makes it unable to bear any mechanical load. Therefore, a broken bone must be supported until it regains its strength to handle the body's movement and weight. A surgical procedure is needed to set a fractured bone. This procedure often involves repositioning the bone fragments into their natural position and then, attaching them together using internal fixation devices such as plates and screws. These fixation devices restore load-beanng capacity to bone, allowing the fractured bone to be healed by the primary bone healing mechanism. To date, implants used for internal fixadon are usually made from titanium and stainless steel, which are strong but, notorious for triggering adverse reactions such allergic responses caused by implant erosion in patients. Therefore, permanent fixtures should be removed from the body after the fractured bone heals sufficiently, which imposes another invasive surgery on the patient. The advent of biodegradable magnesium-based composites about two decades ago was an attempt to address the clinical complications regarding the permanent fixtures. However, magnesium-based composites are still in their infancy, and a have a lot to achieve before being considered as fully functional materials for bone fixation purposes. Currently, there are two major issues with magnesium composites. Firstly, most of the magnesium-based composites made to date lack sufficient mechanical integrity, making them unsuitable for load-bearing applications. 
The second, and most important, issue is the rapid degradation of magnesium when exposed to physiological solutions, causing premature mechanical failure before the patient fully recovers. The main aim of this thesis is to provide the necessary background and technical information to address these issues, and to be a reliable platform for future research on the subject to build upon.","magnesium; composite; degradation; mechanical properties; biocompatibility","en","doctoral thesis","","978-94-6186-938-8","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:ce43752d-37d4-40cf-8d4c-fd06515f9afa","http://resolver.tudelft.nl/uuid:ce43752d-37d4-40cf-8d4c-fd06515f9afa","Complex Adaptive Systems & Urban Morphogenesis: Analyzing and designing urban fabric informed by CAS dynamics","Wohl, Sharon (TU Delft Spatial Planning and Strategy)","Nadin, V. (promotor); Read, S.A. (promotor); Delft University of Technology (degree granting institution)","2018","This dissertation builds upon research that considers how cities operate as Complex Adaptive Systems (CAS). It focuses on how certain characteristics of urban form can support an urban environment's capacity to self-organize, enabling emergent features to appear that, while unplanned, remain highly functional. The main thrust of the work is to unpack how elements of the urban fabric might be considered as elements of a complex system and then identify how one might design these elements in a more deliberate manner, such that they hold a greater embedded capacity to respond to changing urban forces. The research is predicated on the notion that, while such responses are both imbricated with, and stewarded by human actors, the specificities of the material characteristics themselves matter. Some forms of material environments hold greater intrinsic physical capacities (or affordances) to enact the kinds of dynamic processes observed in complex systems than others (and can, therefore, be designed with generating these affordances in mind). The Ph.D.'s primary research question is thus:
What physical and morphological conditions need to be in place within an urban environment in order for Complex Adaptive Systems dynamics to have an opportunity to arise - such that the physical components (or ‘building blocks') of the urban environment have an enhanced capacity to discover functional configurations in space and time as a response to unfolding contextual conditions?
The dissertation is based on a compilation of articles that have, for the most part, been published in academic journals.
More specifically, this thesis addresses the lateral small-strain soil response towards rigidly behaving piles that typically have a relatively low ratio of embedded length L to diameter D: L/D < 7. It is the small-strain regime that governs the overall dynamic properties of the offshore wind turbine (OWT), which in turn define the accumulation of steel fatigue damage - most often the main design driver in dimensioning the support structure (foundation and tower). The work aims to improve both the currently applied in-situ characterisation of the soil properties and the design model used for simulating the complex soil-structure interaction (SSI) of monopile (MP) foundations. For capturing the in-situ small-strain soil properties, it is suggested to add seismic measurements to the standard site characterisation scope. The currently applied geotechnical Cone Penetration Test measures the very local, large-strain strength parameters, whereas the output of a geophysical method like the Seismic Cone Penetration Test reflects the more global, small-strain stiffness properties of the soil. Regarding the design model, it is suggested to benefit from the accuracy of a 3D model, as it automatically captures the various soil reaction mechanisms that dominate the SSI of rigidly behaving piles. The soil in interaction with the small pile displacements of the fatigue-limit-state load case can be idealised to behave as a linear elastic material. The basic soil stiffness parameters captured by the seismic measurements can be directly used to fully characterize a linear elastic continuum of a 3D model. This physics-based approach, which first identifies the stiffness of the soil and subsequently that of the soil-pile system, is a more versatile and accurate method than the most often applied semi-empirical p-y curve method.
The latter method employs the depth-dependent modulus of horizontal subgrade reaction k(z) to quantify a particular soil-pile initial lateral stiffness, to be used in a 1D Winkler foundation model. The Winkler model is the all-time favourite engineering model due to its simplicity and intuitive representation of the main physics involved in the SSI, and the subgrade modulus is a very useful SSI parameter. However, k(z) is an empirical tuning parameter, depending not only on the properties of the (stratified) soil, but also on those of the pile. As the currently used p-y curves were calibrated on small-diameter, flexible piles, they are not representative of the soil reactions to short, rigidly behaving MP foundations. In only assuming a lateral, uncoupled soil reaction - being the dominant restoring force for flexible piles, and hence the assumption in the p-y curve method - one underestimates the complete restoring reaction of the soil, which is induced by additional, more complex soil mechanisms. To become truly useful for design, the 3D model should not only serve as a design check; its accuracy should be directly integrated into the design models. Similar to various other engineering design procedures, the thousands of load simulations required in the design of offshore wind support structures make the 3D model computationally too expensive to replace the simple, 1D design model. To employ the speed and simplicity of the 1D model with the accuracy of the 3D model, the current thesis presents - as its main contribution - two methods to obtain a 1D effective model that mimics the 3D modelled response.
The first, `local' method establishes an effective 1D stiffness profile keff(z) by optimising the profile of the uncoupled (local) lateral springs that renders the response of the 1D Winkler model of a rigid pile in stratified soil the same as the static response of the 3D model in terms of displacement, slope, rotation and curvature along the full embedded length of the pile. Accurate matches can be obtained for quite a broad range of pile geometries and soil (stiffness) profiles; however, this local method seems to perform worse for piles with L/D < 4.5 and for softer and/or very irregular soil stiffness profiles. The same methodology was found to also generate an effective damping profile ceff(z) to additionally mimic the energy dissipation in the SSI - provided that a previously found static stiffness profile keff(z) accurately captures the static response. In the second, `non-local' method, effective 1D global stiffness kernels are computed which fully capture the coupled 3D reactions of the stratified soil to the pile, for both the static and the low-frequency dynamic SSI. With the use of the stiffness kernels for the lateral and rotational degrees of freedom, the need to search for various separate 1D stiffness elements, like distributed lateral and rotational springs along the pile or similar discrete springs at the pile tip, becomes obsolete; such mechanisms are all automatically incorporated in the non-local stiffness kernels. The non-local method was shown to be very versatile, irrespective of pile geometry and soil stiffness profile, providing accurate matches of the 3D simulated response of the embedded pile. Finally, for increased confidence, methods and models should be validated - preferably by measuring the response of a realistic and representative version of the structure of interest.
As no measurements of the dynamic response of a large-scale MP foundation were reported in literature, an extensive measurement campaign was designed and executed on a `real' MP foundation of a near-shore wind farm. The setup involved a large number of sensors on the pile and in the adjacent soil, distributed over the full length of the pile, applying a steady-state excitation with a custom-made hydraulic shaker. Because the structure was a stand-alone pile, excluding dynamic disturbance from the to-be-installed superstructure of tower and turbine, and the test comprised a controlled (known) loading, this campaign was shown to yield a much lower uncertainty regarding the soil response than the commonly applied monitoring of the operational full OWT structure. Together with the inclusion of realistic saturated, nonhomogeneous sandy soil conditions and installation effects, a `first-off' opportunity was created to validate a model for the lateral, dynamic response of rigidly behaving monopiles. In the presented analyses of the measured response, the predicted effective stiffness was employed as an initial guess in a model-based identification of the stiffness, damping and fundamental frequency of the soil-pile system. It was shown that the proposed design procedure yields a seven times higher accuracy in predicting the in-situ initial stiffness than the best-estimate p-y curve model. Furthermore, two adaptations of the 1D model were employed to investigate the presence of soil-added mass effects in the higher-frequency response of the system. Finally, the stiffness and damping of the pile-only system were related to those observed for the full OWT system, and the assumption of linear elastic soil response was validated using the observed pile response. An initial estimation of the possible benefit of the developed stiffness method showed an 8% saving potential for the primary steel (shell) mass of the complete support structure (MP, transition piece and tower).
This exercise was performed for a contemporary soil-pile case, for which (only) the fatigue-driven wall thickness was optimized and compared to the thickness needed when applying the conventional (softer) p-y curve profile. As the costs of MP support structures typically constitute more than 20% of the total capital cost of an offshore wind farm, the presented and validated work is foreseen to have a significant beneficial impact on the feasibility of future offshore wind projects.","Soil-structure interaction; Rigid monopiles; Small-strain soil reaction; In-situ seismic soil characterisation; Fundamental natural frequency of offshore wind turbine; Soil damping; 3D to 1D modelling translation; 1D effective stiffness; In-situ shaker validation measurements; Non-local (dynamic) stiffness","en","doctoral thesis","","978-94-6233-989-7","","","","","","","","","Offshore Engineering","","",""
"uuid:cb24d1b1-0b9f-4966-8cf4-8a9a5ee70146","http://resolver.tudelft.nl/uuid:cb24d1b1-0b9f-4966-8cf4-8a9a5ee70146","On Stability Enhancement in AC/DC Power Systems through Multi-terminal HVDC Controllers","Kotb, O. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Delft University of Technology (degree granting institution); Comillas Pontifical University (degree granting institution); KTH Royal Institute of Technology (degree granting institution)","2018","Due to the increasing share of renewable energy sources in modern power systems and electricity market deregulation, heavy inter-regional and cross-border power flows are becoming a commonplace in system operation. Moreover, largescale integration of renewable energy sources is expected to pace up, therefore new solutions have to be developed to integrate these intermittent sources, which are also characterized by being distributed over large geographical areas, such as offshore wind farms. Multi-Terminal High Voltage Direct Current (MTDC) networks are expected to form a solution for the integration of renewable energy sources to the existing interconnected AC grid. The type of converters used in the MTDC networks is however a subject of debate, as both Line Commutated Converters (LCCs) and Voltage Source Converters (VSCs) can be used. Moreover, the coordinated control of the MTDC networks with the AC system poses a challenge to the system operators, as it requires the consideration of both AC and DC system dynamics.
In response to these challenges, this thesis aims to discuss the following aspects of the MTDC networks: control of a hybrid MTDC with both LCCs and VSCs, as well as the utilization of an embedded VSC-MTDC for stability enhancement. The thesis also investigates the supply of passive AC systems using a hybrid MTDC network.
In the investigation of an AC/DC power system with a hybrid MTDC network, first, the combined AC/DC system is modeled. Next, a Small Signal Stability Analysis (SSSA) of the system is conducted, based on which Power Oscillation Damping (POD) controllers are designed to enhance stability in the connected AC systems.
In the utilization of an embedded VSC-MTDC network for stability enhancement in the AC/DC system, the operating point adjustment strategy is investigated, which is implemented through the adjustment of setpoints for the active and reactive power controllers in the network converters. Finally, the design and placement of a Multi-Input Single Output (MISO) controller is investigated, where the control strategy is based on Modal Linear Quadratic Gaussian (MLQG) control using Wide Area Measurement Systems (WAMS) signals.","AC/DC power system; hybrid MTDC; multi-terminal HVDC; small signal stability; VSC-MTDC","en","doctoral thesis","","978-91-7729-726-0","","","","","","","","","Energie and Industrie","","",""
"uuid:ae68c987-acce-4133-a754-02a0f2ad2aac","http://resolver.tudelft.nl/uuid:ae68c987-acce-4133-a754-02a0f2ad2aac","Transformation in Composition: Ecdysis of Landscape Architecture through the Brownfield Park Project","van der Velde, J.R.T. (TU Delft Landscape Architecture)","Sijmons, D.F. (promotor); de Jong, E.A. (promotor); Delft University of Technology (degree granting institution)","2018","This study enlarges on the notion of composition in landscape architecture. It builds upon the ‘Delft method’, which elaborates composition as a methodological framework from its sister discipline architecture. At the same time takes a critical stance in respect to this framework, informed by recent epistemological developments in landscape architecture such as the site-specificity and process discourses. The notion of composition is examined from a historical and theoretical perspective, before turning to an examination of the brownfield park project realised in the period 1975-2015. These projects emerge as an important laboratory and catalyst for developments in landscape architecture, whereby contextual, process, and formal-aesthetic aspects emerge as central themes. The thesis of this research is that a major theoretical and methodological expansion of the notion of composition can be distilled from the brownfield park project, in which seemingly irreconcilable paradigms such as site and process are incorporated.
By extension, the study elaborates on the disciplinary specificity of landscape architecture as distinct from its sister disciplines architecture and urbanism, proposing a ‘radical maturation’ of the foundations of the discipline in the period 1975-2015, via the brownfield park project. A metaphor for this process is offered by the phenomenon of ecdysis in arthropods (such as the blue swimmer crab), whereby the growth from juvenile to adult takes place in stages involving the moulting of an inelastic exoskeleton. Once shed, a larger exoskeleton is formed, whose shape and character are significantly different from those of its forebear. The research sketches the contours of a similar ‘disciplinary ecdysis’ in the period 1975-2015, whereby an evolution of design-as-composition praxis in landscape architecture takes place.
In the slipstream of these findings, the research sheds new light on the shifts in the form and content of the city itself in this period, and the agency of the urban park in the problematique of the contemporary urban realm. In the cases studied, the park typology has been able to address problems that much of the traditional apparatus of spatial planning and design has failed to address. By extension, the study reveals that many of the paradigms of urban planning and design are in need of major review in the context of deindustrialization. The urban park typology – in its guise as the brownfield park – also appears able to shape and qualify larger urban regions. As such, the research highlights the rise of brownfield lands and their impact on the fabric of the city, the life of their inhabitants and the paradigms that dominate urban cultures, in turn fundamentally revising the definitions and agencies of notions such as city, nature and landscape.
First, we defined four basic design components that constitute a ‘gameful’ experience (i.e. feeling as if playing a game): goals, rules, objects, and freedom. Next, we explored the application of game elements in two lab experiments and two field experiments. In the lab, we developed a multiplayer computer game to examine the effect of different rules on interdependent behavior, and we developed a physical game with coins to investigate the effect of different rule-sets on output in group-brainstorm meetings. In the field, we implemented and investigated the effect of gamified interventions to improve the cohesion within the operating teams of a strip-galvanizing factory, and at a consultancy firm we developed and tested a game with coins to change the attitude of participants in ‘red team’ meetings.
The results of these studies showed that in teamwork, game elements seem mainly valuable for raising attention and changing goal-driven behaviors and experiences. In order to design and research a gamified intervention that positively influences teamwork, it is important to consider: 1) the above-mentioned four basic design components and 2) to what extent they pervade the emotions, attention, and behavior of team members.
In this doctoral dissertation the central research question is: How do residents with different ways of life perceive and assess their changed and changing neighbourhood? The study in the post-war residential district Zuidwijk in Rotterdam focuses on the change as the result of autonomous moving processes in the existing social rental sector (the influx) as well as the result of demolition and new housing construction, often for owner-occupancy (the intervention).
In general the residents of Zuidwijk are positive about their dwelling and immediate vicinity. They have experienced their move to Zuidwijk as a step up in their housing career. All residents value the green character and quiet setting of Zuidwijk, but are critical about the (shooting) incidents that happened in the past. The perception of residents from all clusters, be they Dutch natives or not, is that the influx in the existing social rental dwellings mainly, some even say totally, consists of households with a migrant background. The influx of mainly allochthonous households is seen as a problem by all residents. Dutch-native residents emphasize the negative effects on liveability and neighbourhood reputation and regret the loss of decorum and respectability. The allochthonous households emphasize the negative effects on integration and want to live in a mixed neighbourhood with Dutch native residents.
In general the residents are positive about the impact of the social mix strategy: demolition and new housing development. Exceptions are found with the households of older allochthonous residents and allochthonous single-parent families. People in both groups have a low income and state that the new-built dwellings are not meant for them. Ethnic diversity is not seen as a problem in the population composition of the new-built houses for owner-occupiers. These residents have to work to be able to buy a dwelling and by doing so they ‘prove’ to be decent and respectable.
Mixing makes a difference: it is important that the influx in the social rental dwellings does not consist solely of households with a migration background and a low income. Throughout the history of restructuring in Zuidwijk, the dominant narrative has been that the newcomers (meaning households with a migration background) did not have ties with the neighbourhood and only came there to obtain a cheap rental dwelling. This study disproves this narrative: a large number of them came to live in Zuidwijk as youngsters and grew up there. They identify very strongly with Zuidwijk and, after a number of moves with their families, have rented or purchased a dwelling of their own.
Ethnic diversity will become more normal as more and more allochthonous residents grow up in the neighbourhood. The growing presence of allochthonous middle-class households in the new-built houses may reinforce that. On the other hand, the tensions and problems in the neighbourhood are heightened by the polarized national political debate about integration. The municipality and housing association have an important role in responding adequately to signals and complaints of residents and in supporting mutual contact between residents, but the austerity of the housing association and municipality in recent years has decreased the possibilities in this regard.
In this thesis, a comprehensive study has been carried out using analytical methods, numerical methods and experiments to gain a better understanding of these structures. A wide range of porous structures with several topological designs and material types were considered. The quasi-static and fatigue behaviour of those AM porous biomaterials were determined experimentally and were compared with analytical solutions and computational (i.e. finite element modelling) results. The quasi-static mechanical properties and fatigue S-N curves were analyzed both in absolute and in normalized terms. In the case of quasi-static mechanical properties, normalization was performed with respect to the mechanical properties of the bulk (i.e. matrix) material from which the porous biomaterials were made, while stress levels in the S-N curves were normalized with respect to the yield or plateau stress of the porous biomaterial. The results of this thesis show that AM porous metallic biomaterials can mimic several aspects of bone tissue properties and are therefore promising candidates for bone-substituting implants.","Meta-biomaterials; bone substitutes; orthopedics; Porous structures; Additive Manufacturing; Selective laser melting; Biomedical","en","doctoral thesis","","","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:059e2707-f6f5-4363-9f16-2a9d656b1b1f","http://resolver.tudelft.nl/uuid:059e2707-f6f5-4363-9f16-2a9d656b1b1f","Determinants of energy subsidies and their impact on technological change of energy use","Diaz Arias, A.M. (TU Delft Economics of Technology and Innovation)","van Beers, Cees (promotor); Delft University of Technology (degree granting institution)","2018","This thesis investigates consumer energy subsidies, their persistence due to institutional and political barriers and how they affect the technological advancement of renewable energy. Results from an econometric analysis of a panel data from 194 countries during the period 1990-2012 show that the ability of citizens to be heard by their government and the supply of untargeted public goods correlate with lower fossil fuel subsidies. More subsidies to fossil fuels are provided in countries with higher levels of power concentration and larger income inequality. A meta-analysis of the existing empirical literature on the effects of energy policy on wind energy innovation shows that policies seeking to make innovation more profitable i.e. demand-pull policies, have had a larger impact than policies oriented to reduce the cost of innovation i.e. supply-push policies. The results from the investigation of the effect of the price structure of electricity on renewable energy innovation show that reducing government support to large energy users provides a clear incentive to increase inventions in renewable energy technologies, particularly in solar power, which does not experience lock-in barriers due to grid inflexibility. Political and institutional forces maintain energy subsidies in place that not only have huge economic and environmental costs but also affect negatively the renewable energy innovation that governments want to stimulate.","","en","doctoral thesis","","978-94-6295-937-8","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:630ce39a-76d8-49e5-bf5e-aec15fde79b3","http://resolver.tudelft.nl/uuid:630ce39a-76d8-49e5-bf5e-aec15fde79b3","On domain-adaptive machine learning","Kouw, W.M. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Loog, M. (copromotor); Delft University of Technology (degree granting institution)","2018","Artificial intelligence, and in particular machine learning, is concerned with teaching computer systems to perform tasks. Tasks such as autonomous driving, recognizing tumors in medical images, or detecting suspicious packages in airports. Such systems learn by observing examples, i.e. data, and forming a mathematical description of what types of variations occur, i.e. a statistical model. For new input, the system computes the most likely output and makes a decision accordingly. As a scientific field, it is situated between statistics and and algorithmics. As a technology, it has become a very powerful tool due to the massive amounts of data being collected and the drop in the cost of computation.
However, obtaining enough data is still very difficult. There are often substantial financial, operational or ethical considerations in collecting data. The majority of research in machine learning deals with constraints on the amount, the labeling and the types of data that are available. One such constraint is that it is only possible to collect labeled data from one population, or domain, but the goal is to make decisions for another domain. It is unclear under which conditions this will be possible, which inspires the research question of this thesis: when and how can a classification algorithm generalize from a source domain to a target domain?
My research has looked at different approaches to domain adaptation. Firstly, we have asked some critical questions on whether the standard approaches to model validation still hold in the context of different domains. As a result, we have proposed a means to reduce uncertainty in the validation risk estimator, but that does not solve the problem completely. Secondly, we modeled the transfer from source to target domain using parametric families of distributions, which works well in simple contexts such as feature dropout at test time. Thirdly, we looked at a more practical problem: tissue classifiers trained on data from one MRI scanner degrade when applied to data from another scanner due to acquisition-based variations. We tackled this problem by learning a representation for which detrimental variations are minimized while maintaining tissue contrast. Finally, considering that many approaches fail in practice because their assumptions are not met, we designed a parameter estimator that never performs worse than the naive non-adaptive classifier.
Overall, research into domain-adaptive machine learning is still in its infancy, with many interesting challenges ahead. I hope that this work contributes to a better understanding of the problem and will inspire more researchers to tackle it.","Machine learning; Domain adaptation; Pattern recognition; Classification; Intelligent systems; Artificial intelligence; Computer science","en","doctoral thesis","","978-94-028-1048-6","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:fdbced85-e7bf-40a9-984b-4529b19cec4a","http://resolver.tudelft.nl/uuid:fdbced85-e7bf-40a9-984b-4529b19cec4a","Modelling and analysis of fine sediment transport in wave-current bottom boundary layer","Zuo, L. (TU Delft Coastal Engineering)","Roelvink, D. (promotor); Lu, Y.J. (promotor); Delft University of Technology (degree granting institution)","2018","The evolution and utilization of estuarine and coastal regions are greatly restricted by sediment problems. This thesis aims to better understand fine sediment transport under combined action of waves and currents, especially in the wave-current bottom boundary layer (BBL). Field observations, experimental data analysis, theoretical analysis and numerical models are employed. Silt-dominated sediments are sensitive to flow dynamics and the suspended sediment concentration (SSC) increase rapidly under strong flow dynamics. This research unveils several fundamental aspects of silty sediment, i.e., the criterion of the incipient motion, the SSC profiles and their time-averaged parameterization in wave-dominated conditions. An expression for sediment incipient motion is proposed for silt-sand sediment under combined wave and current conditions. A process based intra-wave 1DV model for flow-sediment dynamics near the bed is developed in combined wave-current conditions. The high concentration layer (HCL) was simulated and sensitivity analysis was carried out by the 1DV model on factors that impact the SSC in the HCL.Finally, based on the 1DV model, the formulations of the mean SSC profile of silt-sand sediments in wave conditions were proposed. 
The developed approaches are expected to find application in engineering practice and further simulation studies.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-138-33468-7","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of IHE Delft Institute for Water Education.","","","","","Coastal Engineering","","",""
"uuid:68de92fd-0185-4e08-b911-253358708a9c","http://resolver.tudelft.nl/uuid:68de92fd-0185-4e08-b911-253358708a9c","Finite-dimensional approximation and control of shear flows","Tol, H.J. (TU Delft Control & Simulation)","Scarano, F. (promotor); de Visser, C.C. (copromotor); Kotsonis, M. (copromotor); Delft University of Technology (degree granting institution)","2018","Dynamical systems theory can significantly contribute to the understanding and control of fluid flows. Fluid dynamical systems are governed by the Navier-Stokes equations, which are continuous in both time and space, resulting in a state space of infinite dimension. To incorporate tools from systems theory it has become common practise to approximate the infinite-dimensional system by a finite-dimensional lumped system. Current techniques for this reduction step are data driven and produce models which are sensitive to the simulation or experimental conditions. This dissertation proposes a rigorous and practical methodology for the derivation of accurate finite-dimensional approximations and output feedback controllers directly from the governing equations. The approach combines state-space discretisation of the linearised Navier-Stokes equations with balanced truncation to design experimentally feasible low-order controllers. The approximation techniques can be used to design any suitable linear controller. In this study the reduced-order controllers are designed within an H2 optimal control framework to account for external disturbances and measurement noise. Application is focused on control of laminar wall-bounded shear flows to delay the classical transition process initially governed by two-dimensional convective perturbations, to extend laminar flow and reduce skin friction drag. 
The controllers are successfully tested in the vertical wind tunnel at the TU Delft.","Flow instability and control","en","doctoral thesis","","978-94-6186-926-5","","","","","","","","","Control & Simulation","","",""
"uuid:99887eda-5264-4564-888e-dcaf2dbae356","http://resolver.tudelft.nl/uuid:99887eda-5264-4564-888e-dcaf2dbae356","Applications of spectroscopy with multiwavelength sources","Hänsel, A. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); Bhattacharya, N. (copromotor); Delft University of Technology (degree granting institution)","2018","Spectroscopy is a powerful tool to investigate the physical properties of complex systems. The interaction of light with matter allows to get insights into the structure of it. Chapter 1 is dedicated to introduce this topic and to show the developments of the technologies that paved the way to its success. Special focus is given to the techniques that are used in this work. This includes monolithically integrated tunable laser sources, as well as integrated mode locked lasers. In Chapter 2 we guide through the design process of single mode laser source using the generic approach and exploiting the availabilty of multi-project wafers. The design of a Fabry-Perot laser along with its benefits, drawbacks and the underlying physical concepts will be demonstrated. This requires theoretical background in solid state physics; the necessary basics are given in the text. Chapter 3 makes use of this background and expand the design to ring lasers. Chapter 3 also illustrates characterisation techniques for such laser sources. The presented device is investigated regarding its capabilities for gas spectroscopy. To reach different absorption lines that enable spectroscopy for different gas species, the laser design has been adapted for longerwavelengths. In Chapter 4we will showthat despite the reduced performance due to the lower technological status, gas spectroscopy can still be feasible with such devices. Besides the spectroscopical applications photonic integrated circuits can find use in the field of distance metrology. 
Chapter 5 demonstrates a setup that verified the feasibility of a mode-locked laser in combination with a VIPA spectrometer to obtain metrological data from a single camera image. This chapter also concludes the investigation of monolithically integrated laser sources.
In addition to on-chip lasers, this work investigates fiber-based frequency comb lasers. With a much lower repetition frequency in comparison to integrated pulsed lasers, the corresponding mode spacing in the frequency domain places different requirements on the spectrometer. On the other hand, the denser and yet wider spectral coverage allows for spectroscopy over a wider range of absorption lines. Chapter 6 introduces frequency comb lasers and the virtually imaged phased-array (VIPA) spectrometer. The combination of both is used to determine the temperature of CO2 by looking at its absorption behaviour. Similar measurements have been executed in ambient air and are summarised in Chapter 7. Due to the low concentration of CO2 in ambient air, this required a very long path length. In Chapter 8 we demonstrate an optimised setup to increase the stability of the method introduced in Chapter 6. The improved setup is more stable with respect to ambient fluctuations and is portable, which allows measurements outside of laboratory conditions. The final chapter, Chapter 9, summarises the results of all the presented experiments and discusses the impact they can have on future devices making use of the presented methods.","Integrated Optics; Frequency Comb; Spectroscopy; Virtually Imaged Phased Array","en","doctoral thesis","","978-94-028-1084-4","","","","","","","","","ImPhys/Optics","","",""
"uuid:61465ddb-e02e-48a6-969e-5c5c90319d67","http://resolver.tudelft.nl/uuid:61465ddb-e02e-48a6-969e-5c5c90319d67","Spark-Discharge as a Nanoparticle Source to Study Size-Dependent Plasmonic Properties for Photo-electrochemical Water Splitting","Valenti, M. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Schmidt-Ott, A. (promotor); Smith, W.A. (promotor); Biskos, G. (copromotor); Delft University of Technology (degree granting institution)","2018","This work exploits the ability of the spark discharge particle generator (SDG) to produce metallic nanoparticles (NPs) with control over the size, shape and composition, to unravel the plasmonic mechanisms by which NPs can enhance the photoelectrochemical performance of semiconductor photoanodes. Chapter 1 gives an overview of the SDG and the aerosol technology used in this thesis to synthesize the NPs. Chapter 2 summarizes the different aerosol NP immobilization techniques (both on solids and in liquids) and introduces for the first time an electrospray technique to efficiently capture neutral NPs in liquids. In chapter 3, an extensive literature review on plasmonic photoelectrocatalysis is given to introduce the plasmonic mechanisms that are experimentally studied in Chapter 4, 5, 6 and 7. Chapter 4 and 5 are dedicated to study the hot electron injection (HEI) mechanism by which plasmonic NPs create light-induced “hot” charge carriers upon illumination that can drive photoelectrochemical reactions. Chapter 4 reveals that alloying Ag NPs with Au can be used to shift in a control way the absorption and utilization of light to longer wavelengths. However, due to the low interband energy of Au (i.e., 2.3 eV) compared to that of Ag (i.e., 3.6 eV), the alloy NPs exhibited more interband excitations when illuminated with visible light than pure Ag NPs. Such increase in interband excitations resulted in lower hot electron energies and HEI efficiencies in the alloy NPs than in pure Ag NPs. 
Chapter 5 reveals the HEI size dependency of Ag NPs. It is found that smaller NPs (< 10 nm), where surface-induced excitations are prominent, result in higher HEI efficiencies, while for larger light-absorbing NPs (in the range 10-25 nm) a maximum in performance is found that corresponds well with the size of the Ag NP with the largest near-field enhancement. Chapter 6 studies the ability of Ag NPs to concentrate and scatter light into thin film semiconductors to enhance their absorption. It is found that most of the solar energy absorbed by pure 15 nm Ag NPs is lost through heat dissipation. However, larger NPs preferentially scatter the incoming light to the neighbouring semiconductor, improving its absorption above the band gap energy. Finally, two configurations of plasmonic NP/semiconductor composites were studied to enhance the semiconductor absorption. In the first configuration the NPs were placed at the semiconductor-electrolyte interface and in the second configuration, the NPs were embedded in the semiconductor at the back-contact/semiconductor interface. It was found that an absorption enhancement at the semiconductor/electrolyte interface was better utilized due to the ability of the surface charge layer to efficiently separate the extra electron-hole pairs induced by the plasmonic NPs.","","en","doctoral thesis","","978-94-6332-370-3","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:df38f455-4fe0-4674-9801-f66f3628693d","http://resolver.tudelft.nl/uuid:df38f455-4fe0-4674-9801-f66f3628693d","Synthesis, Characterization and Properties of Aromatic PDMS Multiblock Copolymers","Xu, H. (TU Delft Novel Aerospace Materials)","Dingemans, T.J. (promotor); Delft University of Technology (degree granting institution)","2018","The main objective of the research described in this thesis is to explore the design, synthesis and (thermo)mechanical properties of a new family thermoplastic high-performance elastomeric (AB)n multiblock copolymers. The backbone is based on bismaleimide-functionalized all-aromatic liquid crystalline (LC) or amorphous (AM) precursors coupled with dithiol terminated PDMS oligomers. Thiol-ene click chemistry was used to prepare high molecular weight multiblock copolymers in high yield. The chemistry, phase behavior, and (thermo)mechanical behaviour will be described in detail. In Chapter 2, the synthetic details of the bismaleimide end-functionalized oligomers are described. Both LC and AM reactive precursors were synthesized using a standard solution polycondensation procedure, with target molecular weights (Mn) of 1, 5 and 9 kg·mol-1. All soluble samples showed unimodal molecular weight distributions and PDIs of ~2, which is consistent with step-growth polymerization. 1H NMR shows that the maleimide end-groups remain intact during synthesis, enabling further functionalisation of these oligomers towards multiblock copolymers. The temperature dependent properties and cure behavior of the LC- and AM- bismaleimide terminated oligomers are presented in Chapter 3. The two oligomer series show different behaviour with respect to their crosslinking chemistry, phase behavior and (thermo)mechanical properties. The cured thermosets show good thermal stabilities with Td5% > 390 °C and DSC, POM and XRD results confirm the liquid crystalline and amorphous nature of the different oligomers. 
The uncured LC oligomers and reference polymer show Tgs of 136 – 157 ˚C, whereas the Tgs of the AM series are in the 130 – 134 ˚C range. After cure, the Tgs of the oligomers increased to 140 – 190 ˚C, depending on the concentration of reactive end-groups. Rheology and gel fraction tests show that the two series of oligomers with Mn of 1 and 5 kg·mol-1 are highly crosslinked, whereas those with an Mn of 9 kg·mol-1 are only partly crosslinked or mostly chain extended. The cured AM oligomer films show good mechanical properties with high tensile strengths (> 90 MPa), elastic moduli (~2 GPa), elongation at break (~10%) and toughness (~8 MJ·m-3). In Chapter 4, the synthesis and molecular weight characterization of the multiblock copolymers based on dithiol-terminated PDMS and bismaleimide-functionalized oligomers (LC and AM) are described. All thiol-terminated PDMS oligomers (Mn = 1, 5 and 10 kg·mol-1) could be successfully copolymerized with either LC- or AM-oligomers (Mn = 5 kg·mol-1), via thiol-ene click chemistry. 1H NMR confirmed that the multiblock copolymers exhibited high molecular weights, in the range of 22 – 58 kg·mol-1. The molecular composition, as calculated from 1H NMR experiments, is consistent with the theoretical values. The multiblock copolymers from Chapter 4 are characterized in terms of thermal stability, phase behavior, morphology and (thermo)mechanical properties, and are discussed in Chapter 5. DSC and DMTA experiments show that the multiblock copolymers with PDMS segments with Mn of 5K and 10K exhibit two glass transitions, indicating (micro)phase separation due to the incompatibility of the aromatic ester units and PDMS units. In tensile tests, the AM5K-b-PDMS multiblock copolymers show superior mechanical properties over their LC5K-b-PDMS analogs. The AM5K-b-PDMS1K film shows an outstanding tensile strength of ~125 MPa, an elastic modulus of 3.4 GPa and an elongation at break higher than 30%.
In Chapter 6, the (AB)n-multiblock copolymers based on all-aromatic polyester/PDMS as discussed in Chapters 4 and 5 were investigated as dual- and triple-shape memory polymers. The AM5K-b-PDMS1K film shows a high Rf (100%) and Rr (>97%) in terms of the dual SME, while the LC5K-based analogue exhibits moderate shape memory performance with an Rf of 97% and Rr > 80%. In the triple SME test, the AM5K-b-PDMS5K film shows a high Rf (>95%) and a high Rr (>96%), while the LC5K-based analogue exhibits a slightly lower Rf of > 91% and Rr > 80%. In conclusion, we have demonstrated that thermoplastic (AB)n-multiblock copolymers can be prepared from all-aromatic oligomers and thiol-terminated PDMS oligomers via thiol-ene click chemistry. The best performing multiblock copolymer is AM5K-b-PDMS1K, which exhibits outstanding tensile strength (~125 MPa), elastic modulus (3.4 GPa) and elongation at break (>30%). These values surpass the mechanical test results of commercially available high-performance polymers such as PEKK, PPS and PEI.","","en","doctoral thesis","","978-94-6186-908-1","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:8acb9b48-bf77-45b2-a0d6-1cf6658f749e","http://resolver.tudelft.nl/uuid:8acb9b48-bf77-45b2-a0d6-1cf6658f749e","Numerical Modelling of Wheel-rail Dynamic Interactions with an Explicit Finite Element Method","Yang, Z. (TU Delft Railway Engineering)","Dollevoet, R.P.B.J. (promotor); Li, Z. (promotor); Delft University of Technology (degree granting institution)","2018","The modelling of wheel-rail dynamic interactions is crucial for accurately predicting wheel/track deterioration and dynamic behaviour. A reliable wheel-rail dynamic interaction model requires a careful treatment of wheel-rail frictional rolling contact and a proper consideration of dynamic effects related to the contact. Since the wheel-rail interaction due to the frictional rolling contact significantly influences the vehicle dynamics and stability, and the dynamic effects involved in wheel-rail interactions can be increased by wheel rail highspeed rolling, a systematic study of wheel-rail dynamic interactions is highly desired within the context of booming high-speed railways.","","en","doctoral thesis","","978-94-6366-048-8","","","","","","","","","Railway Engineering","","",""
"uuid:d8cffad9-efaf-400b-8e36-5d8eb8becc86","http://resolver.tudelft.nl/uuid:d8cffad9-efaf-400b-8e36-5d8eb8becc86","Out of the lab, onto the court: Wheelchair Mobility Performance quantified","van der Slikke, R.M.A. (TU Delft Biomechatronics & Human-Machine Control)","Veeger, H.E.J. (promotor); Bregman, D.J.J. (copromotor); Delft University of Technology (degree granting institution)","2018","Performance in wheelchair court sports is to a large extent determined by the wheelchair mobility performance, the performance measure for the wheelchair-athlete combination. So far, wheelchair mobility performance is mostly utilized as concept, rather than a well quantified measure. However, in order to gain insight in the interaction between athlete, wheelchair and sport, it should be an objective and well quantified outcome that is easily measured. An inertial sensor-based “Wheelchair Mobility Performance Monitor” (WMPM) was developed that met the demands of objective quantification of mobility performance in an easy to use manner. This WMPM is believed to be a valuable tool for wheelchair court sports practice and research. All research done with the WMPM showcases its opportunities and commenced the unravelling of the complex interactions between athlete, wheelchair and sport. It will be a matter of time before the use of the WMPM will be common practice in wheelchair sports and sports research.","Wheelchair basketball; Wheelchair Sports; Inertial Measurement Unit; Paralympic sports; mobility performance","en","doctoral thesis","","978-94-6233-958-3","","","","Dr. M.A.M. Berger (The Hague University of Applied Sciences) has, as supervisor, contributed significantly to the preparation of this dissertation.","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:e566b544-b37b-426f-a5fb-af341fca78ec","http://resolver.tudelft.nl/uuid:e566b544-b37b-426f-a5fb-af341fca78ec","Carrier multiplication and cooling in semiconductor quantum dots","Spoor, F.C.M. (TU Delft ChemE/Opto-electronic Materials)","Siebbeles, L.D.A. (promotor); Houtepen, A.J. (promotor); Delft University of Technology (degree granting institution)","2018","In semiconductor quantum dots (QDs), charge carrier cooling is in direct competition with carrier multiplication (CM), a process in which one absorbed photon excites two or more electrons that may improve the light conversion efficiency of photovoltaic devices. CM by an initially hot charge carrier occurs in competition with cooling, with the respective rates determining the CM efficiency. Until now, the factors that determine the onset energy and efficiency of CM have not been convincingly explained. Most research on cooling involves low photoexcitation energies close to the band gap, while the competition between CM and cooling takes place at higher energies where an electron or hole has an excess energy that is at least equal to the band gap. Moreover, CM rates have only been calculated theoretically, while experimental studies of CM have focused mostly on proving its occurrence in various materials. Understanding charge carrier cooling at high excess energy and comparing this to experimental CM rates is therefore of great interest. Chapters 2 and 3 of this thesis are aimed at understanding charge carrier cooling, while Chapters 4 and 5 relate this to the onset energy and efficiency of CM. The presented results are a large step forward in understanding cooling and CM and allow for a screening of materials with an onset of CM close to twice the band gap energy. 
Such materials are of great interest for development of highly efficient photovoltaic devices.","carrier multiplication; carrier dynamics; quantum dot; transient absorption spectroscopy","en","doctoral thesis","","978-94-92679-42-0","","","","","","","","","ChemE/Opto-electronic Materials","","",""
"uuid:2c9d2734-4189-4573-a32c-110beed8f45b","http://resolver.tudelft.nl/uuid:2c9d2734-4189-4573-a32c-110beed8f45b","Multimodal Transportation Simulation for Emergencies using the Link Transmission Model","van der Gun, J.P.T. (TU Delft Transport and Planning)","van Arem, B. (promotor); Pel, A.J. (promotor); Delft University of Technology (degree granting institution)","2018","Emergencies disrupting urban transportation systems cause management problems for authorities. This thesis develops simulation methods that permit analysis thereof and evaluation of candidate management plans, tested in three case studies. It formulates a methodological framework using agent-based choice models and multimodal macroscopic dynamic network loading models, and develops extensions of the Link Transmission Model to deal with more complex and variable fundamental diagrams and initially non-empty roads.","urban emergencies; evacuation modelling; choice modelling; activity-based modelling; dynamic network loading; multimodal networks; agent-based modelling; macroscopic traffic model; en-route choice; Link Transmission Model; Lighthill-Whitham-Richards theory; first-order model; capacity drop; node model; stop-and-go wave; Smulders fundamental diagram; traffic control; environmental conditions; bus bridging; network disruptions","en","doctoral thesis","TRAIL Research School","978-90-5584-235-3","","","","TRAIL Thesis Series no. T2018/3, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:d999bae1-8649-4e5d-8925-6fd999d9c549","http://resolver.tudelft.nl/uuid:d999bae1-8649-4e5d-8925-6fd999d9c549","Better public housing management in Ghana: An approach to improve maintenance and housing quality","Aziabah Akanvose, A.B. (TU Delft Housing Management)","Gruis, V.H. (promotor); Elsinga, M.G. (promotor); van der Flier, C.L. (copromotor); Delft University of Technology (degree granting institution)","2018","In Ghana, public housing which is provided mainly for government employees plays an important role in socio-economic development. For instance, civil servants are more likely to accept transfers to areas where their services are most needed. Unfortunately, attention to public housing in Ghana has diminished over the years largely due to shift in policy focus towards the enablement approach. That is, private-sector-led housing production. The result is that, public housing conditions and quality continuous to deteriorate due largely to lack of maintenance. This is evident in many research and news publications, and the visible signs of deterioration such as leaking roofs, rotten ceilings, cracked walls, faded paint, and dysfunctional electrical and plumbing systems. Therefore, this thesis proposes an approach to management by local authorities that may bring about maintenance and lead to better conditions/quality of public housing. The approach proposes a defined structure with roles, responsibilities and relationships for the district assembly, the district coordinating director, the housing unit of the local authority, the works department, and tenants. It outlines a defined protocol for addressing repairs and maintenance including mechanisms to receive and respond to everyday repairs from tenants. Furthermore, it proposes that district assemblies should be fully responsible for determining and collecting rents so as to ensure reliable and secure finance for maintenance. 
It recommends the participation of tenants in management through mechanisms such as regular meetings or tenant representatives. Finally, it recommends mechanisms such as planning, budgeting, and submission of annual accounts to monitor and ensure that rents are spent on maintenance.","Ghana Public housing; Public housing management; Housing management; Local authorities; Housing management approach; Housing maintenance; Maintenance; Housing quality; Housing conditions","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-036-5","","","","A+BE | Architecture and the Built Environment No 7 (2018)","","","","","Housing Management","","",""
"uuid:97fabd08-203c-4471-9596-7ad91f7eb2c0","http://resolver.tudelft.nl/uuid:97fabd08-203c-4471-9596-7ad91f7eb2c0","Aggregation Phenomena in Atomic Layer Deposition: Bridging Macro and Nano","Grillo, F. (TU Delft ChemE/Product and Process Engineering)","van Ommen, J.R. (promotor); Kreutzer, M.T. (promotor); Delft University of Technology (degree granting institution)","2018","Atomic layer deposition (ALD) is a gas-phase thin film technology that boasts atomic-level control over the amount of material being deposited. A great deal of research effort has been devoted to the exploitation of ALD precision for the synthesis of nanostructures other than thin films such as supported nanoparticles (NPs). ALD is not only precise but also scalable to high-surface-area supports such as powders, which are relevant to a wide range of applications in fields spanning catalysis, energy storage and conversion, and medicine. Yet, translating the precision of ALD of thin films to the synthesis of NPs is not straightforward. In fact, ALD is mostly understood in terms of self-limiting surface reactions leading to a layer-by-layer conformal growth. However, the formation and growth of NPs is bound to be dictated by atomistic processes other than ALD surface reactions, such as the diffusion and aggregation of atoms and NPs. Understanding the role of such non-equilibrium processes is the key to achieving atomic-level control over the morphology of ALD-grown NPs and, in particular, their particle size distribution (PSD) and shape. This thesis is aimed at expanding our atomic-scale understanding of the mechanisms behind the formation of NPs during ALD. 
In particular, this thesis is based on experiments and models that were devised with an eye to scalability.","Atomic layer deposition; nanoparticles; aggregation; kinetics; size distribution; fluidized bed reactors; platinum; nanorods; titania; Modeling","en","doctoral thesis","","978-90-65624-23-9","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:c6543b39-401d-4d7d-9456-e54b02d71a69","http://resolver.tudelft.nl/uuid:c6543b39-401d-4d7d-9456-e54b02d71a69","Topology, Magnetism, and Spin-Orbit: A Band Structure Study of Semiconducting Nanodevices","Skolasinski, R.J. (TU Delft QRD/Wimmer Group)","Nazarov, Y.V. (promotor); Wimmer, M.T. (copromotor); Delft University of Technology (degree granting institution)","2018","Topological insulators and topological superconductors are novel states of matter.
One of the most characteristic properties of topological insulators is the presence of topologically protected edge states.
While the bulk of the material stays insulating, the edge-state conductance is quantized and topologically protected from backscattering.
In topological superconductors the edge states manifest themselves in the form of Majorana bound states: zero energy states inside the superconducting gap that are located at the end of a one-dimensional topological superconductor.
Chapter 2 of this thesis contains a detailed review and a discussion of k.p-theory.
The k.p-theory allows one to go beyond commonly used effective models and obtain a much more detailed description of a semiconductor's band structure around its gap.
Topological insulators are often semiconductor-based and topological superconductors can be realized in a hybrid structure that consists of a semiconductor and a conventional superconductor.
Chapter 3 covers implementation details of the numerical methods used in this thesis.
The quantum spin Hall effect is one example of a topological insulator phase.
Band inversion in HgTe/CdTe or InAs/GaSb two-dimensional systems leads to a topological phase that is characterized by topologically protected helical edge states which carry electric current with a quantized conductance.
It was believed that an in-plane magnetic field would break time reversal symmetry, suppress the conductance and open an energy gap in the edge-state dispersion.
However, the experiment conducted by Du et al. reported robust helical edge transport in InAs/GaSb persisting up to magnetic fields of 12 T.
In Chapter 4 of this thesis we show that the burying of a Dirac point in the valence band, a feature of the system dispersion revealed only by the detailed k.p-simulation, explains this unexpected observation.
The experimental group of L.P. Kouwenhoven investigated the details of spin-orbit interaction in the InAs/GaSb system in both the topological and trivial phases.
In Chapter 5 we connect the results of this experiment with our band structure calculations: in the topological phase, a quenching of the spin-splitting is observed and attributed to a crossing of spin bands, whereas in the trivial regime, the Rashba coefficient changes linearly with electric field and the linear Dresselhaus coefficient is constant.
In Chapter 6 we take a look into the spin texture of the inverted InAs/GaSb system close to the hybridization gap.
Transport measurements conducted by the experimental group of C.M. Marcus in Copenhagen revealed a giant spin-orbit splitting inherent to this system.
This leads to a unique situation in which the Fermi energy in InAs/GaSb crosses a single spin-resolved band, resulting in a full spin-orbit polarization.
In the last chapter of this thesis we focus on semiconducting nanowires with induced superconductivity that are considered to be a promising platform for hosting Majorana bound states.
In this theoretical research conducted together with physicists from ETH Zurich we show that the orbital contribution to the electron g-factor in higher subbands of small-effective-mass semiconducting nanowires can lead to g-factors that are larger by an order of magnitude or more than the bulk value.","topology; magnetism; spin-orbit; k.p theory; discretization","en","doctoral thesis","","978-90-8593-344-1","","","","","","","","","QRD/Wimmer Group","","",""
"uuid:ab4bce37-8c82-49fb-96bf-977f284525ed","http://resolver.tudelft.nl/uuid:ab4bce37-8c82-49fb-96bf-977f284525ed","Towards High Energy Density Li and Na Ion Batteries: An Anode Material Study","Xu, Y. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Mulder, F.M. (promotor); Delft University of Technology (degree granting institution)","2018","Modern life is moving towards a mobile and sustainable energy economy, in which rechargeable batteries play an essential role as a power supply. The current battery of choice is Li ion battery that is dominating the market but faces great challenges for future use mainly due to the demand for higher capacities and target for cost reduction. Next-generation rechargeable batteries such as Li-O2, Li-S and Na ion batteries, which offers higher capacities and cost-effectiveness, are being intensively researched as potential solutions to meet the future energy storage demand.
This thesis focuses on the search for high-performance anode materials for both Li and Na ion batteries, including metallic Li and Na, Si, MgH2, and black P and Sn4P3 based composites. Various methods are employed to synthesize the active materials and electrodes in a cost-effective manner, and comprehensive characterization of the physico-chemical and electrochemical properties has been performed to provide fundamental understanding and insights into the electrochemical processes. This work has achieved long-lifespan and safe Li and Na metal anodes by suppressing the hazardous dendrite growth. The Si, P and MgH2 anodes presented in this work also exhibit high and stable electrochemical performance for Li and Na ion storage. Notably, the Na ion uptake in Si and MgH2 has been realized in experiments for the first time. This research shows great promise towards the commercial introduction of these anodes in next-generation high energy density Li and Na ion batteries.","Li ion batteries; Na ion batteries; Si nanoparticles; Black phosphorus; magnesium hydride; Anode materials","en","doctoral thesis","","978-94-6295-914-9","","","","","","2019-05-23","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:2da90b9e-794e-45ac-ae72-0b532e058983","http://resolver.tudelft.nl/uuid:2da90b9e-794e-45ac-ae72-0b532e058983","Modelling and Monitoring of Dynamic Wheel-Rail Interaction at Railway Crossing","Wei, Z. (TU Delft Railway Engineering)","Dollevoet, R.P.B.J. (promotor); Li, Z. (promotor); Delft University of Technology (degree granting institution)","2018","This dissertation aims to gain a better understanding of the dynamic wheel-rail interaction at crossings, including characterizing the wheel-rail contact behavior, evaluating the performance of crossings under traffic loads and monitoring the health condition of the structure. The first part of this dissertation focuses on an in-depth analysis of wheel-rail contact behavior and related rail degradation. An explicit 3D finite element (FE) model is developed to simulate the passage of a wheelset across a nominal crossing. The second part proposes a method to evaluate the performance of long-term serviced crossings. In the method, in-situ 3D profile and hardness measurements are conducted on a long-term serviced crossing and are used as the input for the FE modeling of dynamic wheel-rail interaction. The simulated wheel-rail contact parameters are then used to predict the distributions of plastic deformation and wear. The third part analyses the characteristic dynamic response of wheel-rail interaction at crossings. In-situ axle box acceleration (ABA) measurements were conducted on a nominal crossing with various test parameters. Thereafter, a roving-accelerometer hammer test was carried out to extract the relationship between the signature tune of the ABA and the natural frequencies of the crossing. The fourth part investigates the feasibility of the ABA system for monitoring the health condition of crossings. Information from multiple sensors was collected from both nominal and degraded crossings. 
By proper correlation of the gathered data, an algorithm was proposed to identify the characteristic ABA related to crossing degradation and then to evaluate the health condition of the structure.","","en","doctoral thesis","","978-94-6366-035-8","","","","","","2018-05-22","","","Railway Engineering","","",""
"uuid:fda35870-18d9-4ca3-9443-199a1dcb0250","http://resolver.tudelft.nl/uuid:fda35870-18d9-4ca3-9443-199a1dcb0250","Phanerozoic Vertical Movements in Morocco","Charton, R.J.G. (TU Delft Applied Geology)","Bertotti, G. (promotor); Redfern, Jonathan (promotor); Storms, J.E.A. (copromotor); Delft University of Technology (degree granting institution)","2018","class=""MsoNoSpacing"">Our understanding of the Earth’s interior is limited by the access we have of its deep layers, while the knowledge we have of Earth’s evolution is restricted to harvested information from the present state of our planet. We therefore use proxies, physical and numerical models, and observations made on and from the surface of the Earth. The landscape results from a combination of processes operating at the surface and in the subsurface. Thus, if one knows how to read the landscape, one may unfold its geological evolution.
In the past decade, numerous studies have documented km-scale upward and downward vertical movements in the continental rifted margins of the Atlantic Ocean and in their hinterlands. These movements, described as exhumation (upward) and subsidence (downward), have been labelled as “unpredicted” and/or “unexpected”. ‘Unpredicted’ because the conceptual, physical, and numerical models at our disposal for the evolution of continental margins do not generally account for these relatively recent observations. ‘Unexpected’ because the km-scale vertical movements occurred during periods for which our record of the geological history is insufficient to support them. As yet, the mechanisms responsible for the km-scale vertical movements remain enigmatic.
One of the common techniques used by geoscientists to investigate the past kinematics of the continental crust is to couple ‘low-temperature thermochronology’ and ‘time-temperature modelling’. In Morocco alone, over twenty studies were conducted following this approach. This abundance of studies, and the related enthusiasm of researchers for Moroccan geology, stems from its puzzling landscapes and complex history. In this Thesis, we investigate unconstrained aspects of the km-scale vertical movements that occurred in Morocco and its surroundings (Canary Islands, Algeria, Mali, and Mauritania).","Morocco; Vertical movements; low-temperature thermochronology; time-temperature modelling","en","doctoral thesis","","978-94-6186-913-5","","","","","","","","","Applied Geology","","",""
"uuid:c6fa5814-99c7-4492-95cc-beed45286c71","http://resolver.tudelft.nl/uuid:c6fa5814-99c7-4492-95cc-beed45286c71","Exploring the role of system operation modes in failure analysis in the context of first generation cyber-physical systems","Ruiz Arenas, S. (TU Delft Cyber-Physical Systems)","Horvath, I. (promotor); Mejia-Gutierrez, Ricardo (promotor); Rusak, Z. (copromotor); Delft University of Technology (degree granting institution)","2018","Typically, emerging system failures have a strong impact on the performance of industrial systems as well as on the efficiency of their operational and servicing processes. Being aware of these, maintenance and repair researchers have developed multiple failure detection and diagnosis techniques that allow early recognition of system or component failures and maintaining continuous system operation in a cost-effective way. However, these techniques have many deficiencies in the case of self-tuning first generation cyber-physical systems (1G-CPSs). The reason is that these systems compensate for the effects of emerging system failures until their resources are exhausted, and the compensatory actions not only mask the failures, but also make their recognition difficult. Late recognition of failures is however in contrast with the principles of preventive maintenance. Therefore, the promotion research concentrated on the issue of recognizing and forecasting failures under dynamic and adaptive behavior of 1G-CPSs.
1G-CPSs can compensate for failure symptoms by changing their system operation modes (SOMs). It was also observed that transitions of SOMs reduce the reliability of signal-based failure diagnosis. It was hypothesized that the frequency and duration of the changes of the operational states of a 1G-CPS may be strong indicators of the failure emergence phenomenon, and that investigation of SOMs facilitates early detection of failures. Therefore, the completed exploratory studies were aimed at exploring how the frequency and duration of transitions of SOMs can be correlated with specific types of failures, and how they can be computed as measures of failure occurrence. The obtained results revealed that system failures tend to induce unusual system operation modes that can be used as a basis for failure characterization, and even for failure forecasting. The empirical research made use of a cyber-physical greenhouse testbed to obtain experimental data and was completed by the development of a computational model. A failure injection strategy was implemented in order to induce failure occurrence in a controlled manner. The proposed approach can be applied as a basis for forecasting system failures of 1G-CPSs, but additional research seems to be necessary.","Cyber-physical systems; Failure analysis; Forecasting; Preventive maintenance; Failure diagnosis; Fault management; Machine learning","en","doctoral thesis","","978-94-6186-916-6","","","","","","","","","Cyber-Physical Systems","","",""
"uuid:0c6bcdac-6bf7-46c3-a4d3-53119c1a8606","http://resolver.tudelft.nl/uuid:0c6bcdac-6bf7-46c3-a4d3-53119c1a8606","The hidden side of cities: Methods for governance, planning and design for optimal use of subsurface space with ATES","Bloemendal, Martin (TU Delft Water Resources)","Olsthoorn, T.N. (promotor); Delft University of Technology (degree granting institution)","2018","Aquifer Thermal Energy Storage (ATES) systems provide sustainable space heating and cooling for buildings. In future, many buildings in moderate climates rely on ATES for their space heating and cooling.
However, the subsurface space available for heat storage is limited, and there is a trade-off between individual ATES system efficiency and minimizing greenhouse gas emissions in an area by facilitating as many ATES systems as possible. Therefore, it is important to explore how aquifers can be utilized sustainably and to their full potential to maximize energy savings with ATES. In this dissertation, methods for the design, governance and planning of ATES systems in busy areas are presented. It is also identified where in the world suitable aquifers and climatic conditions coincide with urban areas; the future hot-spots for ATES, where these methods are needed.
The presented design methods result in more efficient use of the subsurface and lower heat losses during storage for individual systems. The results also show that in areas with many buildings with ATES, the developed methods for governance and planning of ATES wells result in much larger energy savings by sustainably accommodating more ATES systems than is done and allowed in current practice.
H2O2 as an AOP residual during MAR.","Managed aquifer recharge; Advanced oxidation processes; Bromate; Hydrogen peroxide; By-product; Iron; Denitrifying bacteria","en","doctoral thesis","","978-90-6562-422-2","","","","","","","","","Sanitary Engineering","","",""
"uuid:fdb0da19-0ef2-4bb6-92a7-8a7acbb05dd2","http://resolver.tudelft.nl/uuid:fdb0da19-0ef2-4bb6-92a7-8a7acbb05dd2","Towards robust design optimization of automotive turbocharger rotor-bearing systems","Eling, R.P.T. (TU Delft Mechatronic Systems Design)","van Ostayen, R.A.J. (promotor); Rixen, D.J. (promotor); Delft University of Technology (degree granting institution)","2018","In the competitive automotive market, the performance of turbochargers is constantly being pushed towards their theoretical optimum. One of the key components of the turbocharger is the rotor-bearing system, which determines the friction losses and noise output and furthermore affects the overall turbocharger efficiency, reliability and cost. In order to fulfil the demands of the automotive market, developing methods to optimize the rotor-bearing system is the focus of this study, where particular attention is paid to taking into account the product-to-product variations that are inevitable in cost-effective mass-produced parts, as well as the variations in turbocharger operating conditions.
First, a model of the rotor-bearing system was developed to predict the rotordynamic response over the operating range. The model is constructed in a step-by-step fashion, starting with a simple test case: a Laval rotor supported by plain journal bearings. As the behavior of the rotor-bearing system varies over its rotation speed range, run-up simulations were performed with a time-transient multi-physical model. In this model, several sub-models are coupled: a rotordynamic sub-model, a thermo-hydrodynamic sub-model and a thermal network model.
Once a satisfactory correlation was found between numerical simulation results and measurement results, the test case progressed to a Laval rotor with floating ring bearings instead of plain journal bearings. Correspondingly, the bearing model was extended to include the dynamics of the floating ring and its two oil films. The resulting run-ups showed a response consisting of a critical speed, an oil whirl and an oil whip.
Analysis of a turbocharger rotor-bearing system was subsequently performed, showing a more complex response, consisting of multiple critical speeds and the co-existence of sub-synchronous whirling modes. The effects of the rotor-bearing operating conditions, the unbalance configuration, the thrust bearing and the bearing cylindricity were investigated. Most of the trends are correctly predicted by the model; however, the correlation between measurement results and simulation results was clearly inferior to that for the Laval rotor, most likely due to the uncertainties in the actual turbocharger geometry and the actual unbalance distribution.
Lastly, an optimization of a Laval rotor-bearing system was performed. The resulting robust optimum design ensures optimum rotor-bearing performance, even at the most severe operating conditions and even if all manufacturing tolerances represent the worst case scenario. Particularly the uncertainties in rotor unbalance and oil supply temperature were found to have a significant influence on the optimum design.
the images where the spectral signal is sampled in hundreds of narrow and contiguous spectral channels, usually covering the 400-2500 nm spectral region where sunlight reflected by the Earth can be measured. Earth observation systems acquire spectral information by imaging spectrometers mounted on a platform flying over the Earth. Recent advances in technology make it possible to have miniaturised hyperspectral satellites in orbit. Much of the work presented in this thesis was inspired by the study of a CubeSat equipped with an imaging spectrometer and capable of onboard data processing.","","en","doctoral thesis","","978-94-6295-935-4","","","","","","","","","Optical and Laser Remote Sensing","","",""
"uuid:b4f772b3-c9ea-4760-b68e-a8c85fd099b6","http://resolver.tudelft.nl/uuid:b4f772b3-c9ea-4760-b68e-a8c85fd099b6","Sequential ultrasonic spot welding of thermoplastic composites: An experimental study on the welding process and the mechanical behaviour of (multi-)spot welded joints","Zhao, T. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Villegas, I.F. (copromotor); Delft University of Technology (degree granting institution)","2018","The popularity of thermoplastic composites (TPCs) has been growing steadily in the last decades in the aircraft industry. This is not only because of their excellent material properties, but also owing to their fast and cost-effective manufacturing process. Fusion bonding, or welding, is a typical joining method for TPCs due to the intrinsic properties of thermoplastic polymers. Among different welding technologies, ultrasonic welding has been regarded as one of the most promising techniques for the assembly of TPC components. Ultrasonic welding is by nature a spot welding technique. As it is known that a series of problems result from using mechanical fasteners for joining composite structures, e.g. breaking fibres during drilling and extensive labour work, ultrasonic spot welding can be considered as a promising alternative from the perspective of fast manufacturing cycle. However, fundamental understanding is still lacking to achieve application of ultrasonic spot welding in composite structures to be achieved:","Thermoplastic composites; Ultrasonic spot welding; Mechanical behaviour; Fractographic analysis","en","doctoral thesis","","978-94-6295-916-3","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:5bb2f97b-55f7-4afa-b6f4-f18f16543273","http://resolver.tudelft.nl/uuid:5bb2f97b-55f7-4afa-b6f4-f18f16543273","Simulation of hydration and microstructure development of blended cements","Gao, P. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Wei, J. (promotor); Ye, G. (copromotor); Delft University of Technology (degree granting institution)","2018","For optimization of the use of Supplementary Cementitious Materials (SCMs), i.e. blast furnace slag (BFS) and fly ash (FA), in cementitious system a numerical model for simulating the hydration and microstructure development of blended cements can be used. Several models have been proposed in recent years to simulate the hydration and microstructure development of blended cements. However, most of these models need further development. For example, the nucleation and growth of calcium hydroxide (CH) particles were often not simulated explicitly in these models.","hydration; microstructure; simulation; pore solution chemistry; porosity; slag; fly ash","en","doctoral thesis","","978-94-6366-045-7","","","","","","","","","Materials and Environment","","",""
"uuid:a64b7f31-a45b-4868-8978-70256e2ecb4f","http://resolver.tudelft.nl/uuid:a64b7f31-a45b-4868-8978-70256e2ecb4f","White, Friend or Foe?: Understanding and predicting photocatalytic degradation of modern oil paintings","van Driel, B.A. (TU Delft (OLD) MSE-4)","Dik, J. (promotor); van den Berg, K.J. (promotor); Delft University of Technology (degree granting institution)","2018","This dissertation presents a study into the ultraviolet irradiation-initiated degradation phenomena occurring in titanium white containing oil paints, commonly referred to as photocatalytic degradation. The topic of this thesis can be summarized as: the (photocatalytic) properties of titanium white pigments in oil paints and their consequences for collections containing modern art. The thesis consists of three parts: 1) characterization of the use and properties of titanium white pigments, 2) understanding and monitoring the degradation of titanium white containing oil paints and 3) predicting degradation caused by titanium white pigments. Combining the results of these three parts leads to a risk management strategy for modern art collections presented as a conclusion of this thesis.
Titanium white pigments were introduced in the 20th century as an alternative to lead white and zinc white. The pigments underwent a gradual development resulting in a large variety of pigments available throughout history. This variety ranges from very photocatalytic or ‘bad’ pigments, which severely speed up degradation, to photostable or ‘good’ pigments which can protect their environment from UV irradiation. Both ‘good’ and ‘bad’ pigments found their way into artist materials and thus into paintings. Hence the question ‘Titanium white, Friend or Foe?’. The pigment’s photocatalytic activity is highly dependent on its crystal structure (rutile or anatase) and (inorganic) surface treatment. When a pigment is photocatalytic, radicals can form upon UV irradiation, which attack the oil binding medium and break it down to volatile components, leading to an effect called chalking: the pigment is left unbound on the paint surface.
Interestingly, while we know that ‘bad’ TiO2 pigments were used in modern oil paints, photocatalytic degradation problems have not been widely reported thus far. Several hypotheses for the lack of problems in collections are presented and investigated in this thesis. Additionally, characterization tools, predictive tools, and monitoring tools are developed, in order to provide solutions before it is too late. Simultaneously, analytical methods and research approaches that are uncommon in the field of conservation science are explored to evaluate their applicability in answering cultural heritage research questions.
After ischemic stroke, both afferent projections to sensory cortices (S1/2) and sensory projections to motor cortices are often affected. Changes in S1 are particularly interesting for our understanding of stroke recovery and rehabilitation strategies. The assessment of sensory impairment after stroke can certainly benefit from affordable and ambulant imaging modalities, like electroencephalography (EEG). Somatosensory evoked potentials (SEPs) recorded with EEG may be used to follow stroke patients longitudinally. In order to detect changes occurring in S1, precise measurements with high spatial resolution are required. In the present thesis, I first evaluated the capacity of SEPs for tracking longitudinal stroke recovery. Subsequently, I explored the potential benefits and pitfalls of EEG-based monitoring of stroke patients.","","en","doctoral thesis","","978-94-6233-950-7","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:44dda417-a658-47d3-998b-48c082c9e989","http://resolver.tudelft.nl/uuid:44dda417-a658-47d3-998b-48c082c9e989","A tensor approach to linear parameter varying system identification","Gunes, Bilal (TU Delft Team Jan-Willem van Wingerden)","van Wingerden, J.W. (promotor); Verhaegen, M.H.G. (promotor); Delft University of Technology (degree granting institution)","2018","","tensor; LPV; identification; data-driven; wind; turbine; statistics; subspace; optimization; tensor decompositions; multi-linear algebra; SVD; MLSVD; HOSVD; tensor trains; tensor networks; polyadic; engineering; wind energy","en","doctoral thesis","","","","","","","","","","","Team Jan-Willem van Wingerden","","",""
"uuid:ef6761b5-538c-4620-bf50-d66ad1222314","http://resolver.tudelft.nl/uuid:ef6761b5-538c-4620-bf50-d66ad1222314","Nucleation Control: Microwave, Ultrasound and Laser as Tools to Control the Number of Nuclei in Crystallization Processes","Kacker, R. (TU Delft Intensified Reaction and Separation Systems)","Stankiewicz, A.I. (promotor); Eral, H.B. (copromotor); Delft University of Technology (degree granting institution)","2018","The principal objective of the research focuses on the intensification of the batch and continuous crystallization processes through enhanced nucleation control, proper plug flow conditions in continuous tubular crystallizers and development of advanced image analysis based PAT tool for process monitoring.
Nucleation control is addressed through manipulation of the number of crystals in the crystallizer: by controlling the rate of nuclei formation, by dissolving excess nuclei to limit the nucleation overshoot, or by continuous seeding in the case of a flow crystallizer to suppress nucleation in the tubes. The following topics are addressed:
1. The efficiency of the Direct Nucleation Control (DNC) strategy using microwave heating.
2. Induction of high nucleation rates at low supersaturation by the application of laser or ultrasound energy.
3. Combination of ultrasound-assisted internal seed generation with a continuous tubular crystallizer under plug flow conditions.
4. Characterization of nucleation and the crystal properties through the development of in-situ imaging-based PAT technology.
2O3 ceramics: Selection and testing of novel healing particles","Boatemaa, L. (TU Delft (OLD) MSE-1)","Sloof, W.G. (promotor); van der Zwaag, S. (promotor); Delft University of Technology (degree granting institution)","2018","Alumina (Al2O3) is an attractive ceramic for engineering applications operating at elevated or high temperatures because of its good thermal and chemical resistance. It also maintains high strength and hardness at high temperatures. These desirable properties are due to the strong covalent and ionic bonds existing between its atoms.
However, these same strong and directional bonds are the origin of its inherent brittleness. Over the last decade, materials scientists have adopted self-healing as a means of restoring the load-bearing capability of such materials after damage from micro-sized surface cracks. In this methodology, the material is restored to a state comparable to the original one by the ‘healing’ of such surface cracks at high temperatures. Healing is achieved by the addition of ‘healing agents’ to the base ceramic material, which upon the occurrence of a crack oxidise into a healing oxide that fills and seals off the crack. There are gaps in the knowledge needed to bring self-healing ceramics to an application-ready level. This thesis addresses some design questions and tests the capability of newly identified healing particles under laboratory and application conditions.","Self-healing ceramics; Alumina; Oxidation kinetics; Spark plasma sintering","en","doctoral thesis","","9789065624215","","","","","","","","","(OLD) MSE-1","","",""
"uuid:6a75532e-e5df-4ad0-a984-e24639462676","http://resolver.tudelft.nl/uuid:6a75532e-e5df-4ad0-a984-e24639462676","Radio spectrum management: from government to governance: Analysis of the role of government in the management of radio spectrum","Anker, P.D.C. (TU Delft Economics of Technology and Innovation)","Groenewegen, J.P.M. (promotor); Lemstra, W. (copromotor); Delft University of Technology (degree granting institution)","2018","This PhD thesis deals with the role of government in radio spectrum management. While current literature suggests that avoiding harmful interference and realizing economic efcient use of the radio spectrum are the prime drivers, the study revealed that realizing and safeguarding public interests have played a crucial role, including the realization of specifc industrial policy objectives. A revision of the radio spectrum governance process is proposed, based on the insights obtained and building on the institutional analysis and design framework of Ostrom et al., combined with competitive market theory. Essentially proposing the next (and likely fnal) step in the liberalization process. The proposed revision redefnes radio spectrum management from a top-down government controlled process to a botom-up governance process in a
multi-actor seting. The role of government shifs from a controller of the process to a role of market design, monitoring and facilitation.","","en","doctoral thesis","","978-90-828481-0-6","","","","","","","","","Economics of Technology and Innovation","","",""
"uuid:26d5bad5-712d-47b2-8df9-d57e1f14599e","http://resolver.tudelft.nl/uuid:26d5bad5-712d-47b2-8df9-d57e1f14599e","Deterioration and optimal rehabilitation modelling for urban water distribution systems","Zhou, Y. (TU Delft Sanitary Engineering)","Vairavamoorthy, K. (promotor); Delft University of Technology (degree granting institution)","2018","Pipe failures in water distribution systems can have a serious impact and hence it’s important to maintain the condition and integrity of the distribution system. This book presents a whole-life cost optimisation model for the rehabilitation of water distribution systems. It combines a pipe breakage number prediction model with a pipe criticality assessment model, which enables the creation of a well-constructed and more tightly constrained optimisation model. The pipe breakage number prediction model combines information on the physical characteristics of the pipes with historical information on breakage and failure rates. A weighted multiple nonlinear regression analysis is applied to describe the condition of different pipe groups. The criticality assessment model combines a pipe’s condition with its hydraulic significance through a modified TOPSIS. This model enables the optimisation to focus its efforts on those important pipes. The whole life cost optimal rehabilitation model is a multiple-objective and multiple-stage model, which provides a suite of rehabilitation decisions that minimise the whole life cost while maximising its long-term performance. The optimisation model is solved using a modified NSGA-II. The utility of the developed models is that it allows decision makers to prioritize their rehabilitation strategy in a proactive and cost-effective manner.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-138-32281-3","","","","","","","","","Sanitary Engineering","","",""
"uuid:b919b9e6-d7f2-4cd9-b718-9f404e0a7a1f","http://resolver.tudelft.nl/uuid:b919b9e6-d7f2-4cd9-b718-9f404e0a7a1f","Magnetic Energy Transfer in Roads","Prasanth, V. (TU Delft DC systems, Energy conversion & Storage)","Bauer, P. (promotor); Ferreira, Jan Abraham (promotor); Delft University of Technology (degree granting institution)","2018","This thesis deals with the modelling and application of magnetic fields in roads. The backbone technology being inductive power transfer (IPT) for electric vehicles. The magnetics for energy transfer in vehicles, can be adapted for heating steel fibres in roads, referred to as self-healing and modelling this is a second aspect of this thesis. The first sections of this thesis is dedicated to an overview of modelling techniques for coil design of IPT systems using both analytical and semi-analytical tools. A detailed literature review of techniques is followed by a comparison highlighting the strengths and weakness of techniques in terms of ease of use, computational efficiency, application to material interfaces etc. Analytical modelling of single and multi-coil configurations of IPT systems is carried out subsequently. The theory of partial inductance is used to model these geometries, to assess the impact of system parameters such as coupling, power transferred and magnetic efficiency with shapes of couplers and misalignment. Next, the problem of misalignment is highlighted by considering a distributed IPT system. The analytical modelling and experimental analysis of misalignment - lateral and longitudinal is performed. Edge effect is observed and experimentally validated. The second part of this thesis is dedicated to a multi-objective optimization based on the results of the developed analytical model. The goal being the development of a prototype IPT system for powering light EVs. The double rectangular (DR) coupler is chosen as the geometry for power transfer. 
Several geometry parameters, such as turns, ferrites (number, dimensions) and the gap between ferrites, are considered as design variables. Efficiency, area-related power density and weight are considered as the optimization targets. Pareto fronts are developed and a particle is chosen for the development of a prototype. An experimental set-up is built consisting of an 85 kHz inverter, compensated charge-pads, a rectifier and a resistive load. The inverter is based on SiC MOSFETs and SiC Schottky anti-parallel diodes; the rectifier is made from the same diodes. Phase-shift control of the inverter legs is used to control power flow. An experimental analysis to validate the magnetic models is also developed. The third part of this thesis deals with a system-level economic analysis of IPT technology. A case study of a bus fleet is considered and a generic methodology is developed to determine driving range as a function of the mass and frontal area of the EV. The economic analysis also identifies the trade-offs between road coverage of IPT, efficiency and battery size. Finally, the thesis culminates with a vision toward a future highway. Such a highway is expected to undergo a functional upgrade to handle the electrification of transportation. This revolves around the integration of IPT systems with low-maintenance inductive-healing asphalt roadways and renewable energy generation. The modelling challenges of such an integration are studied using both simulations and experiments. A case study for sizing renewable energy in a highway (A12) in the Netherlands using IPT is detailed.","","en","doctoral thesis","","9789461869241","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:02f47a5f-9699-478b-95db-d7163d33912e","http://resolver.tudelft.nl/uuid:02f47a5f-9699-478b-95db-d7163d33912e","Representing Large Virtual Worlds","Kol, T.R. (TU Delft Computer Graphics and Visualisation)","Eisemann, E. (promotor); Delft University of Technology (degree granting institution)","2018","The ubiquity of large virtual worlds and their growing complexity in computer graphics require efficient representations. This means that we need smart solutions for the underlying storage of these complex environments, but also for their visualization. How the virtual world is best stored and how it is subsequently shown to the user in an optimal way, depends on the goal of the application. In this respect, we identify the following three visual representations, which form orthogonal directions, but are not mutually exclusive. Realistic representations aim for physical correctness, while illustrative display techniques, on the other hand, facilitate user tasks, often relating to improved understanding. Finally, artistic approaches enable a high level of expressiveness for aesthetic applications. Each of these directions offers a wide array of possibilities. In this dissertation, our goal is to provide solutions for strategically selected challenges for all three visual directions, as well as for the underlying representation of the virtual world.","computer graphics; real-time rendering; three-dimensional graphics; image generation; display algorithms; viewing algorithms; raytracing; visibility approximation; massively parallel algorithms; global illumination; single scattering; artistic stylization; compression; directed acyclic graphs; sparse voxel octrees; alternative representations","en","doctoral thesis","","978-94-6186-896-1","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:c049ea67-23a1-4e7e-af52-1275802839f1","http://resolver.tudelft.nl/uuid:c049ea67-23a1-4e7e-af52-1275802839f1","From fluvial supply to delta deposits: Simulating sediment delivery, transport and deposition","van der Vegt, H. (TU Delft Applied Geology)","Luthi, S.M. (promotor); Storms, J.E.A. (copromotor); Delft University of Technology (degree granting institution)","2018","Geological reservoir models, created based on sparse core and seismic data, inform hydrocarbon production, geothermal applications and aquafer management. Important factors contributing to reservoir quality in these applications include the heterogeneities within and connectivity between the relevant geo‐bodies constituting the reservoir. The transport and preservation of sediment at the time of deposition impacts these factors. Therefore, a better understanding of sediment delivery, transport and deposition can be used to better quantify reservoir properties. This same computational methodology can also be applied test hypotheses concerning the depositional processes responsible for preservation of ancient deposits. Constraining such hypotheses improves our understanding of the paleo‐sediment dynamics and the accuracy of future geological models.","","en","doctoral thesis","","978-94-6186-914-2","","","","","","2018-11-03","","","Applied Geology","","",""
"uuid:cae98392-a0d2-4809-9473-c742d0424f33","http://resolver.tudelft.nl/uuid:cae98392-a0d2-4809-9473-c742d0424f33","Simulation-based optimization for decision making under uncertainty in opencast mines","Soleymani Shishvan, M. (TU Delft Resource Engineering)","Jansen, J.D. (promotor); Benndorf, J. (promotor); Delft University of Technology (degree granting institution)","2018","","","en","doctoral thesis","","978-94-6186-920-3","","","","","","","","","Resource Engineering","","",""
"uuid:5c137348-aedb-44d3-a545-406313472a20","http://resolver.tudelft.nl/uuid:5c137348-aedb-44d3-a545-406313472a20","Rotational dynamics of viscoelastic planetary bodies","Hu, H. (TU Delft Astrodynamics & Space Missions)","Vermeersen, L.L.A. (promotor); Visser, P.N.A.M. (promotor); van der Wal, W. (copromotor); Delft University of Technology (degree granting institution)","2018","","Rotational dynamics; tidal deformation; viscoelastic bodies; quasi-fluid approximation; fluid limit approximation","en","doctoral thesis","","978-94-6299-969-5","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:50600d87-4a1b-47d5-a26b-2b5ca645ad8b","http://resolver.tudelft.nl/uuid:50600d87-4a1b-47d5-a26b-2b5ca645ad8b","Co-operation and haptic assistance for tele-manipulated control over two asymmetric slaves","van Oosterhout, J. (TU Delft Human-Robot Interaction)","Abbink, D.A. (promotor); van der Helm, F.C.T. (promotor); de Baar, Marco (promotor); Delft University of Technology (degree granting institution)","2018","The success of future fusion power plants as a sustainable energy source greatly depends on their uptime. This uptime relies on the plant's maintenance, which must be performed via tele-manipulation. Tele-manipulated maintenance is challenging, as exemplified by strictly selected and highly trained operators who still require more time to work tele-manipulated, than they do for working hands-on. Many future maintenance tasks involve delicate components that require accurate placement with a dexterous tele-operated slave. Some components are very heavy and need simultaneous hoisting support from a crane, thereby confronting operators with two asymmetric subtasks that have an interactive nature. Literature indicates that having two asymmetric subtasks complicates task execution even more than the already challenging tele-manipulation with one slave system, presumably due to problems in the coordination of the subtasks. Such tele-manipulation with asymmetric slaves must be improved to ensure high plant uptime for future fusion plants. The standard industrial approach to coordinate the control of two asymmetric subtasks prescribes two co-operating operators. However, a single individual could perform the task as well with a bi-manual or hybrid uni-manual control interface. The impact of such differences in control interface design for asymmetric slaves is still a matter of scientific debate. Regardless, the tele-manipulated task will be challenging, and even highly trained operators might benefit from a system that supports them in the task. 
Although haptic assistance generally improves operator task performance, the main underlying assumption is the availability of perfect knowledge of the task and environment. Handling heavy loads causes manipulator links to deflect statically or dynamically due to their compliance, which cannot be measured or modelled in sufficient detail. This results in a static or dynamic mismatch (inaccuracy) between the real and modelled world that will manifest in the haptic assistive cues, which could negatively affect operator control behaviour.
The goal of this thesis is to quantify the impact of interface design choices and haptic assistance to facilitate action coordination between the asymmetric subtasks. Specifically, the interface design choices for single and dual operators will be evaluated with and without haptic assistance, under realistic conditions that incorporate potential inaccuracies in the assistance arising from mismatches between the real and modelled world.
The conclusions are that the interface with two co-operating operators is favourable over the bi- and uni-manual single-operator interfaces for coordinating two asymmetric subtasks that have an interactive nature. A novel haptic assistance system improves both co-operated and uni-manual task performance. Interestingly, the observed preference for the co-operated interface over the uni-manual interface is not found when both are haptically assisted. Moreover, haptic assistance still provides benefits when the support cues become statically or dynamically inaccurate due to heavy load handling.
In the first design, a ripple reduction technique called auto-correction feedback (ACFB) is proposed, which continuously detects and cancels the ripple in a power and area efficient manner, so that the overall op-amp only draws 13µA from a 1.8V to 5.5V supply and occupies a 0.64mm2 die area. In the second design, an adaptive clock boosting technique for input switches is proposed, so that their charge injection mismatch is minimized and independent of changes in the supply and input common-mode voltages. As a result, 0.5µV maximum offset and 5.6nV/√Hz noise PSD are achieved over the op-amp’s entire rail-to-rail input common-mode range. In the third design, six parallel input transconductors are driven by interleaved 800kHz clocks, which pushes the PSD peak up to 4.8MHz. Furthermore, an on-chip charge mismatch compensation circuit is employed to reduce the maximum input bias current from 1.5nA down to 150pA in post-production trimming.","","en","doctoral thesis","","978-94-028-0997-8","","","","","","","","","Electronic Instrumentation","","",""
"uuid:3e978189-4fd7-4358-840e-b995416bedef","http://resolver.tudelft.nl/uuid:3e978189-4fd7-4358-840e-b995416bedef","Oxidation phenomena in advanced high strength steels: Modelling and experiment","Mao, W. (TU Delft (OLD) MSE-1)","Thijsse, B.J. (promotor); Sloof, W.G. (promotor); Delft University of Technology (degree granting institution)","2018","Galvanized advanced high strength steels (AHSS) will be the most competitive structural material for automotive applications in the next decade. Oxidation of AHSS during the recrystallization annealing process in a continuous galvanizing line to a large extent influences the quality of the zinc coating on the final galvanized steel product. For example, formation of oxides of alloying elements (e.g. Mn, Cr, Si) at the steel surface during annealing prior to galvanizing leads to poor adhesion of the zinc coating. Yet, knowledge of the high temperature oxidation behaviour of AHSS is rather limited. The primary aim of this thesis is to provide a fundamental understanding of the kinetics of internal oxidation of AHSS during annealing. The classical Wagner internal oxidation theory for binary alloys was extended to account for multi-component alloys. To this end, a generic coupled thermodynamic-kinetic internal oxidation model based on Fick’s 1st law was developed in order to predict the kinetics of internal oxidation, as well as the concentration depth profiles of internal oxides and solute elements in the alloy matrix, considering the finite solubility product of oxide precipitates, the non-ideal behaviour of the solid solution and the formation of multiple types of oxide species. The internal oxidation behaviour of Fe-Mn and Fe-Mn-Cr steel alloys was experimentally studied to validate the model. It has been found that for Fe-Mn and Fe-Mn-Cr steel alloys, the effect of the non-ideal behaviour of the solution on internal oxidation is negligible, and local thermodynamic equilibrium is established within the internal oxidation zone.
In addition, the kinetics of wüstite formation on pure iron and Mn-alloyed steels annealed in CO2 + CO or H2O + H2 gas mixtures, as well as the reduction kinetics of the wüstite scale in Ar + H2 gas mixtures, were investigated. The growth of the wüstite scale on iron and Mn-alloyed steels follows a linear rate law. However, adding Mn to iron, even at a relatively low concentration (say 1.7 wt%), dramatically lowers the growth rate of the wüstite scale. Nevertheless, the reduction kinetics of the wüstite scale on iron and Mn-alloyed steels are almost the same. During the reduction process a dense iron layer is formed which separates the remaining wüstite scale from the reduction atmosphere. The rate of wüstite reduction by H2 is controlled by the diffusion of solute oxygen dissolved in the formed iron layer.","Mn steels; annealing; internal oxidation; thermodynamics; kinetics","en","doctoral thesis","","978-94-91909-50-4","","","","","","","","","(OLD) MSE-1","","",""
"uuid:5b875915-2518-4ec8-a1a0-07ad057edab4","http://resolver.tudelft.nl/uuid:5b875915-2518-4ec8-a1a0-07ad057edab4","Online reinforcement learning control for aerospace systems","Zhou, Y. (TU Delft Control & Simulation)","Mulder, Max (promotor); Chu, Q. P. (promotor); Delft University of Technology (degree granting institution)","2018","Reinforcement Learning (RL) methods are relatively new in the field of aerospace guidance, navigation, and control. This dissertation aims to exploit RL methods to improve the autonomy and online learning of aerospace systems with respect to the a priori unknown system and environment, dynamical uncertainties, and partial observability. In the first part of this dissertation, incremental Approximate Dynamic Programming (iADP) methods are proposed. Instead of using nonlinear function approximators to approximate the true cost-to-go, iADP methods use an (extended) incremental model to deal with the nonlinearity of unknown systems and uncertainties of the environment. In the second part, online Adaptive Critic Designs (ACDs) are proposed based on the incremental model. This method replaces the global system model approximator with an incremental model. This approach, therefore, does not need off-line training stages and may accelerate online learning. In the third part, the hybrid Hierarchical Reinforcement Learning (hHRL) method is proposed for guidance and navigation problems. This method consists of several hierarchical levels, where each level uses different methods to optimize the learning with different types of information and objectives. In conclusion, this dissertation contributes several methods that improve the intelligence and autonomy of aerospace systems.
These improvements are mainly from three perspectives: 1) enhancing the adaptability and efficiency of low-level control, 2) improving the intelligence and online learning ability of guidance, navigation, and control, and 3) creating a well-organized hierarchy to ensure coordination between levels. The proposed methods provide novel insights for both the reinforcement learning research community and developers of aerospace automatic control systems.","Reinforcement Learning; Aerospace Systems; Optimal Adaptive Control; Approximate Dynamic Programming; Adaptive Critic Designs; Incremental Model; Nonlinear Systems; Partial Observability; Hierarchical Reinforcement Learning; Hybrid Methods","en","doctoral thesis","","978-94-6366-021-1","","","","","","","","","Control & Simulation","","",""
"uuid:4f0141e7-9b97-4ed4-86be-9b885b420423","http://resolver.tudelft.nl/uuid:4f0141e7-9b97-4ed4-86be-9b885b420423","Mimicking the nuclear pore complex using nanopores","Ananth, A.N. (TU Delft BN/Cees Dekker Lab)","Dekker, C. (promotor); Delft University of Technology (degree granting institution)","2018","Nuclear pore complexes (NPCs) act as gatekeepers for molecular transport between the nucleus and the cytoplasm in eukaryotic cells. The central NPC channel is filled with intrinsically disordered FG domains (phenylalanine (F), glycine (G)) that are responsible for the fascinating selectivity of NPCs, whose underlying mechanism is still under considerable debate. In this thesis, a minimalistic mimic of NPCs was constructed using solid-state nanopores and DNA origami to study the spatial arrangement and the transport process.","Single-molecule; solid-state nanopore; nuclear pore complex; proteins; ionic conductance; DNA origami; surface chemistry","en","doctoral thesis","","9789085933427","","","","","","","","","BN/Cees Dekker Lab","","",""
"uuid:a9409495-dcb4-43fb-bf2d-64341056654d","http://resolver.tudelft.nl/uuid:a9409495-dcb4-43fb-bf2d-64341056654d","Objective evaluation of human manual control adaptation boundaries using a cybernetic approach","Lu, T. (TU Delft Control & Simulation)","van Paassen, M.M. (promotor); Pool, D.M. (copromotor); Delft University of Technology (degree granting institution)","2018","Manual control tasks can be found everywhere in our daily activities, and the human ability to adapt in controlling many different vehicles such as cars and airplanes makes it possible for us to travel farther, faster and higher. The human ability to adapt to changes in the controlled element dynamics is indispensable for tasks requiring high performance and safety, and none of the state-of-the-art automatic control systems can compete. For example, in the racing industry, professional racing drivers are needed to adapt to different car
configurations and consistently push the car to its performance limit in the driving simulator and on the track, which is important for designing and tuning the cars. In aviation, pilots are our “last line of defense” for flight safety, especially in emergency situations in which automatic flight systems fail.","Manual Control; Human Adaptation; Human-Machine Interaction; Manual Control Adaptation Boundaries; Maximum Unnoticeable Added Dynamics; Cybernetic Approach; System Identification; Compensatory Tracking; Human Control Model Simulation and Optimization","en","doctoral thesis","","978-94-028-0995-4","","","","","","","","","Control & Simulation","","",""
"uuid:8121cc7b-4969-44c8-b08d-8f32c54ecece","http://resolver.tudelft.nl/uuid:8121cc7b-4969-44c8-b08d-8f32c54ecece","Cities for or against citizens? Socio-spatial restructuring of low-income neighborhoods and the paradox of citizen participation.","Perez Rendon, G. (TU Delft Spatial Planning and Strategy)","Stouten, P.L.M. (promotor); Nadin, V. (promotor); Delft University of Technology (degree granting institution)","2018","Urban renewal has evolved into an ambitious and sophisticated urban strategy, recognised as urban revitalisation in America and urban regeneration in Western Europe. This new urban strategy, which tends to be area-based and state-sponsored, claims for the most part to coordinate a wide range of resources, partners and public agencies to bring about social, economic and spatial improvements in underdeveloped and impoverished city areas while improving the livelihoods of the local residents. However, as this study asserts, the objectives behind this new urban strategy have considered, for the most part, the interests of those formulating and implementing such efforts rather than local residents and stakeholders, and produced in turn ‘attractive’ neighbourhoods increasing city revenues, boosting real estate prices, attracting new investments and alluring new residents. Most importantly, citizen participation and gentrification have been concurrently promoted in urban...","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-023-5","","","","A+BE | Architecture and the Built Environment No 6 (2018)","","","","","Spatial Planning and Strategy","","",""
"uuid:ac4dccc7-d9fe-4a90-9606-aa16abf8efed","http://resolver.tudelft.nl/uuid:ac4dccc7-d9fe-4a90-9606-aa16abf8efed","Modelling fracture and healing in particulate composite systems","Ponnusami, S.A. (TU Delft Aerospace Structures & Computational Mechanics)","van der Zwaag, S. (promotor); Bisagni, C. (promotor); Turteltaub, S.R. (promotor); Delft University of Technology (degree granting institution)","2018","Research in the field of self-healing materials has gained significant attention in the last decade owing to its promise of enhanced durability of the material components in engineering applications. Though the research has led to several successful demonstrations, extensive experimental testing is still required. Further, for real-time engineering applications with self-healing materials, arriving at an optimal design of the self-healing system is crucial. In this context, modelling techniques in combination with a limited number of experimental tests are potentially more efficient than a design process based on extensive experimental campaigns. With this motivation, the present thesis aims to develop a modelling framework to analyse and understand the fracture mechanisms and the healing behaviour of self-healing material systems using a finite element modelling approach. The overall objective is to provide certain guidelines and suggestions for material scientists in terms of selection and design of healing particles, and a computational tool to understand and quantify the cracking and healing behaviour of self-healing material systems.","Self-healing materials; Cohesive zone modelling; Crack healing model; Composite materials; Fracture mechanics; Thermal barrier coatings","en","doctoral thesis","","978-94-6299-944-2","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:b5fd172c-b0ac-4023-851e-88b7d0ca31c5","http://resolver.tudelft.nl/uuid:b5fd172c-b0ac-4023-851e-88b7d0ca31c5","Block Copolymer Nanofibrillar Micelles: gelation, manipulation and applications","Zhang, K. (TU Delft ChemE/Advanced Soft Matter)","van Esch, J.H. (promotor); Mendes, E. (copromotor); Delft University of Technology (degree granting institution)","2018","Self-assembly of amphiphilic block copolymers in aqueous solution provides a versatile tool to create complex and functional micelles with various nanostructures, such as spherical, cylindrical and bilayer structures. As an important class among these structures, nanofibrillar micelles have attracted growing interest due to their unique properties that can potentially mimic biological analogues. For example, a great number of nanofibrillar structures, such as actin filaments and collagen gels with filamentous structures, are found in natural systems and have greatly motivated researchers to mimic these systems with synthetic materials. In addition, precise spatiotemporal control and integration of these nanofibrillar structures will offer a powerful strategy for the construction of new soft devices in the future. Therefore, in this thesis, we explore the ultra-long, stiff and quenched micelles of diblock copolymers and develop a hybrid approach combining self-assembly of block copolymers and micro-fabrication methods to manipulate these micelles for building soft devices.","","en","doctoral thesis","","978-94-6186-917-3","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:38c1fd73-bff2-4660-aa40-48a4c28e16f7","http://resolver.tudelft.nl/uuid:38c1fd73-bff2-4660-aa40-48a4c28e16f7","Transport processes in the production of organic acids from lignocellulosic feedstocks by Aspergillus niger","da Fonte Lameiras, F. (TU Delft OLD BT/Cell Systems Engineering)","Heijnen, J.J. (promotor); van Gulik, W.M. (copromotor); Delft University of Technology (degree granting institution)","2018","Filamentous fungi, especially from the genus Aspergillus, are well known for the
production of organic acids in the fermentation industry. Nonetheless, at present the competing chemical conversion routes are still more profitable, leaving room for further investigation and improvement of the biological routes.","","en","doctoral thesis","","978-94-6299-902-2","","","","","","","","","OLD BT/Cell Systems Engineering","","",""
"uuid:d247e271-3321-4661-b29a-08070ae9090f","http://resolver.tudelft.nl/uuid:d247e271-3321-4661-b29a-08070ae9090f","National Renewable Policies in an International Electricity Market: A Socio-Technical Study","Iychettira, K.K. (TU Delft Energie and Industrie)","Weijnen, M.P.C. (promotor); Hakvoort, R.A. (copromotor); Delft University of Technology (degree granting institution); KTH Royal Institute of Technology (degree granting institution); Comillas Pontifical University (degree granting institution)","2018","The current regulatory framework under which support schemes for renewable energy sources for electricity (RES-E) operate is provided by Directive 2009/28/EC. It sets a 20% target for the share of renewables in energy consumption, while relying on legally binding national targets until 2020. The goal of promoting RES-E in the European context coexists with the goals of ensuring a single internal market for electricity and security of supply in the European Union, and these simultaneous goals are not always congruent with each other.
Today, significant amounts of intermittent RES-E in the energy mix have led to unintended effects. An important consequence is the so-called ‘merit-order effect’, whereby the spot market electricity price falls to the extent that renewable electricity generation displaces demand along the merit order. There is concern that part of the merit-order effect spreads across national borders. Importantly, the implications of the merit-order effect for the effectiveness of RES-E support schemes are unclear. Another important effect of the price reduction is that the lower the average electricity market price, the greater the costs of subsidies, making the phasing out of subsidies for renewable, intermittent sources more difficult.
With respect to electricity from renewable sources, the achievement of these three objectives took the shape of ""making renewable support schemes more market-based"" and ""ensuring renewables are driven by market signals"". However, it is often not clear what is meant by such statements in policy documents by the EC. What features of the support scheme are being referred to? What would it mean for renewables solely to be driven
by market signals? How would features of support schemes impact, for instance, the merit-order effect, and vice versa? These issues are encapsulated in the first problem addressed in the thesis: to unravel the interactions between renewable support scheme design and a single isolated electricity spot market, with a long-term perspective.
Since countries are now increasingly interconnected, the second major issue tackled in this thesis concerns cross-border effects due to different renewable support schemes between neighbouring countries in a common electricity market. This issue addresses concerns about the merit-order effect spreading across national borders, and the ensuing distributional implications.
The final issue addressed in this dissertation relates to the long-term economic viability of electricity from renewable sources given the current institutional and physical setting in which they operate. Costs of renewable technologies have dropped dramatically, and yet effects such as their declining market value raise questions about whether it is possible for them to attain economic viability in a decarbonised power sector. Accordingly, the main research question in this dissertation is:
How do national renewable electricity support schemes interact with the electricity market over the long term (20-30 years) as the European Union transitions to a decarbonized energy system?","RES-E; policy design; support schemes; renewable electricity; agent-based modelling; investment; electricity; cross-border effects; IAD framework","en","doctoral thesis","","978-94-6233-904-0","","","","","","","","","Energie and Industrie","","",""
"uuid:b7844e47-91c4-49a4-9178-d64ba4b45713","http://resolver.tudelft.nl/uuid:b7844e47-91c4-49a4-9178-d64ba4b45713","Bioinformatic Analysis of Genomic and Transcriptomic Variation in Fungi","Gehrmann, T. (TU Delft Pattern Recognition and Bioinformatics)","Reinders, M.J.T. (promotor); Abeel, T.E.P.M.F. (copromotor); Delft University of Technology (degree granting institution)","2018","Fungi are microorganisms whose astounding variety can be found in every conceivable ecosystem on the planet. Fungi are nutrient recyclers, playing an irreplaceable role in the carbon cycle. They grow on land and in the sea, on plants and animals and in the soil. They feed us as mushrooms, and drive our economy as bioreactors. They leaven our bread and brew our beer, nourish our crops and spoil our food. They even directly play a role in human health. Fungi are, however, far more complex organisms than their simple phenotypes lead us to believe. In order to harness the potential of fungi, and to address the threats they pose, we must gain a better understanding of them. However, their substantial genomic and regulatory diversity impedes our reasoning. Thus, to understand fungi, we need to understand their genetic and regulatory diversity.
In this thesis, I developed and utilized bioinformatics methods to understand
variation within and between fungi. We focussed on two fungi: Agaricus bisporus (the champignon, or white button mushroom) because of commercial interest, and Schizophyllum commune (the split-gill mushroom) because it is used as a model organism for mushroom formation (for, amongst others, A. bisporus).","","en","doctoral thesis","","978-94-6186-915-9","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:6a745f3e-3b5c-4a8c-9e64-138544aa0e7b","http://resolver.tudelft.nl/uuid:6a745f3e-3b5c-4a8c-9e64-138544aa0e7b","Satellite-based mitigation and adaptation scenarios for sea level rise on lower Niger delta","Musa, Z.N. (TU Delft Environmental Fluid Mechanics)","Mynett, A.E. (promotor); Popescu, I. (copromotor); Delft University of Technology (degree granting institution)","2018","Accelerated sea level rise (SLR) is the most important climate change impact for coastal areas. However, many coastal deltas lack the data necessary to evaluate their vulnerabilities. The Niger delta is one such coastal area, with little data available for coastal planning and management.
Use of satellite data helps bridge this gap by providing ancillary data - imagery, elevation, altimetry, etc. This thesis therefore uses satellite data as the main source of data for hydrodynamic modelling and GIS analysis to assess the impact of SLR on the Niger delta land area, coastline, and surface water. The results show that, because of high subsidence levels, a rise in sea level of 0.14m already inundates Niger delta areas. Consequently, 4.6–5.2% (1119.3–1254km2) of the Niger delta land area can be lost to inundation by 2030, and 4.9–6.8% (1175.9–1633km2) by 2050.
From the thesis, major mitigation/adaptation measures that can be used for the Niger delta include: dykes, by-pass channels, storm surge barriers, coastline shortening and legislation to ensure compliance by all. Furthermore, some of the existing sustainable local practices in the Niger delta should be included in SLR mitigation/adaptation planning. Such practices include: planting of bamboo trees for erosion control, use of sandbags as bridges and dykes (flood control), and use of flood receptor pits as temporary flood water reservoirs.
The NV centre in diamond is a spinful, optically active crystal defect. NVs are a prime network-node candidate due to demonstrated coherence times beyond 100ms, longitudinal relaxation times exceeding 1s, and a spin-selective optical interface which facilitates the generation of spin-photon entanglement. Entangling links between nodes are therefore readily created by overlapping the emission of two NVs on a beam splitter. Besides NVs, we further address individual 13C nuclear spins in the vicinity and use these spins as a quantum resource. Our goal is to propel these nuclear spins to constitute robust quantum memories which store and manipulate quantum information in an NV-based quantum network. The experiments described in this thesis are thematically separated into three groups.
First, we explore the NV-nuclear interplay. We demonstrate nuclear-spin control by observing the Zeno effect on up to two logical qubits within the state space of three nuclear spins (Chapter 3). We further realize that the always-on magnetic hyperfine interaction between NV and nuclear spins will limit the nuclear spin coherence when entangling distant NV centres (Chapter 4). A systematic experimental study probes our theoretical prediction and we additionally demonstrate improved robustness for logical states within decoherence-protected state spaces (Chapter 5) and finally for individual nuclear spins (Chapter 6). Second, we use remote NV-NV entangled states to demonstrate experimental milestones in quantum networks. The realization of a high-fidelity entangled link over a distance of 1.3km permits the loophole-free violation of Bell’s inequality (Chapter 7). We further increase the entangling rate by three orders of magnitude such that it exceeds the decoherence rate of an entangled state on our network. This allows us to convert our probabilistic entanglement generation into a deterministic process which delivers entangled states at prespecified moments in time (Chapter 8).
Third, we finally combine the concepts of nuclear-spin quantum memories and remote entanglement generation to demonstrate entanglement distillation in a network setting (Chapter 9). We subsequently generate two raw entangled input states between two remote NV centres. The first state is stored on nuclear spins to liberate both NVs for the second round of state generation. Finally, a higher-fidelity entangled state is distilled via local operations. This constitutes the first quantum-network demonstration that relies on the control of multiple fully-coherent quantum systems per network node.","Quantum networks; Quantum Information; Diamond; Quantum optics","en","doctoral thesis","","978-90-8593-338-0","","","","Casimir PhD series, Delft-Leiden 2018-06","","","","","QID/Hanson Lab","","",""
"uuid:1a7be4f5-ab7f-496e-b2d4-578a2cb661d2","http://resolver.tudelft.nl/uuid:1a7be4f5-ab7f-496e-b2d4-578a2cb661d2","Privatisation of the Production of Public Space","Leclercq, E.M. (TU Delft Design & Construction Management; TU Delft Design and Politics)","van Bueren, Ellen (promotor); Vanstiphout, W.A.J. (promotor); Delft University of Technology (degree granting institution)","2018","From the 1960s to the 1970s, a large number of Western inner cities went through a phase of severe deprivation, due both to a relocation of manufacturing jobs, which in turn led to depopulation, a lack of investment and high unemployment, and to suburbanisation made possible by the car. From the 1990s, urban regeneration strategies were introduced to tackle this inner-city deprivation. In the United Kingdom, this ‘urban renaissance’ took place within the new economic and political paradigm of neoliberalism, which placed a strong emphasis on market forces as the driver of urban regeneration (see national policy document Urban Task Force 1999). The shift to this economic system led to changes in both the process of urban development and its product, the space itself. Local governments endorsed the new economic reality, which included a greater role for private and corporate actors in the development and management of cities, in order to be able to participate in the global inter-urban competition. As a result of declining public budgets, and encouraged by national guidance, public authorities began to outsource tasks and responsibilities that had previously been regarded as a governmental concern to private actors and newly developed public-private partnerships (urban regeneration vehicles in planning policy terminology).
Public authorities themselves also adopted business-like styles of organisation in which productivity, effectiveness and efficiency were regarded as the main conditions for serving the public’s financial interest, though other public values such as cultural heritage, equality and democracy were often regarded as of secondary concern. This privatisation of development in urban areas did not just affect the process but also the outcome: the appearance and use of public spaces. Within the urban renaissance agenda, a strong emphasis was put on the aesthetisation of space in order to attract the desired businesses, investors and people with a high disposable income. The objective in private management regimes appears to be reducing risk by putting a strong focus on surveillance, safety, tidiness and the exclusion of undesirable behaviour, all of which reduces the diversity, vitality and vibrancy of spaces in order to welcome tourists and middle-class visitors, in other words consumers (Low 2006). This privatisation of public space raises valid questions with regard to the publicness of urban open space, and a number of authors recognise a ‘decline’ of public space or have even declared ‘the end of public space’ as we have known it (Sorkin 1992; Mitchell 1995; Low and Smith 2006; Madanipour 2003; Iveson 2007 etc.). Other authors argue that we do not experience the death of public space but a change in its form, function and appearance that reflects contemporary economic, societal and cultural narratives (see eg Madden 2010; Carmona et al. 2008, Carmona 2015; De Magalhães and Freire Tigo 2017). To be able to incorporate these societal shifts, a revised (and wider) definition is needed to describe and analyse publicness in the production of space (Kohn 2004; De Magalhães 2010; Varna and Tiesdell 2010; Németh and Smith 2011; Langstraat and Van Melik 2013; Varna 2016).
The premises for this research are based on the above discussion on the different roles that privatisation plays in the production of space, within this shift towards greater involvement of a variety of private partners in the development process and the possible decline in the degree of publicness in both process and space. The apparent need for a wider definition of public space to include current political, economic and societal changes, the relation between the public sphere and public space, the degree of publicness in both the process and space itself, the perception of the user and the role of the urban designer are hereby taken as starting points and lead to the following research question:
Does privatisation lead to a decline of publicness in the production of space and space itself, and how does urban design play a role in this?","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-060-0","","","","A+BE | Architecture and the Built Environment No 5 (2018)","","","","","Design & Construction Management","","",""
"uuid:5c1bd018-d612-48a2-97c4-ab49e982593e","http://resolver.tudelft.nl/uuid:5c1bd018-d612-48a2-97c4-ab49e982593e","DNA sensing with nanopores in graphene nanoribbons","Heerema, S.J. (TU Delft BN/Cees Dekker Lab)","Dekker, C. (promotor); Delft University of Technology (degree granting institution)","2018","The information that can be extracted from DNA base sequences, is important for our understanding of biology, for insights in disease, and the way we practice medicine. The technology that one uses to determine DNA-base sequences is called ‘DNA sequencing’. A portable device that enables fast and accurate sequencing, can potentially greatly improve the quality of healthcare. Nanopores in graphene have potential for a new DNA sequencing technology that is superior to the current solutions.","DNA sensing; DNA sequencing; biosensing; graphene nanopore; graphene nanoribbon; STEM","en","doctoral thesis","","978-90-8593-341-0","","","","Casimir PhD Series 2018-09","","","","","BN/Cees Dekker Lab","","",""
"uuid:704d764a-6803-4cad-991f-45dc4ea38f6d","http://resolver.tudelft.nl/uuid:704d764a-6803-4cad-991f-45dc4ea38f6d","Aspects of Source-Term Modeling for Vortex-Generator Induced Flows","Florentie, L. (TU Delft Aerodynamics)","Bijl, H. (promotor); Hulshoff, S.J. (copromotor); Delft University of Technology (degree granting institution)","2018","Vortex generators (VGs) are awidespread means of passive flowcontrol, capable of yielding significant performance improvements to lift-generating surfaces (e.g. wind-turbine blades and airplane wings), by delaying boundary-layer separation. These small vanetype structures, which are typically arranged in arrays, trigger the formation of small vortices in the boundary layer. The flow circulation induced by these vortices causes the near-wall flow to be re-energized, thereby reducing the susceptibility of the boundary layer to separate from the surface.","Vortex generators; source-term modeling; Adjoint method","en","doctoral thesis","","978-94-6186-918-0","","","","","","","","","Aerodynamics","","",""
"uuid:f3729790-0cfe-4f92-866b-eca3f2f2df24","http://resolver.tudelft.nl/uuid:f3729790-0cfe-4f92-866b-eca3f2f2df24","Phosphate Recovery From Sewage Sludge Containing Iron Phosphate","Wilfert, P.K. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Witkamp, G.J. (promotor); Delft University of Technology (degree granting institution)","2018","The scope of this thesis was to lay the basis for a phosphate recovery technology that can be applied on sewage sludge containing iron phosphate. Such a technology should come with minimal changes to the existing sludge treatment configuration while keeping the use of chemicals or energy as small as possible. The research focused on understanding the exact mechanism for phosphate release from iron in sewage sludge in order to find a method to release phosphate in an elegant way. Phosphate is an essential nutrient for plant growth, but at the same time the resources of phosphate are limited and concentrated in a few countries outside Europe. Recovery of phosphate can secure the access to phosphate for food production and is therefore an important topic. Iron based phosphate removal is still used by a majority of sewage treatment plants (STPs) but no viable technology is available to recover phosphate from sludge without sludge incineration. The addition of iron is a convenient way for removing phosphate from wastewater, but this is often considered to limit phosphate recovery. Struvite precipitation is currently used to recover phosphate, and this approach has attracted much interest. However, it requires the use of enhanced biological phosphate removal (EBPR). Phosphate removal relying solely on EBPR is not yet widely applied and the recovery potential is low (<50%). Other phosphate recovery methods, including sludge application to agricultural land or recovering phosphate from sludge ash, also have limitations. 
Energy-producing STPs increasingly rely on phosphate removal using iron, but the problem (as in current processes) is the subsequent recovery of phosphate from the iron. In contrast, phosphate is efficiently mobilized from iron by natural processes in sediments and soils. Iron–phosphate chemistry is diverse, and many parameters influence the binding and release of phosphate, including redox conditions, pH, presence of organic substances, and particle morphology. The current poor understanding of iron and phosphate chemistry in sewage systems is preventing processes being developed to recover phosphate from iron–phosphate rich wastes like municipal wastewater sludge. In the first chapter, parameters that affect phosphate recovery are reviewed, and methods are suggested for manipulating iron–phosphate chemistry in wastewater treatment processes to allow phosphate to be recovered. Iron is omnipresent in STPs. It can be present unintentionally, e.g. due to groundwater seepage into sewers, or it is intentionally added for odour and corrosion control, phosphate removal or prevention of hydrogen sulphide emissions into the biogas. The strong affinity of iron to phosphate has advantages for efficient removal of phosphate from sewage, but it may also reduce recovery efficiencies in struvite precipitation technologies or for some phosphate recovery methods from ash. On the other hand, iron may also have positive effects on phosphate recovery. Acid consumption was reported to be lower when leaching phosphate from sewage sludge ash with higher iron content. Also, phosphate recovery efficiencies may be higher if an iron phosphate compound, like vivianite, Fe(II)3(PO4)2·8H2O, could be harvested from sewage sludge. Developers of phosphate recovery technologies should be aware of the potential and the obstacles that iron and phosphate chemistry presents.
The mineral vivianite is already present in digested sewage sludge and can be an alternative phosphate recovery option to current technologies. To evaluate this, surplus and digested sewage sludge was sampled from full-scale STPs and analysed using XRD, (e)SEM-EDX and Mössbauer spectroscopy. Vivianite was observed in all plants where iron was used for phosphate removal. In surplus sludge before the anaerobic digestion, ferrous iron dominated the iron pool (≥50%). XRD and Mössbauer spectroscopy showed no clear correlation between vivianite-bound phosphate and the iron content in surplus sludge. In digested sludge, ferrous iron was the dominant iron form (>85%). Phosphate bound in vivianite increased with the iron content of the digested sludge but levelled off at high iron levels. 70-90% of all phosphate was bound in vivianite in the sludge with the highest iron content (molar Fe:P = 2.5). The quantification of vivianite was difficult and carries some uncertainty, probably because of the presence of impure vivianite, as indicated by SEM-EDX. eSEM-EDX indicates that the vivianite occurs as relatively small (20-100 µm) but free particles that could potentially be separated from the sludge. We hypothesize that chemical/microbial Fe(III) reduction is relatively quick and triggers vivianite formation in the treatment lines. Once formed, vivianite may endure oxygenated treatment zones due to slow oxidation kinetics and oxygen diffusion limitations into sludge flocs. It was shown that vivianite can indeed form relatively quickly in activated sludge systems. Kinetics of iron reduction, the microbial community and the mechanism of vivianite formation in activated sludge from two STPs were studied: one STP with a low iron dosing (STP Leeuwarden, EBPR) and one with a high iron dosing (STP Cologne, applying chemical phosphorus removal, CPR). The sludges were incubated under anaerobic conditions in batch experiments.
The iron reduction rate in the CPR sludge (2.99 mg-Fe g VS-1 h-1) was 3 times higher than the rate observed in the EBPR sludge (1.02 mg-Fe g VS-1 h-1). The higher iron reduction rate in the CPR sludge is probably caused by its 3 times higher iron content. The rate constants (k) in both sludges are comparable (0.06 h-1 in EBPR sludge vs 0.05 h-1 in CPR sludge), thus the potential rates in both sludges are similar. For calculating the time it takes to turn over all Fe(III) to Fe(II) in the sludge, the Fe(III) reduction rates at the total ferric iron content of the experiments were used and assumed to be constant over time. Calculations then suggest that all iron in STP Leeuwarden and STP Cologne can be turned over within 15 h and 44 h, respectively. Sequencing showed that both sludges were dominated by proteobacteria (65-89% of all operational taxonomic units, OTUs) and that the dominant class of bacteria were β-proteobacteria (38-63% of all OTUs). The microbial communities in both sludges contained genera that comprise iron-oxidizing and iron-reducing bacteria. These genera were more abundant in the CPR sludge with its higher iron content. XRD and Mössbauer spectroscopy showed that significant quantities of vivianite were formed in the sludges within 24 h. Our study suggests that iron-metabolizing bacteria are more abundant in sludge which is rich in iron and that significant vivianite formation can already take place before the anaerobic digestion process. Based on the finding that vivianite is the most important phosphate phase provided enough iron is present, vivianite separation from sewage sludge was studied using a tailor-made magnetic separator. Vivianite particles are paramagnetic and present as free particles. Magnetism is an elegant technology as it exclusively separates the liberated and paramagnetic vivianite (and perhaps some pyrite or iron carbonates that are present in the sludge).
For this purpose, a magnetic separator with Jones magnetic plates was designed and tested on two digested sewage sludges with different iron content. Varying feeding rates were used for the separation. A higher phosphate separation efficiency was achieved with the sludge that contained more iron (up to 60% of all input phosphate was recovered) than with the sludge with lower iron content (up to 40% of all phosphate could be recovered). The iron and phosphate content of the separated (magnetic) fraction was two to three times higher than that of the initial sludge solids. The crystalline fraction of the separated material consisted mainly of vivianite (68%), but quartz was also found (32%), as shown by XRD. The separated material still had a relatively high volatile solids content, ranging between 30-40% of the dry matter. This fraction is related to organic compounds and other compounds that lose weight during heating (such as carbonates or vivianite). Based on these observations, a new phosphate recovery technology for vivianite-containing sludge was proposed that makes use of relatively cheap magnetic separation equipment from the mining industry. In this process, iron is dosed in high quantities during the treatment process. This would not only result in low effluent phosphate concentrations, but would also ensure that vivianite formation during anaerobic digestion is not limited by iron, probably resulting in the transformation of all available phosphate to vivianite. Vivianite can then be separated using a magnetic separator. This separation could be combined with a liberation or pre-separation step using e.g. a hydrocyclone. Once vivianite is separated from the sludge, it could be used directly, preferably to produce high-value products, or it could be dissolved to produce fertilizer. Pure vivianite can easily be dissolved at an alkaline pH of about 12.
At this pH, phosphate goes into solution while iron and most other metals remain in the precipitate. The phosphate solution obtained from the separated vivianite can directly be used for fertilizer production. Iron could be re-used for phosphate elimination in the STP. In another study, it was tested whether sulphide can help to release and recover phosphate from sewage sludge. A series of batch experiments were conducted on different synthetic iron phosphates: Fe(III)P purchased from Sigma, Fe(III)P synthesized in the lab, and vivianite. Sulphide was added to these different iron phosphates in a molar Fe:S ratio of 1 to evaluate the total phosphate release and the kinetics of phosphate release into solution. Phosphate release was usually completed within 1 hour. The maximum phosphate release was 92%, 60% and 76% from vivianite, Sigma Fe(III)P and lab-synthesized Fe(III)P, respectively. However, rebinding of the released phosphate by Fe(II), which occurred only in the experiment with lab-synthesized Fe(III)P, reduced the net phosphate release to about 56%. Sulphide-induced phosphate release from vivianite is more efficient because sulphide reacts directly with Fe(II) to form FeSx and releases phosphate. No additional sulphide is needed for reducing Fe(III) to Fe(II). At the same time, Fe(II) in vivianite is probably as efficient as, or more efficient than, Fe(III) in retaining phosphate. Phosphate release from Fe(III)P was, at its maximum (before re-sorption/re-precipitation of the phosphate to other compounds in the sludge), higher than stoichiometry would suggest, probably because sulphide was acting as a reducing agent without significant formation of FeSx. FeSx formation requires a larger sulphide input. The high efficiency (moles P released / moles S input) of sulphide acting as a reducing agent to release phosphate was confirmed in additional experiments in which sulphide was slowly added to Fe(III)P.
Moreover, sulphide addition experiments showed that up to 30% of all phosphate could be released from digested sewage sludge. The highest phosphate release was achieved in experiments with the highest iron content. The total phosphate release from digested sludge was not as high as expected: earlier XRD and Mössbauer spectroscopy measurements, used to quantify iron-bound phosphate in the digested sludges, suggested that more phosphate should be iron bound and hence sulphide extractable. The dewaterability (determined using the capillary suction test) of the digested sludge (0.13 ±0.015 g2(s2 m4)-1) dropped significantly after sulphide was added (0.06 ±0.004 g2(s2 m4)-1). This strongly suggests that sulphide addition to sewage sludge will result in higher sludge disposal costs. Only insignificant phosphate release (1.5%) was observed from sewage sludge ash in response to sulphide addition. Overall, sulphide proved to be a useful tool to release iron-bound phosphate from sewage sludge for its subsequent recovery. Drawbacks are the deterioration of the sludge dewaterability and a net phosphate release that is lower than expected. In a side project of this thesis, biogenic iron oxides (BioFeO) formed by Leptothrix sp. and Gallionella sp. were compared with chemically formed iron oxides (ChFeO) for their suitability to remove and recover phosphate from solutions. The ChFeO used for comparison included a commercial iron-based adsorbent (GEH®) and chemical precipitates. Despite contrary observations in earlier studies, our batch experiments showed that BioFeO do not have superior phosphate adsorption capacities compared to ChFeO. However, it seems multiple mechanisms are involved in phosphate removal by BioFeO, which makes their overall phosphate removal capacity higher than that of ChFeO. The overall phosphate removal capacity of Leptothrix sp. was 26.3 mg P/g dry matter (d.m.), of which less than 6.4 mg P/g d.m. was attributed to adsorption.
The main removal is likely due to the formation of organic iron phosphate complexes (19.6 mg P/g d.m.). Gallionella sp. had an overall phosphate removal capacity of 39.6 mg P/g d.m. Significant amounts of phosphate were apparently incorporated into the Gallionella sp. stalks during their growth (31.0 mg P/g d.m.), and only one fourth of the total phosphate removal can be related to adsorption (8.6 mg P/g d.m.). Their overall ability to immobilize large quantities of phosphate from solutions indicates that BioFeO could play an important role in environmental and engineered systems for removal of contaminants such as phosphate or arsenic. This thesis showed that the iron phosphate chemistry in STPs has been neglected in the past and that more research is necessary to understand the complex interactions between iron and phosphate. This knowledge would help to further improve the use of iron in STPs for phosphate removal and pave the way for new phosphate recovery technologies from iron-rich sewage sludge. Within the framework of this research, the mineral vivianite was identified as a main iron phosphate phase in sewage sludge. Phosphate recovery technologies via vivianite might lead to a significantly higher recovery efficiency compared to routes relying on struvite. Magnetic separation of vivianite from sewage sludge was achieved using equipment from the mining industry. This process will be tested on pilot scale next. Future research related to vivianite-based phosphate recovery has to focus on (I) understanding the formation of vivianite in STPs, (II) improving the separation efficiency of vivianite from sewage sludge using equipment that is tailor-made for the type of vivianite which is contained in the sludge (density, magnetic susceptibility etc.) or by manipulating the formation of vivianite (by e.g.
increasing its particle size) and (III) evaluating the purity of vivianite in sewage sludge to determine its economic value.","Phosphate; Phosphorus; Sewage Sludge; Recovery; Sewage; iron","en","doctoral thesis","","978-94-6186-887-9","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:a5002bb0-4701-4e33-aef6-3c78d0c9fd70","http://resolver.tudelft.nl/uuid:a5002bb0-4701-4e33-aef6-3c78d0c9fd70","Front-End ASICs for 3-D Ultrasound: From Beamforming to Digitization","Chen, C. (TU Delft Electronic Instrumentation)","Pertijs, M.A.P. (promotor); de Jong, N. (promotor); Delft University of Technology (degree granting institution)","2018","This thesis describes the analysis, design and evaluation of front-end application-specific integrated circuits (ASICs) for 3-D medical ultrasound imaging, with the focus on the receive electronics. They are specifically designed for next-generation miniature 3-D ultrasound devices, such as transesophageal echocardiography (TEE), intracardiac echocardiography (ICE) and intravascular ultrasound (IVUS) probes. These probes, equipped with 2-D array transducers and thus the capability of volumetric visualization, are crucial for both accurate diagnosis and therapy guidance of cardiovascular diseases. However, their stringent size constraints, as well as the limited power budget, increase the difficulty in integrating in-probe electronics. The mismatch between the increasing number of transducer elements and the limited cable count that can be accommodated, also makes it challenging to acquire data from these probes. Front-end ASICs that are optimized in both system architecture and circuit-level implementation are proposed in this thesis to tackle these problems.
The techniques described in this thesis have been applied in several prototype realizations, including one LNA test chip, one PVDF readout IC, two analog beamforming ASICs and one ASIC with on-chip digitization and data links. All prototypes have been evaluated both electrically and acoustically. The LNA test chip achieved a noise-efficiency factor (NEF) that is 2.5× better than the state-of-the-art. One of the analog beamforming ASICs achieved a 0.27 mW/element power efficiency with a compact layout matched to a 150 µm element pitch. This is the highest power efficiency and smallest pitch to date in comparison with state-of-the-art ultrasound front-end ASICs. The ASIC with integrated beamforming ADC consumed only 0.91 mW/element within the same element area. A comparison with previous digitization solutions for 3-D ultrasound shows that this work achieved a 10× improvement in power efficiency, as well as a 3.3× improvement in integration density.
Electric vehicles are only sustainable if the electricity used to charge them comes from renewable sources and not from fossil-fuel-based power plants. The goal of this PhD thesis is to develop a highly efficient, V2G-enabled smart charging system for electric vehicles at workplaces that is powered by solar energy. The thesis focuses on three research elements: power converter, charging algorithms and system design. A 10 kW EV charger has been developed that enables the direct DC charging of EVs from PV without converting to AC. The charger is bidirectional, so energy from the EV battery can also be fed to the grid for vehicle-to-grid (V2G) operation. The charger can realize four different power flows: PV → EV, EV → Grid, Grid → EV, PV → Grid. The 10 kW modules are modularly built and can also be operated without solar input as a bidirectional EV charger. Further, several DC charger modules can be operated in parallel for fast charging up to 150 kW. The charger is based on silicon carbide and quasi-resonant technology, which results in high efficiency (>96%) at both full load and partial load. The integrated EV-PV solution has a lower component count, three times higher power density and lower cost than using a separate EV charger and PV inverter exchanging power over AC. The charger is compatible with the CHAdeMO and CCS/Combo charging standards and is designed for implementing smart charging. New smart charging algorithms developed in the project integrate several applications together: PV forecast, EV user preferences, multiplexing of EVs, V2G demand, energy prices, regulation prices and distribution network constraints. For two case studies simulated for the Netherlands and Texas, the proposed algorithms reduced the net costs by up to 427% and 651%, respectively, when compared to average rate charging.
Nowadays, remote sensing from satellite altimetry provides an accurate estimate of changes in sea level on global and regional scales (Leuliette et al., 2004; Nerem et al., 2010; Ablain et al., 2017). The emergence of satellite gravimetry, in the form of the GRACE mission (Tapley et al., 2004), and the global coverage of in-situ subsurface temperature and salinity observations by the Argo programme (Roemmich et al., 2009; Roemmich and Gilson, 2009) have resulted in a substantial increase in our understanding of sea-level changes over the past decade, and the reliability of the estimates of the individual processes behind sea-level changes has reached the level where we can almost fully explain the observed sea-level changes from these contributors (Rietbroek et al., 2016; Leuliette and Miller, 2009; Dieng et al., 2015; Leuliette, 2015; Kleinherenbrink et al., 2016).
However, before this period the spatially-varying signals have been sampled only sparsely by in-situ observations, mainly by means of tide gauges, which limits our current understanding of sea-level changes on global and regional scales. This thesis aims to answer the question of whether the sum of the underlying processes that cause sea-level changes can explain the observations, not only on a global scale, which has been assessed a multitude of times (Moore et al., 2011; Church et al., 2011; Gregory et al., 2013; Jevrejeva et al., 2016b), but also on scales of individual ocean basins and coastal regions. The assessment of this so-called sea-level budget has been done for two regional cases, and for the global ocean and individual basins. Furthermore, the effect of ocean bottom deformation on the difference between relative and geocentric observations has been quantified. Finally, we have applied an alternative approach to time-series analysis to tide gauge observations, in which the various contributors to sea-level variability are co-estimated with a time-varying trend using a Kalman filter and smoother approach.","sea-level rise; sea-level budget; physical oceanography; water mass redistribution","en","doctoral thesis","","978-94-6186-903-6","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:3acfe30a-1c01-4851-b491-ca20b3b459ce","http://resolver.tudelft.nl/uuid:3acfe30a-1c01-4851-b491-ca20b3b459ce","Data assimilation in the minerals industry: Real-time updating of spatial models using online production data","Wambeke, T. (TU Delft Resource Engineering)","Jansen, J.D. (promotor); Benndorf, J. (promotor); Delft University of Technology (degree granting institution)","2018","Declining ore grades, extraction at greater depths and longer hauling distances put pressure on maturing mines. Not enough new mines will be commissioned on time to compensate for the resulting shortages. Ore-body replacement rates are relatively low due to a reduced appetite for exploration. Development times are generally increasing and most new projects are remote, possibly pushing costs further upwards.
To reverse these trends, the industry must collect, analyse and act on information to extract and process material more productively (i.e. maximize resource efficiency). This paradigm shift, driven by digital innovations, aims to (partly) eliminate the external variability that has made mining unique. The external variability results from the nature of the resource being mined. This type of variability can only be controlled if the resource base is sufficiently characterized and understood.
Recent developments in sensor technology enable the online characterization of raw material characteristics and equipment performance. To date, such measurements are mainly utilized in forward loops for downstream process control. A backward integration of sensor information into the resource model does not yet occur. Obviously, such a backward integration would significantly contribute to the progressive characterization of the resource base.
This dissertation presents a practical updating algorithm to continuously assimilate recently acquired data into an already existing resource model. The updating algorithm addresses the following practical considerations. (a) At each point in time, the latest solution implicitly accounts for all previously integrated data (sequential approach). During the next update, the already existing resource model is further adjusted to honour the newly obtained observations as well. (b) Due to the nature of a mining operation, it is nearly impossible to formulate closed-form analytical expressions describing the relationship between observations and resource blocks. Rather, the relevant relationships are merely inferred from the inputs (the resource model realizations) and outputs (distribution of predicted observations) of a forward simulator. (c) The updating algorithm is able to assimilate noisy observations made on a blend of material originating from multiple sources and locations. Differences in scale of support are dealt with automatically.
The developed algorithm integrates concepts from several existing (geo)statistical techniques. Co-Kriging approaches, for example, are designed to integrate both direct and indirect measurements and are well able to handle differences in accuracy and sampling volume. However, they fail to extract information from blended measurements and cannot sequentially incorporate new observations into an already existing resource model. To overcome the latter issue, the co-Kriging equations are merged into a sequential linear estimator. Existing resource models can now be improved using a weighted sum of differences between observations and model-based predictions (forward simulator output). The covariances, necessary to compute the weights, are empirically derived from two sets of Monte Carlo samples (another statistical technique): the resource model realizations (input of the forward simulator) and the observation realizations (output of the forward simulator). This approach removes the need to formulate analytical functions modelling spatial correlations, blending and differences in scale of support.
The resulting mathematical framework bears some resemblance to that of a dynamic filter (Ensemble Kalman filter), used in other research areas, although the underlying philosophy differs significantly. Weather forecasting and reservoir modelling, for example, consider dynamic systems repetitively sampled at the same locations. Each observation characterizes a volume surrounding the sample locations. Mineral resource modelling, on the other hand, focuses on static systems gradually sampled at different locations. Each observation is characteristic for a blend of material originating from multiple sources and locations. Each part of the material stream is sampled only once, the moment it passes the sensor.
Various options are implemented around the mathematical framework to either reduce computation time, memory requirements or numerical inaccuracies. (a) A Gaussian anamorphosis is included to deal with suboptimal conditions related to non-Gaussian distributions. The algorithm structure ensures that the sensor precision (measurement error) can be defined in its original units and does not need to be translated into a normal score equivalent. (b) An interconnected parallel updating sequence (double helix) can be configured to avoid a covariance collapse (filter inbreeding). This occurs as degrees of freedom are lost over time due to the empirical calculation of the covariances. (c) A neighbourhood option is implemented to constrain computation time and memory requirements. Different neighbourhoods need to be considered simultaneously as material streams are blended. (d) Two covariance correction options are implemented to further inhibit the propagation of statistical sampling errors originating from the empirical computation of covariances.
A case-specific forward simulator is built and run in parallel with the more generally applicable updating code. The forward simulator is used to translate resource model realizations (input) into observation realizations (output). Empirical covariances are subsequently lifted from both realization sets and mathematically describe the link between sensor observations and individual blocks in the model. This numerical inference avoids the cumbersome task of formulating, linearising and inverting an analytical forward observation model. The application of a forward simulator further ensures that the distribution of the Monte Carlo samples already reflects the support of the concerned random values. As a result, the necessary covariances, derived from these Monte Carlo samples, inherently account for differences in scale of support.
A synthetic experiment is conducted to showcase that the algorithm is capable of assimilating inaccurate observations, made on blended material streams, into an already existing resource model. The experiment is executed in an artificial environment, representing a mining environment with two extraction points of unequal production rate. A visual inspection of cross-sections shows that the model converges towards the ‘true but unknown reality’. Global assessment statistics quantitatively confirm this observation. Local assessment statistics further indicate that the global improvements mainly result from correcting local estimation biases.
Another 125 artificial experiments are conducted to study the effects of variations in measurement volume, blending ratio and sensor precision. The experiments investigate whether and how the resource model and the predicted observations improve over time. Based on the outcome, recommendations are formulated to optimally design and operate a monitoring system.
This work further describes the pilot testing of the updating algorithm at the Tropicana Gold Mine (Australia). The pilot aims to evaluate whether the updating algorithm can automatically reconcile ball mill performance data against the spatial Work Index estimates of the GeoMet model. The focus here lies on the ball mill since it usually is the single largest energy consumer at the mine site. The spatial Work Index estimates are used to predict a ball mill's throughput. In order to maximize mill throughput and optimize energy utilization, it is important to get the Work Index estimates right. At the Tropicana Gold Mine, Work Index estimates, derived from X-Ray Fluorescence and Hyperspectral scanning of grade control samples, are used to construct spatial GeoMetallurgical models (GeoMet). Inaccuracies in the block estimates exist due to limited calibration between grade control derived and laboratory Work Index values. To improve the calibration, the updating algorithm was tested at the mine during a pilot study. Deviations between predicted and actual mill performance are monitored and used to locally improve the Work Index estimates in the GeoMet model. While assimilating about a week of mill performance data, the spatial GeoMet model converged towards a previously unknown reality. The updating algorithm improved the spatial Work Index estimates, resulting in a real-time reconciliation of already extracted blocks and a recalibration of future scheduled blocks. The case study shows that historic and future production estimates improve on average by about 72% and 26%, respectively.","Geostatistics; Data Assimilation; geometallurgy; resource engineering; mining; Discrete event simulation; material tracking","en","doctoral thesis","","978-94-6186-904-3","","","","","","","","","Resource Engineering","","",""
"uuid:53272e39-a1c4-4005-bc6f-cb48116919a9","http://resolver.tudelft.nl/uuid:53272e39-a1c4-4005-bc6f-cb48116919a9","Integrated modeling of land and water resources in two African catchments","Yalew, S.G. (TU Delft Water Resources)","van der Zaag, P. (promotor); van Griensven, A (promotor); Delft University of Technology (degree granting institution)","2018","Land and water are two of the most important and interacting natural resources that are critical for human survival and development. Growing population and global economic expansion are accelerating the demand for land and water for uses such as agriculture, urbanization, irrigation, hydropower, and industrialization. The land surface changes dynamically due to these demands and other socio-economic drivers. Biophysical factors such as topographic suitability, climate change, and rainfall variability further influence land use changes and land-use change decisions. Water resources are likewise experiencing pressure from overuse, pollution, and changes in hydrologic processes as a result of both socio-economic and biophysical factors.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-138-59338-1","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Water Resources","","",""
"uuid:85261582-62d7-4b17-9c18-a1b2d11a07b6","http://resolver.tudelft.nl/uuid:85261582-62d7-4b17-9c18-a1b2d11a07b6","Induced Dimension Reduction algorithms for solving non-symmetric sparse matrix problems","Astudillo Rengifo, R.A. (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); van Gijzen, M.B. (copromotor); Delft University of Technology (degree granting institution)","2018","In several applications in science and engineering, different types of matrix problems emerge from the discretization of partial differential equations.
This thesis is devoted to the development of new algorithms to solve these kinds of problems, in particular when the matrices involved are sparse and non-symmetric. The new algorithms are based on the Induced Dimension Reduction method, IDR(s).
IDR(s) is a Krylov subspace method originally proposed in 2008 to solve systems of linear equations. IDR(s) has received considerable attention due to its stable and fast convergence. It is, therefore, natural to ask if it is possible to extend IDR(s) to solve other matrix problems, and if so, to compare those extensions with other well-established methods. This work aims to answer these questions.
The main matrix problems considered in this dissertation are: the standard
eigenvalue problem, the quadratic eigenvalue problem, the solution of systems of linear equations, the solution of sequences of systems of linear equations, and linear matrix equations. We focus on examples that arise from the discretization of partial differential equations.
Advanced flow control experiments based on alternating current dielectric barrier discharge plasma actuators are also performed following different instability control approaches. The primary instability is conditioned by the external forcing either in the wavenumber spectrum (by inducing selected spanwise modes) or in intensity (by weakening or enhancing the cross-flow velocity). The secondary instability modes are conditioned in the frequency spectrum and phase.
These efforts achieved their intended goals. However, when selected stationary modes were forced, the boundary layer fluctuations were enhanced. These fluctuations can directly cause turbulent breakdown, nullifying the beneficial effect of the performed instability control. The cross-flow forcing, making use of newer actuators reaching higher frequencies, proved successful, yielding transition promotion or delay depending on the forcing direction.
In the past three decades, particle image velocimetry (PIV) has become a standard measuring technique in experimental fluid mechanics. Advances in both hardware components and software analysis have allowed achieving many milestones in flow diagnostics, mainly time-resolved and instantaneous volumetric measurements. In particular, the extension to the third dimension in space, i.e. tomographic PIV and 3D particle tracking velocimetry (PTV), has been used to provide quantitative visualizations of the coherent structures occurring in various turbulent flows and has provided insight into the spatial organization of the turbulent motions at different scales. The extension of the aforementioned techniques towards industrial practice in wind tunnel testing requires the development of a more efficient approach in terms of scaling and versatility.
The present dissertation tackles the upscaling of PIV experiments towards industrial wind tunnels with the use of HFSB as tracing particles. The reasons and motivations behind this choice are addressed in the first chapter and followed by a description of the state-of-the-art of PIV. The second chapter aims at familiarising the reader with the working principles of PIV, which will later be recalled when presenting the advances towards large-scale experiments. Information on the mechanical behaviour of tracer particles and on the underlying physics is discussed in the third chapter, where the case of HFSB is also examined for use during quantitative measurements in the low-speed flow regime.
The problem of seeding in wind tunnels is discussed in chapter 4, where a system for the injection of HFSB in a large wind tunnel is presented. Here, the relationship between HFSB production rate and the resulting spatial concentration and dynamic spatial range (DSR) is discussed. Specific experiments that examine the tracing fidelity of sub-millimetre HFSB tracers are presented in chapter 5. The behaviour of HFSB is compared to micro-size droplets, yielding a characteristic response time in the range of 10 μs. This milestone opens up the applicability of HFSB tracers for quantitative velocimetry in wind tunnel flows. In chapter 6, a specific case of interest is presented whereby HFSB tracers are used to measure the flow velocity within steady vortices such as those released at the tip of wings. A dedicated experiment shows that the neutrally or slightly buoyant HFSB return a rather homogeneous spatial concentration within the core of vortices, solving the long-standing issue encountered for small heavy tracers, such as fog droplets, that are systematically ejected from highly vortical regions.
An analysis of the light scattering by HFSB was conducted with theoretical and experimental approaches, as described in chapter 7. The light intensity scattered by the HFSB is characterised by two source points: the glare points. The overall scattered light appears to be 10^4 to 10^5 times more intense with respect to the oil-based micro-size droplets. This information is used to retrieve the maximum size of the measurement volume for a given light source.
Chapter 8 closes this dissertation by presenting a survey of all the experiments that have been conducted during this PhD research. The scale of the experiments varies from the more academic case of a circular cylinder up to that of a ship model installed in one of the large industrial wind tunnels operated at the German-Dutch Wind Tunnels laboratories (DNW), going through the visualization and quantification of large structures in the rotor region of a vertical axis wind turbine (VAWT).","HFSB; air-flow seeding; large-scale PIV; Tomo-PIV","en","doctoral thesis","","978-94-6366-015-0","","","","","","","","","Aerodynamics","","",""
"uuid:49dc353a-dac7-4a5a-91ad-bc965f149bbb","http://resolver.tudelft.nl/uuid:49dc353a-dac7-4a5a-91ad-bc965f149bbb","Geographical point cloud modelling with the 3D medial axis transform","Peters, R.Y. (TU Delft Urban Data Science)","Stoter, J.E. (promotor); Ledoux, H. (copromotor); Delft University of Technology (degree granting institution)","2018","A geographical point cloud is a detailed three-dimensional representation of the geometry of our geographic environment. Using geographical point cloud modelling, we are able to extract valuable information from geographical point clouds that can be used for applications in asset management, crisis management, city and landscape planning, and environmental simulations.
During this process the point cloud is semantically enriched, e.g. by performing classification, and structurally enriched, e.g. by performing segmentation or surface reconstruction. In this thesis I propose a new approach to geographical point cloud modelling based on the 3D Medial Axis Transform (MAT), a skeleton-like representation of shapes that explicitly models both the topology and the geometry of shapes. While the 3D MAT has been used before in other fields, its application to geographical point clouds is novel. Advantages of the MAT over existing mostly 2.5D and boundary representation-based methods include that 1) it is fully 3D, 2) it can be used to intuitively structure and decompose a point cloud into objects, 3) it clearly separates a point cloud into interior and exterior volumes, and 4) it is able to compactly characterise geometrical properties of a shape through its local medial geometry. I make three core contributions. First, I explain how to robustly approximate the 3D MAT for large real-world geographical point clouds. This is critical for geographical point clouds because they are inherently noisy due to the challenging acquisition conditions and the fact that the MAT in itself is highly sensitive to noise. Second, I show how to structure the MAT into a connected set of medial sheets that form so-called 'medial clusters' that give us a natural decomposition of the point cloud into objects. Third, I demonstrate how the MAT can be applied for feature-aware point cloud simplification and visualisation, visibility analysis, watercourse detection, and building detection. Due to noise and limitations in the point density of geographical point clouds, the MAT performs best for objects that have a clearly defined volume in the point cloud, such as houses and landscape features. It is less suitable for objects like trees and thin street furniture.
The core result of this thesis is the demonstration that the 3D MAT is a useful and practically viable tool for geographical point cloud modelling.","Medial Axis Transform; Point cloud; Geographic information systems; Classification; Object recognition","en","doctoral thesis","","978-94-6186-899-2","","","","","","","","","Urban Data Science","","",""
"uuid:79dbeb5c-f610-4ddd-9ef9-44202e69a18e","http://resolver.tudelft.nl/uuid:79dbeb5c-f610-4ddd-9ef9-44202e69a18e","Electrokinetic and Poroelastic Characterization of Porous Media: Application to CO2 storage monitoring","Kirichek, Alex (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Ghose, R. (copromotor); Delft University of Technology (degree granting institution)","2018","Monitoring the properties of a CO2 storage reservoir is important for two main reasons: firstly, to verify that the injected CO2 is safely contained in the reservoir rock as planned, and secondly, to provide data which can be used to update the existing reservoir models and support eventual mitigation measures in case of deviation from the CO2 storage plan. Reliable quantitative monitoring of reservoir rocks and pore-filling fluids remains a challenging task in geophysical prospecting. Typically, geophysical electrical and seismic surveys are used to predict reservoir properties. Electrical properties of rocks are strongly controlled by the chemistry of the fluids that fill the pore space. Thus, electrical surveys can provide accurate estimates of pore-filling fluid composition and porosity of reservoir rock. Seismic methods are particularly sensitive to elastic heterogeneities in the subsurface. Elastic properties of reservoir rocks can be extracted from recorded seismic data. Potentially, the simultaneous use of electrical and seismic geophysical surveys can reduce the uncertainty in the quantitative characterization of reservoir rocks. In this thesis, theoretical and experimental research is conducted to show the feasibility of such an integrated approach for a CO2 storage reservoir.","Electrokinetics; Dielectric constant; CO2; Poroelasticity; Impedance spectroscopy; Biot׳s theory; Polarization","en","doctoral thesis","","978-94-6186-902-9","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:88d76fda-0cbb-402e-a834-aa76b88a4e3d","http://resolver.tudelft.nl/uuid:88d76fda-0cbb-402e-a834-aa76b88a4e3d","Reliability modelling for fatigue life prediction: with application to components in dynamic systems of rotorcraft","Dekker, S.H. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Alderliesten, R.C. (copromotor); Delft University of Technology (degree granting institution)","2018","A mechanical component can break due to repeated load cycling, even if these loads remain well below the component’s regular static strength. In a simplified fashion, a component’s fatigue life depends on the loads that it has to endure during its service life, as well as its fatigue strength to resist the formation of cracks. Since both of these factors can be considered as random variables, the time until a fatigue-induced rupture occurs can be considered as a random variable as well. Airworthiness regulations require that aircraft manufacturers show by numerical analysis that the probability that a fatigue failure occurs during a critical part’s maximum allowable service life does not exceed a specified probability.","","en","doctoral thesis","","978-94-6295-865-4","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:839f1f57-37c8-434c-9ef8-363de76965c6","http://resolver.tudelft.nl/uuid:839f1f57-37c8-434c-9ef8-363de76965c6","Form Follows Force: A theoretical framework for Structural Morphology, and Form-Finding research on shell structures","Li, Q. (TU Delft Structural Design & Mechanics)","Rots, J.G. (promotor); Delft University of Technology (degree granting institution)","2018","With the springing up of freeform architectures, the key problem to structural engineers is to generate structural forms with high structural efficiency subject to the architectural space constraints during the conceptual design process. In this research, a theoretical framework for Structural Morphology has been proposed, that provides an effective solution to the problem. To enrich the proposed framework, systematic Form-Finding research on shell structures is conducted.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-012-9","","","","A+BE | Architecture and the Built Environment No 2 (2018)","","","","","Structural Design & Mechanics","","",""
"uuid:f945ee45-048e-4c19-a407-6283ed351ac6","http://resolver.tudelft.nl/uuid:f945ee45-048e-4c19-a407-6283ed351ac6","Feature-based fast grasping for unknown objects","Lei, Q. (TU Delft Robot Dynamics)","Wisse, M. (promotor); Delft University of Technology (degree granting institution)","2018","According to the report by the United Nations in 2015, the global population of older persons aged 60 years or over is predicted to grow to 1.4 billion by 2030. A rapidly aging population poses a challenging problem for human beings, i.e. supply shortage of working-age people. To solve this problem, increasing research efforts are poured into the field of robotics, especially in service robotics. Service robots are believed to be a solid solution to the challenging problem of an aging population. The Strategic Research Agenda (SRA) for Robotics in Europe, a development guideline for European robotics from 2014 to 2020, classifies robots’ functions into eight basic categories, i.e., assembly, surface process, interaction, exploration, transporting, inspection, grasping and manipulation. From SRA, we can find that grasping is an important basic function for robots. Combining grasping with other basic functions, robots can perform many service tasks to free humans from tedious housework, for example, cleaning rooms, cooking and washing dishes.","object feature; gripper feature; unknown object grasping; fast grasping; partial point cloud; force balance","en","doctoral thesis","","","","","","","","","","","Robot Dynamics","","",""
"uuid:f1e5b2d3-eb4e-4f66-af89-cc5380bef837","http://resolver.tudelft.nl/uuid:f1e5b2d3-eb4e-4f66-af89-cc5380bef837","Influence of Bow-Wave Breaking on the Added Resistance of Fast Ships","Choi, B. (TU Delft Ship Hydromechanics and Structures)","Huijsmans, R.H.M. (promotor); Wellens, P.R. (copromotor); Delft University of Technology (degree granting institution)","2018","The publication of the Energy Efficiency Design Index (EEDI) by the International Maritime Organization (IMO) has recently stimulated the accurate assessment of actual sea performance of ships, which is evaluated as added resistance in waves. However, a satisfactory consensus on the evaluation method has not yet been reached owing to uncertainty in wave added resistance. This uncertainty can be reduced by developing analytical methods that recognize nonlinearities. A typical factor contributing to the uncertainty of added resistance is the breaking of the bow wave. In this study, an evaluation method is developed to explain the nonlinearity of added resistance due to the uncertainty of bow-wave breaking. The accuracy of evaluation of added resistance can be improved by considering the speed of the ship, which affects the stability of the bow wave. This study also confirms that the breaking of the bow wave causes a violation of the linear relation between the pressure and the relative wave elevation of the bow wave.
In order to express the nonlinearity of added resistance due to the breaking of the bow wave, a transfer function including the speed of the ship is proposed because the speed of the ship affects the stability type of bow-wave breaking. By analyzing the results of the added resistance measured in a fast ship series test, it was confirmed that the added resistance should be evaluated by considering the ship’s speed. In addition, hull pressures and relative wave elevations are measured for the mother ship of the series test, and analysis tools are developed to represent the nonlinearity between these two signals. This analysis confirms that the nonlinear relationship between the hull pressure and the relative wave elevation, which significantly contributes to the added resistance, is greatly influenced by the speed of the ship.
This study provides important insight into the violation of the linear relation by using the proposed analysis tools. The results show that the nonlinearity due to the plunging breaking of a bow wave is intuitively detected. The nonlinearity is shown to vary with the ship’s speed. The findings provide a better understanding of the process of plunging breaking of bow waves.
Based on the above findings, a correction model is proposed to improve the accuracy of numerical calculation performed using the linear potential theory. The calculation of the fast ship is compared with the experimental results. The results reveal that the accuracy of added resistance estimation can be improved through the physics-based correction. Furthermore, a method for improving the reliability of the added resistance estimation is proposed by identifying the nonlinearity of the plunging breaking of the bow wave on a fast displacement ship.","added resistance; fast ships; bow-wave breaking; relative wave elevation; non-linearity assessment","en","doctoral thesis","","978-94-6186-906-7","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:5c23f3d4-c331-4d1f-bf9d-ca37b5fb2436","http://resolver.tudelft.nl/uuid:5c23f3d4-c331-4d1f-bf9d-ca37b5fb2436","Alpha Radionuclide Therapy Using Polymeric Nanocarriers: Solution to the Recoil Problem?","de Kruijff, R.M. (TU Delft RST/Applied Radiation & Isotopes)","Wolterbeek, H.T. (promotor); Denkova, A.G. (copromotor); Delft University of Technology (degree granting institution)","2018","In radionuclide therapy, radioisotopes are used to irradiate tumours from within the body. Usually beta-emitters coupled to tumour-targeting molecules are used, which specifically accumulate at the tumour site. Instead of using beta-emitters, it is also possible to use radionuclides which emit an alpha particle upon decay. Alpha particles have a shorter range and are much more effective in destroying tumour cells. Alpha radionuclide therapy is steadily gaining interest, although currently in most studies radionuclides with relatively short half-life are used. Long lived radionuclides like the 225Ac employed in this thesis are ideal for the treatment of tumours which take a longer time to reach. The long halflife of 225Ac combined with four alpha particles in its decay chain ensure long irradiation of the targeted tissue. However, upon alpha-decay the daughter nuclide receives a recoil energy decoupling it from any targeting agent, allowing it to diffuse throughout the body to irradiate healthy tissue. The main goal of this thesis is to develop polymeric nanocarriers, so-called polymersomes, which retain the recoiling daughter atoms of 225Ac in order to limit healthy tissue toxicity in alpha radionuclide therapy.","","en","doctoral thesis","","978-94-6299-845-2","","","","","","","","","RST/Applied Radiation & Isotopes","","",""
"uuid:0b35cd81-4299-48bf-8751-e9ba2488659b","http://resolver.tudelft.nl/uuid:0b35cd81-4299-48bf-8751-e9ba2488659b","Iron Catalysts for Fischer-Tropsch Synthesis derived from Metal-Organic Frameworks: Fundamentals and Performance","Wezendonk, T.A. (TU Delft ChemE/Catalysis Engineering)","Gascon, Jorge (promotor); Kapteijn, F. (promotor); Delft University of Technology (degree granting institution)","2018","","","en","doctoral thesis","","879-94-028-0962-6","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:d0c47163-3845-430b-a8ce-013c41faa2ea","http://resolver.tudelft.nl/uuid:d0c47163-3845-430b-a8ce-013c41faa2ea","Data Assimilation in Discrete Event Simulations","Xie, X. (TU Delft Policy Analysis)","Verbraeck, A. (promotor); Delft University of Technology (degree granting institution)","2018","Enabled by the increased availability of data, the data assimilation technique, which incorporates measured observations into a dynamical system model to produce a time sequence of estimated system states, has gained popularity. The main reason is that it can produce more accurate estimation results than using either a simulation model or the measurements alone. Due to this benefit, the data assimilation technique has been applied in many continuous systems applications, but very little data assimilation research has been found for discrete event simulations. With the application of new sensor technologies and communication solutions, the availability of data for discrete event systems has increased as well. The increased data availability for discrete event systems, combined with the lack of related data assimilation techniques, thus motivated this work on data assimilation for discrete event simulations.
Since discrete event simulations are highly nonlinear, non-Gaussian systems, particle filters are used to conduct data assimilation in discrete event simulations. However, applying particle filtering in discrete event simulations still encounters several theoretical and practical problems, such as the state retrieval problem (discrete event simulation models have a piecewise constant state trajectory, so the retrieved state was updated at a past time instant, with which inaccurate estimation results will be obtained), the variable dimension problem (the dimension of the state trajectory during a fixed time interval is random, leading to inapplicability of the standard sequential importance sampling algorithm), and the processing of non-numerical data. Therefore, this research aims to develop a particle filter based data assimilation framework for discrete event simulations, in which the aforementioned problems can be addressed.","","en","doctoral thesis","","978-94-6186-893-0","","","","SIKS Dissertation Series No. 2018-09.","","","","","Policy Analysis","","",""
"uuid:47d557ea-9ff9-4851-883a-4ce5e943a8b7","http://resolver.tudelft.nl/uuid:47d557ea-9ff9-4851-883a-4ce5e943a8b7","Efficient wireless networked control: Towards practical event-triggered implementations","Fu, A. (TU Delft Team Tamas Keviczky)","Mazo, M. (promotor); Babuska, R. (promotor); Delft University of Technology (degree granting institution)","2018","Wireless networked control systems, as the name indicates, employ wireless networks to interconnect their components, e.g. sensors, computing units, and actuators, in their implementation. With wires removed from the control system implementation, the components can be more easily installed in spatial positions that are hard to access, which facilitates their deployment at large physical scales. This enables the expansion of control applications to new domains or objectives previously not attainable. However, as a trade-off, in a wireless networked control system, the transmission bandwidth is much smaller compared to a wired one. Besides, to achieve flexibility and mobility, some nodes may have energy supplies from batteries, which have limited capacity and are usually costly to replace. The limitations in bandwidth and energy supplies are a major problem when designing wireless networked control systems. The purpose of this thesis is to study how to guarantee pre-designed stability and performance under limitations of bandwidth and energy supplies, with the goal to enrich the control approaches for resource-aware industrial applications.","","en","doctoral thesis","","978-94-6186-894-7","","","","","","","","","Team Tamas Keviczky","","",""
"uuid:52c58e54-883a-4268-8413-c7491dc78671","http://resolver.tudelft.nl/uuid:52c58e54-883a-4268-8413-c7491dc78671","Memristive Device for Logic Design and Computing","Xie, L. (TU Delft Computer Engineering)","Hamdioui, S. (promotor); Delft University of Technology (degree granting institution)","2018","Memristive device or memristor is a promising emerging technology due to its good scalability, near-zero standby power consumption, high integration density, and CMOS fabrication compatibility. Several potential applications based on memristor technology have been proposed, such as non-volatile memories, neuromorphic systems, and resistive computing. However, research on resistive computing is still in its infancy phase. Therefore, it faces challenges with respect to the development of the device technology, logic design styles, computer architectures, compilers and applications.
This thesis focuses on the logic design (including primitive logic gates, interconnect, circuit,
and synthesis flow) and a novel non-Von Neumann architecture.","Memristor; Logic; Computing","en","doctoral thesis","","978-94-6366-013-6","","","","","","","","","Computer Engineering","","",""
"uuid:caf8ff62-7b9b-4dc1-b3b9-2087819d2ae1","http://resolver.tudelft.nl/uuid:caf8ff62-7b9b-4dc1-b3b9-2087819d2ae1","Vibration-induced settlement of a slip-joint connection for offshore wind turbines","Segeren, M.L.A. (TU Delft Offshore Engineering)","Metrikine, A. (promotor); Hendrikse, H. (copromotor); Delft University of Technology (degree granting institution)","2018","The majority of existing offshore wind turbines typically consist of a monopile foundation, a transition piece with a vertically positioned grouted connection, a turbine tower, and a turbine. Of the 2,653 offshore turbines that were installed by the end of 2015, 80 percent are supported by a monopile.
Despite the current overwhelming dominance of the monopile, its future application is rather uncertain. Offshore wind turbines have continuously increased in size and have moved to deeper waters; these developments require larger and heavier support structures. It is unlikely that floating structures will be preferred to bottom-founded structures up to a water depth of 80 m. The question thus becomes whether jackets or monopiles will be used under such conditions. The monopile seems to be losing in this competition, as, to meet the requirements, a monopile would have to be extremely large; thus, it may no longer fall within industry limits, both in terms of manufacturing demands and the lifting capacity of dedicated installation vessels. One may wonder whether a single monopile would be necessary, or if a set of intelligently connected monopiles of smaller length could suffice. The key to the success of such a concept could be the so-called slip-joint connection.
A slip-joint consists of two conical sections made of steel. This connection does not require any grout and, besides being a connection option for the transition piece and monopile, allows monopiles to be comprised of a number of lighter sections of very large diameters. By employing a slip-joint, the applicability of the monopile could be extended to deeper waters and to turbines that have very large rotors and power capacities.
Although the slip-joint connection has been successfully used for onshore wind turbines in the past, it has not yet been used offshore. One of the challenges in using the slip-joint is ensuring a proper fit of the cones despite the imperfections that result from manufacturing tolerances, deformations by pile driving, and the potential damage that may occur during the handling of the cones.
In this thesis, it is proposed that a slight difference in the cone angles be used to address the aforementioned imperfections. A steeper cone angle for the transition piece when compared to that of the monopile is proposed. These slightly different cone angles require the upper cone to deform elastically in order to slide down the lower cone during installation. To facilitate the installation process, it is proposed that vibrations be employed in order to cause the upper cone to slide down under its own weight. In order to use this new method of connecting joints, it will be necessary to investigate the manner in which vibrations influence the relative motions of the two cones that need to achieve stable contact.
The objective of this thesis is to investigate the potential of using vibrations in the installation and dismounting of a slip-joint with slightly different cone angles. The research is conducted by means of numerical modelling and experiments.","","en","doctoral thesis","","978-94-6186-891-6","","","","","","","","","Offshore Engineering","","",""
"uuid:48218fb4-1b37-4b83-81ee-a0cadc16d8e9","http://resolver.tudelft.nl/uuid:48218fb4-1b37-4b83-81ee-a0cadc16d8e9","Unified correspondence and canonicity","Zhao, Z. (TU Delft Ethics & Philosophy of Technology)","Palmigiano, A. (promotor); van de Poel, I.R. (promotor); Delft University of Technology (degree granting institution)","2018","Correspondence theory originally arises as the study of the relation between modal formulas and first-order formulas interpreted over Kripke frames. We say that a modal formula and a first-order formula correspond to each other if they are valid on the same class of Kripke frames. Canonicity theory is closely related to correspondence theory. We say that a modal formula is canonical if it is valid on its canonical frame, or equivalently, if its validity is preserved from a modal algebra to its canonical extension, or from a descriptive general frame to its underlying Kripke frame. Canonicity is closely related to completeness. If a modal formula is canonical, then the normal modal logic axiomatized by this modal formula is complete with respect to the class of Kripke frames defined by it.
In the development of correspondence theory, the algorithmic aspect receives increasing attention. The Sahlqvist-van Benthem theorem provides an algorithm to transform a class of modal formulas, which are later called Sahlqvist formulas, into their corresponding first-order formulas. The algorithm SQEMA provides a modal language-based algorithm to transform a modal formula into a pure modal formula in an expanded language, and then translate the pure modal formula into the first-order language. SQEMA succeeds on a strictly larger class of modal formulas, which are called inductive formulas.
In recent years, unified correspondence theory has been developed based on duality-theoretic and order-algebraic insights. In this approach, a very general syntactic definition of Sahlqvist and inductive formulas is given, which applies uniformly to each logical signature and is given purely in terms of the order-theoretic properties of the algebraic interpretations of the logical connectives. In addition, the Ackermann-lemma-based algorithm ALBA, a generalization of SQEMA based on order-theoretic and algebraic insights, is given; it effectively computes first-order correspondents of input formulas/inequalities and is guaranteed to succeed on the Sahlqvist and inductive classes of formulas/inequalities.
This dissertation belongs to the line of research of unified correspondence theory.
Chapter 3 applies the unified correspondence methodology to possibility semantics, gives alternative proofs of the Sahlqvist-type correspondence results in [196], and extends these results from Sahlqvist formulas to the strictly larger class of inductive formulas, and from full possibility frames to filter-descriptive possibility frames. Chapter 4 applies the unified correspondence methodology to modal compact Hausdorff spaces and gives alternative proofs of the canonicity-type preservation results in [14]. Chapter 5 examines the power and limits of the translation method in obtaining correspondence and canonicity results. Chapter 6 concerns an application of unified correspondence theory to the proof theory of strict implication logics, showing the usefulness of unified correspondence theory in the design of analytic Gentzen sequent calculi, especially when it comes to computing the corresponding analytic rules of a given sequent.","","en","doctoral thesis","","","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:38f761f4-6d73-4ca9-9cca-8de432184f0c","http://resolver.tudelft.nl/uuid:38f761f4-6d73-4ca9-9cca-8de432184f0c","Bridging the gap: combined optical tweezers with free standing lipid membrane","Marin Lizarraga, V.M. (TU Delft BN/Marie-Eve Aubin-Tam Lab)","Tans, S.J. (promotor); Aubin-Tam, M.E. (copromotor); Delft University of Technology (degree granting institution)","2018","This thesis presents a reliable technology to assemble free standing lipid membranes using microfabricated devices. A microfluidic cartridge consisting of parallel channels connected with a rectangular aperture was designed and characterized to assemble artificial membranes. This methodology resulted in a system capable of assembling lipid bilayer membranes of different lipid compositions. Using decane as the organic solvent, ~70% of the aperture was covered by the lipid bilayer, while the remaining area was occupied by a pocket of solvent (annulus). An almost complete depletion of the annulus can be achieved by choosing a solvent (chloroform) capable of being absorbed by the flowcell material. In comparison with other methods, this approach is an important contribution to the field, as it allows real-time control over conditions (voltage, molecules in solution, pH) on both leaflets of the membrane. Furthermore, the lipid bilayer plane is perpendicular to the microscope focal plane, allowing observation of morphological changes in the lipid membrane and straightforward combination with optical techniques. This work shows the first successful operation of optical tweezers combined with planar lipid membranes accessible from both sides. Direct manipulation of the membrane is demonstrated with membranes with a reduced annulus. One of the microfluidic devices designed in this thesis can host several membranes simultaneously, which are all accessible with optical tweezers.
This device facilitates optical tweezers studies by allowing work with different membranes of the same lipid composition in the same device, with access to both sides of the membranes. Direct mechanical manipulation and simultaneously adjustable buffer conditions are highly desired features that this technique offers, enabling the study of biological processes that depend on asymmetric conditions on each side of the membrane. In addition, the easy access facilitates the study of the formation of lipid nanotubes via the intrusion of objects into a flat membrane. The methodology developed in the context of this thesis can be used for combined electrophysiology and force spectroscopy of lipid membranes. To reach the full potential of this technique, a more complete descriptive model of the membrane is needed. More complex lipid membranes could also be implemented as future work.","Optical tweezers; NOA81; lipid membrane; lipid nanotube; microfluidics","en","doctoral thesis","","978-90-8593-337-3","","","","Casimir PhD series 2018-05","","","","","BN/Marie-Eve Aubin-Tam Lab","","",""
"uuid:4a4d296b-4db4-47ef-835e-b6d445b654d4","http://resolver.tudelft.nl/uuid:4a4d296b-4db4-47ef-835e-b6d445b654d4","Two-dimensional membranes in motion","Davidovikj, D. (TU Delft QN/Steeneken Lab)","Steeneken, P.G. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2018","This thesis revolves around nanomechanical membranes made of suspended two-dimensional materials. Chapters 1-3 give an introduction to the field of 2D-based nanomechanical devices together with an overview of the underlying physics and the measurement tools used in subsequent chapters. The research topics that are discussed can be divided into four categories: characterisation (Chapters 4 and 5), sensors (Chapter 6), actuators (Chapters 7 and 8) and novel materials (Chapter 9).","Graphene; two-dimensional materials; nanomechanics; NEMS; Sensors; nonlinear characterisation; graphene pumps; capacitive readout; complex oxide resonators","en","doctoral thesis","","978-90-8593-335-9","","","","Casimir PhD series 2018-03","","2018-06-01","","","QN/Steeneken Lab","","",""
"uuid:25e95813-c9e9-4eae-a1e7-f24920fbe592","http://resolver.tudelft.nl/uuid:25e95813-c9e9-4eae-a1e7-f24920fbe592","The Cognitive Infrastructures of Markets: Empirical Studies on the Role of Categories in Valuation and Competition, and a Formal Theory of Classification Systems Based on Lattices and Order","Piazzai, M. (TU Delft Ethics & Philosophy of Technology)","Palmigiano, A. (promotor); van de Poel, I.R. (promotor); Wijnberg, Nachoem (promotor); Delft University of Technology (degree granting institution)","2018","This dissertation addresses the question of how the information encoded by category labels is interpreted by agents in a market for the purpose of decision-making. To this end, we first examine the influence of categorization on economic and strategic outcomes with two empirical studies, and then use the insights provided by these studies to develop a formal theory of classification systems. Consistently with Formal Concept Analysis (FCA), this theory builds on the fundamental mathematical notions of lattices and order, and it is thus uniquely suited to yield an ontological perspective on category representations. As a result, we are much better equipped to understand how categories serve as the ""cognitive infrastructures"" of markets and affect economic activity. Chapter 1 offers a concise overview of the extant research on categorization in cognitive psychology, economic sociology, and organization theory. We build extensively on this diverse literature during the course of our exposition.
The first part of this thesis includes our empirical studies. In Chapter 2, we synthesize insights from industrial economics, strategic management, and organizational ecology to examine the effects of product proliferation strategies. Conceptualizing the market as a multidimensional (Lancastrian) space of product features, we argue that product categories guide firms' strategic decisions by partitioning the space into subsets or regions. Product proliferation occurs when a firm bids to occupy a product category at the expense of competitors by saturating the corresponding region of space. Consistently with game-theoretic models of product competition in differentiated markets, we predict proliferation to have a negative effect on the likelihood of rival product introductions in the targeted category; however, we also predict that this effect is weaker if the region of space to which the category maps is more complex (i.e., heterogeneous in terms of product features). Our analysis of firms' patterns of new product introductions in the US recording industry supports these hypotheses; in addition, it suggests that product proliferation effectively deters competitors who can alter their positioning in feature space, but those who are constrained to particular positions remain virtually unaffected.
In Chapter 3, we turn to consumers' perspective and examine how the categorization of products according to different classification systems affects the attribution of value. Focusing on the distinction between categories based on prototypes and categories based on goals, we argue that category labels of these two kinds map to structurally different regions of the feature space. Valuation requires consumers to infer the location of products from their labels, but because type- and goal-based categories have different internal structures, they enable different sorts of inferences. Building on this argument, we theorize that under particular conditions spanning type-based categories has a U-shaped effect on consumers' evaluations, whereas spanning goal-based categories has a negative effect. At the same time, we predict that spanning goal-based categories can moderate the U-shaped effect of spanning type-based categories by enabling consumers to make more precise inferences from fewer type-based labels. Our analysis of product ratings on a popular music website offers empirical support for these hypotheses.
In the second part of this thesis, we develop a formal theory of categorization that accounts for the key aspects highlighted by our empirical studies. In Chapter 4, we introduce an order-theoretic account of classification systems as RS-frames. These are algebraic structures based on RS-polarities, which we enrich with additional relations to interpret modalities. Consistently with FCA, we propose to interpret an RS-polarity as a database consisting of a set of objects (such as products or organizations in a market), a set of features, and an incidence relation linking objects with their features. All the possible categories whereby the objects and the features may be grouped arise as the Galois-stable sets of this polarity, just like formal concepts in FCA. An agent's perception of the objects and their features, which can be unique, incomplete, or even mistaken, is modeled by a relation giving rise to a normal modal operator that expresses an agent's beliefs about a category's intensional and extensional meaning. The fixed points of the iterations of belief modalities are used to model categories whose meaning is shared as they arise from social interaction.
In Chapter 5, we clarify how the order-theoretic perspective on concepts enabled by FCA complements the geometric perspective allowed by the theory of conceptual spaces. In addition to introducing a sound and complete epistemic-logical language, we refine the framework presented in the previous chapter both technically and conceptually: Technically, because we free its semantics from the restrictions imposed by the RS-conditions and generalize to more natural Kripke-style frames. This makes our formalism better suited to represent formal contexts (i.e., databases) as they occur in real-world domains. Conceptually, because we enhance our theory of classification systems as concept lattices and propose formalizations for some of the most important theoretical constructs in the categorization literature, including typicality, similarity, contrast, and leniency. In particular, we elaborate our interpretation of the fixed-point construction introduced before by tying it directly to the notion of typicality. Possible extensions are discussed, especially with regard to dynamic updates.
Chapter 6 summarizes the main findings of this dissertation, elucidates their implications for organizational research, identifies key areas for improvement, and presents promising directions for future study. Special consideration is given to the possibility of unifying FCA and conceptual spaces using the framework of correspondence theory. We conclude with a general reflection on the role of logic in the social sciences.","Categorization; Organization theory; Applied logic","en","doctoral thesis","","978-90-90306-12-3","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:37be4591-3e02-4ad3-b800-30bf41a85f1c","http://resolver.tudelft.nl/uuid:37be4591-3e02-4ad3-b800-30bf41a85f1c","Identification of time-varying models for flapping-wing micro aerial vehicles","Armanini, S.F. (TU Delft Control & Simulation)","Mulder, Max (promotor); Delft University of Technology (degree granting institution)","2018","The demand for always smaller, more manoeuvrable and versatile unmanned aerial vehicles cannot be met with conventional manned flight approaches. This has led engineers to seek inspiration in nature, giving rise to the bio-inspired flapping-wing micro aerial vehicle (FWMAV). FWMAVs achieve a remarkable flight performance at small scales, however their flight mechanics are extremely complex. This hinders the development of effective dynamic models, which are essential for simulation, design and advanced controller development, and would enhance the performance and autonomy of such vehicles. This thesis addresses the challenge of modelling flapping-wing dynamics, using free-flight and wind tunnel data, with the aim of devising new models that are both accurate and computationally simple enough for control and simulation applications. The research is based on a test vehicle, i.e. the DelFly, developed at TU Delft. To meet the stated objectives, two modelling approaches are developed. The first approach is based on free-flight system identification and yields time-varying grey-box state-space models of the full vehicle dynamics, covering different flight conditions. The second approach results in physically meaningful phenomenological models of the aerodynamics specifically, accounting for complex effects such as the clap-and-fling mechanism and the interaction between the unsteady wing wake and tail. In addition to the modelling, recommendations for effective FWMAV flight testing are put forth, and a sensor fusion method is developed to advantageously combine on-board sensor data with off-board motion tracking data. 
All the developed models are accurate and computationally inexpensive, and the approaches can be generalised to comparable FWMAVs. While each model is best suited for different applications, thanks to its specific properties, all the developed models pave the way for new work in design, simulation, and control of FWMAVs.","Flapping-wing flight; Micro Aerial Vehicle; System identification; Aerodynamic modelling; Free-flight testing","en","doctoral thesis","","978-94-6186-895-4","","","","","","","","","Control & Simulation","","",""
"uuid:b6eebef5-f519-4921-9d55-54c85aff3992","http://resolver.tudelft.nl/uuid:b6eebef5-f519-4921-9d55-54c85aff3992","Funding Sustainable Cities in China","Zhan, C. (TU Delft Organisation & Governance)","de Jong, W.M. (promotor); de Bruijn, J.A. (promotor); Delft University of Technology (degree granting institution)","2018","Currently, more and more people live in cities, leading to an enormous increase in global GHG emissions. Cities are blamed as a cause of environmental problems. Therefore, countries around the world aim to address these problems by launching sustainable city programs. On April 22, 2016, China signed the Paris Agreement at the United Nations Headquarters in New York and formally promised that carbon dioxide emissions in China would peak around 2030, and that it would strive to reach the peak as soon as possible. However, the transition to sustainable cities requires governments to invest a large sum of money. The IEA projected in 2010 that the total investment in projects responding to climate change may amount to US $220 billion each year between 2010 and 2020 and about US $1 trillion each year between 2020 and 2030. Against this backdrop, how to fund the development of sustainable cities becomes a pressing problem the Chinese government faces. Previously, it was a common practice for the Chinese government to resort to off-budget means such as land concessions and Urban Development and Investment Corporations (UDICs) to bridge the money gap. However, these tools are viewed as unsustainable due to the scarcity of land, the imbalance of benefit appropriation among different stakeholders caused by land finance, and the lack of transparency in financing through UDICs.
Therefore, this research aims to expand the funding sources by exploring the possibilities for involving the private sector in the construction of sustainable cities and the roles it can play in achieving national climate goals.","Sustainable Cities; PPP; Finance; Bonds; Metro + Property","en","doctoral thesis","","978-94-6186-897-8","","","","","","","","","Organisation & Governance","","",""
"uuid:f8fa946a-0178-40e7-bf9c-b91962698481","http://resolver.tudelft.nl/uuid:f8fa946a-0178-40e7-bf9c-b91962698481","Evidence-Based Software Portfolio Management","Huijgens, H.K.M. (TU Delft Software Engineering)","van Deursen, A. (promotor); van Solingen, D.M. (promotor); Delft University of Technology (degree granting institution)","2018","Based on the large amounts spent by software companies to develop new and existing software systems, we argue that an evidence-based approach that focuses on a software portfolio as a whole should be in place to support decision-making. We developed EBSPM as an evidence-based, practical model to support software companies in actively steering the optimization of their software delivery portfolio. We evaluated the model in case studies and surveys in industry, to demonstrate its strengths and limitations in practice. This led to the following results:
• We analyzed - from a portfolio point of view - the characteristics of best performers and worst performers, in a dataset of 352 software projects, resulting in 7 success factors and 9 failure factors.
• We found that a release process that performs above average on cost and duration satisfies stakeholders through fast response and direct value, even when the reliability and availability of the actual system are weak.
• A statistical, evidence-based pricing approach for software engineering, as a single instrument, can be used in the subject companies to create cost transparency and performance management.
• We found significant differences between the EBSPM-repository and an ISBSG-subset. Practitioners and researchers alike should be cautious when drawing conclusions from a single repository.
• We found that a focus on shortening overall project duration and improving communication and team collaboration on intermediate progress is likely to have a positive impact on stakeholder satisfaction and perceived value.
Based on the findings, we conclude that it is wise for software companies to collect and analyze their own historic software portfolio data, because large cross-company differences in performance are found. We obtained a better understanding of the differences and similarities between effort and cost of software deliveries. Additionally, we studied the effects of pricing of software deliveries, giving us better insight into ways to support decision-making. Based on the results of ongoing research, we expect that automation of the measurement and analysis process, based on statistics to identify strong relationships, is a direction in which the analysis of software portfolios (software analytics) is set to develop strongly in the coming years.
The focus of this thesis is on the analysis of conventional DW-MRI data acquired in the context of the Rotterdam Scan Study. This is a prospective population-based cohort study with more than 10,000 participants to investigate causes of neurological disease in elderly people. Conventional DW-MRI is defined as diffusion data acquired with a single diffusion-weighting factor and a small number of diffusion-sensitizing gradient orientations. The objectives of this thesis are (1) to enhance our insight in the relation between tissue structure and the DW-MRI signal from conventional DW-MRI sequences, and (2) to develop methods to quantify diffusion properties in the brain as accurately and precisely as possible based on conventional DW-MRI data.
To gain insight into the relation between tissue structure and the DW-MRI signal, simulated DW-MRI signals based on Monte Carlo simulations of spins between randomly packed cylinders are compared to experimentally acquired data from a hardware phantom. The hardware phantom consists of solid fibers and acts as a model for the extra-axonal diffusion. The simulated DW-MRI signal is in good agreement with the experimentally acquired data. Furthermore, simulations show that the DW-MRI signal from spins between randomly packed cylinders is relatively independent of the cylinder diameter for b-values up to 1500 s/mm2. For b-values higher than 1500 s/mm2, substrates with a smaller cylinder diameter yield a larger attenuation of the diffusion-weighted signal (chapter 2).
Conventional DW-MRI data is commonly analyzed with a technique known as diffusion tensor imaging. Here, the water diffusion profile is modelled by a 3D Gaussian diffusion profile. However, in white matter structures in close proximity to the cerebrospinal fluid (CSF) the use of the single diffusion tensor model is inappropriate. A novel framework is introduced to analyze white matter structures adjacent to the CSF. In this framework a constrained two-compartment diffusion model is fit to the data in which the CSF is explicitly modeled with a free water diffusion compartment. The proposed diffusion statistics are shown to be relatively independent of partial volume effects with CSF and are applied to study ageing in the fornix, a small white matter structure bordering the CSF (chapter 3).
A significant part of the white matter consists of ‘crossing fibers’, whereby two or more white matter tracts contribute to the DW-MRI signal in a voxel. The single diffusion tensor model cannot adequately describe the data in such voxels. To solve this issue a fiber orientation atlas and a model complexity atlas were used to analyze conventional DW-MRI data with a simple crossing fibers model, namely the ball-and-sticks model. It is shown that the application of a fiber orientation atlas and a model complexity atlas can significantly improve the reproducibility and sensitivity of diffusion statistics in a voxel-based analysis (chapter 4).
Finally, a framework is proposed that aims to specifically improve the analysis of longitudinal DW-MRI data. In this framework the ball-and-sticks model is fit simultaneously to multiple scans of the same subject. The orientations of the sticks are constrained to be the same over different scans, while all other parameters are estimated separately for each scan. The use of this framework is shown to increase the precision of estimated ball-and-sticks model parameters in longitudinal DW-MRI studies (chapter 5).
In conclusion, this thesis describes frameworks to enhance the accuracy or precision of estimated diffusion properties of the white matter by applying sophisticated diffusion models to conventional DW-MRI data. We anticipate that many diffusion MRI studies may benefit from the work described in this thesis.","DTI; DW-MRI; dMRI; Diffusion; Brain","en","doctoral thesis","","978-94-6332-315-4","","","","","","","","","ImPhys/Quantitative Imaging","","",""
"uuid:e3e035b1-b9b9-4376-8a3c-45b4f37783d0","http://resolver.tudelft.nl/uuid:e3e035b1-b9b9-4376-8a3c-45b4f37783d0","Creation and detection of majorana states","Rubbert, S.H.P. (TU Delft QN/Akhmerov Group)","Nazarov, Y.V. (promotor); Akhmerov, A.R. (promotor); Delft University of Technology (degree granting institution)","2018","Majorana bound states (MBS) have non-Abelian exchange statistics, which means exchanging the position of two MBS changes the state of the system. This attracted attention for multiple reasons: It is a quantum effect without a classical analogue, it introduces topology to condensed matter physics, and the non-Abelian exchange processes allow quantum computing with intrinsic error protection. For a long time research on MBS was purely theoretical, because there was no experimentally accessible idea of how to create them. This changed with the appearance of a recipe combining conventional superconductivity, semiconductors and a magnetic field. Now such setups exist and their conductance indicates the successful creation of MBS, but challenges regarding their quality, conclusive detection and control remain. In this thesis I propose different strategies for avoiding open experimental problems when creating MBS, and identify new pathways for detecting them. Most proposals for creating MBS are similar in two regards: They confine electrons to lower dimensional systems, for example a 1-dimensional wire, and they use a magnetic field to couple to the electron spin. I start this thesis by developing a system where the magnetic field forces electrons onto cyclotron orbits, thereby spatially confining them without relying on the system’s geometry. This leads to an increased resilience against imperfections. Imperfections in real systems are not the only technical problem. One example is the combination of superconductivity and a magnetic field.
Superconductors expel weak magnetic fields, while strong magnetic fields induce vortices in the superconductor, which have a non-superconducting region in their core. In order to avoid these problems, I turn to a completely different regime and develop a system that does not require a magnetic field to create MBS. Instead, the role of the magnetic field is taken by a combination of supercurrents and spin-orbit coupling in the semiconductor. Then I turn to the detection of MBS, which is always a competition between two goals: finding a unique signature caused only by MBS and relying on a simple setup. One well-established signature of MBS is single electron transport through a superconductor, because without MBS a superconductor only transports electrons in pairs. Of course, single electron transport is not a unique signature at all: it happens in most conductors. That makes it hard to distinguish MBS from undesired side effects. I develop a setup that adds falsifiability to this signature. In addition to detecting single electron transport, my scheme makes it possible to block it if it is due to MBS, thereby making MBS and a normal conduction channel distinguishable. In the last part of my thesis I do not propose a system, but instead analyze and simulate an unexpected outcome of an experiment. My colleagues were studying a driven superconducting resonator coupled to a Josephson junction (two superconductors separated by a short barrier) with the goal of using it to detect MBS. Completely unexpectedly, it turned out to be a high quality microwave laser. In a collaborative effort we explain this previously overlooked phenomenon.","","en","doctoral thesis","","978-90-8593-339-7","","","","","","","","","QN/Akhmerov Group","","",""
"uuid:18c61a8d-2256-4b2d-878f-0406e272e982","http://resolver.tudelft.nl/uuid:18c61a8d-2256-4b2d-878f-0406e272e982","Studying ice particle growth processes in mixed-phase clouds using spectral polarimetric radar measurements","Pfitzenmaier, L. (TU Delft Atmospheric Remote Sensing)","Russchenberg, H.W.J. (promotor); Delft University of Technology (degree granting institution)","2018","Clouds are a prominent part of the Earth's hydrological cycle. In the mid-latitudes, the ice phase of clouds is highly involved in the formation of precipitation. The ice particles in the clouds fall to earth either as snowflakes, in the winter months, or as melting crystals that become rain drops. An efficient growth process is the interaction of ice crystals and supercooled liquid water droplets in so-called mixed-phase clouds. Mixed-phase cloud systems contain both ice crystals and supercooled cloud droplets in the same volume of air. The interaction of the ice and liquid phases leads to an enhanced growth of ice crystals and, therefore, enhances the amount of precipitation. However, such processes are still not fully understood. This work shows that such complex microphysical processes in mixed-phase clouds can be observed using state-of-the-art ground-based radar techniques. By analyzing spectral polarimetric radar data, different signatures of particle growth processes can be identified.
The results presented are based on measurements obtained with the Transportable Atmospheric Radar (TARA) during the ACCEPT campaign (Analysis of the Composition of Clouds with Extended Polarization Techniques) in autumn 2014 in Cabauw, the Netherlands. TARA is an S-band radar profiler with full Doppler and spectral polarimetric measurement capabilities. TARA's unique three-beam configuration is also able to retrieve the full 3-D velocity vector. Because of its high temporal and spatial resolutions and its configuration, TARA can capture the complexity of cloud dynamics and the microphysical variabilities involved in mixed-phase cloud systems.
A new retrieval technique was applied to several case studies to qualitatively analyze ice particle growth processes within mixed-phase cloud systems. These results demonstrate that using radar data re-arranged along fall streaks improves the interpretation of Doppler spectra and polarization parameters. Based on synergetic measurements obtained during the ACCEPT campaign, it was possible to detect supercooled liquid water layers within the cloud system and relate them to TARA observations. Therefore, it was possible to identify different growth processes, such as particle riming, generation of new particles, and particle diffusional growth, within the TARA measurements. This demonstrates that, in order to observe ice particle growth processes within complex systems, adequate radar technology and state-of-the-art retrieval algorithms are required. Moreover, the ice particle growth processes within cloud systems can be linked directly to increased rain intensities using radar data re-arranged along fall streaks.
The last objective of the thesis is the extension of the spectral polarimetric measurement capabilities of TARA and the estimation of the differential phase and the specific differential phase in the spectral domain. These two parameters are frequently used to improve rain estimation and hydrometeor classification and, increasingly, to improve the understanding of microphysical processes, e.g. the onset of the aggregation of ice particles. So far, these parameters have been used only as integrated moments. The work demonstrates that further research is needed to fully understand the microphysical information contained in these spectrally resolved parameters.
Overall, this work demonstrates that spectral polarimetric radar data can be used to improve the understanding of microphysical processes. The presented work also shows that spectral polarimetric radar data can be used to estimate quantitative microphysical properties related to ice particle growth.","cloud physics; spectral radar measurements; radar polarimetry; ice particle growth processes; mixed phase clouds","en","doctoral thesis","","978-94-6186-884-8","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:9e442b3f-d45d-49d2-8e1d-8d2274dbfe69","http://resolver.tudelft.nl/uuid:9e442b3f-d45d-49d2-8e1d-8d2274dbfe69","Development of a system for the investigation of spinnakers using fluid structure interaction methods","Renzsch, H.F. (TU Delft Ship Hydromechanics and Structures)","Huijsmans, R.H.M. (promotor); Gerritsma, M.I. (copromotor); Delft University of Technology (degree granting institution)","2018","While historically sailmaking and saildesign were considered as arts, in the 20th century, mainly from the 1980s onwards, engineering sciences have started to play an important role. Two fields are of particular interest: structural and fluid mechanics. Initially, the sails were tested in the wind tunnel, aggregate flow forces measured and the interaction of flow and structural behaviour implicitly captured by visual observation. No quantitative structural assessment was available in these experiments. With the advent of affordable powerful personal computers, programs were developed to compute the flow around sails and the structural reaction to the resulting forces. These programs were based on significantly simplified assumptions about the fluid mechanics - potential flow - as well as the complete neglect of any unsteady behaviour of flow or coupled result. These simplifications limit the applicability of these programs to upwind sails, essentially this airfoils working at small angles of attack. As downwind sails do not comply with these limitations they are still tested in the wind tunnel with the associated scale effects and limited outcome of quantitative results.
Within this thesis a method is developed to capture the interaction between the complex viscous flow around downwind sails and to compute the structural response to the resulting forces. First, a structural model suitable for downwind sails is developed. This is coupled to a commercial solver for simulations of viscous flow. The individual parts (structural and flow simulation as well as coupling) and the entire method are verified and validated. Finally, an application example is given. To begin, the structural model and the coupling to the flow solver are developed. The particular challenge regarding the structural model is the requirement to compute the complex behaviour of downwind sails. By design these sails have negligible bending stiffness, with the material being stiff in tension but without any meaningful compressive stiffness. To this end the classic CST element is extended with a wrinkling model, and a robust solver able to capture the resulting non-linearities is implemented. This model is coupled to a commercial RANS solver by a bespoke coupling algorithm. This algorithm ensures the conservative transfer of forces and deformations while keeping the coupled simulation stable.
Next, to ensure applicability of the structural and flow simulation models as well as the coupling, they are verified for grid and time step dependency and validated against analytical or experimental data. As no experimental data were freely available for the particular case of downwind sails, wind tunnel tests were conducted to provide at least aggregate flow forces and flying shapes. While the structural simulation and coupling were successfully verified and validated, the simulation of partially separated flow around highly curved surfaces such as downwind sails exhibited a strong sensitivity to, for example, small changes of the angle of attack. Validation of the flow simulation was hampered by uncertainties in the experimental data.
Finally, the method is used to compare three sail designs on a hypothetical yacht based on the AC90 rule. The impact of the sail design changes is clearly shown, with small variations in sail (profile) depth resulting in markedly different optimal angles of attack.
Improvements to the method could in particular be achieved by implicit or strong coupling of the flow and structural simulations, which would yield time-accurate information on the sails’ unsteady behaviour. Furthermore, more involved flow simulation methods, e.g. large or detached eddy simulation instead of turbulence modelling, might improve the accuracy of the flow simulation.","CFD; FSI; Sailing","en","doctoral thesis","","","","","","","","2018-02-12","","","Ship Hydromechanics and Structures","","",""
"uuid:ed806630-efe2-4484-a5cd-2a8011912841","http://resolver.tudelft.nl/uuid:ed806630-efe2-4484-a5cd-2a8011912841","Surgeon-instrument interaction: A hands-off approach","Arkenbout, E.A. (TU Delft Medical Instruments & Bio-Inspired Technology)","Breedveld, P. (promotor); Dankelman, J. (promotor); de Winter, J.C.F. (copromotor); Delft University of Technology (degree granting institution)","2018","The field of minimally invasive surgery (MIS) is constantly evolving towards the minimization of surgical trauma. Surgical instrumentation enabling this advancement must aid the surgeon, rather than hamper or burden. Surgeon-Instrument Interaction (SII) is of particular importance. Bad SII-design choices during instrument development can complicate procedures, introduce errors, and compromise patient safety. The objective of this thesis is to evaluate and improve SII for MIS instrumentation and to investigate new ways to design for SII, such that potential complications of future instrumentation are avoided.
This thesis is divided into two parts, each relating to an instrument for which the investigation of SII is of significant relevance. Part I discusses the gynaecological morcellator, a dedicated instrument that facilitates the laparoscopic removal of bulk uterine tissue. Contrasting the single-purpose morcellation instrument, Part II discusses multi-functional instrumentation. Specifically, Part II investigates multi-branched instrumentation for natural orifice transluminal endoscopic surgery (NOTES) and presents a new design method towards their future development.
However, in practice, we see large differences in the security measures taken by hosting providers. Some providers implement an array of actions to protect their customers. Others lack even the capacity to detect cybercrime, are negligent of cybercrime, or even willfully facilitate it.
This book answers a series of questions that collectively aim to understand the underlying differences in the security incentives and policies of hosting providers: How do we define a hosting provider? How are they distributed? To what extent do their individual properties or security measures affect the volume of incidents in their networks?
We expect this book to provide useful insights for hosting providers about the effectiveness of their security policies and to serve as input for the development of evidence-based policies by governments.","cybersecurity; hosting provider; metrics; incentives; shared hosting; patching; vulnerability scan; data analysis; statistical models; machine learning; blacklist data","en","doctoral thesis","","978-94-6366-007-5","","","","","","2018-02-06","","","Organisation & Governance","","",""
"uuid:edf396c5-5c3a-4b5c-9fc4-b8bb5ff6eeee","http://resolver.tudelft.nl/uuid:edf396c5-5c3a-4b5c-9fc4-b8bb5ff6eeee","Qualitative and Quantitative Imaging in Electromagnetic Inverse Scattering Theory","Sun, S. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Kooij, B.J. (promotor); Delft University of Technology (degree granting institution)","2018","The inverse scattering problem is inherently nonlinear and improperly posed. Relevant study, such as the existence and uniqueness of the solution, the completeness of the far field pattern, etc., involves an abstruse mathematical theory. In our daily life, the inversion techniques play a significant role in areas such as radar, sonar, geophysical exploration, medical imaging and nondestructive testing. This thesis is focused on the qualitative and quantitative reconstruction of shape and medium parameters of scattering objects in electromagnetic inverse scattering theory. The major contributions of this thesis are 1) the proposal of a novel cross-correlated error termand 2) the proposal of the sum-of-normregularized reconstruction algorithm. The significance of the former lies in the fact that the proposed error term fills up a gap hidden in the classical “state error Å data error” cost functional. In the optimization approaches, the data error term tends to recover the unknown properties of the objects directly from the measurement data, while the state error term attempts to ensure that the recovered results satisfy Maxwell’s equations in the field domain. In other words, the solution must behave well in both the measurement domain and the field domain. However, there is still a gap in between because the minor mismatch in the field domain is not monitored in the measurement domain. The proposed crosscorrelated error is a constraint which tends to get the mismatch in the field domain under control in the measurement domain. 
Therefore, one can say that this novel error term revolutionizes the formulation of the minimization functional of inversion techniques based on optimization theory. The significance of the latter is that the proposed reconstruction scheme enables us to exploit the joint information hidden in the formulation of multiple inverse source problems, without any significant additional computational effort. Although the sum-of-norm regularization is not necessarily the best regularization constraint for some complicated scatterers, it demonstrates at least two points: 1) for an inverse source problem, benefits can be obtained from the use of different incident fields; 2) the sum-of-norm regularization brings better resolving ability due to the joint processing of the multiple contrast source vectors. The research results in this thesis are also applicable to acoustic inverse scattering problems. Application of the qualitative and quantitative reconstruction approaches developed in this thesis to experimental data in different areas of wave-field inversion would be very interesting as future work.","","en","doctoral thesis","","978-94-028-0912-1","","","","Shilong Sun was born in Zhangqiu, Shandong, China, in 1988. He received the B.S. and M.S. degrees in information and communication engineering from the National University of Defense Technology, Changsha, China, in 2011 and 2013, respectively. He joined the Microwave Sensing, Signals and Systems group, Delft University of Technology (The Netherlands) in 2013, where he started working towards his Ph.D. degree in the field of electromagnetic inverse scattering problems.","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:2e988560-de75-4933-8b6b-f52b31289423","http://resolver.tudelft.nl/uuid:2e988560-de75-4933-8b6b-f52b31289423","Breaking the clay layer: The role of middle managers in safety management","Rezvani, Z. (TU Delft Safety and Security Science)","Hudson, P.T.W. (promotor); Delft University of Technology (degree granting institution)","2018","The purpose of this study was to explore the role of middle management in safety within hazardous industries such as the oil and gas industries. In so doing, the study has answered the main research question, what are the roles and responsibilities of middle managers in risk management and safety oriented decision-making? To achieve the objectives of safety management, which are primarily the protection of personnel, environment and assets, it is essential to understand the organisational context. An organisation is constructed from different layers of management with interlinked and complex roles. Middle management, who occupy the middle-level positions, is a fundamental management level in an organisation because they are informed managers, operating between people who have a narrow vision which is limited by their own segments, typically front line operatives, and the top/central executive management, who have a broad picture of an organisation that may be unclear as a result of their distance from the operation.","middle manager; Decision making; roles; Safety management; consensus decision-making; organisational decision-making; safety-related decision-making; parallel decision-making","en","doctoral thesis","","9789461868855","","","","","","","","","Safety and Security Science","","",""
"uuid:92bbf688-b5eb-46f3-a844-501e213cb750","http://resolver.tudelft.nl/uuid:92bbf688-b5eb-46f3-a844-501e213cb750","Flexible Coordination Support for Diagnosis Teams in Data-Centric Engineering Tasks","Janeiro Lopes Da Silva, J. (TU Delft System Engineering)","Brazier, F.M. (promotor); Lukosch, S.G. (copromotor); Delft University of Technology (degree granting institution)","2018","The increasing development of information and communication technology is causing fundamental changes to today’s society, changing the communication and interaction of people in their daily lives. For example, geographical distributed co-workers experience collaboration using a broad spectrum of shared workspace systems for communication and interaction despite their physical separation to share data, documents and contextual information.The objective of this thesis is to design Elgar, a shared workspace system, that flexibly coordinates teamwork for such geographically distributed teams. In particular this thesis focuses on the design and evaluation of the system for remote industrial machine diagnosis by a team of distributed engineers.This thesis identifies and describes a spectrum of coordination mechanisms, to structure and provide flexible coordination support. The two extremes of this spectrum are explored and implemented in Elgar. In the context of diagnosis, this thesis proposes Rectio, based on existing models of diagnosis to analyse observed anomalies, explore potential causes, and propose a diagnosis. Elgar supports distributed teams in diagnosis tasks providing different coordination mechanisms and various functionalities that allow engineers to analyse, discuss and document diagnosis for machines. Elgar is evaluated in two different experiments one of Hägglunds Drives and another of Volvo Construction Equipment. 
Both experiments evaluate the usefulness of the implemented diagnosis functionalities of the system and the prescribed and ad-hoc coordination mechanisms in a collaborative diagnosis task. The first experiment evaluates the use of Elgar with engineering students, whereas the second experiment evaluates Elgar with experienced diagnosis engineers. In conclusion, this thesis shows that it is both possible and necessary to design different coordination mechanisms and integrate them in a shared workspace system, supporting flexible coordination for collaborative diagnosis tasks. In addition, the evaluation of the two experiments indicates that even the use of dissimilar coordination mechanisms provides similar teamwork outcomes, making the selection of a specific coordination mechanism a matter of team preference.","","en","doctoral thesis","","978-94-6186-888-6","","","","","","","","","System Engineering","","",""
"uuid:c64d0e63-9e2f-406c-930d-ae33cc077edb","http://resolver.tudelft.nl/uuid:c64d0e63-9e2f-406c-930d-ae33cc077edb","Numerical simulation of foam flow in porous media","van der Meer, J.M. (TU Delft Reservoir Engineering)","Jansen, J.D. (promotor); Möller, M. (copromotor); Kraaijevanger, J.F.B.M. (copromotor); Delft University of Technology (degree granting institution)","2018","If secondary hydrocarbon recovery methods, like water flooding, fail because of the occurrence of viscous fingering one can turn to an enhanced oil recovery method (EOR) like the injection of foam. The generation of foam in a porousmedium can be described by a set of partial differential equations with strongly non-linear functions, which impose challenges for the numerical modeling. Former studies [1–3] show the occurrence of strongly temporally oscillating solutions when using forward simulation models, that are entirely due to discretization artifacts. We describe the foam process by an immiscible two-phase flow model where gas is injected in a porousmedium filled with a mixture of water and surfactants. The change from pure gas into foam is incorporated in the model through a reduction in the gas mobility. Hence, the two-phase description of the flow stays intact. Since the total pressure drop in the reservoir is small, both fluids can be considered incompressible [3]. However, whereas the fractional flow function for a gas-flooding process is a smooth function of water saturation, the generation of foam will cause a rapid increase of the flux function over a very small saturation scale. Consequently, the derivatives of the flux function can become extremely large and impose a severe constraint on the time step. We address the stability issues of the foam model, by numerous numerical approaches that improve the accuracy of the solutions. 
First, we study several averaging schemes and introduce a novel way of approximating the foam mobility functions on the grid interfaces in a finite volume framework. This leads to solutions that are significantly smoother than can be achieved with standard averaging schemes. Next, we discuss several novel discretization schemes in which the discontinuity is incorporated in the numerical fluxes for a simplified compressible flow model. These include the indirect addition of an extra grid interface at the location of the discontinuity, to preserve monotonicity of the solutions in time. Variations on this method are the addition of an extra grid cell around the highly non-linear phase transition and the adaptation of the flux terms based on the location of the discontinuity or non-linearity in the grid. As a practical example to demonstrate these techniques we study a simplified model for foam flow in porous media. The model is then extended to a two-dimensional reservoir, where the accuracy of the solutions is a main concern. The two-dimensional simulator used for this was built and tested for the foam model. It includes higher-order hyperbolic Riemann solvers and flux correction schemes to compute the saturations of the different fluid phases in the model. The elliptic solver for the pressure equation is also adapted to the stiffness of the problem. With this simulator we perform a quantitative study of the stability characteristics of the flow, to gain more insight into the important wavelengths and scales of the foam model. This insight forms an essential step towards the design of a suitable computational solver that captures all the appropriate scales, while retaining computational efficiency. In addition, we present a qualitative analysis of the effect of different reservoir and fluid properties on the foam fingering behavior. In particular, we consider the effect of heterogeneity of the reservoir, injection rates, and foam quality.
This leads to interesting observations about the influence of the different foam parameters on the stability of the solutions, and we are able to predict the flow stability for different foam qualities. Finally, we discuss several other approaches that were addressed during this PhD project to increase understanding of solving highly non-linear flow problems in a porous medium.","Foam flow in porous media; Local-equilibrium models; Finite volume methods; Stability analysis; Reservoir simulation","en","doctoral thesis","","978-94-6233-863-0","","","","","","","","","Reservoir Engineering","","",""
"uuid:f85b8393-6071-4af6-9507-f713610c0f06","http://resolver.tudelft.nl/uuid:f85b8393-6071-4af6-9507-f713610c0f06","In-situ analysis of phase transformations in a supermartensitic stainless steel: A magnetic approach","Bojack, A. (TU Delft (OLD) MSE-3)","Sietsma, J. (promotor); Delft University of Technology (degree granting institution)","2018","This thesis studies in-situ the phase transformations during heat treatment of two advanced steels: a supermartensitic stainless steels (SMSS), on which the main focus of this work is, and Fe-C-Mn-Si steels. A magnetic technique, based on the analysis of saturation magnetization, is utilized as the primary analysing tool. The scientific aim of this work is twofold: (i) to study the microstructural evolution involved in thermal processing of advanced steels based on optimising retained austenite and (ii) to optimise and extend the application of magnetic methods for these steels. For SMSS, the retained austenite fraction plays an essential role in controlling mechanical properties that often have a narrow tolerance window. Magnetic techniques are of increasing interest in the steel industry for monitoring the development of austenite on the basis of different magnetic properties of the phases in the steel. A new approach is proposed for determining the austenite fraction from in-situ thermo-magnetic measurements in SMSS. The formation of austenite during heat treatment of a SMSS is investigated and the kinetics of the martensite to austenite transformation, as well as the stability of austenite was established. 
The analysis of the Fe-C-Mn-Si steels provides a basis to further develop the analysis of austenite formation in multi-phase steels using thermo-magnetic techniques.","In-Situ Analysis; Phase Transformations; Supermartensitic Stainless Steel","en","doctoral thesis","","978-94-91909-48-1","","","","This research was carried out under the project number M41.5.10392 in the framework of the Research Program of the Materials innovation institute (M2i) in The Netherlands (www.m2i.nl).","","","","","(OLD) MSE-3","","",""
"uuid:0cd2c394-adc0-490d-9f4e-ba611d3f14c2","http://resolver.tudelft.nl/uuid:0cd2c394-adc0-490d-9f4e-ba611d3f14c2","Modular High Voltage Pulse Converter for Short Rise and Decay Times","Mao, S. (TU Delft DC systems, Energy conversion & Storage)","Ferreira, Jan Abraham (promotor); Popovic, J. (copromotor); Delft University of Technology (degree granting institution)","2018","This thesis explores a modular HV pulse converter technology with short rise and decay times. A systematic methodology to derive and classify HV architectures based on a modularization level of power building blocks of the HV pulse converter is developed to summarize existing architectures and explore new possible architectures.The optimal architecture has been identified and recommendations for architecture selections are provided. The effect of modularization and increasing switching frequency for the HV transformer are addressed. The key influence factors for HV pulse rise and decay times are studied. A method is proposed to mitigate the diode reverse recovery effect for the multi-stage voltage multiplier. A generic equivalent steady-state circuit model and comprehensive design methodology are developed to simplify the analysis and design of the series parallel(LCC) resonant based modular HV pulse converters. Finally, the experimental results of a HV pulse converter prototype based on the architecture with multiple transformers and voltage multipliers validate the equivalent steady state circuit model and the comprehensive design methodology.","HV pulse converter technologies; HV transformer; modular high frequency HV pulse converter; voltage multiplier; modular HV pulse converter","en","doctoral thesis","","","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:c463eef0-2b18-40c4-acfe-cea7399b20ea","http://resolver.tudelft.nl/uuid:c463eef0-2b18-40c4-acfe-cea7399b20ea","On boundary damping for elastic structures","Akkaya, T. (TU Delft Mathematical Physics)","Heemink, A.W. (promotor); van Horssen, W.T. (copromotor); Delft University of Technology (degree granting institution)","2018","Many mathematical models, which describe oscillations in elastic structures such as suspension bridges, conveyor belts and elevator cables, can be formulated as initial-boundary value problems for string (wave) equations, or for beam equations. In order to build more durable, elegant and lighter mechanical structures, the undesired vibrations can be suppressed by using dampers.
In this thesis, the effect of boundary damping on elastic structures is studied. In Chapter 2, as a simple model of the oscillations of a cable, a semi-infinite string-like problem is modelled by an initial-boundary value problem with (non-)classical boundary conditions. We apply the classical method of D'Alembert to obtain the exact solution, which provides information about the efficiency of the damper at the boundary.
In Chapter 3, initial-boundary value problems for a beam equation on a semi-infinite interval and on a finite interval are studied. The method of Laplace transforms is applied to obtain the Green’s function for a transversally vibrating homogeneous semi-infinite beam, and the exact solutions for various boundary conditions are examined. The analytical results confirm earlier obtained results, and are validated by explicit numerical approximations of the damping and oscillating rates. The study shows that the numerical results approximate the exact results for sufficiently large domain lengths and for a sufficiently high number of modes. Moreover, the study provides an understanding of how the Green’s functions for a semi-infinite beam can be computed analytically for (non-)classical boundary conditions.
Finally, in Chapter 4, the studies presented in Chapter 2 and in Chapter 3 are extended to inclined structures. A model is derived to describe the rain-wind induced oscillations of an inclined cable. For a linearly formulated initial-boundary value problem for a tensioned beam equation describing the in-plane transversal oscillations of the cable, the effectiveness of a boundary damper is determined by using a two-timescales perturbation method. Not only the influence of boundary damping but also the influence of the bending stiffness on the stability properties of the solution has been studied.","Semi-infinite String Equations; Semi-Infinite Beam; Boundary Damping; Rain-wind oscillations; Inclined cable","en","doctoral thesis","","978-94-6366-005-1","","","","","","","","","Mathematical Physics","","",""
"uuid:a5fcd470-d9e2-4d27-87fd-ae883b68e42e","http://resolver.tudelft.nl/uuid:a5fcd470-d9e2-4d27-87fd-ae883b68e42e","Strategies Towards Soft Functional Supramolecular Materials","Lovrak, M. (TU Delft ChemE/Advanced Soft Matter)","van Esch, J.H. (promotor); Eelkema, R. (copromotor); Delft University of Technology (degree granting institution)","2018","Nature is full of life, which is driven by the numerous biochemical processes that occur in living organisms. Many motifs are universal regardless of species, such as enzymatic networks, self-assembly and reaction-diffusion. This work examines the development of novel soft functional materials via self-assembly and reaction-diffusion approaches. The combination of these two principles from nature offers new possibilities for the development and structuring of soft matter. In our research, we investigated the application of the reaction-diffusion self-assembly approach to make patterned hydrogels and hydrogel objects. The dimensions of the patterns and objects that have been made range from centimeter scale to microscale. Also, we demonstrated various possibilities of chemical functionalization of patterns and objects. In another implementation, we showed that self-assembly combined with reaction-diffusion at the interface of two gels can be used to glue pieces of gel by forming a self-assembled fibrous network across the interface. This approach worked for different biological and polymeric gels and was supported with a numerical model. Finally, we developed a novel strategy to make an in vitro model of artificial plaque. An artificial plaque was made by loading gelatin/alginate polymeric film with liposomes. The prepared films showed similar liposomal distribution as porcine plaque. 
Also, the plaque was implantable, as demonstrated in ex vivo and in vivo experiments.","Supramolecular Hydrogel; Reaction-Diffusion; Polymeric Hydrogels","en","doctoral thesis","","978-94-6186-883-1","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:3d0d456d-c6a9-4781-9a18-6cb041a4fd03","http://resolver.tudelft.nl/uuid:3d0d456d-c6a9-4781-9a18-6cb041a4fd03","Satellite-derived NOx emissions over East Asia","Ding, J. (TU Delft Atmospheric Remote Sensing)","Levelt, Pieternel Felicitas (promotor); van der A, Ronald (copromotor); Delft University of Technology (degree granting institution)","2018","Nitrogen oxides (NOx) are important air pollutants and play a crucial role in climate change. NOx emissions are important for chemical transport models to simulate and forecast air quality. Up-to-date emission information also helps policymakers to mitigate air pollution. In this thesis, we have focused on providing better NOx emission estimates with the DECSO (Daily Emission estimates Constrained by Satellite Observations) inversion algorithm applied to satellite observations. DECSO is a fast algorithm, which enables daily emissions estimates as soon as the satellite observations are available. Satellite-derived emissions reveal more specific information on the location and strength of sources than concentration observations. The monthly and yearly variability in emissions are well captured. This is demonstrated by our monitoring of the effect of air quality regulations on emissions during events like the 2014 Youth Olympic Games. Near the Chinese coast ship tracks, which are otherwise hidden under the outflow of air pollution from the mainland, are revealed in our NOx emissions derived with DECSO applied to OMI satellite observations. Trends of shipping emissions for a 10-year period (2007 to 2016) over Chinese seas are presented for the first time.","Air Quality; Remote sensing","en","doctoral thesis","","978-94-6366-006-8","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:21756661-2d92-447d-8ac7-0cd2b1b6dc8b","http://resolver.tudelft.nl/uuid:21756661-2d92-447d-8ac7-0cd2b1b6dc8b","A Virtual Agent for Post-Traumatic Stress Disorder Treatment","Tielman, M.L. (TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); Brinkman, W.P. (copromotor); Delft University of Technology (degree granting institution)","2018","Post-traumatic stress disorder (PTSD) is a mental disorder with a high impact on quality of life, and despite the existence of treatment, barriers still stop many people from receiving the care they need. An e-mental health system for home use might remove some of these barriers, as it provides a privacy-sensitive and accessible way of following therapy. Treatment for PTSD is difficult though, requiring patients to actively recollect their traumatic memories. Suitable assistance by a virtual agent to increase compliance might therefore be very valuable. Given the novelty of the application of a virtual agent to PTSD therapy, our work studies how a virtual agent should act to best increase therapy compliance for PTSD. After establishing several general design for such an agent, core functions and constraints for the agent have been studied. To enhance treatment compliance in the form of memory recollection, a virtual agent can best present psychoeducation in written text. Furthermore, it can employ an ontology-based question system to elicit more detailed memory recollection, and a motivational feedback system to generate motivation suitable to the patient’s situation. Regarding the constraints of a virtual agent, theoretical models have been presented describing how to deal with risk situations, and a full therapy system for PTSD incorporating the agent was found useful and usable by former patients. Taken together, these findings show how a virtual agent should act and how it can be applied. 
Given the high impact PTSD has on a person’s life, such a virtual agent has the potential to make a real difference.","HCI; PTSD; Virtual agent; Conversational agent; Post-traumatic stress disorder; Therapy system; E-Health; e-mental health; e-Coaching","en","doctoral thesis","","978-94-6295-829-6","","","","","","","","","Interactive Intelligence","","",""
"uuid:bc10b093-aa3c-435f-9b1b-563cffd3c571","http://resolver.tudelft.nl/uuid:bc10b093-aa3c-435f-9b1b-563cffd3c571","Poles Apart: Discerning and opportunistic mind-sets in design learning","Hamat, B.B. (TU Delft OLD Design Theory and Methodology)","Badke-Schaub, P.G. (promotor); Schoormans, J.P.L. (promotor); Eisenbart, B. (copromotor); Delft University of Technology (degree granting institution)","2018","Mind-sets play an important role in orienting the decisions and activities that an individual engages in when he or she is designing, and designing involves interaction with complex, open-ended and ambiguous situations. This means that the individual disposition of a person influences the way that he or she reacts, and in designing, the complexity of the conditions that the individual interacts with, can increase due to the nature of the design problems. The processes that an individual engages in while designing is in turn, expected to influence the quality of design solutions that he or she produces.
This thesis focusses on investigating the phenomenon of mind-sets in the context of design and design learning, and its effects on the process of designing and the quality of design solutions. Prevalent mind-sets that design students have toward design learning are identified and examined. Two categories of mind-sets are proposed, validated and tested across three different empirical studies. The two categories of mind-sets include the discerning and opportunistic mind-sets. Distinct differences between the two mind-sets provide significant insights into the effects of mind-sets on the process and quality of outcomes in designing. Findings from these studies carry implications and recommendations for design education.","mind sets; design learning; designing","en","doctoral thesis","","978-94-6186-882-4","","","","","","","","","OLD Design Theory and Methodology","","",""
"uuid:cc76e95c-b82e-4555-9110-348ad9989705","http://resolver.tudelft.nl/uuid:cc76e95c-b82e-4555-9110-348ad9989705","SPAD imagers for super resolution microscopy","Antolović, I.M. (TU Delft (OLD)Applied Quantum Architectures)","Charbon-Iwasaki-Charbon, E. (promotor); Hoebe, R.A. (copromotor); Delft University of Technology (degree granting institution)","2018","The aim of this research is to explore the potential advantages of SPAD imagers used in microscopy. An ideal microscopy detector requires high sensitivity (high quantum efficiency QE or photon detection probability PDP), photon counting operation, low noise (dark current or dark count rate), timing resolution in the order of 100 ps, frame rate higher than 10 fps, a large enough pixel resolution and wavelength resolvability.","SPAD; microscopy; super resolution; fluorescence imaging; imagers; photon counting and image sensor","en","doctoral thesis","","","","","","Ivan Michel Antolović received his B.S. and M.S. degree (cum laude) in electrical engineering and information technology in 2010 and 2012 from University of Zagreb, Croatia. During his master, he started working with Hamamatsu’s multi-pixel photon counters (MPPC), mainly interested in detection of collagen and estrogen autofluorescence. He was awarded ""Josip Lončar"" Bronze Plaque for the best student of the field electronic and computing engineering. He enrolled to a PhD at the University of Zagreb while working as a firmware designer at Artronic d.o.o. Since 2013, he continued to pursue a PhD degree in single photon avalanche diode (SPAD) imagers at TU Delft. His interests include large format photon counting SPAD imagers and small format time correlated SPAD imagers for microscopy applications like localization super resolution, confocal and fluorescence lifetime. During his PhD, he worked in collaboration with EPFL, Leeuwenhoek Centre for Advanced Microscopy, Macquarie University, Weizmann Institute. 
He worked with companies like Leica, LFoundry, NXP, TowerJazz and Zeiss. He was awarded the PicoQuant Young Investigator Award in 2016 and the Else Kooi Award in 2018.","","2019-01-23","","","(OLD)Applied Quantum Architectures","","",""
"uuid:8b3a719f-4dcd-4edd-85ac-361eefba9806","http://resolver.tudelft.nl/uuid:8b3a719f-4dcd-4edd-85ac-361eefba9806","HyperCell: A Bio-inspired Design Framework for Real-time Interactive Architectures","Chang, J.R. (TU Delft Digital Architecture)","Oosterhuis, K. (promotor); Biloria, N.M. (copromotor); Delft University of Technology (degree granting institution)","2018","This pioneering research focuses on Biomimetic Interactive Architecture using “Computation”, “Embodiment”, and “Biology” to generate an intimate embodied convergence to propose a novel rule-based design framework for creating organic architectures composed of swarm-based intelligent components. Furthermore, the research boldly claims that Interactive Architecture should emerge as the next truly Organic Architecture. As the world and society are dynamically changing, especially in this digital era, the research dares to challenge the Utilitas, Firmitas, and Venustas of the traditional architectural Weltanschauung, and rejects them by adopting the novel notion that architecture should be dynamic, fluid, and interactive. This project reflects
a trajectory from the 1960s with the advent of the avant-garde architectural design group, Archigram, and its numerous intriguing and pioneering visionary projects. Archigram’s non-standard, mobile, and interactive projects profoundly influenced a new generation of architects to explore the connection between technology and their architectural projects. This research continues this trend of exploring novel design thinking and the framework of Interactive Architecture by discovering the interrelationship amongst three major topics: “Computation”, “Embodiment”, and “Biology”. The project aims to elucidate pioneering research combining these three topics in one discourse: “Bio-inspired digital architectural design”.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-004-4","","","","A+BE | Architecture and the Built Environment No 1 (2018)","","","","","Digital Architecture","","",""
"uuid:9667dc41-c736-47e6-b818-78c7c50fb08d","http://resolver.tudelft.nl/uuid:9667dc41-c736-47e6-b818-78c7c50fb08d","Value of information in closed-loop reservoir management","Goncalves Dias De Barros, E. (TU Delft Reservoir Engineering)","Jansen, J.D. (promotor); Van den Hof, Paul M.J. (promotor); Delft University of Technology (degree granting institution)","2018","Over the past decades, many technological advances have unlocked new opportunities to boost efficiency in the oil and gas industry (e.g., complex well drilling, injection of advanced chemicals, sophisticated instrumentation). The real engineering challenge is to apply these technologies in the best possible way for each particular case. This leads to very difficult decisions to be made, mainly because every oil and gas field is one of its kind and our knowledge of the subsurface is very limited. Many efforts have been made to develop tools to support these decisions by applying a more systematic approach to determine smart exploitation strategies. Yet, very little has been done on the optimization of reservoir surveillance plans to establish the best observations to monitor de field response to the exploitation strategies, which, in turn, can also contribute to a better exploitation of the reservoir.
In this thesis we propose a methodology to assess the value of future measurements as a first step towards the development of a framework to optimize the design of reservoir surveillance plans. We also investigate alternatives to improve current reservoir management approaches by recommending actions which anticipate the availability of future information and account for the impact of immediate decisions on the decisions to be made in the future.
Throughout the chapters, we discuss how to combine a variety of topics (e.g., model-based optimization, data assimilation, uncertainty quantification) with more unusual ingredients (e.g., plausible truths, clairvoyance, flexible plans) to develop a methodology which can be applied in many problems involving decision making and learning. Despite being motivated by a real application, this research addresses abstract concepts such as value and information, but always from an engineering perspective. This makes us approach the problem in a different way, which, we hope, will inspire innovative solutions in the future.","value of information; closed-loop reservoir management; reservoir surveillance; geological uncertainty; robust optimization; data assimilation; plausible truths; representative models; clustering; stochastic programming","en","doctoral thesis","","978-94-6366-009-9","","","","","","","","","Reservoir Engineering","","",""
"uuid:ce01998a-0830-494e-a403-9f0696aa0dce","http://resolver.tudelft.nl/uuid:ce01998a-0830-494e-a403-9f0696aa0dce","Developing novel heat treatments for automotive spring steels: Phase transformations, microstructure and performance","Goulas, C. (TU Delft (OLD) MSE-5)","Sietsma, J. (promotor); Delft University of Technology (degree granting institution)","2018","This Ph.D. thesis investigates the substitution of quenching and tempering treat-
ments by isothermal bainitic treatments in automotive spring production. An isothermal bainitic treatment has benefits mainly in terms of energy savings, but it can also prevent quench cracking, distortion and residual stresses, commonly found in quenched and tempered components. A medium carbon low alloy spring steel commonly employed in automotive spring production is used for this research. The study focuses on the microstructure formation and its effect on the performance of a spring steel component.","","en","doctoral thesis","","978-94-91909-49-8","","","","","","","","","(OLD) MSE-5","","",""
"uuid:510bd39f-407d-4bb6-958e-dea363c5e2a8","http://resolver.tudelft.nl/uuid:510bd39f-407d-4bb6-958e-dea363c5e2a8","Planetary-scale surface water detection from space","Donchyts, G. (TU Delft Water Resources; Deltares)","van de Giesen, N.C. (promotor); Delft University of Technology (degree granting institution)","2018","This thesis studies automated methods of surface water detection from satellite imagery. Multiple existing methods are explored, discussed, and some new algorithms are introduced to allowvery accurate detection of surface water and surfacewater changes. Themethods range in applicability from the local level to global, and from detecting high-frequency changes to low-frequency changes. Their trade-offs regarding the accuracy and applicability of the surface water detection methods are also discussed.
Several applications are presented to test the introduced methods. One of the studies focuses on a long-term global surface water change detection over the past 30 years at 30m resolution. The other application looks at the generation of a permanent surface water mask for the Murray-Darling River Basin in Australia. Additionally, an in-depth validation for a small reservoir in California, USA, is presented to demonstrate the performance of the new methods.
The algorithms discussed in the thesis were applied and tested to process both passive optical multispectral and active Synthetic Aperture Radar (SAR) satellite data. Combining data from all freely available satellite sensors requires harmonization of the satellite data, but also significant computing resources. In this thesis, the Google Earth Engine parallel processing platform was used to perform most of the experiments.
We will see that, when studying surface water dynamics, the best results can be achieved by combining discriminative and generative methods of surface water detection. This way, surface water can also be detected from satellite images where it is only partially visible.
In the thesis, top-of-atmosphere reflectance images are used to detect surface water. The atmospheric correction is not required when dynamic local thresholding methods are used to detect surface water.","surface water observation; satellite imagery; unsupervised classification; NDWI; landsat; sentinel; SRTM; google earth engine","en","doctoral thesis","","978-94-6233-862-3","","","","","","","","","Water Resources","","",""
"uuid:fd5b0a54-fb6c-4055-9a83-5281ceb310e3","http://resolver.tudelft.nl/uuid:fd5b0a54-fb6c-4055-9a83-5281ceb310e3","To gasify or not to gasify torrefied wood?: Investigating the effect of torrefaction on oxygen steam blown circulating fluidized bed gasification of wood, focusing on permanent gas and tar composition, and environmental performance","Tsalidis, G.A. (TU Delft Large Scale Energy Storage; TU Delft Energy Technology)","de Jong, W. (promotor); Delft University of Technology (degree granting institution)","2018","Biomass is a sustainable biofuel as long as it does not compete with food and feed production. Gasification is a versatile technology that produces a gas which can be converted into various high value products. Torrefaction is a technology that converts biomass to a more coal alike product with upgraded properties. As it was confirmed that torrefaction offers environmental benefits in co-combustion for electricity generation, it was decided to assess gasification of torrefaction wood from technical and environmental perspectives. Gasification of commercial torrefied mixed wood pellets resulted in in an increased gas quality and decreased tars quantity, leading in a positive affecting the gasification performance positively. On the other hand, the gasification of torrefied pure woods affected the gasification performance negatively; the effects on the gas quality were limited and the effect on the tar quantity was not the same for both pure woods. Furthermore, for a better understanding of the tar formation in the gasifier additional experiments were performed. Torrefaction resulted in increasing the phenol and decreasing the naphthalene mass fractions at the high temperature range. However, it did not show a significant effect on the polyaromatics species heavier than fluorene. 
Lastly, the environmental performance of a biorefinery based on gasification of torrefied wood for car fuel production showed significant environmental benefits in CO2 emissions reduction and in air quality improvement in the broader area. To conclude, torrefaction adds benefits to the technical performance of the gasifier, especially regarding the problematic tar content of the gasification gas, but it may decrease the gasification performance, and wood torrefaction integrated in a biorefinery system shows significant benefits for important environmental impacts.","gasify; wood; environment","en","doctoral thesis","","978-94-6299-833-9","","","","","","","","","Large Scale Energy Storage","","",""
"uuid:5f807568-492b-40eb-8618-bcdf1e1b2e7c","http://resolver.tudelft.nl/uuid:5f807568-492b-40eb-8618-bcdf1e1b2e7c","Escaping the emotional blur: Design tools for facilitating positive emotional granularity","Yoon, J. (TU Delft Design Aesthetics)","Desmet, P.M.A. (promotor); Pohlmeyer, A.E. (copromotor); Delft University of Technology (degree granting institution)","2018","In human-product interactions, pleasure has many different shades. We can, for example, be proud of using an eco-friendly detergent, be all aflutter in anticipation of a planned trip when looking at a calendar application or experience a feeling of cathartic relief when playing a mobile phone game. Although these experiences are all pleasurable, each is different from the other in terms of the feelings they engender, the conditions that evoke them and how they influence people’s thoughts and actions. Some people are more aware of these nuances and better able than others to articulate positive emotional states. This difference is called ‘Positive Emotional Granularity’ (PEG) (Tugade, Fredrickson, & Feldman Barrett, 2004). PEG reflects the degree to which a person is able to represent positive emotions with precision and specificity. This thesis focuses on designers’ PEG, and proposes that having an awareness of nuances between positive emotions can be advantageous for designers in their endeavour to generate positive emotional experiences. Design research has traditionally focused on generalised pleasure or liking, paying little attention to nuances in positive emotions. Consequently, little is known of either the implications of differentiating positive emotions in design processes or ways to support designers in this endeavour. The aim of this thesis is to develop an understanding of how designers’ nuanced understanding of positive emotions can be harnessed and how doing so can contribute to design processes. 
The research question was, ‘how can designers be supported in developing and applying a systematic understanding of nuanced positive emotions?’ The overarching approach encompassing the research activities was ‘research through design’, in which the act of designing new solutions and reflecting on the processes is regarded as a means of generating knowledge (Stappers, 2007). A series of design tools and techniques that explained the distinctiveness of positive emotions was conceptualised for the purpose of this research and tested by designers. This research contributes to the field of experience design by elucidating how PEG can add value to design processes, and by providing tools that support designers in developing their understanding of positive emotions and their abilities to select and design for nuanced and distinct positive emotions. Eight studies were conducted, each resulting in a set of new findings.","Design; User-centred design; Emotion knowledge; Emotional granularity; Positive emotions; emotion-driven design; Positive design; Human-centred design; Research through design","en","doctoral thesis","","978-94-6186-881-7","","","","","","","","","Design Aesthetics","","",""
"uuid:b71f3b0b-73a0-4996-896c-84ed43e72035","http://resolver.tudelft.nl/uuid:b71f3b0b-73a0-4996-896c-84ed43e72035","Emulating Fermi-Hubbard physics with quantum dots: from few to more and how to","Hensgens, T. (TU Delft QCD/Vandersypen Lab)","Vandersypen, L.M.K. (promotor); Delft University of Technology (degree granting institution)","2018","Interacting electrons on material lattices can build up strong quantum correlations, which in turn can lead to the emergence of a wide range of novel and potentially useful magnetic and electronic material properties. Our understanding of this physics, however, is severely limited by the exponential growth in complexity with system size, which leads all classical methods to fall fundamentally short. In this thesis, I show how artificial lattices of conduction band electrons in semiconductors, so-called quantum dot arrays, can be used to directly emulate and therefore elucidate such Fermi-Hubbard physics. To this end, I focus on two approaches. A top-down approach allows to scale easily, but lacks to ability to control or measure individual sites. A bottom-up approach on the other hand utilizes the small devices employed by the community for qubit experiments, in which the control of individual sites is both a blessing and a curse. We address the issue of control to the point where mapping to relevant models is possible and efficiently calibrating larger devices becomes feasible. These results open up the inherently well-suited and scalable platform of quantum dots to emulate novel quantum states of matter.","Fermi-Hubbard model; large quantum dot arrays; Coulomb blockade; Mott transition; classically intractable models","en","doctoral thesis","","978-90-8593-331-1","","","","Casimir PhD series 2017-47","","","","","QCD/Vandersypen Lab","","",""
"uuid:b1024bc5-46ad-450e-a3d3-090a166a67a7","http://resolver.tudelft.nl/uuid:b1024bc5-46ad-450e-a3d3-090a166a67a7","Fast Iterative Solution of the Time-Harmonic Elastic Wave Equation at Multiple Frequencies","Baumann, M.M. (TU Delft Numerical Analysis)","Vuik, Cornelis (promotor); van Gijzen, M.B. (copromotor); Delft University of Technology (degree granting institution)","2018","Seismic Full-Waveform Inversion is an imaging technique to better understand the earth's subsurface. Therefore, the reflection intensity of sound waves is measured in a field experiment and is matched with the results from a computer simulation in a least-squares sense. From a computational point-of-view, but also from an economic view point, the efficient numerical solution of the elastic wave equation on current hardware is the main bottleneck of the computations, especially when a large three-dimensional computational domain is considered. In our research, we focused on an alternative problem formulation in frequency-domain. The mathematical challenge then becomes to efficiently solve the time-harmonic elastic wave equation at multiple frequencies. The resulting sequence of shifted linear systems is solved with a new framework of Krylov subspace methods derived for this specific problem formulation. Our numerical analysis gives insight in the theoretical convergence behavior of the new algorithm.","Krylov subspace methods; Preconditioning; Shifted linear systems; Time-harmonic elastic wave equation; MSSS matrix computations; Spectral analysis","en","doctoral thesis","","978-94-6295-827-2","","","","","","","","","Numerical Analysis","","",""
"uuid:a56fedf2-4bc0-495e-8d31-95cdd4de213e","http://resolver.tudelft.nl/uuid:a56fedf2-4bc0-495e-8d31-95cdd4de213e","Blending technological, cognitive and social enablers to develop an immersive virtual learning environment for construction engineering education","Keenaghan, G.N. (TU Delft Cyber-Physical Systems)","Horvath, I. (promotor); Delft University of Technology (degree granting institution)","2018","The conceptual framework of the proposed novel system was to provide a stimulating learning experience for dislocated digital learners, who are seen as individuals with different perceptions and expectations. In addition to functionally integrate technological, cognitive and social enablers, the system was required to encapsulate what can be called the principles of cyber psychology. This research was not specifically about using game theories as a means of providing motivation and engagement. It was more about using technological enablers such as game engine software development kits and compatible 3D modelling software to blend with cognitive and social enablers as a conceptual design framework to develop web-based stimulated-learning systems.","Web-based learning system; technological enablers; cognitive enablers; social enablers; dis-located learners; co-located learners; 3D modelling software; game engine software; virtual environments; augmented environments; virtual learning environments","en","doctoral thesis","","978-94-6186-857-2","","","","","","","","","Cyber-Physical Systems","","",""
"uuid:1d6b0b9f-65e0-44d8-87d1-a21016c88653","http://resolver.tudelft.nl/uuid:1d6b0b9f-65e0-44d8-87d1-a21016c88653","Erosion of sand at high flow velocities: An experimental study","Bisschop, F. (TU Delft Offshore and Dredging Engineering)","van Rhee, C. (promotor); Visser, P.J. (copromotor); Miedema, S.A. (copromotor); Delft University of Technology (degree granting institution)","2018","The safety level of a dike is expressed in terms of risk. Risk is defined as the product of the probability of inundation of a polder (after failure of a dike) and the expected damage (casualties, economic damage and damage to the infrastructure) caused by inundation. The rate of inundation determines the amount of casualties and depends heavily on the flow velocity through the breach and breach development in time. The flow velocity in a breach can become larger than 5 m/s. Due to these large flow velocities, the application of conventional sediment pick-up functions in breach growth models, leads to a significant overestimation of the breach growth and thus the rate of inundation.","Erosion; Dredging; Breaching; Pick-up flux; Sand","en","doctoral thesis","","978-94-6186-868-8","","","","","","","","","Offshore and Dredging Engineering","","",""
"uuid:9688278b-1cde-4d6e-8d8e-d19e5b3ffd39","http://resolver.tudelft.nl/uuid:9688278b-1cde-4d6e-8d8e-d19e5b3ffd39","Chemical Sensors based on Si Nanowires: Surface modifications for the detection of ions, explosives and chemical vapors","Cao, A. (TU Delft OLD ChemE/Organic Materials and Interfaces)","Sudhölter, Ernst J. R. (promotor); de Smet, L.C.P.M. (copromotor); Delft University of Technology (degree granting institution)","2018","","","en","doctoral thesis","","978-94-6295-877-7","","","","","","","","","OLD ChemE/Organic Materials and Interfaces","","",""
"uuid:b1a1ead7-a631-4f05-b9a9-17a1be6e15e1","http://resolver.tudelft.nl/uuid:b1a1ead7-a631-4f05-b9a9-17a1be6e15e1","Information Propagation in Complex Networks: Structures and Dynamics","Märtens, M. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Kuipers, F.A. (copromotor); Delft University of Technology (degree granting institution)","2018","This thesis is a contribution to a deeper understanding of how information propagates and what this process entails. At its very core is the concept of the network: a collection of nodes and links, which describes the structure of the systems under investigation. The network is a mathematical model which allows to focus on a very fundamental property: the mutual relations (links) between information exchanging agents (nodes). This simplicity makes networks elegant, as no specifics of any supporting hardware are needed to reason on this high level of abstraction. The developing field of network science led to countless applications of the network model to all sorts of complex systems in nature and technology. Naturally, it became an essential part of many multi-disciplinary research projects. Therefore, understanding how information propagates in networks enables us to learn and conceivably control the intricate processes, which we observe in complex systems. Since complex systems are the driver for this research, the first three chapters of this thesis are studies based on data collected from vastly different application domains, after more fundamental research is addressed in the later parts.
Chapter 2 deals with the interaction of players of a popular multiplayer online game. Due to the competitive design of the game, teams are formed ad-hoc and compete with each other for victory. Some of the players exhibit anti-social behavior towards their teammates, which is known as toxicity. We analyze how toxicity in player networks emerges by developing a toxicity detector, highlighting possible triggers, and analyzing the disposition of players towards toxic teammates. Furthermore, we show how toxicity is linked to game success.
Chapter 3 continues with a study of the human brain as a functional network. Information processing in the brain is measurable with technologies like magnetoencephalography. From such measurements that were collected from a group of subjects, the phase transfer entropy is computed as a quantity that reflects information exchange. When associated with the links between brain regions, unusually high numbers of certain substructures are observed in this network. We find one of these substructures, the bi-directional two-hop path, to be highly abundant and robust within different frequency bands, which highlights its importance for the propagation of brain activity. A clustering of the network based on these frequent substructures reveals a spatially coherent organization of important brain regions.
A common symbol of propagation is the virus, which is at the center of the third data-driven analysis of this thesis in Chapter 4. More precisely, we research the digital version of the virus, the computer worm, and analyze its propagation by epidemic network models. With epidemic models, the state of the nodes in a network can be described as susceptible or infected. An infection process and a curing process determine how the nodes are changing between those states. We extend the standard epidemic model, the SIS model, with a time-dependent curing rate function to reflect the changes in the effectiveness of the active worm removal. Once we set the curing rate function, the empirical worm data are fitted and analyzed on multiple scales, from the global level through the country level down to the autonomous system level. The fitted model explains how computer worms or similar self-replicating pieces of information might change in their effectiveness over long periods of time.
The SIS model returns as a central piece in Chapter 5 again. Although spreading processes are frequently modeled in isolation, the dynamics of many real-world applications are often driven by the interaction of multiple such processes. These interactions can range from viruses that compete for susceptible nodes to viruses that mutually reinforce their propagation. We study the special case of superinfection, in which one dominant virus spreads within the infected population of a weaker virus. We highlight the conditions for which a co-existence of both viruses is stable and show that extinction cycles become possible if the infection rate of the dominant virus becomes too strong. Furthermore, we show that some of the possible outcomes of a superinfection are difficult to approximate with common mean-field techniques. However, the second largest eigenvalue of the infinitesimal generator of the underlying Markov process is potentially linked to co-existence and thus stability.
Chapter 6 is a study on the capabilities of symbolic regression for network properties. We develop an automated system based on Genetic Programming which can be trained on families of networks to learn the relations between several of their properties. These properties can be features of the networks like the eigenvalues of their adjacency or Laplacian matrices, or network metrics like the network diameter or the isoperimetric number. We show that the system can generate approximate formulas for those metrics that often give better results than previously known analytic bounds. The evolved formulas for the network diameter are evaluated on a selection of real-world networks of different origins. The network diameter bounds hop-based information propagation and is thus of high importance for designing network algorithms. A careful selection of training networks and network features is crucial for evolving good approximate formulas for the network diameter and similar properties.
Finally, the thesis concludes with Chapter 7, which revisits the concepts that were developed and provides a critical assessment of their potential and limitations.","information propagation; functional brain networks; toxicity; multiplayer online games; network epidemics; epidemic spreading model; complex networks; symbolic regression","en","doctoral thesis","","978-94-028-0907-7","","","","","","","","","Network Architectures and Services","","",""
"uuid:9aca5d9a-df0a-4f73-aab6-669681a4f327","http://resolver.tudelft.nl/uuid:9aca5d9a-df0a-4f73-aab6-669681a4f327","Rayleigh-Bénard convection of a supercritical fluid: PIV and heat transfer study","Valori, V. (TU Delft RST/Reactor Physics and Nuclear Materials; TU Delft Fluid Mechanics)","van der Hagen, T.H.J.J. (promotor); Westerweel, J. (promotor); Rohde, M. (copromotor); Delft University of Technology (degree granting institution)","2018","Fluids above the critical point are widely used in industry. Chemical, pharmaceutical, food industry and energy production are some examples. In the energy production sector they are mainly used as cooling fluids, because they allow to increase the thermal efficiency of the power plants. However, the fundamentals of their heat transfer behavior are still unknown and current heat transfer models fail to predict it. Supercritical (SC) fluids are characterized by strongly varying fluid properties, which are responsible for their particular heat transfer behavior and make them very difficult to model, simulate and experimentally investigate. In past studies, buoyancy was identified as a key cause for the heat transfer deterioration observed in SC fluids. The aim of the research described in this thesis is to investigate the possibility of performing non-intrusive local velocity measurements with the optical technique PIV and to acquire global heat transfer measurements, with strongly changing fluid properties at SC conditions. The experiments were performed in a pure buoyancydriven flow: a Rayleigh-Bénard (RB) flow. The velocity fields of RB convection with strongly varying properties, beyond the so-called Oberbeck-Boussinesq (OB) approximation, were experimentally studied at atmospheric pressure first. An increase of the time-averaged velocity close to the bottom wall of the cell with respect to the top wall of about 13% was found. 
This finding experimentally confirmed a top-bottom “broken symmetry” in the velocity field, which had been observed in previous numerical and theoretical studies but had never been experimentally demonstrated before. The heat transfer with strongly variable properties at SC conditions for constant Prandtl and Rayleigh numbers, specifically defined outside the validity range of the OB approximation, was studied experimentally. The measurements were performed at the Max Planck Institute of Dynamics and Self-Organization in Göttingen (Germany), within a European EuHIT project. It was observed that the measured Nusselt number defined for non-OB conditions differed from point to point, showing that the Rayleigh and Prandtl numbers alone are not sufficient to determine the heat transfer through the cell. It was also seen that the measured Nusselt number was 16% larger than the one predicted by the Grossmann-Lohse theory (2000) for the same Rayleigh and Prandtl numbers at OB conditions. A feasibility study of particle image velocimetry (PIV) at SC conditions was carried out using the background-oriented schlieren (BOS) technique. An estimation of the PIV experimental uncertainty at SC conditions was made with the statistical correlation method proposed by Wieneke et al. (2015). PIV was successfully performed at SC conditions. The main difficulties regarding its applicability were due to blurring and optical distortions in the boundary layer and thermal plume regions. PIV measurements were performed at three different magnitudes of density difference between the top and bottom of the cell. Two of the three experiments were done at similar Rayleigh and Prandtl numbers, defined for non-OB conditions: one towards the liquid phase and the other towards the gas phase. The former showed a lower large-scale circulation (LSC) velocity than the latter. All cases showed the presence of one asymmetric LSC roll, which is different from a typical RB convection flow at OB conditions.
Improvements in the accuracy of PIV measurements and the acquisition of more heat transfer data at SC conditions would help the study of the thermal and viscous boundary layer thicknesses and turbulence modifications that are responsible for different heat transfer regimes in SC fluids.","supercritical fluids; Rayleigh-Bénard convection; particle image velocimetry; heat transfer; optical distortions","en","doctoral thesis","","978-94-6233-856-2","","","","","","","","","RST/Reactor Physics and Nuclear Materials","","",""
"uuid:21d34d79-fc42-429d-860f-e6d17e0ca635","http://resolver.tudelft.nl/uuid:21d34d79-fc42-429d-860f-e6d17e0ca635","Retrieval of Aerosol Optical Depth Over Land by Inverse Modeling of Multi-Source Satellite Data","Wu, Y. (TU Delft Optical and Laser Remote Sensing)","Menenti, M. (promotor); de Graaf, M. (copromotor); Delft University of Technology (degree granting institution)","2018","The Aerosol Optical Depth (AOD), a measure of the scattering and absorption of light by aerosols, has been extensively used for scientific research such as monitoring air quality near the surface due to fine particles aggregated, aerosol radiative forcing (cooling effect against the warming effect by carbon dioxide CO2 ), aerosol long-term trend analysis and the climate change on regional and global scale.
Aerosols vary greatly over time and space. This is because of the short lifetime of aerosols (a few hours to a week), and also because of the heterogeneous distribution of sources and the variable effectiveness of atmospheric mixing through turbulence. To monitor aerosols, observations by space-borne instruments have a huge advantage (nearly global daily coverage) over ground-based measurements (point observations). Global quantitative aerosol information has been derived from satellite measurements for decades. The MODerate resolution Imaging Spectroradiometer (MODIS) AOD product has proven to be mature and is applied extensively in different scientific fields. The current AOD product generated with the Collection 6 (C6) Dark Target (C6_DT) algorithm over land still suffers from errors and biases due to parameterization, assumptions, modeling, and retrieval techniques, as well as ill-posed problems, presenting large uncertainties, including regional biases, angular effects, and a large number of unphysical negative values. Chapter 1 discusses the challenges and limitations of the current satellite aerosol retrieval algorithm.
Owing to the use of static aerosol properties (predefined aerosol models and a fixed vertical profile over the globe), the MODIS algorithm may produce serious errors, since aerosols change over time and are distributed very diversely across altitude levels. To quantify these errors, in Chapter 3 the sensitivity of the AOD retrieval to variations in aerosol vertical profiles and types with the MODIS algorithm is evaluated by a set of experiments. It was found that the AOD retrieval shows a high sensitivity to different vertical profiles and types.
As suggested by the sensitivity study, it is necessary to investigate the impact of dynamic aerosol properties in a real case. To do this, an adaptation of the MODIS C6_DT algorithm was implemented to consider realistic aerosol vertical profiles in the retrieval (Chapter 4). MODIS and Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observation (CALIPSO) measurements were used. The vertical profile, inferred from CALIPSO data, was applied in the new algorithm to generate an accurate Top Of the Atmosphere (TOA) reflectance for the retrieval. The AOD retrieval was compared between C6_DT and the new algorithm for cases of heavy smoke and dust. The difference in the retrieval was significant between C6_DT and the new algorithm, demonstrating that C6_DT would give large errors in the retrieval for these cases.
In the MODIS algorithm, the assumption of a surface with isotropic reflection (Lambertian) is inconsistent with the well-known fact that the surface has a strongly anisotropic reflection (non-Lambertian), and could lead to large uncertainties in estimating the surface contribution to satellite measurements, with resulting errors in the AOD retrieval. Chapter 5 describes a newly developed algorithm (BRF_DT) that considers non-Lambertian surface reflectance characterized by the Bidirectional Reflectance Distribution Function (BRDF), where the surface reflection is described by four reflectance properties (bidirectional, directional-hemispherical, hemispherical-directional, and bihemispherical reflectance) and coupled into the radiative transfer process to generate an accurate TOA reflectance. In addition, a parameterization of the spectral relationship inherited from C6_DT was applied to constrain the surface BRF. The remaining three components are determined by the MODIS BRDF/albedo product. As shown by sample plots and histograms, as well as analysis and comparison against AERONET measurements, the AOD retrievals were significantly improved by BRF_DT, especially for areas with heavy aerosol loading.
For areas with light aerosol loading, the parameterization of the spectral surface BRF should be further refined to yield a better retrieval. In Chapter 6, a new parameterization is derived for the BRF_DT algorithm (called BRF_DT2) using three years of BRF data from the AERONET-based Surface Reflectance Validation Network (AS-RVN). The contribution to the TOA reflectance dominated by the surface BRF was well estimated. As a result, negative retrievals and angular biases were significantly reduced in BRF_DT2. Chapter 7 presents a summary of current and future research on satellite aerosol retrieval.
For historical reasons, the European electricity system is institutionally fragmented and organized along national boundaries. National-strategic objectives have been argued to hamper cross-border network expansion in Europe. Transmission system operators, national regulatory authorities, and governments may pursue their own interests, as a result of which transmission corridors that are in the pan-European interest may not be realized.
This thesis presents a quantitative, agent-based modeling approach to simulate network investment decisions by individual actors, which can be made subject to different investment identification and evaluation criteria. The model is used to determine the long-term differences in network development and the socio-economic welfare effects between the application of a national versus the application of a pan-European scope of investment identification and evaluation. On the basis of simulations performed under ten different scenarios, it concludes that although the application of a national cost-benefit perspective indeed leads to a reduction in transmission grid capacity in the long term, the reduction in socio-economic welfare remains limited, provided that only economic considerations play a role in network investment decisions.","Electricity; transmission; grid; TSO; cross-border; trans-national; network investment; agent-based; national interest; investment decision","en","doctoral thesis","","978-94-6186-871-8","","","","","","","","","Energie and Industrie","","",""
"uuid:a9ff69cd-e5a5-451e-a6c8-0681b633927d","http://resolver.tudelft.nl/uuid:a9ff69cd-e5a5-451e-a6c8-0681b633927d","Incorporating sensemaking perspective in design: Supporting physicians during the contouring tasks in radiotherapy","Aselmaa, A. (TU Delft Mechatronic Design)","Goossens, R.H.M. (promotor); Song, Y. (promotor); Delft University of Technology (degree granting institution)","2017","Radiotherapy is a type of cancer treatment that uses high energy radiation to shrink tumors by destroying cancer cells. It is estimated that 52 per cent of cancer patients can potentially benefit from this type of treatment (Delaney et al. 2005). Planning radiotherapy treatment is a complicated multi-disciplinary process (Aselmaa et al. 2013b). One of the most critical and cognitively challenging steps in the workflow for planning treatment is contouring. Through a complicated underlying cognitive process of ‘sensemaking’, physicians draw the visible boundary of the tumor (i.e. gross tumor volume, GTV) and surrounding organs that are also at risk, as identified in medical images, based on the synthesis of different types of data as well as their knowledge and experience.","Sensemaking; Radiotherapy; Design","en","doctoral thesis","","978-94-028-0890-2","","","","","","","","","Mechatronic Design","","",""
"uuid:fe293c97-3f1e-4a39-bcbd-f5ab79d32d87","http://resolver.tudelft.nl/uuid:fe293c97-3f1e-4a39-bcbd-f5ab79d32d87","Driver Psychology during Automated Platooning","Heikoop, D.D. (TU Delft Transport and Planning)","van Arem, B. (promotor); Stanton, N.A. (promotor); de Winter, J.C.F. (copromotor); Delft University of Technology (degree granting institution)","2017","With the rapid increase in vehicle automation technology, the call for understanding how humans behave while driving in an automated vehicle becomes more urgent. Vehicles that have automated systems such as Lane Keeping Assist (LKA) or Adaptive Cruise Control (ACC) not only support drivers in their journey, but also place them in a passive supervising role, scanning for potential hazardous stimuli in the environment or a system malfunction. More advanced technology that includes both lateral and longitudinal control and enables vehicles to drive at close distances from each other (called platooning technology) has the potential to reduce energy consumption and highway congestion. However, such technology places the driver in an even more critical position, as the time headway between vehicles is often below human reaction time (i.e., down to approximately 0.3 seconds). Little is known about driver behaviour, and the psychological constructs involved therewith, in automated platoons. This thesis investigates driver psychology during automated platooning.","","en","doctoral thesis","","","","","","","","","","","Transport and Planning","","",""
"uuid:2d2cbe51-12fc-4162-aaf2-044c6bc5e15b","http://resolver.tudelft.nl/uuid:2d2cbe51-12fc-4162-aaf2-044c6bc5e15b","Offshore wind power plants with VSC-HVDC transmission: Grid code compliance optimization and the effect on high voltage ac transmission system","Ndreko, M. (TU Delft Intelligent Electrical Power Grids)","van der Meijden, M.A.M.M. (promotor); Popov, M. (copromotor); Delft University of Technology (degree granting institution)","2017","The development of large offshore wind power generation in the North Sea has been significantly accelerated in the last years. The large distance from shore in combination with the need for large transmission capacity has raised the interest for the voltage source converter high voltage direct current technology (VSC-HVDC). Transmission system operators in order to ensure high degree of the power system security of supply, impose strict grid connection requirements to offshore wind power plants and their HVDC transmission. Based on these boundary conditions, the overall research objectives that have been assessed in the context of this thesis include the following. Assessment of the state of the art coordinated fault-ride through strategies for offshore wind power plants with VSC-HVDC transmission. Analysis of unbalanced grid faults for wind power plants with VSC-HVDC transmission. Investigation of the effect of negative sequence current control for onshore and offshore AC faults. Analysis of the effect of typical grid codes on the power system voltage and rotor angle stability. The developed methodologies, models and control schemes proposed within the context of this thesis could facilitate the analysis and stable operation of transmission systems with VSC-HVDC connected offshore wind power plants.","VSC-HVDC; offshore wind power; grid codes; optimal grid code compliance; negative sequence control","en","doctoral thesis","","978-94-6299-804-9","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:8e4ab395-624e-48d7-ad8e-bf1ec02ee834","http://resolver.tudelft.nl/uuid:8e4ab395-624e-48d7-ad8e-bf1ec02ee834","Interface-resolved simulations of dense turbulent suspension flows","Simões Costa, P. (TU Delft Fluid Mechanics)","Westerweel, J. (promotor); Boersma, B.J. (promotor); Breugem, W.P. (copromotor); Delft University of Technology (degree granting institution)","2017","Transport of particles by a carrier fluid is an important, ubiquitous process. Few of many obvious examples are the blood flow that feeds oxygen to the different parts of our bodies, wind-assisted pollination, sediment transport in sand storms, avalanches, or rivers, cloud formation, and pyroclastic flows. In industry one can think of the flocculation/sedimentation processes in the treatment of drinking water, circulating fluidized bed reactors, sediment transport in land reclamation works, and more.","","en","doctoral thesis","","978-94-92516-99-2","","","","","","","","","Fluid Mechanics","","",""
"uuid:00852372-e240-48c8-a621-c07b45836189","http://resolver.tudelft.nl/uuid:00852372-e240-48c8-a621-c07b45836189","Molecular dynamics simulations of martensitic transformation in iron","Ou, X. (TU Delft (OLD) MSE-3)","Sietsma, J. (promotor); Santofimia, Maria Jesus (copromotor); Delft University of Technology (degree granting institution)","2017","The aim of this PhD. thesis is to use molecular dynamics (MD) simulations to comprehend the mechanisms governing nucleation and growth of martensite phase in austenitic phases in iron. In view of this objective, the thesis is divided into five parts: Chapter 2 reviews previous investigations of the fcc-to-bcc transformation in iron by MD simulations; Chapter 3 performs a preliminary study of the nucleation and growth of bcc phase into fcc bulk in iron at existing bcc/fcc interfaces in the Nishiyama-Wassermann orientation relationship; Chapter 4 studies the mechanisms governing the growth of bcc phase at bcc/fcc interfaces in iron; Chapter 5 and 6 study the thermodynamics of the homogeneous and heterogeneous nucleation of bcc phase inside the fcc crystals, respectively; Chapter 7 illustrates the effects of external strain on the nucleation and growth of bcc phase in fcc iron with (and without) fcc/fcc grain boundaries. A more detailed description of each chapter are included below.","","en","doctoral thesis","","","","","","","","","","","(OLD) MSE-3","","",""
"uuid:44e23602-611f-4d43-81b4-f468c80749e0","http://resolver.tudelft.nl/uuid:44e23602-611f-4d43-81b4-f468c80749e0","Freestanding 2D Materials and their Applications: From Lab to Fab","Cartamil Bueno, S.J. (TU Delft QN/Steeneken Lab)","Steeneken, P.G. (promotor); van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2017","","Graphene; 2D materials; membranes; mechanical resonators; mechanical pixels; atomic force mucroscopy (AFM); laser interferometry; colorimetry; sensing applications; Graphene Interferometric Modulator Display (GIMOD)","en","doctoral thesis","","978-90-8593-328-1","","","","","","","","","QN/Steeneken Lab","","",""
"uuid:695c7410-ac2d-4fee-879b-2501a0d72421","http://resolver.tudelft.nl/uuid:695c7410-ac2d-4fee-879b-2501a0d72421","Metal–Insulator Transitions in Heterostructures of Quantum Materials","Mattoni, G. (TU Delft QN/Caviglia Lab)","van der Zant, H.S.J. (promotor); Caviglia, A. (copromotor); Delft University of Technology (degree granting institution)","2017","This thesis is an experimental investigation of the physical properties of different transition metal oxide ultra-thin films. A common feature of these various materials and structures is that they exhibit a solid-state phase transition from a metallic to an insulating state, which is triggered upon changing sample composition, or by varying an external stimulus such as temperature, illumination or gas pressure. The experiments performed cover a broad spectrum of condensed matter, from material growth, structural characterisation and nanodevice fabrication to low-temperature magnetotransport, synchrotron microscopy and gas sensing.","Quantum materials; metal–insulator transitions; X-ray photoemission electron microscopy; low-temperature electronic transport; complex oxide heterostructures","en","doctoral thesis","","978-90-8593-330-4","","","","Casimir PhD series 2017-46","","2018-12-18","","","QN/Caviglia Lab","","",""
"uuid:38086341-21d1-4869-a51e-2cae76143e12","http://resolver.tudelft.nl/uuid:38086341-21d1-4869-a51e-2cae76143e12","The Participation Triangle: Involving Generation Y in energy strategy","van Andel, I.C.O. (TU Delft Policy Analysis)","Thissen, W.A.H. (promotor); Enserink, B. (copromotor); Delft University of Technology (degree granting institution)","2017","The liberalization of the Dutch energy market has led to a change of relation between energy companies and their customers. At the same time, the Dutch energy policy expects energy companies to contribute to an energy supply that is cleaner, smarter and more varied, and available at any time at affordable prices. The situation since the liberalization of the energy market can be summarized in the following points:
- Energy companies provide a product, energy, that is of social interest and importance, which obliges them to act in a socially responsible manner;
- Energy as a product is a commodity;
- Energy consumers are free to choose the energy supplier they want to provide for their energy needs.
Consequently, energy suppliers have to think and act like commercial companies, which means that energy companies in a liberalized market, in addition to their public responsibility, have strategic marketing issues to handle.
The notion that energy companies a) need future energy consumers to help them understand changes going on at the consumer end, and their probable implications for future energy supply, while b) being at the same time unfamiliar with this specific group of consumers, has resulted in the following leading question of the research: How can the future energy consumer be involved effectively in the strategy of an energy company?
Answering this practical design question requires answering a variety of underlying knowledge questions, including definitions of key concepts such as ‘involvement’ and ‘effective’, and, more generally, ‘What factors and conditions affect the process of involvement, and what is their impact on the effectiveness of the process?’ and ‘What design principles follow from these insights?’
The theoretical basis for answering these knowledge questions lies in two research traditions: Policy Analysis and Consumer Research.
Consumer Research and Policy Analysis assign three common elements to the concept of involvement. They both implicitly and explicitly consider:
1) The topic: the subject the involvement is about. In this research the topic was the strategy of Eneco concerning future energy supply.
2) The participant: the person or group of persons that is actively involved or being involved with the topic. In this research the future energy consumer, represented by participating members of Generation Y, was the participant.
3) The initiator: the party that initiates and/or organizes the involvement of the participant in the topic. In this research Eneco, representing the energy company, was the initiator. In this research, the relations between these elements are conceptualised as “The Participation Triangle”.","Policy Analysis; Marketing; Involvement; Participation; Participatory design; Generation Y; Action design research; Energy; Strategy; Co-creation","en","doctoral thesis","","978-94-6299-788-2","","","","","","","","","Policy Analysis","","",""
"uuid:7ad1aef5-7ce1-4972-9443-9f66a5c727f6","http://resolver.tudelft.nl/uuid:7ad1aef5-7ce1-4972-9443-9f66a5c727f6","Music Information Retrieval beyond Audio: A Vision-based Approach for Real-world Data","Bazzica, A. (TU Delft Multimedia Computing)","Hanjalic, A. (promotor); Liem, C.C.S. (promotor); Delft University of Technology (degree granting institution)","2017","Digital music platforms have recently become the primary revenue stream for recorded music, making record labels and content owners increasingly interested in developing new digital features for their users.
Besides listening to expert-curated playlists and automatically recommended music, users can also benefit from a more informative, non-linearly accessible experience accommodating multiple perspectives on the content.
To give some examples of such enriched experiences, an alternative version of a piece can automatically be suggested. Users can skip through a long classical music piece guided by a visualization of its structure (e.g., movements, recurring themes). They can also switch viewpoints while watching a music video instead of sticking to the editor's choice.
Developing such features requires innovation in automated content-based methods that extract musical knowledge. Traditionally, Music Information Retrieval (Music IR) researchers have tackled this problem mostly from an audio-only perspective.
Several works have however shown that other types of data, such as social tags, listening behaviors, and symbolic music scores, can largely improve the performance of audio-only algorithms, or even enable tasks that cannot be solved at all using audio alone.
In this thesis, we focus on the relatively unexplored field of vision-based Music IR, which studies how to analyze the visual channel accompanying a music recording in order to learn more about the music piece being performed.
Several existing methods require obtrusive settings, such as 3D motion capture systems, which are not applicable in professional environments (e.g., during a live classical music concert). Other methods rely instead on favorable viewpoints, static cameras, and uniform backgrounds to simplify the analysis of musicians' movements.
In both cases, the devised algorithms may not be suitable for commercial music platforms, especially those dealing with real-world data, i.e., unstructured and unconstrained music videos.
We therefore consider tasks, algorithms and datasets with the real-world data challenges in mind, advancing the state-of-the-art in two ways: (i) we investigate how to process videos of a single musician aiming to extract musically relevant cues that can be exploited to solve existing, as well as new, Music IR problems, and (ii) we address the challenging case of large ensembles, proposing a way to possibly parse complex scenes and link musician-wise cues to identity and instrumental part annotations.
More in detail, this thesis first presents a global motion feature which aims to represent musicians' movements over time.
While lightweight and instrument-generic, it shows limitations in the presence of camera motion.
For this reason, we switch to detecting “playing/non-playing” (P/NP) labels, which can be inferred from different viewpoints and at different scales, and which can be used to encode the instrumentation of a performance over time.
We first show the value of such a semantic feature by demonstrating that it allows a symbolic music score to be roughly synchronized with a performance recording.
We then focus on the visual analysis of videos of large classical music ensembles, presenting a semi-automatic framework for P/NP annotation.
The experiments show that video face clustering is a critical problem to solve; we therefore illustrate a novel method that exploits the quasi-static scene properties of classical music videos to generate better face clusters by relying on an automatically built map of the scene.
Finally, we address the challenging problem of detecting note onsets for clarinetist videos as a case study for woodwind and brass instruments. We propose a novel convolutional network architecture based on multiple streams and absence of temporal pooling, aiming to capture the fine spatio-temporal information conveyed by finger movements.
Our proposed methods, outcomes, and envisioned applications show that real-world music videos are an unexploited asset rather than a problem to avoid.
Furthermore, the light this thesis sheds on vision-based Music IR gives various indications on where future Computer Vision and Music IR research agendas can meet, bringing further innovation to the digital music platforms market.","music information retrieval; computer vision; cross-modal analysis","en","doctoral thesis","","978-94-6299-807-0","","","","","","","","","Multimedia Computing","","",""
"uuid:861b26cf-499d-4038-af2e-a29641f84ada","http://resolver.tudelft.nl/uuid:861b26cf-499d-4038-af2e-a29641f84ada","A semi-continuous formulation for goal-oriented reduced-order models","Cheng, L. (TU Delft Aerodynamics)","Bijl, H. (promotor); Hulshoff, S.J. (copromotor); Delft University of Technology (degree granting institution)","2017","Modern computational and experimental techniques can represent the detailed dynamics of complex systems using large numbers of degrees of freedom. To facilitate human interpretation or the optimal design of control systems, however, reduced-order models (ROMs) are required. Conventional reduced-order modeling techniques, such as those based on proper orthogonal decomposition (POD), balanced proper orthogonal decomposition (BPOD), and dynamic mode decomposition (DMD), are purely data-driven. That is, the governing equations are not taken into account when determining the solution basis of the ROM. The resulting ROMs are thus sub-optimal, particularly when low numbers of degrees of freedom are used. Bui-Thanh et al. addressed this problem by determining ROM solution bases using a goal-oriented optimization procedure that seeks to minimize the error between the full and reduced-order goal functionals with the reduced-order model as a constraint. However, several issues limit the application of this approach. First, it requires explicit input matrices with the dimension of the reference data that result from spatially discretizing the governing equations. In addition, its derivation is restricted to linear governing equations and goal functionals. To overcome these limitations, our research group has proposed an alternative, a semi-continuous formulation (SCF), in which the ROM constraint and the optimization process are defined in a continuous setting. 
In this thesis, the mathematical framework of the SCF is illustrated, as is the algorithm used to solve the optimization problem.","Semi-Continuous Formulation Framework; 1D problems; Stokes problems; discrete proper orthogonal decomposition; Conjugate Gradient Method","en","doctoral thesis","","","","","","","","","","","Aerodynamics","","",""
"uuid:ac8ebee1-278c-484d-acc0-d39c765c1ac2","http://resolver.tudelft.nl/uuid:ac8ebee1-278c-484d-acc0-d39c765c1ac2","Towards predicting the (dis)comfort performance by modelling: methods and findings","Naddeo, A. (TU Delft Applied Ergonomics and Design)","Vink, P. (promotor); Delft University of Technology (degree granting institution)","2017","The research work underlying this thesis starts from a societal issue: A comfortable artefact helps people to improve their well-being and can be sold easier.
In order to fulfil these two requirements (well-being and companies’ profit), a comfort-driven human-centred design method is needed.
Driven by the thoughts of Stephen Hawking, who, speaking in 1998 about the unified theory of physics, said “it may not aid the survival of our species; it may not even affect our life-style. But ever since the dawn of civilisation, people have not been content to see events as unconnected and inexplicable. They have craved an understanding of the underlying “MAIN ORDER” in the world”, the purpose of this PhD thesis is the investigation of, and search for, a model useful to describe, analyse, evaluate and predict (dis)comfort perceptions.
The contribution of this PhD work to solving this issue can be summarized by the following statement:
“Starting from the proposal for a general framework for describing comfort perception and a general law for predicting it, some studies about specific comfort performances have been conducted in order to define some comfort “functions” and to propose a design-path that should help designers and companies to improve the comfort performance of new products”.","","en","doctoral thesis","","978-94-6186-870-1","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:244b9699-0814-4bc9-aa48-07361989bd64","http://resolver.tudelft.nl/uuid:244b9699-0814-4bc9-aa48-07361989bd64","Bridging PIV spatial and temporal resolution using governing equations and development of the coaxial volumetric velocimeter","Schneiders, J.F.G. (TU Delft Aerodynamics)","Scarano, F. (promotor); Delft University of Technology (degree granting institution)","2017","A series of techniques is proposed for volumetric air flow measurements that are based upon the principles of particle image velocimetry (PIV). The proposed techniques fall in two categories; part 1 of this dissertation considers measurement data processing using constitutive laws and part 2 focuses on development of a coaxial volumetric flow measurement system that uses helium filled soap bubbles (HFSB) as tracer particles.
In part 1, first a technique is proposed to measure instantaneous volumetric pressure using a low-repetition-rate tomographic PIV system. Instead of time-resolved measurement of the flow's temporal evolution, which is typically required for pressure-from-PIV procedures, the required temporal information is obtained by solution of the incompressible Navier-Stokes equations in vorticity-velocity formulation, using the spatial information available from the instantaneous measurements.
The reverse is proposed for cases where temporal resolution is more abundant, but spatial resolution is limited. The vorticity transport equation is leveraged to couple temporal information with instantaneous velocity data in the proposed VIC+ framework, in an attempt to obtain a dense velocity field at high spatial resolution. The governing principle is that by using the flow governing equations, the data ensemble used for interpolation is increased beyond instantaneous velocity measurements only. The technique is demonstrated to allow for measurement of vorticity and dissipation in a real-world experiment, which would otherwise be underestimated by more than 40% using the established tomographic PIV approach.
The proposed VIC+ technique uses a data ensemble for dense velocity interpolation consisting of the instantaneous velocity and material derivative measurements obtained from Lagrangian particle tracking velocimetry. An extension of the VIC+ framework that uses a measurement time-segment instead of instantaneous data only is shown to potentially improve the measurement fidelity further, when a cost-effective three-dimensional implementation can be realized.
An uncertainty quantification technique is proposed for future developments of such dense interpolation techniques. It is shown that the results from Lagrangian particle tracking measurements can be directly used for uncertainty quantification of dense interpolations and no independent measurement data is required.
In part 2 of this dissertation, a technique is first proposed for large-scale volumetric pressure measurement. The method follows recent developments of large-scale measurements using HFSB tracer particles, in combination with Lagrangian particle tracking and ensemble bin-averaging. This allows for evaluation of accurate velocity statistics and in turn the time-averaged pressure field.
The dissertation concludes with the proposal of the coaxial volumetric velocimeter (CVV). The CVV brings imaging and illumination together in a compact box, viewing and illuminating a measurement volume from a single viewing direction. The theoretical background that is derived shows that measurements in air using the CVV are only possible using tracer particles that scatter significantly more light than traditional micron-sized tracer particles. Here, HFSB tracer particles are used. Due to the small solid angle of the imaging system, tracer particles need to be imaged over an extended number of snapshots to increase particle positional accuracy, making use of particle trajectory regularization.
A prototype CVV has been realized, which is first used to confirm that the flow around a sphere is measured with acceptable correspondence to a potential flow solution. Second, in the case of the flow around a cyclist, the CVV is shown to allow for measurements near both concave and convex surfaces within one measurement volume. This allows for flow analysis using skin-friction lines. In addition, the compact nature of the CVV allows mounting on a robotic arm for time-averaged measurement of the flow around a large and complex wind tunnel model. The full-scale measurement of the flow around Giro d’Italia cyclist Tom Dumoulin using the CVV is an example of the latter.","PIV; PTV; Measurement; Aerodynamics; Wind tunnel; Fluid dynamics; CFD; Uncertainty; CVV; HFSB; VIC+","en","doctoral thesis","TU Delft OPEN Publishing","978-94-92516-97-8","","","","","","","","","Aerodynamics","","",""
"uuid:53c3c8cb-ff74-49c9-9e7d-e826a60fbba6","http://resolver.tudelft.nl/uuid:53c3c8cb-ff74-49c9-9e7d-e826a60fbba6","Creating public value: Optimizing cooperation Between public and private Partners in infrastructure Projects","Koops, L.S.W. (TU Delft Integral Design & Management)","Bakker, H.L.M. (promotor); Hertogh, M.J.C.M. (promotor); Bosch-Rekveldt, M.G.C. (copromotor); Delft University of Technology (degree granting institution)","2017","Infrastructure projects - such as the construction of tunnels and bridges or the (re)construction of roads and highways – are always performed to add quality to society. In The Netherlands, these projects are most often financed by the government, from local to national level, and constructed by private contractors. Public and private partners increasingly recognize the importance of cooperation to ensure successful execution of projects. However, the partnership arrangements made at strategic level are still difficult to ensure at tactical level, where the project is controlled. This study focuses on the tactical level and specifically on the perspective of the public project managers. It is investigated what they consider project success and how the project management team operates to control the project processes. The main result of this study is the public Value Chain in which the processes of the combined project organization are captured. Recommendations are made on the primary and secondary processes that binds the partners to each other. The public Value Chain will help collaborating partners to position their specific contribution to the project outcomes more clearly. Practitioners are encouraged to use the public Value Chain to organize their project activities and discuss the contribution of both public and private parent organizations to an efficient process. It can help partners to execute their specific contribution to the value they are creating. 
This will further optimize collaboration between public and private partners.","infrastructure; projects; public; private","en","doctoral thesis","","978-94-6186-863-3","","","","","","","","","Integral Design & Management","","",""
"uuid:1572a346-95c9-43a5-bf81-81d1fbfde2e9","http://resolver.tudelft.nl/uuid:1572a346-95c9-43a5-bf81-81d1fbfde2e9","Real-time resource model updating in continuous mining environment utilizing online sensor data","Yuksel-Pelk, C. (TU Delft Resource Engineering)","Jansen, J.D. (promotor); Benndorf, J. (promotor); Buxton, M.W.N. (copromotor); Delft University of Technology (degree granting institution)","2017","In mining, modelling of the deposit geology is the basis for many actions to be taken in the future, such as predictions of quality attributes, mineral resources and ore reserves, as well as mine design and long-term production planning. The essential knowledge about the raw materialproduct is based on this model-based prediction, which comes with a certaindegree of uncertainty. This uncertainty causes one of the most common problems in the mining industry, predictions on a small scale such as a train load or daily production are exhibiting strong deviations from reality.Some of the most important challenges faced by the lignite mining industry are impurities located in the lignite deposit. Most of the times, these high ash values cannot be captured completely by exploration data and in the predicted deposit models. This lack of information affects the operational process.","mining; online sensor data","en","doctoral thesis","","978-94-6233-803-6","","","","","","","","","Resource Engineering","","",""
"uuid:f9cc58c1-af77-4f1b-8de1-13a01c41a1e7","http://resolver.tudelft.nl/uuid:f9cc58c1-af77-4f1b-8de1-13a01c41a1e7","Activerende Gevels: Naar gedrag beïnvloedende gebouwen","Melet, E. (TU Delft Building Product Innovation)","Eekhout, A.C.J. (promotor); van Timmeren, A. (promotor); Delft University of Technology (degree granting institution)","2017","As a result of the large amounts of CO2 emissions the built environment produces, it contributes immensely to climate change. Within the strategies developed to reduce anthropogenic CO2 emissions, getting buildings to an energy-neutral level is one of the main priorities. The blueprint for this altered energy-efficient building now seems to be ready: the heavily insulated outer shells of buildings have decreased the demand for energy and the necessary coolness and warmth are induced using low-temperature systems. Whether these changes are enough remains to be seen. The actual, measured energy efficiency of these environmentally friendly buildings turns out to be lower than their theoretical efficiency; they use more energy than was expected up front. This difference can partly be attributed to the increased complexity of and sensitivity to improper use of these buildings. Users simply do not understand them well enough. Improper handling then turns into energy loss. Rebound effects also play a role in the lower levels of energy saving. The rebound effect states that as a result of higher efficiency, energy will relatively be cheaper. Lower energy bills will lead to a more increased use of warming and cooling, or to extra activities – whether or not polluting – outside the building...","gevels; co2 emissions","nl","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6366-001-3","","","","A+BE | Architecture and the Built Environment No 21 (2017)","","","","","Building Product Innovation","","",""
"uuid:8792f741-ddf0-435e-b3b1-9b57e1e3ddcb","http://resolver.tudelft.nl/uuid:8792f741-ddf0-435e-b3b1-9b57e1e3ddcb","At Home in the World: Architecture, the Public and the Writings of Hannah Arendt","Teerds, P.J. (TU Delft OLD Methods & Analysis)","Avermaete, T.L.P. (promotor); Delft University of Technology (degree granting institution)","2017","The modern emphasis on authenticity and the individual not only caused an ‘astonishing flowering of poetry and music’ and ‘the rise of the novel’ states the philosopher Hannah Arendt in her well-known book The Human Condition, but also the fall and ‘decline of the more public arts, especially architecture.’ In this study Hans Teerds takes up the challenge to address the public aspects of architecture, as they emerge from the political philosophy of Hannah Arendt. Starting from a reflection upon the contemporary urban landscape and its seemingly loss of public space, he challenges the contemporary theoretical discourses on the fall of the public character of architecture. Architecture, this study argues, shapes the experience of public appearance rather than being able to guarantee public life. It enables the human being to appear in public and to be ‘at home in the world’. The essential task of architecture is to shape and make tangible the common in society, to ‘thicken our understanding of the world.’ Architecture therefore is to be understood as a public enterprise, not simply a matter of the architect and other stakeholders only. After its fall, the challenge thus is to recover architecture as public art.
We focus specifically on the visual analysis of Diffusion Tensor Imaging (DTI)
datasets. DTI is a magnetic resonance imaging (MRI) based modality, which is commonly used in neuroscience to investigate brain white matter in vivo. It requires a long scanning time compared to other imaging modalities. Acceleration of MRI acquisitions has the potential to improve the applicability of DTI. Compressed sensing (CS) is a signal reconstruction technique that is used to accelerate MRI acquisitions. The traditional CS method aims at optimizing the global quality of the reconstructed image.
However, in practice, the quality of local structures is often of more interest. Therefore, we investigate CS for this purpose and contribute in this direction by adapting the traditional CS reconstruction method to focus on the quality of local structures.","Diffusion Tensor Fields; Comparative Visualization; Ensemble Visualization; Glyph Design; Visual Analysis","en","doctoral thesis","","978-94-92516-96-1","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:e6ab025b-6acf-4d4c-a583-ad1e39a8caa7","http://resolver.tudelft.nl/uuid:e6ab025b-6acf-4d4c-a583-ad1e39a8caa7","Techniques for depth acquisition and enhancement of depth perception","Liao, J. (TU Delft Computer Graphics and Visualisation)","Eisemann, E. (promotor); Delft University of Technology (degree granting institution)","2017","Depth plays an essential role in computer graphics for the sorting of primitives. The related data representation, the depth map, is useful for many different applications. In this dissertation, we present solutions for creating depth maps with the goal of using these maps to enhance the depth perception in the original images. Regarding the generation of depth maps, we propose two solutions, a reconstruction method via near-light photometric stereo (PS) and a depth map design tool via user guidance. Additionally, we present several techniques for image enhancement of depth perception based on depth information. In the following, we give a short summary of the dissertation.","","en","doctoral thesis","","978-94-92516-90-9","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:4ca8c0b8-0835-47c3-8523-12fc356768f3","http://resolver.tudelft.nl/uuid:4ca8c0b8-0835-47c3-8523-12fc356768f3","PIV Uncertainty Quantification and Beyond","Wieneke, B.F.A. (TU Delft Aerodynamics)","Scarano, F. (promotor); Sciacchitano, A. (copromotor); Delft University of Technology (degree granting institution)","2017","The fundamental properties of computed flow fields using particle imaging velocimetry (PIV) have been investigated, viewing PIV processing as a black box without going in detail into algorithmic details. PIV processing can be analyzed using a linear filter model, i.e. assuming that the computed displacement field is the result of some spatial filtering of the underlying true flow field given a particular shape of the filter function. From such a mathematical framework, relationships are derived between the underlying filter function, wavelength response function (MTF) and response to a step function, power spectral density, and spatial autocorrelation of filter function and noise.
A definition of spatial resolution is provided that is independent of an arbitrary threshold, e.g. of the wavelength response function, and provides the user with a single number to appropriately set the parameters of the PIV algorithm required for detecting small velocity fluctuations.
The most important error sources in PIV are discussed and an uncertainty quantification method based on correlation statistics is derived, which has been compared to other available UQ-methods in two recent publications (Sciacchitano et al. 2015; Boomsma et al. 2016) showing good sensitivity to a variety of error sources. Instantaneous local velocity uncertainties are propagated for derived instantaneous and statistical quantities like vorticity, averages, Reynolds stresses and others. For Stereo-PIV the uncertainties of the 2C-velocity fields of the two cameras are propagated into uncertainties of the computed final 3C-velocity field.
A new anisotropic denoising scheme is presented as a post-processing step, which compares the uncertainties to the local flow gradients in order to devise an optimal filter kernel for reducing the noise without suppressing true small-scale flow fluctuations.
For Stereo-PIV and volumetric PIV/PTV, an accurate perspective calibration is mandatory. A Stereo-PIV self-calibration technique is described to correct misalignment between the actual position of the light sheet and where it is supposed to be according to the initial calibration procedure. For volumetric PIV/PTV, a volumetric self-calibration (VSC) procedure is presented to correct local calibration errors everywhere in the measurement volume.
Finally, an iterative method for reconstructing particles (IPR) in a volume is developed, which is the basis for the recently introduced Shake-the-Box (STB) technique (Schanz et al. 2016).","PIV technique; PIV Uncertainty; Calibration method","en","doctoral thesis","","978-94-92516-88-6","","","","","","2017-12-12","","","Aerodynamics","","",""
"uuid:474e53d4-de7a-483a-9ce8-18eb99f902fa","http://resolver.tudelft.nl/uuid:474e53d4-de7a-483a-9ce8-18eb99f902fa","Characterization of a novel Optical Micro-machined Ultrasound Transducer","Leinders, S.M. (TU Delft ImPhys/Acoustical Wavefield Imaging)","de Jong, N. (promotor); Urbach, Paul (promotor); Verweij, M.D. (copromotor); Delft University of Technology (degree granting institution)","2017","We design and demonstrate a prototype ultrasound sensor based on a photonic micro-ring resonator integrated on a silicon membrane, and show that it can detect very low pressure ultrasound waves. The use of integrated photonics in future array transducers has several benefits: for instance it provides a small spatial footprint, compatibility with MRI due to the lack of electrical wiring, easy interrogation of the array of elements and ease of mass production, which may result in cost-effective fabrication of array transducers. To understand the working principle of the sensor, we have modeled the basic sensor element, fabricated the sensor and measured the response of the sensor to ultrasound. We have studied the response of the optical resonator separately before we integrate the resonator on the membrane and measure the response of the entire sensor. Besides the characterization of the sensor, we have expanded the existing knowledge of acoustical noise to determine the noise mechanism of the sensor.","","en","doctoral thesis","","978-94-028-0870-4","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:7ec08141-8ea9-4a7e-923c-b01d9a367b47","http://resolver.tudelft.nl/uuid:7ec08141-8ea9-4a7e-923c-b01d9a367b47","On the mechanics and stability of micro-plates in electrically loaded MEMS devices","Sajadi, B. (TU Delft Computational Design and Mechanics)","van Keulen, A. (promotor); Goosen, J.F.L. (copromotor); Delft University of Technology (degree granting institution)","2017","In the last decades, Micro-Electro-Mechanical Systems (MEMS) have drawn immense attention due to their potential use in a wide variety of modern applications, including micro-mechanical sensors and actuators. MEMS are devices combining mechanical and electrical components between 1 and 100 micrometers, all integrated into a single chip. The performance of these devices hinges on the deflection and movement of these micro-mechanical components and clearly, improvement and innovation of MEMS require a comprehensive knowledge and in-depth understanding of the nonlinear mechanics of these components.
In spite of the simple geometry of common micro-mechanical components, modeling the mechanics of micro-mechanical sensors and actuators is rather complex. In particular, the mechanics of micro-plates in electrostatic MEMS is entangled with two influential sources of nonlinearity, namely geometrical nonlinearity and the nonlinearity due to the presence of the electric field. These sources of nonlinearity are often the origin of instability and failure in MEMS devices, but might also be exploited to achieve, for example, higher sensitivity in the device. Either way, such nonlinearities must be incorporated in the modeling and design of these micro-mechanical components.
This thesis provides an investigation of the nonlinear mechanics of micro-plates in electrostatic MEMS devices. Based on the proposed models, we are able to predict some phenomena in micro-plates that have not been noticed before and to study these aspects at a level of detail that was not possible previously. In particular, based on total potential energy and a Lagrangian approach, the nonlinear mechanics and stability of a clamped circular micro-plate in interaction with an electrostatic field are studied. The effects of different loading conditions (i.e. static and dynamic electric potential, and with or without the presence of a differential pressure) on the stability of such a system are addressed.
The results of this study suggest that in presence of a differential pressure the steady-state motion of an electrically actuated micro-plate can be bi-stable or even multi-stable. In fact, a differential pressure can cause additional limit points and an unstable solution branch in the -static or dynamic- steady state solutions of the system. Saddle-node and period doubling bifurcations are repeatedly observed in the results and are recognized as main mechanisms of pull-in. Furthermore, one newly observed critical point in static loading is shown to be highly sensitive to the applied differential pressure suggesting the possibility of employing this limit point for sensing applications.
In addition, this thesis provides a study on analyzing nano-plates within the framework of continuum mechanics. In this regard, the nonlinear vibrations of an electrically actuated graphene resonator are modeled and a methodology is proposed for characterization of its mechanical properties. Furthermore, the possibility of capturing the scaling effects in the mechanical behavior of nano-plates by employing a nonlocal continuum theory is addressed. As a result, two modification factors for the extensional and bending stiffness of nano-plates are presented to account for the effect of thickness in the nonlocal elasticity formulations.
Finally, the mechanical performance and instability of a micro-plate as a transducer in surface stress sensing is investigated and an optimized design for such a sensor is proposed. It is shown that using the proposed optimized design, the sensitivity and overall reliability of such capacitive surface stress sensors can be significantly improved.
The proposed techniques for modeling the mechanics of micro-plates in MEMS devices are simple and computationally efficient. They can provide in-depth insight into MEMS behavior and can be useful for designing MEMS with plate-like micromechanical components.","Stability; Micro-Plates; nonlinear dynamics; bifurcation; Electrostatic MEMS; Pull-in; nonlinear mechanics","en","doctoral thesis","","978-94-6233-844-9","","","","","","","","","Computational Design and Mechanics","","",""
"uuid:441ec955-cd8d-4ae0-b2f0-98fbf91a570a","http://resolver.tudelft.nl/uuid:441ec955-cd8d-4ae0-b2f0-98fbf91a570a","Through the Organism’s eyes: The interaction between hydrodynamics and metabolic dynamics in industrial-scale fermentation processes","Haringa, C. (TU Delft ChemE/Transport Phenomena)","Mudde, R.F. (promotor); Noorman, H.J. (promotor); Delft University of Technology (degree granting institution)","2017","The broth in industrial scale fermentors may contain significant gradients in, for example, substrate concentration, dissolved oxygen and shear rates. From the perspective of microbes in this fermentor, these gradients translate to temporal variations in their environment that may affect their metabolic response. As a result, there may be differences in process yield between laboratory scale fermentations and their industrial counterpart. Rather than scaling-up bioprocesses based on equivalence, it is recommended to scale-down: mimic the large-scale environment in lab scale setups, to account for hydrodynamic-metabolic interaction from the start. In this thesis, the use of Euler-Lagrange computational fluid dynamics to capture the large-scale fermentation environment is explored. Lagrangian simulations offer to study processes from the microbial perspective (so-called “lifelines”), and enable coupling of metabolic models describing the response to external variations. With this, it is possible to take the history of the trajectory of the microbe into account, as organisms may not adapt to their surroundings instantaneously. Guidelines for the setup of fermentor simulations are presented, and several means for processing the lifelines are discussed. The obtained information is used to design lab-scale fermentations that mimic large-scale conditions. 
It is furthermore shown how coupled hydrodynamic-metabolic simulations can be used to predict yield loss, assess process improvements, and study the onset of population heterogeneity in large-scale fermentors. Additionally, a more fundamental study of the role of the turbulent Schmidt number in multi-impeller mixing is included.","euler-lagrange; fermentor; CFD; hydrodynamic; Metabolic","en","doctoral thesis","","978-94-6299-793-6","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:03d807bf-9dca-4ff2-a797-8521227625e2","http://resolver.tudelft.nl/uuid:03d807bf-9dca-4ff2-a797-8521227625e2","Reflections on the Reversibility of Nuclear Energy Technologies","Bergen, J.P. (TU Delft Ethics & Philosophy of Technology)","van de Poel, I.R. (promotor); Taebi, B. (copromotor); Delft University of Technology (degree granting institution)","2017","The development of nuclear energy technologies in the second half of the 20th century came with great hopes of rebuilding nations recovering from the devasta-tion of the Second World War or recently released from colonial rule. In coun-tries like France, India, the USA, Canada, Russia, and the United Kingdom, nuclear energy became the symbol of development towards a modern and technologically advanced future. However, after more than six decades of experi-ence with nuclear energy production, and in the aftermath of the Fukushima nuclear disaster, it is safe to say that nuclear energy production is not without its problems.
Some of these problems have their origins in the very materiality of the technologies involved. For example, not only does the use of highly radioactive materials give rise to risks for the current generation (e.g., in the potential for disaster when reactors melt down) but high-level radioactive waste from nuclear energy production presents a serious intergenerational problem for which an acceptable final solution or its implementation remains elusive. Moreover, nuclear energy technologies have specific social and political consequences. For example, they have been said to be authoritarian technologies (Winner, 1980), requiring centralized authority, secrecy, and technocratic decision-making.
While some of these problems could have been foreseen before nuclear energy technologies were introduced, others only arose after these technologies were already integrated into the social and infrastructural fabric of our lives. Additionally, new technologies (e.g., Generation III, III+ and IV reactors) are still being developed, bringing with them new and uncertain hazards and risks. Ignorance and uncertainty about the possible deleterious effects of introducing a new technology are inevitable, especially if the technology is complex, large time-scales are involved, or risks depend on social or political factors unforeseen in the design stage. However, this should not deter us from developing and introducing new technologies. Rather, it should motivate us to organize these ‘experiments’ with new technologies in society in such a way that we can learn about their possible hazards and risks as effectively and responsibly as possible (van de Poel, 2011, 2015). In this way, it is possible to minimize risks and avoid unwanted moral, social or political developments. However, organizing such experiments responsibly also means that one could come to the conclusion that continuing an experiment is no longer responsible or desirable. Should we be prepared for such a scenario, and if so, how could we do that? One possible strategy to tackle this issue is that the technology and its introduction should be reversible. The aim of this thesis is to further explore this strategy by answering the following main research question (RQ) and accompanying subquestions (SQ):
RQ: What are the implications of reversibility for the responsible development and implementation of nuclear energy technologies?
SQ1: Under what conditions can nuclear energy technologies be considered reversible?
SQ2: Why should nuclear energy technologies be reversible?
SQ3: If so, how could the reversibility of nuclear energy technologies be achieved?
After the introductory chapter 1, the chapters that form the main body of this dissertation each provide a distinct contribution to answering the three subquestions and, by extension, the main research question. Guided by three historical case studies of nuclear energy technology development (i.e., India, France and the USA), chapter 2 answers the first subquestion by formulating the two conditions under which a nuclear energy technology can be considered reversible, i.e., 1) the ability to stop the further development and deployment of that technology in society, and 2) the ability to undo the undesirable outcomes (material, institutional or symbolic) of the development and deployment of the technology. Chapter 3 subsequently tackles the second subquestion by establishing the general desirability of technological reversibility by virtue of its relation to responsibility in Emmanuel Levinas’ ethical phenomenology. It argues that technology development is a legitimate response to responsibility but inevitably falls short of the responsibility that inspires it, incessantly calling for technological and political change in the process. Having thus argued that nuclear energy technologies should ideally be reversible, chapters 4 and 5 work towards specific strategies to achieve technological reversibility. Chapter 4 first investigates the processes that make it difficult to stop the further development and implementation of a nuclear energy technology in society, thus providing input on how to fulfill the first condition for the reversibility of nuclear energy technologies. To do so, it presents a phenomenological perspective on technology and its adoption based on the work of Alfred Schutz. It also explores different ways in which technology adoption drives the processes of path dependence towards technological lock-in. Chapter 5 examines the history of geological disposal of high-level radioactive waste in the USA.
It identifies a number of concrete policy pitfalls that could lead to lock-in and that should consequently be avoided. It also presents a number of general design strategies that could facilitate the undoing of undesirable consequences of a technology, thus providing input on how to fulfill the second condition for the reversibility of nuclear energy technologies.
Chapter 6 summarizes the central findings of the thesis and explains how these help to answer the research questions. On top of this, it reflects on a number of complications connected to reversibility considerations. Based on this, it is concluded that the question of irreversibility and reversibility is context- and technology-specific and a matter of degree. The chapter concludes with a reflection on generalizations and limitations of the results. Finally, chapter 7 discusses the implications of this dissertation’s results for responsibly experimenting with nuclear energy technologies in society.
region have altered the characteristics of the ice regime in recent decades, leading to several ice disasters during freezing or breaking-up periods. The integrated water resources management plan developed by the Yellow River Conservancy Commission (YRCC) outlines the requirements for water regulation in the upper Yellow River during ice flood periods. YRCC is developing measures that not only safeguard against ice floods, but also assure the availability of adequate water resources. These provide the overall requirements for developing an ice regime forecasting system including lead-time prediction and required accuracy. In order to develop such a system, numerical modelling of ice floods is an essential component of current research at the YRCC, together with field observations and laboratory experiments. In order to properly model river ice processes it is necessary to adjust the hydrodynamic equations to account for thermodynamic effects. In this research, hydrological and meteorological data from 1950 to 2010 were used to analyse the characteristics of ice regimes in the past. Also, additional field observations were carried out for ice flood model calibration and validation. By combining meteorological forecasting models with statistical models, a medium to short range air temperature forecasting model for the Ning-Meng reach was established. These results were used to improve ice formation modelling and prolong lead-time prediction. The numerical ice flood model developed in this thesis for the Ning-Meng reach allows better forecasting of the ice regime and improved decision support for upstream reservoir regulation and taking appropriate measures for disaster risk reduction.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-138-48701-7","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Environmental Fluid Mechanics","","",""
"uuid:45f42dab-8427-4113-ba56-50c9e7171a36","http://resolver.tudelft.nl/uuid:45f42dab-8427-4113-ba56-50c9e7171a36","Deflections and Natural Frequencies as Parameters for Structural Health Monitoring: The Effect of Fatigue and Corrosion on the Deflections and the Natural Frequencies of Reinforced Concrete Beams","Veerman, R.P. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Koenders, E.A.B. (promotor); Delft University of Technology (degree granting institution)","2017","The Dutch road infrastructure contains a large number of concrete bridges and viaducts. Malfunction of these bridges and viaducts has large financial consequences. To avoid such malfunctions, bridges need to be inspected frequently and actions should be taken when required. The reliability of the inspections of the health of a bridge could be increased by monitoring the deflections and/or the vibrations of the bridge. The main idea of a Structural Health Monitoring system is that degradation of a bridge results in detectable changes in the deflections and in the modal properties of the bridge.
The results of data-driven modal calculations of an existing concrete bridge have been compared with the results of Finite Element modal calculations of this bridge to investigate whether this comparison can be used to predict damage in the bridge. The effect of fatigue and localized corrosion damage on the deflections of a Reinforced Concrete beam was investigated by laboratory tests. The effect of fatigue and localized corrosion damage on the deflections and on the first natural frequency of a Reinforced Concrete beam was predicted by Finite Element calculations and probabilistic calculations. It was concluded from the tests and from the calculations that deflections and natural frequencies cannot be used as an indicator of fatigue damage in concrete bridges.","Reinforced concrete; Fatigue; Corrosion; Natural Frequencies; Deflections; Service life","en","doctoral thesis","","9789065624161","","","","","","","","","Materials and Environment","","",""
"uuid:9c9cddf0-5bae-4869-854f-fda61439f27d","http://resolver.tudelft.nl/uuid:9c9cddf0-5bae-4869-854f-fda61439f27d","Elastic wavefield inversion by the alternating update method","Rizzuti, G. (TU Delft ImPhys/Quantitative Imaging)","Mulder, W.A. (promotor); Delft University of Technology (degree granting institution)","2017","Full-waveform inversion is a promising tool for a wide range of imaging scenarios, in that it has the potential to harness the non-linear relationship between model parameters and data (as opposed to traditional methodologies), in order to produce truly quantitative results. Non-linearity represents an opportunity, in this sense, but it also begets local minimum issues when gradient-based optimization is employed. In this thesis, we are particularly interested in the quantitative estimation of the elastic parameters of the earth, such as compressibility, shear modulus, and density. If successful, this procedure brings geophysical imaging closer to the ultimate step of seismic exploration: the retrieval of pore pressure and rock properties. This would be of direct use for, say, the oil and gas industry. Other interesting applications are the description of the near surface/near ocean bottom, as a way to reduce drilling hazards, or non-destructive inspection of defects in oil and gas pipes, when ultrasounds are employed.","Elastic wavefield inversion; seismic imaging; 2-D inverse scattering","en","doctoral thesis","","","","","","","","","","","ImPhys/Quantitative Imaging","","",""
"uuid:58d2f768-20c2-48ea-9b54-efb94611cda6","http://resolver.tudelft.nl/uuid:58d2f768-20c2-48ea-9b54-efb94611cda6","Separating glacial isostatic adjustment and ice-mass change signals in Antarctica using satellite data","Engels, Olga (TU Delft Physical and Space Geodesy)","Klees, R. (promotor); Delft University of Technology (degree granting institution)","2017","The main goal of this thesis involves the development of a refined methodology to
separate the mass change signals associated with glacial isostatic adjustment (GIA)
from those of surface ice/firn by exploiting the strengths of independent data sets,
such as those from gravimetry, altimetry, climate data, and others. To achieve this,
various research efforts were conducted addressing specific aspects of the methodology and subsequent data processing. This led to a number of new contributions to the topic,","GIA; GRACE; Antarctica; ice-mass changes; time-varying trend; patch approach; ICESat; SMB","en","doctoral thesis","","978-94-6361-039-1","","","","","","2018-07-01","","","Physical and Space Geodesy","","",""
"uuid:4e441a1c-b5cb-4c2a-8dc4-edd64736013f","http://resolver.tudelft.nl/uuid:4e441a1c-b5cb-4c2a-8dc4-edd64736013f","Fast Model Predictive Control Approaches for Road Traffic Control","Han, Y. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Hegyi, A. (copromotor); Delft University of Technology (degree granting institution)","2017","Traffic congestion has become a global issue that has a significant impact on our society's productivity. Its negative effects not only lie in the travel delays and unsafe conditions that it brings to road users, but also in many aspects of our lives such as the air we all breathe. Construction and traffic management are typical alternatives for traffic researchers and practitioners to reduce congestion. Traffic management, which intends to make better use of existing infrastructure, is more economical and environmentally friendly and is becoming an increasingly preferred option. Dynamic traffic control proves to be efficient in the management of network traffic flows. This thesis focuses on the development of dynamic traffic control strategies to reduce congestion. Advanced dynamic traffic control strategies using model predictive control (MPC) approaches can considerably reduce traffic congestion. MPC for traffic systems utilizes a traffic model to predict traffic state evolutions based on the current states of the system, and determines the optimal control actions that result in the optimum value of an objective function. This feature enables the controller to take advantage of potentially larger future gains at a current (smaller) cost, so as to avoid myopic control actions...","road traffic; fast model predictive control","en","doctoral thesis","TRAIL Research School","978-90-5584-230-8","","","","TRAIL Thesis Series no. T2017/13, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:bc384229-4535-4965-ac8e-54c74f7e9aaa","http://resolver.tudelft.nl/uuid:bc384229-4535-4965-ac8e-54c74f7e9aaa","Paper in architecture: Research by design, engineering and prototyping","Latka, J.F. (TU Delft Teachers of Practice)","Eekhout, A.C.J. (promotor); Bac, Z. (promotor); Delft University of Technology (degree granting institution)","2017","Paper is a fascinating material that we encounter every day in different variants: tissues, paper towels, packaging material, wallpaper or even fillers of doors. Despite radical changes in production technology, the material, which has been known to mankind for almost two thousand years, still has a natural composition, being made up of fibres of plant origin (particularly wood fibres). Thanks to its unique properties, relatively high compression strength and bending stiffness, low production costs and ease of recycling, paper is becoming more and more popular in many types of industry.
Mass-produced paper products such as special paper, paperboard, corrugated cardboard, honeycomb panels, tubes and L- and U-shapes are suitable for use as a building material in the broad sense of these words – i.e., in design and architecture. Objects for everyday use, furniture, interior design elements and partitions are just a few examples of things in which paper can be employed. Temporary events such as festivals, exhibitions or sporting events like the Olympics require structures that only need to last for a limited period of time. When they are demolished after a few days or months, their leftovers can have a significant impact on the local environment.
In the context of growing awareness of environmental threats and the efforts undertaken by local and international organisations and governments to counter these threats, the use of natural materials that can be recycled after their lifespan is becoming increasingly widespread.
Paper and its derivatives fascinate designers and architects, who are always looking for new challenges and trying to meet the market’s demands for innovative and pro-ecological
solutions. Being a low-cost and readily available material, paper is suited to the production of emergency shelters for victims of natural and man-made disasters, as well as homeless persons.
In order to gain a better understanding of paper’s potential in terms of architecture, its material properties were researched on a micro, meso and macro level. This research of the possible applications of paper in architecture was informed by two main research questions:
What is paper and to what extent can it be used in architecture?
What is the most suitable way to use paper in emergency architecture?
To answer the first research question, fundamental and material research on paper and paper products had to be conducted. The composition of the material, production methods and properties of paper were researched. Then paper products with the potential to be used in architecture were examined. The history of the development of paper and its influence on civilisation helped the author gain a better understanding of the nature of this material, which we encounter in our lives every day. Research on objects for everyday use, furniture, pavilions and architecture realised in the last 150 years allowed the author to distinguish various types of paper design and paper architecture. Analysis of realised buildings in which paper products were used as structural elements and parts of the building envelope resulted in a wide array of possible solutions. Structural systems, types of connections between the various elements, impregnation methods and the functionalities and lifespan of different types of buildings were systematised. The knowledge thus collected allowed the author to conduct a further exploration of paper architecture in the form of designs and prototypes.
To answer the second research question, the analysed case studies were translated into designs and prototypes of emergency shelters.
During the research-by-design, engineering and prototyping phases, more than a dozen prototypes were built. The prototypes differed in terms of structural systems, materials used, connections between structural elements, impregnation methods, functionality and types of building. The three versions of the Transportable Emergency Cardboard House project presented in the final chapter form the author’s final answer to the second research question.
Paper will never replace traditional building materials such as timber, concrete, steel, glass or plastic. It can, however, complement them to a significant degree.","paper; potential of paper; properties; products; production","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-95-4","","","","A+BE | Architecture and the Built Environment No. 19 (2017)","","","","","Teachers of Practice","","",""
"uuid:ee9e2ff4-256b-4ed7-8049-a9ab86820cc8","http://resolver.tudelft.nl/uuid:ee9e2ff4-256b-4ed7-8049-a9ab86820cc8","Exploring urban flooding incidence through spatial information: A complementary view to support climate adaptation of lowland cities","Gaitan Sabogal, S. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); ten Veldhuis, Marie-claire (copromotor); Delft University of Technology (degree granting institution)","2017","Cities are vulnerable to local floods due to heavy rainfall. Urban flooding causes damage to buildings and contents, and also disturbs daily city activities as it entails drainage, transportation, and electricity interruptions. Urban flooding is expected to increase as climate change drives heavier rainfall events. Densification of population and assets, as well as infrastructure aging, increasingly hampers cities from tackling pluvial flooding. Climate adaptation measures can help cities to face the challenge of heavier weather and urban flooding. Examples of those measures are: smart drainage maintenance and emergency responses, urban climate-proofing and retrofitting, and provision of real-time flooding information to citizens and government officials, among others. To plan and perform such measures, it is necessary to know, and even to predict before the onset of a heavy storm, where, when, and why urban flooding occurs. However, this knowledge is not always available. The knowledge required to design and implement adaptation measures against urban flooding is insufficient in cities such as Amsterdam and Rotterdam. In these cities, urban drainage models are limited to certain districts or uncalibrated; they cannot validly predict where or when the drainage system will surcharge or flood, and thus, they cannot be used for flood damage modeling. Moreover, urban flooding may not only depend on hydraulic parameters of underground drainage systems; other physical and socioeconomic
characteristics of the urban fabric may also influence the flooding likelihood at a particular urban location. Urban flooding can be better understood by using non-hydraulic and unconventional sources of information. Available public data, curated by statistics, cadastral, or municipal call-center services, can provide insights about urban flooding damage. Using mainstream technology, such as web, traffic, and smart-phone cameras, can also yield valuable data about urban flooding impacts, which contributes to the development of climate adaptation measures in lowland cities. This dissertation aimed to determine the potential of such alternative data sources in better explaining urban flooding incidents. The methods employed combined techniques from geographic information systems, graph theory, community ecology, and computer vision. The exploration done in this research follows three main steps: testing previously proposed models, exploring currently available data sources, and evaluating the usefulness of attainable and affordable technology to gather key, nonexistent data about the timing, location, and extent of urban flooding incidents.","Urban flood modelling; Open spatial data; Data mining; Pattern recognition; Climate change adaptation","en","doctoral thesis","","978-94-6186-877-0","","","","","","","","","Water Resources","","",""
"uuid:ce04a07d-89fc-470a-9d1a-b6fae9182dae","http://resolver.tudelft.nl/uuid:ce04a07d-89fc-470a-9d1a-b6fae9182dae","Train Trajectory Optimization Methods for Energy-Efficient Railway Operations","Wang, P. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Goverde, R.M.P. (promotor); Delft University of Technology (degree granting institution)","2017","Even though rail is more energy efficient than most other transport modes, the enhancement of energy efficiency is an important issue for railways in order to further reduce their contribution to climate change, as well as to save costs and enhance their competitive advantages. This thesis is motivated by the challenges in improving energy efficiency of train operations. The main objectives are to develop the modelling and solution methods for the train trajectory optimization problem to improve the model accuracy and the computation time, to apply the methods in a train driver advisory system development, and to develop a multi-train trajectory optimization method to solve the delay recovery and the energy-efficient timetabling problem.","Optimizing Train Trajectory; Energy Efficiency; Genetic Algorithms","en","doctoral thesis","TRAIL Research School","978-90-5584-231-5","","","","TRAIL Thesis Series no. T2017/12, the Netherlands TRAIL Research School","","2017-12-31","","","Transport and Planning","","",""
"uuid:94b5831b-d22d-4fb2-8122-4d4300ae4526","http://resolver.tudelft.nl/uuid:94b5831b-d22d-4fb2-8122-4d4300ae4526","Mainstream anammox, potential & feasibility of autotrophic nitrogen removal","Hoekstra, M. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Kleerebezem, R. (copromotor); Delft University of Technology (degree granting institution)","2017","Currently, wastewater treatment plants (WWTPs) consume a lot of energy and surface area, while the incoming water contains chemical energy (BOD) and reusable resources that are not effectively utilized. The ideal is to develop a treatment scheme which allows for the efficient removal of pollutants while minimizing the energy input and maximizing the recovery of energy and resources present in the wastewater. This thesis describes the potential and feasibility of implementing the partial nitritation/anammox (PN/A) process in the mainstream of a municipal WWTP. Implementation of this technology will allow a complete re-design of the conventional wastewater treatment scheme from an energy-consuming into an energy-producing system.
In wastewater treatment plants nitrogen is currently removed in two sequential microbial conversions: nitrification and denitrification. For the nitrification step oxygen is needed and for the denitrification step anoxic conditions and BOD are required. The PN/A technology can be used to optimize the municipal mainstream wastewater treatment technology. In the PN/A process the incomplete oxidation of ammonium to nitrite (by aerobic ammonium oxidising bacteria, AOB) is combined with the anaerobic ammonium oxidation (by anammox bacteria). The first advantage is that, due to the autotrophic nature of the pathways used, there is no longer a need for carbon to remove nitrogen through denitrification. The carbon in the wastewater can therefore be used for other purposes, for instance the production of biogas. A second advantage of the PN/A technology is the use of biofilms for (part of) the biomass. Biofilms/granules can lead to higher biomass concentrations in the reactor and therefore higher volumetric loading rates can be applied. Biofilms are easier to separate from water compared to sludge flocs, so a more compact sludge retention system can be built (compared to current secondary clarifiers). Thirdly, all nitrogen conversions can take place in the same reactor, omitting the two different zones/tanks for nitrification/denitrification.
Bayesian probability theory is fairly well known and well established. It is not only a powerful tool for data analysis, but may also function as a model for the way we (implicitly) do induction, that is, the way we make plausibility judgments on the basis of incomplete information. In Part I of this thesis we will make the case that Bayesian probability theory is nothing but common sense quantified.
The Bayesian decision theory, as proposed in this thesis, derives directly from Bayesian probability theory. In this decision theory we compare utility probability distributions, which are constructed by way of assigning utilities, that is, subjective worths, to the objective outcomes of our outcome probability distributions, which are derived by way of Bayesian probability theory.
When the outcomes under consideration are monetary, then we may use the Weber-Fechner law of psychophysics, or, equivalently, Bernoulli's utility function, to assign utilities to these outcomes. This mapping of outcomes to utilities, transforms our outcome probability distributions to their corresponding utility probability distributions.
That utility probability distribution which is located more to the right on the utility axis will tend to be, depending on the context of our problem of choice, either more profitable or less disadvantageous than the utility probability distribution that is more to the left. So, we will tend to prefer that decision which 'maximizes' our utility probability distributions. This then, in a nutshell, is the whole of our Bayesian decision theory. In Part II of this thesis, we will apply the Bayesian decision theory to both investment and insurance problems.
Not all questions are equal; some questions, when answered, may give us more information than others. Stated differently, questions may differ in their relevancy, in relation to some issue of interest we wish to see resolved. This is borne out by the well-known adage that 'to know the question is to have gone half the journey'.
Bayesian information theory, by way of a mathematical operationalization of the concept of a question, allows us to determine which question, when answered, will be the most informative in relation to some issue of interest. The Bayesian information theory does this by assigning relevancies to the questions under consideration. These relevancies are then operated upon, by way of the information theoretical product and sum rules, in order to determine the relevancy of some question in relation to the issue of interest.
The Bayesian information theory constitutes an expansion of the 'canvas of rationality', and, consequently, of the range of psychological phenomena which are amenable to mathematical analysis. For example, we may assign relevancies not only to questions, but also to the messages that are communicated to us by some source of information.
The relevancy of a message represents the usefulness of that message, when received, in determining some issue of interest. By assigning a relevancy to the message, we indirectly assign a relevancy to the source of information itself; possible examples of sources of information being the media, scientists, and governmental institutions. In Part III of this thesis, we will give an information theoretical analysis of a simple risk communication problem.
Bayesian probability has its axiomatic roots in lattice theory, as the product and sum rule of Bayesian probability theory may be derived by way of consistency requirements on the lattice of statements. One may derive, likewise, by way of consistency requirements on the lattice of questions, the product and sum rules of Bayesian information theory.
So, if we choose rationality, that is, consistency requirements on lattices, as our guiding principle in the derivation of our theories of inference, then we get on the one hand a Bayesian probability theory, with as its specific application a Bayesian decision theory, and on the other hand we get a Bayesian information theory. In doing so, we obtain a comprehensive, coherent, and powerful framework with which to model human reasoning, in the widest sense.","Bayesian; Decision Theory; Expected Utility Theory; Bernoulli; Weber-Fechner Law; Allais Paradox; Ellsberg Paradox; Probability Theory; Inquiry Calculus; Information Theory; Prospect Theory","en","doctoral thesis","","978-90-9030716-9","","","","","","","","","Safety and Security Science","","",""
"uuid:36b80a78-ffd2-4a3d-a6b3-9bbc188b255e","http://resolver.tudelft.nl/uuid:36b80a78-ffd2-4a3d-a6b3-9bbc188b255e","Quadruple-Junction Thin-Film Silicon-Based Solar Cells","Si, F.T. (TU Delft Photovoltaic Materials and Devices)","Zeman, M. (promotor); Isabella, O. (copromotor); Delft University of Technology (degree granting institution)","2017","The direct utilization of sunlight is a critical energy source in a sustainable future. One of the options is to convert the solar energy into electricity using thin-film silicon-based solar cells (TFSSCs). Solar cells in a triple-junction configuration have exhibited the highest energy conversion efficiencies within the thin-film silicon photovoltaic technology. Going further from the state-of-the-art device structures, this thesis works on the concept of quadruple-junction TFSSCs, and explores the potential and feasibility of such a configuration.
The initial experimental realization of quadruple-junction TFSSCs is demonstrated in Chapter 2. The fabricated thin-film a-SiOx:H/a-Si:H/nc-Si:H/nc-Si:H solar cells showed favorable fill factors (FF) and exceptionally high open-circuit voltages (VOC) up to 2.91 V, suggesting a high quality of the material depositions and of the process control. Optical simulations were used in the design of the device structure, to precisely control the thickness and optical absorption in the layers. This preliminary experiment indicated how improvements can be made by better light management.
The spectral response of the component subcells is important information for the study of multi-junction solar cells, and the accurate measurement of such properties turns out to be challenging. Chapter 3 analyzes the mechanism of the spectral response measurement of multi-junction solar cells, by means of modeling the optoelectrical response of the subcells and their internal interactions. The formation of measurement artifacts, and their dependence on cell properties and measurement conditions, are elucidated. The analyses lead to comprehensive guidelines on how to conduct a trustworthy measurement and sensible data interpretation.
Absorbing semiconductor materials with different bandgaps are desirable for multi-junction solar cells. Thin-film a-SiGex:H cells have been developed to accommodate an absorber material with an intermediate bandgap between that of a-Si:H and nc-Si:H. Chapter 4 reports the development of a-SiGex:H cells using mixed-phase SiOx:H materials in the doped layers. Bearing the band alignment in mind, the optimization of p- and n-type SiOx:H layers resulted in satisfactory device performance. The use of SiOx:H p- and n-layers offers great flexibility when integrating the cell in a multi-junction solar cell.
Chapter 5 describes the development of quadruple-junction TFSSCs using four different absorber materials. The thin-film wide-gap a-Si:H/narrow-gap a-Si:H/a-SiGex:H/nc-Si:H solar cells promote reasonable spectral utilization because of the descending bandgap along the direction of light incidence. The tunnel recombination junctions between the subcells have been optimized to ensure effective interconnections and thus the proper functioning of the multi-junction device. Advanced light management, which involved the use of a modulated surface-textured front electrode, was employed to enhance the optical performance. These investigations reveal the potential of quadruple-junction TFSSCs.
Chapter 6 evaluates the benefit of multi-junction solar cells with different numbers of subcells. The gains and losses inherent in adding more subcells have been critically assessed from the optical and electrical points of view. The effects of optical reflection, parasitic absorption, tunnel recombination junctions, and filtered illumination in multi-junction cells on the performance were investigated. In general, all types of losses increase with the number of subcells. Among them, the filtered illumination in the subcells can play a significant role in the case of a large number of subcells. These results show that such comprehensive analysis helps to judge whether it is reasonable to develop a multi-junction solar cell with a certain structure.","solar cells; photovoltaics; thin-film; silicon-based; multi-junction solar cells; spectral response measurements; quadruple-junction; amorphous silicon materials","en","doctoral thesis","","978-94-028-0869-8","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:5fc08214-3d74-434d-ba02-097d221e26ea","http://resolver.tudelft.nl/uuid:5fc08214-3d74-434d-ba02-097d221e26ea","A mediated reality suite for spatial interaction: Symbiosis of physical and virtual environments for forensic analysis","Poelman, R. (TU Delft System Engineering)","Verbraeck, A. (promotor); Lukosch, S.G. (copromotor); Delft University of Technology (degree granting institution)","2017","This thesis is concerned with the creation of a hard- and software artifact that allows professionals to collaborate in a digitally augmented physical space, otherwise known as mediated reality or physical computing. On-premise professionals are supported with sensors and human computer interaction capabilities while freely walking around. A significant amount of time was devoted to the creation of this artifact, but the main goal is the development of a new way of collaboration.","","en","doctoral thesis","","978-94-028-0880-3","","","","","","","","","System Engineering","","",""
"uuid:98b5a5cd-839a-4b6d-9e66-cfe3aa75636a","http://resolver.tudelft.nl/uuid:98b5a5cd-839a-4b6d-9e66-cfe3aa75636a","Occupant behavior and energy consumption in dwellings: An analysis of behavioral models and actual energy consumption in the dutch housing stock","Bedir, M. (TU Delft Design Informatics)","Sarıyıldız, I.S. (promotor); Visscher, H.J. (promotor); Delft University of Technology (degree granting institution)","2017","Much is known about the increasing levels of energy consumption and environmental decay caused by the built environment. Also, more and more attention is paid to the energy consumption of dwellings, from the early design stage until the occupants start living in them. The increasing complexity of building technologies, the occupants’ preferences, and their needs and demands make it difficult to achieve the targeted energy consumption levels. The goal of reducing the energy consumption of dwellings and understanding the share of occupant behavior in it form the context of this research.
Several studies have demonstrated the ‘energy performance gap’ between the calculated and the actual energy consumption levels of buildings, and have explored the reasons for it. The energy performance gap is either caused by calculation drawbacks, uncertainties of modeling weather conditions, construction defects regarding air tightness and insulation levels, or by occupant behavior. This research focuses on the last aspect, i.e. analyzing the relationship between occupant behavior and energy consumption in dwellings, understanding the determinants of energy consumption, and finding occupants’ behavioral patterns.
There are several dimensions of occupant behavior and energy consumption of dwellings: dwelling characteristics including the energy and indoor comfort management systems, building envelope, lighting and appliances; occupant characteristics including the social, educational and economic; and actual behavior, including the control of heating, ventilation and lighting of spaces, and appliance use, hot water use, washing, bathing, and cleaning. Attempting to understand this complexity calls for a methodology that covers both quantitative and qualitative methods, and both cross-sectional and longitudinal data collection, working in an interdisciplinary manner across the domains of design for sustainability, environmental psychology, and building and design informatics.
"uuid:46f0b6e6-5592-4b05-983b-a04c8f0f88a8","http://resolver.tudelft.nl/uuid:46f0b6e6-5592-4b05-983b-a04c8f0f88a8","Water stress detection using radar","van Emmerik, T.H.M. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); Steele-Dunne, S.C. (copromotor); Delft University of Technology (degree granting institution)","2017","Vegetation is a crucial part of the water and carbon cycle. Through photosynthesis carbon is assimilated for biomass production, and oxygen is released into the atmosphere. During this process, water is transpired through the stomata, and is redistributed in the plant. Transpired water is refilled by uptake of water from the root zone in the subsurface. Transpiration by vegetation accounts for most of the total evaporation from land on a global scale. In some ecosystems, such as tropical rainforests, transpiration even makes up more than 70% of total evaporation.
Periods of low water availability, known as water stress, lead to irreversible damage to plants, and can eventually lead to plant death. To prevent this, vegetation activates various mechanisms to survive. Transpiration is reduced as a result of vegetation water stress, which can affect the water and carbon cycle on local, regional, and even global scales. Additionally, water stress in crops is one of the major causes of harvest losses, threatening food security. However, many effects of vegetation water stress on crops and tropical forests remain poorly understood.
New satellite observations provide opportunities for better detection and understanding of vegetation water stress. Recent research suggests that radar remote sensing might yield valuable insights into vegetation water content. Radar backscatter is sensitive to vegetation because of direct backscatter from the canopy, and through two-way attenuation of the signal as it travels through the vegetation layer. The degree of interaction of radar waves with the vegetation is mainly a function of the vegetation dielectric constant, which is in turn primarily influenced by vegetation water content.
Over the last years, various studies have reported links between anomalies in radar backscatter and vegetation water stress. This has led to the hypothesis that radar backscatter is sensitive to vegetation water stress. Additional field measurements of vegetation water content and dielectric constant, in combination with radar backscatter are necessary to test this hypothesis. This is what inspired this thesis. Based on a combination of field measurements using new sensors, models, and radar backscatter, this thesis focuses on understanding the effects of water stress on plant dynamics, identifying early signatures of vegetation water stress, and exploring the opportunities of early water stress detection using radar remote sensing.
This thesis studies the effects of vegetation water stress across scales, from individual leaves to rainforests. A new method is presented that allows measurements of leaf dielectric properties on living plants. First, the method is tested on tomato plants in a controlled environment. By measuring tomato plants with and without water stress, it is demonstrated that there is a significant difference in the leaf dielectric properties of stressed and unstressed tomato plants. Second, this same method is used under field conditions. Using data sets of corn plants with and without water stress, it is demonstrated that water stress changes plant water content, resulting in significant changes of leaf dielectric properties. Using the field data from the stressed corn field, a modeling study was done to investigate the sensitivity of radar backscatter to water stress. Here, it is shown that total and leaf water content can change considerably during the day, leading to observable differences in radar backscatter.
To study the effects of water stress in tropical rainforests, accelerometers were placed on trees in the Brazilian Amazon to measure tree sway. Tree sway depends on various tree properties, and this thesis demonstrates that the measured tree acceleration is sensitive to tree mass, intercepted rainfall, and tree-atmosphere interactions. Using five months of acceleration data from 19 trees, an effect of the transition from the wet to the dry season was found. This thesis hypothesizes that this was caused by water related changes in tree mass, or leaf fall in response to increased tree water deficit.
Finally, coinciding field data on tree water content and tree water deficit, and radar backscatter, were used to demonstrate the sensitivity of radar backscatter to increased water stress. During the transition from wet to dry season, a strong drop was found in radar backscatter, which is explained by a rapid increase in measured tree water deficit.
For years, the hypothesis that radar backscatter is sensitive to vegetation water stress has been discussed. Yet, a lack of observations prevented this hypothesis from being tested. This thesis uses field data of crops and trees in tropical forests, together with modeling approaches, to finally demonstrate that vegetation water stress results in significant changes in plant water status, which lead to observable variations in radar backscatter.","radar; vegetation; water stress; hydrology; amazon; maize; corn; rainforest; tomato; dielectric constant; dielectric properties; remote sensing","en","doctoral thesis","","978-90-6824-060-3","","","","","","","","","Water Resources","","",""
"uuid:53096fa6-80ef-43a8-9fe9-d123b15d9efe","http://resolver.tudelft.nl/uuid:53096fa6-80ef-43a8-9fe9-d123b15d9efe","A new computational approach towards the simulation of concrete structures under impulsive loading","Pereira, Luis (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Weerheijm, J. (copromotor); Delft University of Technology (degree granting institution)","2017","Extraordinary actions such as blast loadings and high velocity impact are rare, but usually have devastating effects. Thus, making critical infrastructures, such as military and governmental facilities, power-plants, dams, bridges, hospitals, etc., more resilient against these hazards is one of the best ways to protect ourselves and our societies. Since concrete is a very common construction material, the development of realistic numerical tools to efficiently simulate its failure behavior under extreme dynamic loading conditions is of paramount importance, but still a major challenge.
This thesis presents a new stress-based nonlocal effective rate-dependent damage model, developed to simulate the dynamic response and failure of concrete during ballistic impact. The proposed isotropic damage formulation combines the effect of three damage modes: (i) tension (mode I), (ii) compressive-shear (mode II and mixed-mode) and (iii) hydrostatic damage to describe crushing of the cement matrix under pressure. The strain-rate dependent update of the constitutive relations to express the dynamic increase of strength and fracture energy in tension and compression is made a function of an effective rate, instead of the commonly used instantaneous strain rate. An enhanced version of the stress-based nonlocal regularization scheme is used to correct spurious mesh sensitivity. The proposed model was developed solely in the effective strain-space, following an entirely explicit computation scheme.","Concrete; Ballistic impact; Hydrostatic damage; Stress-based nonlocal; Effective rate","en","doctoral thesis","","978-94-028-0848-3","","","","","","","","","Applied Mechanics","","",""
"uuid:dbaa874a-69b5-441a-bd56-2b7bb1c01c17","http://resolver.tudelft.nl/uuid:dbaa874a-69b5-441a-bd56-2b7bb1c01c17","Work Floor Experiences of Supply Chain Partnering in the Dutch Housing Sector","Venselaar, M.H. (TU Delft Housing Management)","Gruis, V.H. (promotor); Wamelink, J.W.F. (promotor); Delft University of Technology (degree granting institution)","2017","This book is about work floor experiences of professionals at work floors of housing organizations in The Netherlands in their attempts to apply supply chain partnering. I did not only choose this topic because of its academic and practical relevance. The choices I made, and the personal motivation behind those choices say a lot about what has driven me to do this research. Therefore, this prologue focusses on the experiences that have led me to conducting this PhD-research. In answering this question, I will describe a few milestones that I consider important events that led me towards this research.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-89-3","","","","A+BE | Architecture and the Built Environment No. 15 (2017)","","","","","Housing Management","","",""
"uuid:40d94dfa-75f8-4b72-b313-3e72c782b9f9","http://resolver.tudelft.nl/uuid:40d94dfa-75f8-4b72-b313-3e72c782b9f9","Fast calculation of electrical transients in power systems after a change of topology","Thomas, R. (TU Delft Intelligent Electrical Power Grids)","van der Sluis, L. (promotor); Vuik, Cornelis (promotor); Lahaye, D.J.P. (copromotor); Delft University of Technology (degree granting institution)","2017","A power system is composed of various components such as generators, transformers, transmission lines, switching devices and loads, each with its own mathematical model and graphical representation. Sometimes, a power system’s change of topology occurs due to events like short circuits, lightning striking a transformer, or a reconfiguration of the transmission system. In this thesis, a new way of simulating large scale power systems is presented from the modeling point of view. In the literature, many modeling methods and mathematical tools are available to tackle this subject. However, this thesis mainly focuses on the time domain simulation of large scale power systems - and in particular, transients which appear after a change of topology. A change of topology in electrical networks impacts time domain simulations on two levels. The first impact is that the set of equations must be updated or re-computed. The computation time of this action can be significant - especially for large scale power systems. The second impact of this change of topology is the transient that will occur. Usually, this change requires numerically computing fast oscillations in currents and voltages until they reach a new steady state...","Power system; Electrical transient; Modeling methods; Ordinary differential equations; Integration methods; Runge-Kutta methods; linear solvers","en","doctoral thesis","","978-94-6299-791-2","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:989257c4-0001-4f57-ad78-08f1180def7b","http://resolver.tudelft.nl/uuid:989257c4-0001-4f57-ad78-08f1180def7b","Modeling of atomic layer deposition on nanoparticle agglomerates","Jin, W. (TU Delft ChemE/Product and Process Engineering)","Kleijn, C.R. (promotor); van Ommen, J.R. (copromotor); Delft University of Technology (degree granting institution)","2017","Nanoparticles are increasingly applied in a range of fields, such as electronics, catalysis, energy and medicine, due to their small sizes and consequent high surface-volume ratio. In many applications, it is attractive to coat the nanoparticles with a layer of different materials in order to gain new functionalities. For instance, a coated layer can modify the chemical properties of the nanoparticles, protect the core material resulting in increased stability, facilitate the biofunctionalization, etc. Atomic layer deposition (ALD) is a gas-phase technique that can form an ultrathin solid film on a range of substrates. It utilizes two self-limiting surface reactions applied in an alternating sequence. By controlling the number of applied cycles, the thickness of the coated layer can be controlled with nanometer precision. Several experimental reports in literature have shown that applying ALD to nanoparticles using a fluidized bed is a promising way of producing large quantities of coated nanoparticles. Fluidization is a gas-phase technique that can process large quantities of particles by suspending them in an upward gas stream. It provides good gas-solid mixing, scale-up potential, and allows continuous processing. However, due to the strong cohesive forces between particles, nanoparticles cluster into large agglomerates when fluidized. These agglomerates have a complex, hierarchical structure, which has been commonly described as fractal for their self-similarity under different length scales. 
During the ALD process, the precursors have to diffuse into such structures to reach the surface of inner particles.","nanoparticle agglomerate; atomic layer deposition","en","doctoral thesis","","978-94-6186-866-4","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:349c75cd-11cb-41b2-a403-ce4b1dd3be0d","http://resolver.tudelft.nl/uuid:349c75cd-11cb-41b2-a403-ce4b1dd3be0d","Early stage precipitation in aluminum alloys: An ab initio study","Zhang, X. (TU Delft (OLD) MSE-7)","Thijsse, B.J. (promotor); Sluiter, M.H.F. (copromotor); Delft University of Technology (degree granting institution)","2017","Multiscale computational materials science has reached a stage where many complicated phenomena or properties that are of great importance to manufacturing can be predicted or explained. The term “ab initio study” has become commonplace, as the development of density functional theory has enabled predictions that are independent of experimental data or empirical parameters. For some crucial phenomena, e.g., precipitation processes in multicomponent alloys, however, challenges exist due to the requirement of an accurate and efficient description of both energetics and kinetics of a complex system. In the present thesis, a systematic methodology has been established for predicting the morphology and realistic formation kinetics of precipitates in multicomponent alloys. Aluminum alloys are chosen as prototype applications of the present methodology, because of the well-known strengthening mechanism—age or precipitation hardening—which is a typical and important precipitation process utilized in industrial materials. As one of the main computational approaches, the cluster expansion technique is applied to study vacancy properties in concentrated Cu-Ni alloys. Diffusion kinetics in dilute Al-Cu alloys, including the role of multiple diffusion barriers, has been investigated by kinetic Monte Carlo simulations. At finite temperature, the electronic entropy contribution to the free energies of the transition metals is also discussed.","Aluminum alloy; precipitation; ab initio; cluster expansion; kinetic Monte Carlo simulation","en","doctoral thesis","","978-94-028-0881-0","","","","","","","","","(OLD) MSE-7","","",""
"uuid:e372e4db-eb77-4b80-8ed4-f56991270e48","http://resolver.tudelft.nl/uuid:e372e4db-eb77-4b80-8ed4-f56991270e48","From New to Normal: Designing for product category legitimacy","Bork, S. (TU Delft Marketing and Consumer Research)","Schoormans, J.P.L. (promotor); Silvester, S. (copromotor); Joore, Peter (copromotor); Delft University of Technology (degree granting institution)","2017","A large number of new products and services, such as electric cars, solar panels, and electric boats are introduced and play an important role during sustainability transitions. However, when introduced, these new products do not fit with established aspects of society such as expectations, habits, and infrastructures, leading to low levels of legitimacy. Based on the definition of Suchman (1995) I define the legitimacy of products as: The generalized perception or assumption that the use of a product category is appropriate, proper, or desirable within a specific context of physical objects and elements and a socially constructed system of norms, values, rules, beliefs, and expectations. Legitimacy is seen as a prerequisite for large-scale adoption of new products and services...","","en","doctoral thesis","","978-94-92801-13-5","","","","","","","","","Marketing and Consumer Research","","",""
"uuid:7457ca32-f859-48a5-b906-22bde395d928","http://resolver.tudelft.nl/uuid:7457ca32-f859-48a5-b906-22bde395d928","Energy Management in Smart Cities","Calvillo Muñoz, C.F. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Sanchez Miralles, A. (promotor); Delft University of Technology (degree granting institution)","2017","Models and simulators have been widely used in urban contexts for many decades. The drawback of most current models is that they are normally designed for specific objectives, so the elements considered are limited and potential synergies between related systems are not taken into account. The need for a framework to model complex smart city systems with a comprehensive smart city model has been noted by many authors.
Therefore, this PhD thesis presents: i) a general conceptual framework for the modelling of energy related activities in smart cities, based on determining the spheres of influence and intervention areas within the city, and on identifying agents and potential synergies among systems, and ii) the development of a holistic energy model of a smart city for the assessment of different courses of action, given its geo-location, regulatory and technical constraints, and current energy markets. This involves the creation of an optimization model that permits the optimal planning and operation of energy resources within the city.","Energy system model; Smart city; Distributed energy resources; Distributed generation; EV; Metro; Energy markets; Planning and operation model","en","doctoral thesis","","978-84-697-6448-0","","","","The doctoral research has been carried out in the context of an agreement on joint doctoral supervision between Comillas Pontifical University (Madrid, Spain), KTH Royal Institute of Technology (Stockholm, Sweden) and Delft University of Technology, (Delft, the Netherlands).","","","","","Energie and Industrie","","",""
"uuid:00e46bd1-eef8-4411-8626-fbd902749904","http://resolver.tudelft.nl/uuid:00e46bd1-eef8-4411-8626-fbd902749904","Self-Healing Polymer Composites","Post, W. (TU Delft Novel Aerospace Materials)","van der Zwaag, S. (promotor); Garcia, Santiago J. (copromotor); Delft University of Technology (degree granting institution)","2017","Over the past decade, many different strategies were developed to introduce self-healing properties in structural polymer composite materials. Although many of the investigated routes show promising results on an academic scale, the first commercially feasible self-healing structural polymer composite has yet to be developed. The main objective of this thesis is to help close the gap between the academic concept of structural self-healing composites and its full industrial implementation. As such, each chapter targets one of the scientific issues, within some of the most promising healing strategies available, that currently prevent the industrial acceptance of self-healing polymer composites.","Self healing; Polymer composites","en","doctoral thesis","","978-94-6295-811-1","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:49c6fe9d-2d29-420a-91a2-a97e2049e15e","http://resolver.tudelft.nl/uuid:49c6fe9d-2d29-420a-91a2-a97e2049e15e","Strategic Conformance: Exploring Acceptance of Individual-Sensitive Automation for Air Traffic Control","Westin, C.A.L. (TU Delft Control & Simulation)","Mulder, Max (promotor); Borst, C. (copromotor); Delft University of Technology (degree granting institution)","2017","Like many complex and time-critical domains, air traffic control (ATC) is facing a fundamental modernization that builds on the use of more advanced automation (represented by SESAR in Europe and NextGen in the United States). The current function allocation-based relationship between controller and machine is envisioned to evolve to a more fluid, continuous and mutually coordinated team relationship. Consequently, the controller is expected to assume a supervisory and monitoring role, while relinquishing much of the tactical “hands-on” tasks to automation. ATC automation, in turn, is expected to grow in intelligence and cognitive abilities, becoming more of a team member that provides decision support and acts more autonomously. In association with these changes, one of the most pressing human factors challenges is how we can design automation that is embraced, accepted and trusted by the controller...","Acceptance; Air Traffic Control; Automation; Decision Aid; Decision-Making; Personalisation; Strategic Conformance; Transparency","en","doctoral thesis","","978-94-6299-659-5","","","","","","","","","Control & Simulation","","",""
"uuid:97c63275-d4fa-4984-a818-55feb015820a","http://resolver.tudelft.nl/uuid:97c63275-d4fa-4984-a818-55feb015820a","Toekomstbestendig renoveren","Brinksma, H. (TU Delft Housing Management)","Gruis, V.H. (promotor); van der Flier, C.L. (copromotor); Delft University of Technology (degree granting institution)","2017","Homes are renovated a number of times during their lifespan. Although we can regard each of these renovations as new, it is more prudent to implement a future-proof solution to renovation.
The purpose of this study is to gain an insight into how future-proof renovation solutions are for homes built between 1975 and 1991 that are currently being carried out or offered on the market. The study adopts a primarily architectural viewpoint to examine the hypothesis that we first need to be aware of what is architecturally possible and relevant, before it makes sense to answer any further questions.","housing; renovation; future-proof renovation; sustainability","nl","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-83-1","","","","A+BE | Architecture and the Built Environment No 13 (2017)","","","","","Housing Management","","",""
"uuid:98c73893-a400-4fb2-9266-a9b5e19ccc69","http://resolver.tudelft.nl/uuid:98c73893-a400-4fb2-9266-a9b5e19ccc69","Reliability of transmission networks: Impact of EHV underground cables & interaction of offshore-onshore networks","Tuinema, B.W. (TU Delft Intelligent Electrical Power Grids)","van der Meijden, M.A.M.M. (promotor); van der Sluis, L. (promotor); Rueda, José L. (copromotor); Delft University of Technology (degree granting institution)","2017","For the future, several developments of the power system are expected. The transition towards a more sustainable energy supply puts new requirements on the design and operation of power systems, and the transmission network in particular. Offshore, a transmission grid will be implemented to collect large-scale wind energy and to interconnect national power systems. Onshore, the transmission network needs to be reinforced to transmit the large-scale renewable energy to the main load centers. As all these developments will impact the reliability of transmission networks, it is important to study and quantify these impacts in reliability analysis.
Traditional overhead lines are facing more and more opposition from local society, such that underground extra-high voltage (EHV) cable connections become a promising alternative for future grid extension. As EHV underground cables are a relatively new technology, not much is known yet about their behavior in large transmission networks. Underground cable connections (consisting of cables, joints and terminations) are in general less reliable than traditional overhead lines, mainly because of their much longer repair time. This can negatively influence the reliability of the transmission network as well. In this thesis, the reliability of EHV underground cables and overhead lines is compared, from the level of individual connections to transmission network level. The thesis studies which factors have an influence and which measures can be taken to improve the reliability.
For offshore grids, network redundancy is a topic of discussion. Implementing offshore redundancy can be costly, but no network redundancy might lead to large capacity outages when connecting large offshore wind farms. In this thesis, the reliability of various offshore configurations is compared and it is discussed which level of redundancy is optimal. The consequences of offshore network outages for the operation of the onshore power system are considered as well. Recommendations for the implementation of offshore redundancy are given.
The studies in this thesis show the importance of an integrated reliability analysis, in which network design and system operation are combined, to find the optimal solution. Decisions on network design will have consequences for system operation and vice versa. The studies in this thesis show how the reliability of underground cables and offshore networks can be assessed, which factors have an influence, and what measures can be taken to improve power system reliability in the future.","reliability analysis; EHV underground cables; offshore networks","en","doctoral thesis","","978-94-6299-778-3","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:6d4208be-65c1-43e8-afa0-5019f22c6167","http://resolver.tudelft.nl/uuid:6d4208be-65c1-43e8-afa0-5019f22c6167","Analysis and Remediation of the Salinized, Damour Coastal (Dolomitic) Limestone Aquifer in Lebanon","Khadra, W.M. (TU Delft Geo-engineering)","Stuyfzand, Pieter Jan (promotor); Delft University of Technology (degree granting institution)","2017","Coastal aquifer management has recently emerged as a main focus of groundwater hydrology, especially in arid and semi-arid zones. About two thirds of the human population currently live close to shorelines, relying on coastal groundwater resources. Worldwide, these systems are subject to quality deterioration due to a multitude of anthropogenic impacts and subsequent saltwater intrusion (SWI).
Many hydrological and hydrochemical features of SWI have been disclosed during the past century through numerous case studies, column studies, scale models, flow and reactive transport modeling. Yet, many scientific and engineering challenges remain, some of which need to be addressed for a better prospecting of future coastal freshwater reserves. The scope of this thesis is to contribute to the analysis and remediation of SWI by studying the following aspects: (1) response of carbonate aquifers with varying Ca/Mg content to SWI, (2) behavior of trace elements (TEs) where fresh and intruded seawater mix, (3) derivation of groundwater baseline levels in polluted settings, notably salinized aquifers, (4) identification and quantification of major hydrogeochemical processes stimulated by SWI, (5) reliability of complex models (especially in karst) with variable-density and solute transport formulations, and (6) feasibility of SWI mitigation strategies.
To achieve this scope, some existing tools have been adapted and new tools developed. Together, they offer a useful toolbox for investigating SWI anywhere. They were successfully applied to a stressed dolomitic limestone aquifer system in Lebanon (Eastern Mediterranean), suffering from salinization and other minor anthropogenic impacts.","Baseline chemistry; System Analysis; Groundwater mapping; Multi-tracing; Salinization; Hydro(geo)chemistry; Trace elements; Reactive transport modeling; Karst; CDC approach; Time Series; High recovery RO; Groundwater deterioration; Non-conventional water resources; Urban water system; Managed aquifer recharge; Fresh-keeper wells; Lebanon","en","doctoral thesis","","978-94-6186-861-9","","","","","","","","","Geo-engineering","","",""
"uuid:7ed71021-e0f6-4d65-92aa-791b3e9fa817","http://resolver.tudelft.nl/uuid:7ed71021-e0f6-4d65-92aa-791b3e9fa817","Flexibility in adaptation planning: When, where and how to include flexibility for increasing urban flood resilience","Radhakrishnan, M. (TU Delft Hydraulic Structures and Flood Risk)","Zevenbergen, C. (promotor); Pathirana, A (copromotor); Delft University of Technology (degree granting institution)","2017","The magnitude and urgency of the need to adapt to climate change is such that addressing it has been taken up by the United Nations as one of the sustainable development goals - Goal 13 (SDG13) in 2015. SDG13 emphasises the need to strengthen resilience and adaptive capacity to climate related hazards and natural disasters. Coping with urban floods is one of the major needs of climate adaptation, where integration of climate change responses into flood risk management policies, strategies and planning at international, national, regional and local levels is now the norm. However, much of this integration lacks effectiveness or real commitment from stakeholders involved in adaptation planning and implementation. Hence this research has focused on integrating flexibility based adaptation responses into an urban flood risk management context. The research has synthesised flexible adaptation practices from several disciplines including information technology, automobile and aerospace manufacturing. The outcomes of the research are brought together in a framework for structuring local adaptation responses and an adaptation planning process based on flexibility concepts. 
The outcomes provide a way to assist with the identification of the appropriate nature and type of flexibility required; where flexibility can best be incorporated; and when is the most appropriate time to implement the flexible adaptation responses in the context of urban flooding.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-8153-5729-2","","","","Dissertation submitted in fulfilment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:4f081b83-b03f-4358-abb3-efab1111aff4","http://resolver.tudelft.nl/uuid:4f081b83-b03f-4358-abb3-efab1111aff4","Estimating the impacts of urban growth on future flood risk: A comparative study","Veerbeek, W. (TU Delft Hydraulic Structures and Flood Risk)","Zevenbergen, C. (promotor); Delft University of Technology (degree granting institution)","2017","The unprecedented growth of cities has a significant impact on future flood risk that might exceed the impacts of climate change in many metropolitan areas across the world. Although the effects of urbanisation on flood risk are well understood, assessments that include spatially explicit future growth projections are limited. This comparative study provides insight into the long term development of future riverine and pluvial flood risk for 18 fast growing megacities. For these cities a spatially explicit urban growth model has been developed, capable of identifying and extrapolating spatial development trends into growth projections for the short, medium and long term. For some cities like Dhaka or Lahore, the outcomes are alarming while others show an implicit tendency for flood adverse urban growth. The outcomes not only provide a baseline absent in current practice, but also a strategic outlook that might better establish the role of urban planning in limiting future flood risk.","Urban growth; urban flood management; flood risk; land use change; urban modelling; assessment","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-8153-5733-9","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:9f560388-49fd-48ba-8929-9744f549e1bc","http://resolver.tudelft.nl/uuid:9f560388-49fd-48ba-8929-9744f549e1bc","Investigation of foam generation, propagation and rheology in fractures","Alquaimi, B. (TU Delft Reservoir Engineering)","Rossen, W.R. (promotor); Delft University of Technology (degree granting institution)","2017","Naturally fractured reservoirs (NFRs) are found in many countries around the globe, in almost every lithology. These reservoirs can be carbonates, sandstones, or shale, in the case of unconventional or basement reservoirs. NFRs have been explored and exploited globally for groundwater, geothermal energy, hydrocarbon production, coalbed methane production, and nuclear-waste sequestration. They have unique characteristics in their flow behavior. Short-circuiting is encountered in these reservoirs during fluid-displacement processes. This unfavourable behavior leads to considerable unrecovered hydrocarbons. Injection of gas into these reservoirs to enhance oil recovery without mobility control can greatly reduce the efficiency of the enhanced oil recovery process. Foam greatly reduces the mobility of gas in non-fractured porous media and improves sweep efficiency. However, the knowledge of foam in fractured porous media is far less complete [Chapter 1].","Capillary number; flow in fractures; capillarity in fractures; fracture desaturation curves; in-situ foam generation; foam in fractures; mobility control in fractures; pre-generated foam flow; foam propagation","en","doctoral thesis","","978-94-6233-823-4","","","","","","","","","Reservoir Engineering","","",""
"uuid:d2d37a85-5c7c-4e9d-bb37-1283af0d3909","http://resolver.tudelft.nl/uuid:d2d37a85-5c7c-4e9d-bb37-1283af0d3909","Assessing residential Smart Grids pilot projects, products and services: Insights from stakeholders, end-users from a design perspective","Obinna, U.P. (TU Delft Design for Sustainability)","Reinders, A.H.M.E. (promotor); Joore, Peter (copromotor); Delft University of Technology (degree granting institution)","2017","The transition of the electricity system to smart grids would require from residential endusers to adapt to a new role of co-provider or active participants in the electricity system. End-users would for instance use energy efficiently, generate renewable energy locally, plan or shift energy consumption to most favourable times (such as when renewable energy is most abundant or during low peak periods), and trade self-produced electricity with other households. In a residential smart grid, a large part of the electricity supply in households will be generated by various decentralized energy resources like wind turbines, photovoltaic (PV) solar systems and micro-cogeneration systems. In this context, smart grids are supposed to provide the opportunity to make optimal use of renewable energy by matching demand to supply conditions, thereby facilitating the energy transition towards a more sustainable and less fossil fuel dependent society.","","en","doctoral thesis","","","","","","","","","","","Design for Sustainability","","",""
"uuid:bfa8a148-0d3a-49bc-a0db-aba16211a90d","http://resolver.tudelft.nl/uuid:bfa8a148-0d3a-49bc-a0db-aba16211a90d","Incorporating Crowd Perspectives into Multimedia Retrieval Systems","Vliegendhart, R. (TU Delft Multimedia Computing)","Hanjalic, A. (promotor); Larson, M.A. (promotor); Delft University of Technology (degree granting institution)","2017","The twenty-first century has brought plentiful computational power and bandwidth to the masses and has opened up access to multimedia recording devices for everyone. With these developments, a shift in the landscape of multimedia took place: from traditional one-to-many programming (the paradigm of traditional television) to many-to-many creation of diverse content. Nowadays, everyone can become a content creator and connect with new audiences, which has resulted in an explosion of diverse and available multimedia content. In tandem with this change, user needs have evolved as well. Yet, existing multimedia retrieval systems have been struggling to keep up with what users are looking for.
In this thesis, we argue that a multi-perspective approach is desired in order to cater to a diverse range of user needs. In order to know which perspectives should be taken, we turn to the crowd as a source of information on which perspectives would be actually helpful for serving users of multimedia retrieval systems. The central question underlying the research presented in this thesis is: How can we incorporate these perspectives of the crowd into multimedia retrieval systems?
The first major part of the thesis consists of the development of methodologies for effectively addressing the crowd in crowdsourcing studies. It first introduces the concept of framing. Framing allows people to picture a particular scenario that helps them to understand the task at hand and thus would result in high-quality answers. Following the framing methodology, the focus shifts to the refinement of elicitation techniques in order to effectively model the common understanding of a particular topic. The methodologies presented in this first part are shown to be useful in informing the design of new features for a multimedia retrieval system.
The second major part of the thesis builds upon the methodologies developed in the first part and uses them to push the research on non-linear video access, i.e., supporting users in consuming relevant parts of a video, further in two ways. First, in a carefully designed crowdsourcing experiment, user comments referring to specifically mentioned time-points in a video are analyzed to build a crowd-informed typology that captures new dimensions of relevance at the time-code level. The usefulness of this typology is tested through a crowdsourced user study on a simulated search scenario. Second, a methodology is developed for obtaining realistic viewing behaviors through crowdsourcing experiments, which can be used in designing and testing new non-linear video access methods. This methodology stresses the importance not only of properly framing the crowdsourcing task, but also of jointly choosing the crowd and multimedia domain in order to observe behavior that resembles behavior that participants would normally exhibit outside of the experiment. The methodology is used to demonstrate its ability to capture implicit viewing behavior that can be used to support users in non-linearly accessing videos.
The final contributions of the thesis consist of practical pointers for future work and a set of open research questions pertaining to crowdsourcing tasks with an interpretive nature. The practical pointers for future work are fueled by experience gained through the various crowdsourcing campaigns that have been carried out throughout the thesis. Addressing these pointers will help in making crowdsourcing research more effective and reduce the effort needed in carrying out experiments. The set of open research questions is formulated by positioning this thesis in relation to prior related work. These questions serve as a starting point for future research on interpretive crowdsourcing tasks and pursuing them could aid the development of retrieval systems with multiple perspectives on multimedia.","multimedia retrieval; crowdsourcing; video search; user study","en","doctoral thesis","","978-94-6299-787-5","","","","","","2017-11-20","","","Multimedia Computing","","",""
"uuid:c70c38c8-af5f-4454-a612-38b92c773ea2","http://resolver.tudelft.nl/uuid:c70c38c8-af5f-4454-a612-38b92c773ea2","Improving risk neutral valuation techniques with applications in insurance","Singor, S.N. (TU Delft Numerical Analysis)","Oosterlee, C.W. (promotor); Delft University of Technology (degree granting institution)","2017","In times of market turmoil volatility increases and stock values and interest rates decrease, so that the risks in the balance sheets of insurance companies increase. An important part of these risks is due to the guarantees that are embedded in insurance policies. Life insurers sell products like unit-linked, profit sharing and variable annuity products. These contracts contain guarantees to the policyholders. Such contracts embedded in the insurers’ liabilities are called embedded options.
The value and cash flow of these contracts are respectively relevant for the balance sheet and the profit and loss account. Typically in periods of volatile markets, the value of these embedded options increases, so that the insurance company must hold a larger liability value on the balance sheet in order to be able to pay out future cash flows. The valuation of these embedded options in insurance liabilities is therefore important to insurers for risk management applications. In this thesis we consider various topics regarding the valuation of these embedded options.","","en","doctoral thesis","","978-90-9030645-2","","","","","","","","","Numerical Analysis","","",""
"uuid:7c428552-4a92-4955-8401-1ca2df3dbc34","http://resolver.tudelft.nl/uuid:7c428552-4a92-4955-8401-1ca2df3dbc34","Towards the design of flexibility management in smart grids: A techno-institutional perspective","Eid, C. (TU Delft Energie and Industrie)","Weijnen, M.P.C. (promotor); Hakvoort, R.A. (copromotor); Delft University of Technology (degree granting institution)","2017","The European policy focus on smart grids implies their development as an indispensable part of the future power system. However, the definition of a smart grid is broad and vague, and the actual implementation of a smart grid can differ significantly depending on the stakeholders involved. Smart electricity grids can be defined as electricity networks that can intelligently integrate the behaviour and actions of all end users connected to them – generators, consumers and those that are both – in order to efficiently ensure a sustainable, economic and secure electricity supply. This integration of behaviour is achieved through a two-way information and power exchange between suppliers and consumers using information technology.","energy; electricity; regulation; European Union; distribution networks; balancing markets; distributed energy resources; congestion management; market transparency; European governance; European modes of regulation; regulatory change","en","doctoral thesis","","978-94-6233-738-1","","","","The doctoral research has been carried out in the context of an agreement on joint doctoral supervision between Comillas Pontifical University, Madrid, Spain, KTH Royal Institute of Technology, Stockholm, Sweden and Delft University of Technology, the Netherlands.","","","","","Energie and Industrie","","",""
"uuid:a2289118-0909-49be-8071-ea062bf26adc","http://resolver.tudelft.nl/uuid:a2289118-0909-49be-8071-ea062bf26adc","Tailoring the free volume of all-aromatic polyimide membranes for CO2/CH4 gas separation","Madzarevic, Z. (TU Delft Novel Aerospace Materials)","Dingemans, T.J. (promotor); Delft University of Technology (degree granting institution)","2017","Efficient and cost-effective technologies that will enable separation and capture of CO2 are needed. The development of high-performance all-aromatic poly(ether)imide (P(E)I) membranes is attractive as they offer a large degree of design freedom and they are cheap to operate. However, the molecular design rules towards P(E)I membranes that exhibit high selectivity and high permeability with no or little CO2 plasticization are still largely unknown. The main objective of the research presented in this thesis is to understand the structure-property relationships of all-aromatic polyimide- and polyetherimidebased gas separation (CO2/CH4) membranes. In particular, the role of backbone design and how this affects the free volume and gas separation performance of the final membranes.","gas separation; Membrane; Positron annihilation; Polyimide; Polyetherimide; Free volume; CO2 removal; Structure-property relationship","en","doctoral thesis","","978-94-6186-854-1","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:22c521e7-0ee4-4a68-a05e-4c41249ea928","http://resolver.tudelft.nl/uuid:22c521e7-0ee4-4a68-a05e-4c41249ea928","Biomonitor-Reflection of Large-Distance Air Mass Transported Trace Elements","Henriques Vieira, B.J. (TU Delft RST/Applied Radiation & Isotopes)","Wolterbeek, H.T. (promotor); Freitas, M.C. (promotor); Delft University of Technology (degree granting institution)","2017","The present thesis’ topic is the biomonitoring of atmospheric trace elements with attention focused on the long-range transported trace elements. The aim was to provide improved understanding of aerosol characteristics under the atmospheric transport dynamics of Central North Atlantic at different altitudes, and also evaluate the usability of lichen transplants to monitor those long-range transported elements. The study was carried out at Pico mountain in Azores, Portugal. The high altitude of this mountain reaching the Low Free Troposphere and its position in the central North Atlantic were the decisive factors for the selection of this sample site, because it was possible to analyze aerosol deposition from surrounding continents (Africa, Europe and Central-North America) at the layer of the atmosphere where the aerosol transportation occurs, both with sample collectors (active sampling) and with biomonitors (passive sampling). Since the number of samples was several hundreds, the thesis also includes a study on analytical aspects: a comparison of Nuclear Activation Analysis methods, in terms of accuracy, sensitivity and flexibility in routine application. 
The thesis comprises three main parts, divided into eight chapters, including the general introduction and the general discussion and outlook: the first part is about a comparison of several NAA approaches under different experimental conditions; the second part is about aerosol characterization and source apportionment; and the third part focuses on the vitality of transplanted lichens and their usability, the latter through a case study involving several altitude transects at Pico mountain, aimed at the possible recognition of elemental deposition from long-range pollution sources.","Neutron Activation Analysis; air pollution; aerosols; trace elements; biomonitoring; lichen transplants; long-range transport; North Atlantic; low troposphere","en","doctoral thesis","","9789462957701","","","","","","","","","RST/Applied Radiation & Isotopes","","",""
"uuid:bed854a8-e4bc-4d23-b90c-00d68c5f6517","http://resolver.tudelft.nl/uuid:bed854a8-e4bc-4d23-b90c-00d68c5f6517","Application of microwave plasma technology to convert CO2 into high value products","Fernandez de la Fuente, J. (TU Delft Intensified Reaction and Separation Systems)","Stankiewicz, A.I. (promotor); Stefanidis, G. (promotor); Delft University of Technology (degree granting institution)","2017","The global energy challenges along with global warming are regarded as the most important issues faced by humankind in the 21st century. A fossil fuels-based energy economy cannot support the rapidly increasing world energy demand in a sustainable manner. Hence, the development and implementation of alternative solutions to the use of fossil fuels have become a top priority for goverments, industries and academia. In this regard, a collaborative project (ALTEREGO) – funded by the European Union with the involvement of four industrial partners and four academic institutions, was carried out to develop novel forms of energy for intensified chemical manufacturing. In this thesis, the application of microwave plasma technology to convert carbon dioxide (CO2) into added-value products was studied with a twofold purpose: the storage of electricity into chemicals and the chemical recycling of CO2. This thesis is divided into four different sections where fundamental and engineering aspects of microwave plasma and its application to CO2 transformation are investigated. The first section tries to determine whether microwave plasma reactors can outperform conventional thermal chemical reactors, particularly when CO2 is part of the feedstock. The second section explores further optimization of microwave plasma reactors by combining experimental and modelling work. The third section tackles the problem of implementation of complex kinetic models, exemplified for CO2 dissociation, into multidimensional multiphysics simulations. 
The last section discusses scale-up of microwave plasma technology, potential applications in the chemical industry and the milestones on the way to implementation of the technology at commercial scale. In this doctoral work, a bench-scale microwave plasma reactor was built to investigate two key chemistries: the reduction of CO2 with hydrogen (H2) and the splitting of pure CO2. In Chapter 2, we prove that microwave plasma can outperform conventional thermal reactors; a chemical CO2 conversion as high as ~80% was attained under microwave plasma conditions, compared to ~60% via thermal processes. High microwave power input, high H2 content in the feed and low operating pressure favoured the attainment of high CO2 conversions. Chapter 3 shows that two-dimensional multiphysics models with simple chemistries (e.g. argon) make it possible to study different reactor configurations in order to find the optimum performance. Thus, modelling results were used to develop a modified downstream section of the microwave plasma reactor that led to the improvement of chemical CO2 conversion (from 40 to 60%) at low H2 content in the feed, which is beneficial given the current limited scalability of the microwave plasma technology. In Chapter 4, a new simplification approach of state-to-state kinetic models in microwave plasma conditions is presented for the CO2 molecule. By means of chemical lumping, a significant reduction in the number of species and reactions, to 13 and 44 respectively, was achieved as opposed to the benchmark state-to-state kinetic model, which required about 100 species and 10,000 reactions. Lastly, Chapter 5 summarizes the current state-of-the-art applications of the microwave plasma technology, along with the existing possibilities for scale-up. Additionally, a detailed description of the scientific and engineering challenges towards the commercialization of this technology is given.
In the last chapter (Chapter 6), the major conclusions of the project are summarized and recommendations for continuation of the research are provided.","microwave plasma technology; CO2","en","doctoral thesis","","978-94-6299-761-5","","","","","","","","","Intensified Reaction and Separation Systems","","",""
"uuid:4957facf-cca9-41ea-9e40-8f68aedbbc6d","http://resolver.tudelft.nl/uuid:4957facf-cca9-41ea-9e40-8f68aedbbc6d","Afschrijven op publieke infrastructuur","Verlaan, J.G. (TU Delft Integral Design & Management)","Vrijling, J.K. (promotor); de Ridder, H.A.J. (promotor); Delft University of Technology (degree granting institution)","2017","Een exploratieve studie naar de financieel-economische implicaties van de invoering van een baten-lastenstelsel bij de Nederlandse Rijksoverheid voor de instandhouding van civieltechnische netwerken. Doel is om als professioneel opdrachtgever op basis van bedrijfseconomische beginselen de instandhoudingsactiviteiten optimaal te sturen. Hiermee kan een infrastructuurnetwerk effectief en efficiënt blijven voldoen aan de maatschappelijke wensen en behoeften naar vervoer en transport.
Which mechanisms drive the development of global tourism and its CO2 emissions, and what are potential effects and consequences of policy strategies to mitigate these emissions?","","en","doctoral thesis","","978-94-028-0812-4","","","","","","","","","Policy Analysis","","",""
"uuid:284c6349-3abf-4400-abfc-748cbc060ae0","http://resolver.tudelft.nl/uuid:284c6349-3abf-4400-abfc-748cbc060ae0","Accuracy and efficiency in numerical river modelling: Investigating the large effects of seemingly small numerical choices","Platzek, F.W. (TU Delft Environmental Fluid Mechanics; Deltares)","Stelling, G.S. (promotor); Pietrzak, J.D. (promotor); Delft University of Technology (degree granting institution)","2017","A river engineer is challenged with the task of setting up an appropriate model for a certain application. The model needs to provide suitable answers to the questions asked (i.e. be effective) and needs to do this within the available time (i.e. be efficient). To set up such a model with sufficient accuracy and certainty, a modeller needs to fully understand all processes that determine the flow patterns and the flow resistance. These encapsulate both the physical processes, such as bottom friction and turbulent mixing, as well as the unwanted, ’numerical processes’, due to discretization errors and grid effects. Unfortunately, these errors can be considerably large and can greatly influence model results.
To quantify the effects of numerical inaccuracies on the flow patterns and resistance (or backwater) in a river, several building blocks of the governing flow equations were analyzed. In particular for moderate resolutions, where a part of the geometrical variation in a river is captured on the grid, the influence of the momentum advection scheme and the turbulence model on the model results increases. For these modelling aspects, certain common methods were therefore analyzed concerning their accuracy, efficiency and convergence properties, for grid resolutions applied in practical engineering work.
The prevailing consensus is that the backwater in river models is dominated by bottom friction and that momentum advection only has a local effect on the water levels and flow patterns. However, in this work, it is demonstrated that the artificial backwater contribution from the momentum advection approximation can be of the same order of magnitude as the bottom friction contribution, depending on the advection scheme. First this is shown using a one-dimensional (1D) analysis and then it is verified using 1D and two-dimensional (2D) numerical experiments with a wavy bed, with emerged and submerged groynes and finally for an actual river. For each test, the backwater contributions of three basic first-order and two second-order accurate advection schemes are computed and compared. The size of this contribution is found to be largely determined by the conservation/constancy properties of the scheme and to a lesser extent by the order of the scheme.
Of course, the bottom friction forms the most important contribution to the total backwater. In 2D models, the bottom friction computation is considered to be straightforward, in particular when applying the newly-developed subgrid method by Casulli and Stelling [51] and Stelling [219].
However, for three-dimensional (3D) hydrodynamic models, the computation is more complicated due to the vertical structure of the flow. Most 3D river models apply the popular σ-layering, where the grid nicely follows the bed and the free surface. At present, z-layer models are seldom applied in river computations, because they suffer from the problem of inaccurate and discontinuous bottom shear stress representation, commonly assumed to arise due to the staircase bottom representation. At higher grid resolution, where more features of the topography are represented on the grid, a terrain-following coordinate system such as the σ-layering can result in a strong distortion of the grid. This is avoided using a z-layer discretization. Additionally, the latter discretization could be very efficient for river applications, due to the fact that excessive vertical resolution is avoided in shallow areas, such as floodplains.
For this purpose, the discretized equations for the z-layer model are analyzed and the cause of the inaccuracies is clearly shown to come from the emergence of thin near-bed layers. Based on this analysis, a new method is presented that significantly reduces the errors and the grid dependency of the results. The method consists of a near-bed layer-remapping and a modified near-bed discretization of the k-ε turbulence model. The applicability of the approach is demonstrated for uniform channel flow, using a schematized 2D vertical (2DV) model and for the flow over a bottom sill using the Delft3D modelling system (Deltares [69]).
Finally a new modelling strategy is presented for improving the efficiency of computationally intensive flow problems in environmental free-surface flows. The approach combines the recently developed semi-implicit subgrid method by Casulli and Stelling (Casulli [46], Casulli and Stelling [51], and Stelling [219]) with a hierarchical-grid solution strategy. The method allows the incorporation of high-resolution data on subgrid scale to obtain a more accurate and efficient hydrodynamic model. The subgrid method improves the efficiency of the hierarchical grid method by providing better solutions on coarse grids. The method is applicable to both steady and unsteady flows, but it is particularly useful in river computations with steady boundary conditions. There, the combined hierarchical grid-subgrid method reduces the computational effort to obtain a steady state by factors of up to 43. For unsteady models, the method can be used for efficiently generating accurate initial conditions and further dynamic computations on high-resolution grids. Additionally, the method provides automatic insight into grid convergence. The efficiency and applicability of the method are demonstrated using a schematic test for the vortex shedding around a circular cylinder and a real-world case study on the Elbe River in Germany.
fractured, from which a significant amount of hydrocarbons are produced. Naturally fractured reservoirs, like all reservoirs, are exploited in two stages: primary recovery and secondary recovery (sometimes followed by tertiary recovery, i.e. enhanced oil recovery (EOR)), with different recovery mechanisms. During primary production, the reservoir is produced by fluid expansion. In secondary production and EOR, since the fractures are much more permeable than the matrix, the injected water or EOR agent flows rapidly through the fracture network and surrounds the matrix blocks. Oil recovery then depends on efficient delivery of water or EOR agent to the matrix through the fracture network.","Fractured reservoir; oil recovery; dual-porosity; dual-permeability; percolation; non-uniform flow; fracture spacing; Peclet number","en","doctoral thesis","","978-94-6186-862-6","","","","","","","","","Reservoir Engineering","","",""
"uuid:66da67a6-cf90-4a71-822e-3d27d0e7ec8d","http://resolver.tudelft.nl/uuid:66da67a6-cf90-4a71-822e-3d27d0e7ec8d","Applications of passive microwave data to monitor inundated areas and model stream flow","Shang, H. (TU Delft Optical and Laser Remote Sensing)","Menenti, M. (promotor); Jia, L. (promotor); Steele-Dunne, S.C. (copromotor); Delft University of Technology (degree granting institution)","2017","The observation of surface water bodies in all weather conditions and better knowledge about inundation patterns are important for water resource management and flood early warning. Microwave radiometers at 37 GHz were applied to observe and study the inundation pattern in large subtropical floodplains in China, i.e. the Poyang Lake and Dongting Lake floodplains, due to the trade-off between the capability to penetrate hydrometeors and vegetation, revisiting time, and spatial coverage and resolution. Taking the shallow sensing depth at 37 GHz into account, open water, inundated area and water saturated soil surface all determine the surface emittance measured by the radiometer. Thus, Water Saturated Surface (WSS) is defined as the combination of these three land surface elements.
In subtropical regions, seasonal changes in vegetation cover and various surface roughness conditions are the major challenges for the observation of surface water bodies with microwave radiometers. Atmospheric attenuation, observation gaps and errors in the microwave observations reduce the quality of daily radiometric observations. To deal with the attenuation due to vegetation and surface roughness, a two-step model was developed: the first step is to retrieve the polarization difference emissivity from Polarization Difference Brightness Temperature (PDBT) at 37 GHz with the simplified radiative transfer model and the vegetation optical thickness at 37 GHz parameterized from the Normalized Difference Vegetation Index (NDVI); the second step is to retrieve the fractional area of WSS from the emissivity difference with a linear model, which can be parameterized according to the Qp surface roughness model. To remove the noise and extract the surface signal (including surface emittance and vegetation attenuation) from the daily PDBT time series, the Time Series Analysis Procedure (TSAP) was developed to identify the spectral features of noisy components in the frequency domain and remove them with a proper filter. The overall method combined the TSAP and the two-step model to derive daily observations of WSS area. The retrieved WSS area in the Poyang Lake floodplain was in good agreement with the lake area observed from the MODerate-resolution Imaging Spectroradiometer (MODIS) and Advanced Synthetic Aperture Radar (ASAR). The observations and analysis of the inundation patterns in the Poyang Lake and Dongting Lake floodplains with this method illustrated the close relationship between inundated area, precipitation and stream flow.
Furthermore, a lumped hydrological model, named the discrete rainfall-runoff model, was developed to fully use the retrieved WSS area and to study the role of inundated area in stream flow production. This model simulates stream flow as the integration of contributions of antecedent precipitation in a certain period. Three implementations of the model were developed with the help of ground water table depth and the retrieved WSS area. The case study in the Xiangjiang River basin (upstream catchment of the Dongting Lake floodplain), China, illustrated that: 1) the longest duration of antecedent precipitation is a key parameter to determine model performance; 2) long duration would increase the model uncertainty and lead to overfitting; 3) the application of the WSS area can reduce the duration required to achieve a reasonable accuracy. The model parameters indicated the interaction between stream flow and various water storages, and the calibration results of three implementations implied the recharge period of ground water.
This study seeks to add to our understanding of urban diversity, as perceived and experienced by those who inhabit, frequent and govern urban areas. The study further makes use of a variety of qualitative and participatory techniques (i.e. qualitative interviews, roundtable talks, participant observations, and focus groups) to gather rigorous empirical data on living with and managing diversity in an inner-suburban neighbourhood of Toronto, namely Jane‑Finch.","Toronto; Jane-Finch; Diversity","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-80-0","","","","A+BE | Architecture and the Built Environment No 12 (2017)","","","","","OLD Geo-information and Land Development","","",""
"uuid:e011fb5f-d2cf-4f1f-905d-d2f1e82630a2","http://resolver.tudelft.nl/uuid:e011fb5f-d2cf-4f1f-905d-d2f1e82630a2","Single-molecule approaches to unravel the mechanism of SMC proteins","Eeftens, J.M. (TU Delft BN/Cees Dekker Lab)","Dekker, C. (promotor); Delft University of Technology (degree granting institution)","2017","Every cell deals with the challenge of organising its DNA. First, the DNA needs to be compacted in size by several orders of magnitude. For example, in each human cell, 2 meters of DNA need to fit inside a micron-sized cell nucleus. Second, the DNA needs to stay accessible for cellular processes such as transcription and replication. To achieve these goals, cells are assisted by proteins that organise the DNA by locally bending the DNA, wrapping DNA around them, or by making DNA loops. A prime example are the Structural Maintenance of Chromosomes (SMC) family of proteins,which is known to be essential for DNA organisation. In eukaryotes, the SMC complex cohesin is responsible for keeping sister-chromatids together until the cell is ready to divide. Without cohesin, division might occur prematurely, leading to unevenly divided DNA. The SMC complex condensin is responsible for compacting the DNA into mitotic chromosomes. Indeed, without condensin, the DNA does not formproperly organised chromosomes. This thesis describes a series of experiments that aim to understand the molecular mechanism of these SMC proteins.","SMC proteins; cohesin; condensin; single-molecule biophysics; magnetic tweezers; DNA curtains; atomic force microscopy","en","doctoral thesis","","978-90-8593-320-5","","","","Casimir PhD Series, Delft-Leiden 2017-36","","","","","BN/Cees Dekker Lab","","",""
"uuid:29c23e60-9f4c-4d5e-9ab9-9bf6c520df01","http://resolver.tudelft.nl/uuid:29c23e60-9f4c-4d5e-9ab9-9bf6c520df01","Diamond-based Fabry-Perot microcavities for quantum networks","Bogdanovic, S. (TU Delft QID/Hanson Lab)","Hanson, R. (promotor); Delft University of Technology (degree granting institution)","2017","A quantumnetwork would allow the distribution of a quantum state over many spatially separated quantum nodes which individually possess the ability to generate, process and store quantum information. Connecting these nodes through quantum communication channels would enable sending quantum information over arbitrarily long distances, secret key distribution with guaranteed secure communication and distributed quantum computing. An appealing platform for implementation of quantum networks is the nitrogen-vacancy (NV) center in diamond.
An NV center is an optically active defect in diamond created by the substitution
of two adjacent carbon atoms in the diamond crystal matrix by a nitrogen atom and a neighboring vacancy. The NV center fits remarkably well into the described quantum network framework. Its electronic spin can be optically initialized, read out in a single shot, coherently manipulated and coupled to the nearby carbon-13 nuclear spins. These properties represent necessary ingredients of a multi-qubit quantum node. The possibility of entanglement between the state of the NV center spin and a photon establishes a photonic quantum link which can enable the entanglement of the quantum nodes, forming a quantum network.
The first building blocks of the proposed quantum networks based on NV centers were demonstrated by generating entanglement between two distant NV centers separated by more than a kilometer. However, the current success rate of the entangling protocols is greatly reduced due to the low emission probability of the NV center photons into the resonant zero phonon line (ZPL) and the inefficient photon extraction from the diamond. This is the central problem which prevents promoting our experiments beyond proof-of-principle demonstration towards practical implementation of the proposed quantum networks.
The goal of this thesis is to tackle these issues by coupling NV centers to optical cavities, which would greatly increase the rate of generation and collection of the ZPL photons through the mechanism of Purcell enhancement.
The design and the fabrication of the components constituting a diamond-based Fabry-Perot microcavity, as used in this thesis, are described in Chapter 4. For large enhancement of the NV center resonant emission, a low cavity mode volume is necessary. This is achieved by inserting a micrometer-thin diamond slab into the cavity architecture; we present the fabrication details of such samples.
Chapter 5 describes the fabrication of an integrated platform for microwave signal delivery to the NV centers within a diamond membrane in the cavity architecture. Microwave striplines and arrays of unique markers are embedded in the mirror onto which the diamond is bonded. We investigate the mirror optical properties post fabrication and find that the fabrication method preserves the mirror optical performance. We describe the diamond bonding method and demonstrate addressing of the NV center spin.
In Chapter 6 we probe the properties of the cavity with the embedded diamond membrane through measurements of the cavity finesse. We investigate the cavity finesse dependence on length, mode structure and temperature. We further explore the operation at cryogenic temperatures by probing the effect of cryostation vibration on the cavity linewidth.
Finally, in Chapter 7 we discuss ways of improving the existing experimental capabilities, outline the first steps for demonstrating enhancement of the NV center resonant emission and suggest future experiments that can be performed with this system. We conclude that coupling of the NV centers to the cavities developed in this research could lead to an increase of remote entanglement success rates by more than three orders of magnitude.","","en","doctoral thesis","","978-90-8593-321-2","","","","Casimir PhD series, Delft-Leiden 2017-37","","","","","QID/Hanson Lab","","",""
"uuid:72603e00-09dc-4a68-94ee-1f0878dedd4d","http://resolver.tudelft.nl/uuid:72603e00-09dc-4a68-94ee-1f0878dedd4d","Next Generation Automotive DeNOX Catalysts: Ceria What Else?","Wang, Y. (TU Delft ChemE/Catalysis Engineering)","Makkee, M. (promotor); Kapteijn, F. (promotor); Delft University of Technology (degree granting institution)","2017","Nitrogen oxides (NOx, including NO and NO2) are a group of hazardous, toxic and harmful gases, which have an adverse effect on both the environment and human health, e.g., acid rain, photochemical smog, and effects on the human respiratory system. The NOx concentration in most EU cities exceeds the EU annual limit value (40 μg/m3). Around 40% of the emitted NOx is attributed to transport-related emissions. Under the currently applicable Euro 6 standard, the real NOx emission from a diesel car is on average 400% of what the Euro 6 regulation limit allows if measured under more realistic driving conditions. Although NSR and SCR DeNOx systems have been broadly investigated and commercially applied with the aim to reduce NOx emissions from lean-burn engines, some common problems still exist, e.g., a narrow temperature window and a low gas hourly space velocity (up to 50,000 L/L/h) required to convert the NOx selectively into N2. Due to the high in-practice NOx emission, from September 2017 additional legislation will be in force to arrive at a more realistic determination of the highly dynamic NOx emission by, among others, the introduction of the real driving emission (RDE) test in the certification procedure. The Di-Air (Diesel NOx aftertreatment by Adsorbed Intermediate Reductants) system was developed by Toyota (2011-2012) and is still under development.
This Di-Air system showed promise by yielding a high NOx conversion, especially at high temperature (up to 600 °C) and high gas hourly space velocity (up to 125,000 L/L/h). This system aims to meet the future stringent NOx reduction requirements under RDE test conditions (Chapter 1)…","Ceria; NOx; TAP; Di-Air","en","doctoral thesis","","978-94-6186-859-6","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:792fc17c-d68f-480a-b797-e7750879f4a2","http://resolver.tudelft.nl/uuid:792fc17c-d68f-480a-b797-e7750879f4a2","Modeling level alignment at interfaces in molecular junctions","Celis Gil, J.A. (TU Delft QN/Thijssen Group)","van der Zant, H.S.J. (promotor); Thijssen, J.M. (copromotor); Delft University of Technology (degree granting institution)","2017","Molecular devices are proposed as alternative solutions to the problems of heat dissipation and reliable fabrication of nano-scale devices. The field also opens up possibilities of combining many other degrees of freedom into functional device design. While molecular devices introduce interesting opportunities for study, they also demand a versatile, scalable toolset. In this thesis we calculate the electronic transport through molecular devices using the DFT+NEGF technique. We model the interaction of the molecule with the electrode surfaces, taking into account effects such as the gap reduction produced by the charge polarization on metallic surfaces, the spin states of the molecule and the hydrophilicity of the leads. We hope our contribution helps to improve the design of functional single-molecule devices.","Molecular Electronics; Single-Molecule Junction; Green’s Function; DFT; level alignment","en","doctoral thesis","","978-90-8593-322-9","","","","","","","","","QN/Thijssen Group","","",""
"uuid:3ed1b44e-44f4-4b84-abd1-34722d106c22","http://resolver.tudelft.nl/uuid:3ed1b44e-44f4-4b84-abd1-34722d106c22","Flexible piezoelectric composites: Bridging the gap between materials and applications","Deutz, D.B. (TU Delft Novel Aerospace Materials)","Groen, W.A. (promotor); van der Zwaag, S. (promotor); Delft University of Technology (degree granting institution)","2017","The main objective of this thesis was to develop new piezoelectric ceramic polymer
composites that maintain the ease of manufacturing of random composites and
could function as a human touch sensor, while simultaneously exploring their potential as energy harvesters.","","en","doctoral thesis","","978-94-6295-722-0","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:10d555dd-8f03-4986-b5bf-ef09a63c92e1","http://resolver.tudelft.nl/uuid:10d555dd-8f03-4986-b5bf-ef09a63c92e1","Integrated Community Energy Systems","Koirala, B.P. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Hakvoort, R.A. (copromotor); Delft University of Technology (degree granting institution); Comillas Pontifical University (degree granting institution); KTH Royal Institute of Technology (degree granting institution)","2017","Energy systems across the globe are going through a radical transformation as a result of technological and institutional changes, depletion of fossil fuel resources, and climate change issues. Accordingly, local energy initiatives are emerging and an increasing number of business models are focusing on end-users. In this context, Integrated community energy systems (ICESs) are emerging as a modern development to reorganize local energy systems, allowing the simultaneous integration of distributed energy resources (DERs) and the engagement of local communities. With the emergence of ICESs, new roles and responsibilities as well as interactions and dynamics are expected in the energy system. With this background, this thesis aims to understand the ways in which ICESs can contribute to enhancing the energy transition.
This thesis utilizes a conceptual framework consisting of four institutional and three societal levels in order to understand the interaction and dynamics of ICESs implementation. Current energy trends and the associated technological, socio-economic, environmental and institutional issues are reviewed. The developed ICES model performs optimal planning and operation of ICESs and assesses their performance based on economic and environmental metrics. This thesis demonstrates the added value of ICESs to the individual households, local communities, and the society. As the added value of ICESs is impacted by the institutional settings internal and external to the system, a comprehensive institutional design considering techno-economic and institutional perspectives is necessary to ensure effective contribution of ICESs in the energy transition.
Currently, PHA is commercially produced using pure cultures and well-defined substrates. To reduce the cost of PHA production and allow broad application, microbial enrichment cultures could be used. This eliminates the need for axenic conditions and allows the use of agro-industrial waste streams as substrate, contributing to the development of a circular economy.
The aim of this thesis was to investigate scale-up aspects of PHA production by microbial enrichment cultures. A translation of the previously developed laboratory process to industrial application raises new questions concerning the impact of (variable) wastewater composition and process design, for example. Chapters 2 and 3, therefore, focus on the fate of different constituents of an acidified waste stream, and the second part (Chapters 4-6) of the thesis focuses on alternative process configurations. Chapter 1 provides a general introduction to the topic and describes the research preceding this thesis. Chapter 7 summarizes and integrates the main findings.","Feast-famine; Microbial enrichment culture; Plasticicumulans acidivorans; Polyhydroxyalkanoate (PHA); Resource recovery","en","doctoral thesis","","978-94-028-0815-5","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:dd6d52a5-b091-44c1-ba45-f96c8c3c3590","http://resolver.tudelft.nl/uuid:dd6d52a5-b091-44c1-ba45-f96c8c3c3590","Efficient Algorithms for Network-Wide Road Traffic Control","van de Weg, Goof Sterk (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Hegyi, A. (copromotor); Delft University of Technology (degree granting institution)","2017","Controlling road traffic networks is a complex problem. One of the difficulties is the coordination of actuators, such as traffic lights, variable speed limits, ramp metering and route guidance, with the aim of improving the network performance over a near-future time horizon. This dissertation develops algorithms that specifically balance fast computation time and improved traffic network performance; both for freeway traffic in part I, and for urban traffic in part II.","Traffic Control; Model predictive control (MPC); Link Transmission Model; Variable speed limits; Ramp metering; Route guidance; Intersection control; Road traffic; METANET","en","doctoral thesis","TRAIL Research School","978-90-5584-229-2","","","","TRAIL Thesis Series no. T2017/11, the Netherlands TRAIL Research School","","","","","Transport and Planning","","",""
"uuid:7f631f4e-3ffa-42fc-9ed1-c98af583ea28","http://resolver.tudelft.nl/uuid:7f631f4e-3ffa-42fc-9ed1-c98af583ea28","Synthesis and evaluation of porous titanium scaffolds prepared with the space holder method for bone tissue engineering","Arifvianto, B. (TU Delft Biomaterials & Tissue Biomechanics)","van der Helm, F.C.T. (promotor); Zhou, J. (copromotor); Delft University of Technology (degree granting institution)","2017","Loss of function and impaired life quality as a result of large bone defects remain a serious problem in society. Bone tissue basically has the capability of healing by itself when fractured. However, impaired healing may occur, leading to delayed union or even non-union, when a bone segment above a critical size is excised. In recent years, bone tissue engineering has received increasing attention in the biomedical research community as an alternative approach to bone defect reconstruction. With this approach, damaged bone tissue can be repaired and remodelled with new bone cells at the defect site. For this purpose, a synthetic porous material, namely a scaffold, is needed to act as a template to facilitate cellular activities, such as the migration
and proliferation of osteoblasts and mesenchymal cells, as well as the transport of nutrients and oxygen required for vascularization during bone tissue development at the defect site. Currently, titanium is considered to be a preferred biomaterial for bone tissue engineering scaffolds owing to its excellent biocompatibility and mechanical properties. So far, the space holder method has been the preferred route for the fabrication of titanium scaffolds with high porosity and open, interconnected pores for bone tissue engineering. Despite a large number of studies on scaffold fabrication with this method, the mechanisms involved and the way to control the porous structure of scaffolds during fabrication have not yet been fully understood.","synthesis; evaluation; titanium scaffold; space holder method; bone tissue engineering","en","doctoral thesis","","978-94-6186-851-0","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:37d758b5-bf35-4a34-83c9-235008eaf116","http://resolver.tudelft.nl/uuid:37d758b5-bf35-4a34-83c9-235008eaf116","Metal-Organic-Framework mediated supported-cobalt catalysts in multiphase hydrogenation reactions","Sun, X. (TU Delft ChemE/Catalysis Engineering)","Kapteijn, F. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2017","The production of most industrially important chemicals involves catalysis. Depending on the difference in phase between the catalyst and the reactants, one distinguishes homogeneous catalysis and heterogeneous catalysis, with the latter being more attractive in real applications due to the easy separation of products from the catalyst and its reusability. Despite decades of research and development of heterogeneous catalysts, the search for catalyst systems with outstanding activity, stability and selectivity remains a challenging task. In general, most chemical reactions occur on the surface atoms of supported metal (oxide) nanoparticles. Therefore, to address this challenge, current studies generally focus on understanding the relation between the catalytic performance and catalyst properties by controlling the particle size and distribution, and even
the shape of supported nanoparticles, and the interaction between nanoparticles and support. In order to further contribute to this objective, in this thesis we applied metal-organic frameworks (MOFs) as sacrificial precursors to produce catalysts for catalytic hydrogenation reactions, important routes for the production of a variety of fine and bulk chemicals in industry.","","en","doctoral thesis","","978-94-028-0808-7","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:23077104-34da-4ea4-9fdd-795958f060e8","http://resolver.tudelft.nl/uuid:23077104-34da-4ea4-9fdd-795958f060e8","Learning from co-housing initiatives: Between Passivhaus engineers and active inhabitants","Tummers-Mueller, L.C. (TU Delft Spatial Planning and Strategy)","van den Dobbelsteen, A.A.J.F. (promotor); van Bueren, Ellen (promotor); Delft University of Technology (degree granting institution)","2017","Following the UN world summits on Climate Change (Paris 2015) and Habitat (Quito 2016), most European cities assume an active role in implementing internationally agreed goals related to climate change, translated into the so-called New Urban Agenda. At the same time, the urban housing market is increasingly inaccessible for low- and middle-income households. To overcome problems such as failing housing supply and high energy bills, groups of residents take initiatives to create and manage housing projects collectively; these initiatives are further referred to as ‘co-housing’.
The aim of this study is to create a deeper understanding of the current rise of co-housing in Europe, and of what it could mean for urban policies addressing energy transition and climate change. There are two domains where co-housing can become an important asset for urban development: design and maintenance of (semi-)public space for climate change mitigation, and the transition to a circular metabolism in housing. Based on empirical data, this thesis concludes that co-housing projects present relevant models and approaches for reducing energy consumption and for integrating renewable energies into the general housing stock. Engineers can learn from co-housing pioneers to advance the targets for energy transition and further develop sustainable cities.
The thesis contributes to the emerging body of knowledge with a new understanding of co-housing, analysing its ‘key-features’ with an interdisciplinary framework, in a European context. It adds a new perspective to existing co-housing research, which is dominated by social sciences, by drawing attention to the physical characteristics of co-housing, produced in architectural, planning and engineering processes (the technosphere). The choices made during design and building are not only shaped by the residents’ aims and perception of sustainability, but also influenced by technosphere-related institutions, such as the building-components industry, energy or waste networks and providers, and planning regulations. The professional partners for the projects, such as housing associations and engineers, are equally affected by the institutional context, but their position is different from that of residents. They may for example be more anchored in governmental or professional regulations.","co-housing; Sustainability; Urban development; technosphere; energy-transition","en","doctoral thesis","A+BE | Architecture and the Built Environment","","","","","A+BE | Architecture and the Built Environment No 14 (2017)","","","","","Spatial Planning and Strategy","","",""
"uuid:94185e03-0e70-423f-a3f9-a059953877e1","http://resolver.tudelft.nl/uuid:94185e03-0e70-423f-a3f9-a059953877e1","Signal strength based localization and path-loss exponent self-estimation in wireless networks","Hu, Y. (TU Delft Signal Processing Systems)","Leus, G.J.T. (promotor); Delft University of Technology (degree granting institution)","2017","Wireless communications and networking are gradually permeating our lives and substantially influencing every corner of the world. Wireless devices, particularly
those of small size, will take part in this trend more widely, efficiently, seamlessly and smartly. Techniques requiring only limited resources, especially in terms of hardware, are becoming more important and are urgently needed. That is why this thesis focuses on analyzing wireless communications and networking based on signal strength (SS) measurements, since these are easy and convenient to gather. SS-based techniques can be incorporated into any device that is equipped with a wireless chip.","","en","doctoral thesis","","978-94-6295-758-9","","","","","","","","","Signal Processing Systems","","",""
"uuid:7fe64dde-7fb5-4392-8160-da6f7916dc6b","http://resolver.tudelft.nl/uuid:7fe64dde-7fb5-4392-8160-da6f7916dc6b","Estimating geocenter motion and changes in the Earth’s dynamic oblateness from GRACE and geophysical models","Sun, Y. (TU Delft Physical and Space Geodesy)","Klees, R. (promotor); Riva, R.E.M. (copromotor); Delft University of Technology (degree granting institution)","2017","Geocenter motion and changes in the Earth’s dynamic oblateness (J2) are of great importance in many applications. Among others, they are critical indicators of large-scale mass redistributions, which are invaluable for understanding ongoing global climate change. The revolutionary Gravity Recovery and Climate Experiment (GRACE) satellite mission enables constant monitoring of redistributing masses within the Earth’s system. However, it still cannot provide reliable time variations in the degree-1 coefficients and degree-2 zonal coefficients, which are directly related to geocenter motion and J2 variations.","Geocenter motion; J2; Temporal gravity field variations; Mass transport; Glacial isostatic adjustment; GRACE; Satellite Laser Ranging","en","doctoral thesis","","978-94-6361-016-2","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:fafbbfcb-9ed1-4920-8b9b-e4e64d143daf","http://resolver.tudelft.nl/uuid:fafbbfcb-9ed1-4920-8b9b-e4e64d143daf","Electromagnetic Design of High Frequency PFC Boost Converters using Gallium Nitride Devices","Wang, W. (TU Delft DC systems, Energy conversion & Storage)","Ferreira, Jan Abraham (promotor); Delft University of Technology (degree granting institution)","2017","Throughout the history of power electronics, the main driving force of development has been innovation in power semiconductor technology. Despite continuous technical improvements over the past 30 years, Si devices, being the most widely used power semiconductor technology, are approaching the physical limits (e.g. breakdown field, thermal conductivity) of the basic material. The performance of power converters, in terms of efficiency, power density, etc., has therefore entered a stage in which further improvements are not likely to happen without a revolutionary advance in power semiconductor technology. GaN power semiconductor devices, owing to the wide-bandgap nature of their material, have the potential to outperform their conventional Si counterparts in high-voltage, high-temperature and high-frequency operation. The capabilities of a GaN device, however, are influenced by more factors, which, apart from material properties, also include die design and fabrication approaches, device packaging technologies, how the device is used in an application, etc. In fact, at this stage, available GaN transistors are mostly of lateral structure and, as a consequence, confined to low voltages (<1 kV), while the maximum junction temperature of these products is limited to 175 °C because of the lack of suitable packaging technologies. So far, the high-frequency operation performance of GaN power semiconductors is unknown and needs to be investigated. This thesis explores the high-frequency operation potential of single-die, normally-off GaN power semiconductors suited for high-voltage, low-current applications.
The exploration is carried out by conducting loss modeling of GaN transistors; uncovering, from the analysed model results, the operating conditions desirable for high-frequency operation of GaN devices; identifying optimal topologies and operation modes in power converters that can facilitate such conditions, so that the new technology is optimally utilized; and demonstrating the potential of GaN power semiconductors in an application with all the developed techniques employed.","","en","doctoral thesis","","978-94-028-0825-4","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:89c1ddb9-db46-4664-a501-1125917d2622","http://resolver.tudelft.nl/uuid:89c1ddb9-db46-4664-a501-1125917d2622","Charge Carrier Trapping Processes and Deliberate Design of Afterglow Phosphors","Luo, H. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Dorenbos, P. (promotor); Delft University of Technology (degree granting institution)","2017","In this thesis, two different charge carrier trapping and detrapping processes are
investigated: (1) electron trapping and electron release; (2) hole trapping and hole
release. Both of these processes can be used to deliberately design afterglow
phosphors or storage materials.","","en","doctoral thesis","","978-94-6295-768-8","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:5f84f8a9-b4e7-4248-a9cb-be9bde19b69d","http://resolver.tudelft.nl/uuid:5f84f8a9-b4e7-4248-a9cb-be9bde19b69d","Ballistic Majorana nanowire devices","Gül, Önder (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Bakkers, E.P.A.M. (promotor); Delft University of Technology (degree granting institution)","2017","The dissertation reports a series of electron transport experiments on semiconductor nanowires towards realizing the hypothesized topological quantum computation. A topological quantum computer manipulates information that is stored nonlocally in the topology of a physical system. Such an approach possesses advantages over the current quantum computation platforms due to its robustness against local sources of decoherence, offering a natural fault-tolerance. Among various candidate platforms to realize topological quantum computation, semiconductor nanowires with strong spin-orbit coupling attached to conventional superconductors have emerged as a prime contender. The predicted topological properties of such a system are associated with the emergence of Majorana modes.
The presence of disorder has been considered to be the main obstacle towards the realization of a topological quantum computer based on semiconductor nanowires. Disorder can mimic the experimentally measurable properties of Majoranas, or can render the promise of fault-tolerance ineffective. The experiments in the dissertation aim at eliminating disorder on the surface of the nanowire and at the interface between the nanowire and the superconductor. Following a series of investigations demonstrating materials improvements, ballistic Majorana nanowire devices are realized.","topological states; topological quantum computation; topological superconductivity; Majorana; semiconductor nanowire; InSb","en","doctoral thesis","","978-90-8593-313-7","","","","Casimir PhD Series, Delft-Leiden 2017-29","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:e2f5a2d2-7e79-4049-9031-6924d7ec0f22","http://resolver.tudelft.nl/uuid:e2f5a2d2-7e79-4049-9031-6924d7ec0f22","High resolution resist-free lithography in the SEM","Hari, S. (TU Delft ImPhys/Charged Particle Optics)","Kruit, P. (promotor); Hagen, C.W. (copromotor); Delft University of Technology (degree granting institution)","2017","Focussed Electron Beam Induced Processing (FEBIP) is a high resolution direct-write nanopatterning technique. Its ability to fabricate sub-10 nm structures, together with its versatility and ease of use, in that it is resist-free and implementable inside a Scanning Electron Microscope (SEM), makes it attractive for a variety of applications in nanofabrication. FEBIP comprises two complementary techniques: Electron Beam Induced Deposition (EBID) and Electron Beam Induced Etching (EBIE). In EBID (EBIE), the electron beam is scanned in the presence of a precursor gas that has been let into the chamber of the SEM. The precursor molecules adsorbed onto the sample surface are dissociated by the electron beam, as well as by secondary and backscattered electrons that are generated at the surface by the interaction of the electron beam with the sample. The nonvolatile dissociation product forms a deposit (etch) on the surface, while the volatile products are pumped out. A pattern can thus be deposited (etched) by merely scanning the beam in the presence of the precursor. As the secondary electrons are lower in energy (< 50 eV), they contribute more significantly to the dissociation than the higher energy backscattered or primary electrons. At the outset, therefore, the resolution in EBID is limited by the emission radius of the secondary electrons, which can be as low as a few nanometres. The fabrication of lines as narrow as 3 nm on bulk silicon attests to the high resolution patterning capability of EBID, which in turn makes it potentially attractive for lithography.
The development of a laboratory nanofabrication technique into a viable alternative for lithography, however, requires several criteria to be met.","Lithography; Nanofabrication; Electron microscopy; high resolution; Electron beams","en","doctoral thesis","","978-94-6299-752-3","","","","","","","","","ImPhys/Charged Particle Optics","","",""
"uuid:07dc4081-2436-4b6b-8f6d-b36d6a588047","http://resolver.tudelft.nl/uuid:07dc4081-2436-4b6b-8f6d-b36d6a588047","Modelling of pile load tests in granular soils: Loading rate effects","Nguyen, T.C. (TU Delft Geo-engineering)","van Tol, A.F. (promotor); Holscher, P. (copromotor); Delft University of Technology (degree granting institution)","2017","People have used pile foundations throughout history to support structures by transferring
loads to deeper and stronger soil layers. One of the most important questions during the design of a pile foundation is the bearing capacity of the pile. The most reliable method for determining the bearing capacity is to use results from pile load tests. Traditionally, static pile load tests have been used and, more recently, dynamic tests. The rapid load test, at intermediate loading rates, was invented to overcome the disadvantages of the static tests (expensive and time-consuming) and the dynamic tests (stress wave effects) and is increasingly conducted in practice.","rapid pile load test; granular soil; excess pore pressure; rate effect; excess pore pressure effect","en","doctoral thesis","","978-94-6186-858-9","","","","","","","","","Geo-engineering","","",""
"uuid:d99fcde3-4b01-4077-9a1f-a8b6eda6a5e7","http://resolver.tudelft.nl/uuid:d99fcde3-4b01-4077-9a1f-a8b6eda6a5e7","Strategies and genetic tools for engineering free-energy conservation in yeast","Mans, R. (TU Delft BT/Industriele Microbiologie)","van Maris, A.J.A. (promotor); Pronk, J.T. (promotor); Delft University of Technology (degree granting institution)","2017","Microbial production of fuels and chemicals provides opportunities for replacing conventional production processes, which are based on chemical synthesis from non-renewable raw materials or on labour- and capital-intensive extraction from animal or plant tissues. Aeons of evolution, in which astronomical numbers of microorganisms competed for scarce resources, have optimized and streamlined the thousands of biochemical conversions in their cells for growth in specific natural environments. The resulting metabolic diversity, represented by many millions of microbial species, offers a great potential for developing novel microbial conversions of renewable substrates to products. Major advances in (recombinant) DNA technology have enabled the engineering of several microorganisms into efficient production platforms, which can be further modified for the production of a wide range of fuels and chemicals. For high-volume products based on microbial fermentation, such as transport fuels and commodity chemicals, the cost of the substrate can comprise up to 70% of the total product costs. These high substrate costs make a high product yield on the substrate essential for economic viability.","","en","doctoral thesis","","","","","","","","2018-01-01","","","BT/Industriele Microbiologie","","",""
"uuid:3ba81eb3-3278-411b-abc2-effdbb119991","http://resolver.tudelft.nl/uuid:3ba81eb3-3278-411b-abc2-effdbb119991","Time-resolved Imaging of Secondary Gamma Ray Emissions for in vivo Monitoring of Proton Therapy: Methodological and Experimental Feasibility Studies","Cambraia Lopes Ferreira da Silva, P.","van der Graaf, H. (promotor); Parodi, K (promotor); Schaart, D.R. (copromotor); Vieira Crespo, P.A. (copromotor); Delft University of Technology (degree granting institution)","2017","Particle therapy (PT), including proton therapy, has important advantages compared to external beam photon therapy (section 1.1). This is because most of the therapeutic effect of a proton beam is localized at the endpoint, where most of its energy is imparted to the medium (Bragg peak), with nearly no dose deposited beyond that point. However, the highly localized dose deposition makes proton therapy more sensitive to (1) patient morphological alterations, including tumor progression / regression, (2) organ motion, (3) patient setup errors, (4) tissue lateral heterogeneities that render the results obtained with non-Monte-Carlo-based treatment planning algorithms unreliable to some degree, (5) beam characteristics utilized for treatment planning, and (6) the conversion of Hounsfield units (computed tomography data), to tissue density and stoichiometry. In addition, uncertainties in the mean excitation potential I, necessary to calculate the stopping power of the penetrating ions, further contribute to potential beam range inaccuracies. Given the aforementioned sources of treatment error, an imaging technique capable of providing feedback proportional to the quality of the treatment being delivered is highly desired and a very active field of research in proton therapy (section 1.2). 
Specifically, it is of utmost importance to develop an imaging technique capable of providing feedback with respect to the in vivo beam range, especially when highly heterogeneous beam paths are crossed by a pencil beam. Such an imaging technique can make use of secondary gamma (γ) radiation emitted by the patient, as a result of nuclear interactions between the projectiles and the nuclei of the irradiated medium. These techniques are mainly divided into two categories, according to the type of secondary γ rays probed: (1) positron emission tomography (PET), which makes use of delayed emission, namely pairs of 511 keV annihilation photons, resulting from β+-decay; and (2) prompt gamma (PG) imaging, which makes use of the emission of single photons typically on a sub-nanosecond timescale…","","en","doctoral thesis","","978-94-6186-850-3","","","","","","","","","","","",""
"uuid:aae32bcc-a6cb-49fc-878b-d94d2d77d906","http://resolver.tudelft.nl/uuid:aae32bcc-a6cb-49fc-878b-d94d2d77d906","Ambient-Energy Powered Multi-Hop Internet of Things","Rao, V.S. (TU Delft Embedded Systems)","Niemegeers, I.G.M.M. (promotor); Venkatesha Prasad, Ranga Rao (copromotor); Delft University of Technology (degree granting institution)","2017","The Internet of Things (IoT) is one of the disruptive technologies in today’s connected world. The idea is to connect every thing to the Internet. IoT holds the key to many current and future technologies that will significantly influence the quality and sustainability of life. The vision of IoT is to enable large-scale monitoring and/or control in order to either observe a phenomenon or to automate tasks. Many novel IoT applications are fueling an exponential growth in the deployment of embedded devices. These devices, equipped with sensors and/or actuators with wireless communication capabilities, are central in realizing the IoT infrastructure. These devices must have a small form factor in order to be portable, deployable and economical. Therefore, these devices are resource-constrained with respect to the available power, computing and memory. In order to reduce the power consumed by communications, a multi-hop approach is adopted, leading to wireless sensor networks (WSNs).","Energy Harvesting; Internet of Things (IoT); wireless sensor network (WSN); Constructive Interference","en","doctoral thesis","","978-94-6186-856-5","","","","","","2019-07-31","","","Embedded Systems","","",""
"uuid:c6a426c2-dd7f-4738-9143-fd024d887239","http://resolver.tudelft.nl/uuid:c6a426c2-dd7f-4738-9143-fd024d887239","Nanoscale Electrostatic Control of Superconducting Oxide Interfaces","Mulazimoglu, E. (TU Delft QN/Caviglia Lab)","Vandersypen, L.M.K. (promotor); Caviglia, A. (promotor); Delft University of Technology (degree granting institution)","2017","","Complex Oxide Heterostructures; two-dimensional electron system (2DES); LaAlO3/SrTiO3; gate tunable superconductivity; nanoscale top gating; superconducting quantum interference devices (SQUIDs); superconducting quantum point contact (SQPC)","en","doctoral thesis","","978-90-8593-317-5","","","","","","2018-10-13","","","QN/Caviglia Lab","","",""
"uuid:ebcc41b7-3574-4cd7-84ba-0ef1d4fe6e3b","http://resolver.tudelft.nl/uuid:ebcc41b7-3574-4cd7-84ba-0ef1d4fe6e3b","Strategic design of multi-actor nascent energy and industrial infrastructure networks under uncertainty","Melese, Y.G. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Stikkelman, R.M. (copromotor); Delft University of Technology (degree granting institution)","2017","This thesis focuses on the design of nascent energy and industrial infrastructure networks: networks that still needed to be built and for which neither scope, size, nor participants were certain. It develops systematic design analysis approaches to help improve design under uncertainty by means of flexibility. There are four parts to the thesis. The first part focuses on understanding the concept of flexible design and its application to the design of engineering systems and energy infrastructure networks. The second part focuses on flexibility analysis with the objective of improving the networks' lifetime performance in the face of uncertain design requirements. A systematic engineering design approach combining graph-theory network modelling, exploratory modelling and real options is proposed to explore candidate designs, identify valuable flexibility enablers and appreciate the value of flexible design strategies. The third part considers the role of risk sharing when actors co-invest in infrastructure networks in an uncertain environment. Contractual arrangements between actors are modelled as a cooperative game and the effects of uncertainty are analysed. The fourth part focuses on how private and public actors may enhance desired performances when developing new energy and industrial infrastructure networks under uncertainty.","Energy and industrial infrastructure networks; Design under uncertainty; Design flexibility; Risk sharing; Cooperative game theory; Real options","en","doctoral thesis","","978-94-6361-002-5","","","","","","","","","Energie and Industrie","","",""
"uuid:dbaf67cc-598c-4b26-b07f-5d781722ebfd","http://resolver.tudelft.nl/uuid:dbaf67cc-598c-4b26-b07f-5d781722ebfd","Safe Online Robust Exploration for Reinforcement Learning Control of Unmanned Aerial Vehicles","Mannucci, T. (TU Delft Control & Simulation)","Mulder, Max (promotor); van Kampen, E. (copromotor); Delft University of Technology (degree granting institution)","2017","","Unmanned Aerial Vehicles; Reinforcement Learning; Safe Exploration; Hierarchical Reinforcement Learning","en","doctoral thesis","","978-94-028-0762-2","","","","","","2017-10-13","","","Control & Simulation","","",""
"uuid:4d4e8b34-674b-4c8b-b314-b277ba374d5e","http://resolver.tudelft.nl/uuid:4d4e8b34-674b-4c8b-b314-b277ba374d5e","Thermal Comfort in Sun Spaces: To what extend can energy collectors and seasonal energy storages provide thermal comfort in sun spaces?","Wiegel, C. (TU Delft Design of Constrution)","Knaack, U. (promotor); Auer, Thomas (promotor); Klein, T. (copromotor); Delft University of Technology (degree granting institution)","2017","Preparation for fossil fuel substitution in the building sector persists as an essential subject in architectural engineering. Since the building sector remains one of the three major global end-energy consumers, climate change is closely related to construction and design.
The archetype sun space has developed into what it is today: a simple but effective, predominantly naturally ventilated sun trap as well as an enlargement of the living space. With the invention of industrial glass, orangeries increasingly changed from frost-protecting envelopes into living spaces from which we now expect a high quality of thermal comfort.
But what level of thermal comfort do sun spaces provide? And to what extent can sun spaces operate autarkically, profiting from passive solar gains and, beyond that, generating surplus energy for the energy-neutral conditioning of adjoining spaces? This thesis delivers detailed information to close this knowledge gap.
Thermal comfort in sun spaces is known to be limited in winter.
This motivates the inspection of manifold collector technologies that can be embedded in facades and specifically in sun space envelopes. Nonetheless, even effective façade-integrated collectors are ineffective in seasons with poor irradiation. Hence, the mismatch of supply and demand experienced with renewable energies prompts thinking about appropriate seasonal energy storage, which enlarges the research scope of this work. This PhD thesis project investigates both a year-long empirical test set-up analysis and a virtual simulation of differently oriented and located sun spaces across Germany. Both empirical and theoretical evaluation result in a holistic study focusing on the preferred occupation time in terms of cumulative frequencies of operative temperature and local discomfort, on potential autarkic sun space operation, and on prospective surplus exergy for alternative heating of adjoining buildings. The results are mapped geographically for Germany.
Fossil fuel substitution, as far as this thesis elaborated, is closely related to the quality of thermal comfort, the sun space orientation, and the energetic standard of the adjoining building. Unexpectedly, sun spaces whose envelopes incorporate collectors in combination with storage technologies both profit and suffer to some extent with respect to thermal comfort.
Essentially, we can conclude that the more area-efficient the collector technology and the more integrally it is incorporated into the façade design, the more pronounced the gains in thermal comfort quality and fossil fuel substitution are.
Ultimately, this dissertation determines the potential of a new generation of sun spaces in the context of the energy transition.","thermal comfort; sun space; thermal behaviour","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-81-7","","","","A+BE | Architecture and the Built Environment No 11 (2017)","","","","","Design of Constrution","","",""
"uuid:1d20cb03-ae0d-4ff7-9e41-7a8a9f5e6be4","http://resolver.tudelft.nl/uuid:1d20cb03-ae0d-4ff7-9e41-7a8a9f5e6be4","Fatigue strength of repaired cracks in welded connections made of very high strength steels","Akyel, A. (TU Delft Steel & Composite Structures)","Bijlaard, F.S.K. (promotor); Kolstein, M.H. (copromotor); Delft University of Technology (degree granting institution)","2017","For cyclically loaded structures, fatigue design becomes one of the important design criteria. The state of the art shows that, with modification of the conventional structural design methodology, the use of very high strength steels may have a positive effect on the fatigue strength of welded connections. However, little is known about the repair of fatigue cracks in welded connections made of very high strength steels. In this study, the fatigue strength of the repaired base material and repaired fatigue-damaged V-shape welded connections made of very high strength steels is investigated. This thesis consists of four parts. In Part I, the emphasis was put on the effects of material imperfections on the fatigue strength of materials; this part consists of an extensive literature study and a microscopic examination of the fracture surfaces of the base material of very high strength steels. Part II presents a literature survey on fatigue crack repair methods and the results of the experimental programme of the current study. The experimental programme comprises fatigue tests on the repaired base material and repaired V-shape welded connections made of very high strength steels. The V-shape welded specimens were made of rolled and cast steel plates. In Part III, fatigue strength prediction models were evaluated and a comparison was made between the fatigue strength curves from the prediction models and the fatigue strength curves of the test results. In the last part, conclusions and recommendations from the study are presented.
The analysis of test results revealed that the fatigue strength of fatigue damaged V-shape welded connections made of very high strength steels can be recovered by an appropriate repair procedure.","Very high strength steel; Repaired welded connections; Cast steels; Fatigue; Repair welding","en","doctoral thesis","","978-94-91909-47-4","","","","","","","","","Steel & Composite Structures","","",""
"uuid:8fcebd18-5bd0-4b81-9358-147d7963d1c6","http://resolver.tudelft.nl/uuid:8fcebd18-5bd0-4b81-9358-147d7963d1c6","Narrative perspectives on the development of coastal pilot projects","Bontje, L.E. (TU Delft Policy Analysis)","Thissen, W.A.H. (promotor); Slinger, J (copromotor); Delft University of Technology (degree granting institution)","2017","Pilot projects are favoured instruments for exploring and perhaps realising policy change. The challenges that coastal policy faces, the frequency with which pilot projects are implemented, and the criticisms regarding their efficacy make it interesting and relevant to study pilot projects in this policy field. This dissertation on ‘Narrative perspectives on the development of coastal pilot projects’ aims to deepen understanding of the development of pilot projects in their actor-networks. It utilises the concept of narratives both in the conceptualisation of the development of pilot projects and in the design of a research strategy, choosing to learn from the experiences of actors involved in coastal pilot projects.
The Sand Engine in the Netherlands and Ystad’s sand nourishment project in Scania, Sweden, are the two coastal pilot projects from which empirical data are drawn. Retrospective biographies of the Sand Engine pilot project, and the narrative competitions active in both the Sand Engine and Ystad cases, were identified using deductive and inductive narrative analysis methods. This thesis highlights that pilot projects function not only as learning instruments for understanding the (bio)physical system, but also as instruments where actor-based learning is storified and success can be claimed and institutionally anchored.","","en","doctoral thesis","","978-90-827579-0-3","","","","","","","","","Policy Analysis","","",""
"uuid:fea96fa7-e720-443a-938a-c4a9c1483bd6","http://resolver.tudelft.nl/uuid:fea96fa7-e720-443a-938a-c4a9c1483bd6","Multigrid Method for the Coupled Free Fluid Flow and Porous Media System","Luo, P. (TU Delft Numerical Analysis)","Oosterlee, C.W. (promotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6186-852-7","","","","","","","","","Numerical Analysis","","",""
"uuid:f2b59f85-1447-4d87-9220-b562f279778c","http://resolver.tudelft.nl/uuid:f2b59f85-1447-4d87-9220-b562f279778c","On the ecology of dissimilatory nitrate reduction to ammonium","van den Berg, E.M. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Kleerebezem, R. (copromotor); Delft University of Technology (degree granting institution)","2017","The anthropogenic nitrogen inputs in the environment exceed the input by natural processes and impact the global nitrogen cycle considerably. Human meddling in the N-cycle occurs mainly in agricultural ecosystems. Loss of nitrogen from agricultural soils, other than crop harvest, can have polluting effects on other environments. The three main processes through which the losses occur are ammonia volatilization, the production of gaseous nitrogen compounds and leaching of nitrate, contributing to acid rain, ozone depletion and eutrophication, respectively. To reduce N-pollution and improve mitigation strategies, we need to expand our understanding of the metabolic and environmental controls of the nitrogen cycle processes. This thesis focuses on the microbial competition for nitrate between two dissimilatory nitrate reduction processes in the nitrogen cycle, as the different end-products entail important biogeochemical consequences for nitrogen retention in aquatic ecosystems such as wastewater treatment plants, as well as for the successful operation of wastewater treatment systems. Nitrate can be reduced to nitrogen gas in the denitrification process, removing the nitrogen from the environment, which is desired for alleviation of eutrophication or treatment of wastewater.
Alternatively, in the process of dissimilatory nitrate reduction to ammonium (DNRA), ammonium is the end product, and the nitrogen is conserved in the environment, which can be beneficial in fertilizer management.","chemostat; enrichment; dissimilatory nitrate reduction; DNRA; denitrification","en","doctoral thesis","","9789462957213","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:e25ff43d-b8ae-4b6c-9bc9-d10768c4ab11","http://resolver.tudelft.nl/uuid:e25ff43d-b8ae-4b6c-9bc9-d10768c4ab11","Imaging systems in the Delft Multi-Beam Scanning Electron Microscope 1","Ren, Y. (TU Delft ImPhys/Charged Particle Optics)","Kruit, P. (promotor); Delft University of Technology (degree granting institution)","2017","The goal of this Ph.D. research is to develop imaging systems for the multiple beam scanning electron microscope (MBSEM) built in Delft University of Technology. This thesis includes two imaging systems, transmission electron (TE) imaging system, and secondary electron (SE) imaging system. The major conclusions, key results and some suggestions for future improvements are highlighted in this chapter.","Multi-beam SEM (MBSEM); Transmission electron imaging; Secondary electron imaging","en","doctoral thesis","","9789462957114","","","","","","","","","ImPhys/Charged Particle Optics","","",""
"uuid:c7399cb8-1f40-44a4-afff-933cfeaa3048","http://resolver.tudelft.nl/uuid:c7399cb8-1f40-44a4-afff-933cfeaa3048","Low head hydropower for local energy solutions","Narrain, P.A.G. (TU Delft Environmental Fluid Mechanics)","Mynett, A.E. (promotor); Wright, N.G. (promotor); Delft University of Technology (degree granting institution)","2017","The role of small hydropower is becoming increasingly important on a global level. Increasing energy demand and environmental awareness have further triggered research and development into sustainable low-cost technologies. In developing countries, particularly in rural areas, local power generation could considerably improve living conditions. With this in mind, the development of next-generation low-head hydropower machines was the subject of investigation in the EU project HYLOW. As part of the research lines of that project, this thesis presents a numerical modelling approach to improve the design of machines like water wheels for increased hydraulic efficiency. Nowadays, Computational Fluid Dynamics (CFD) enables numerical models to be quite accurate and to incorporate physical complexities like free surfaces and rotating machines. The results of the CFD simulations carried out in this research show that a change in blade geometry can result in higher torque levels, thereby increasing performance. Numerical simulations also made it possible to determine the optimal wheel-width to channel-width ratio and to further improve performance by modifying the channel bed conditions upstream and downstream of the water wheel.
With a power rating in the low kilowatt range, low-head hydropower machines like optimised water wheels seem to have a clear potential for small-scale energy generation, thereby contributing to achieving the Sustainable Development Goals by providing local energy solutions.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-8153-9612-3","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Environmental Fluid Mechanics","","",""
"uuid:7b757b94-c395-424f-8907-e797f5153c04","http://resolver.tudelft.nl/uuid:7b757b94-c395-424f-8907-e797f5153c04","Process optimization for polyhydroxyalkanoate (PHA) production from waste via microbial enrichment cultures","Korkakaki, E. (TU Delft BT/Environmental Biotechnology)","van Loosdrecht, Mark C.M. (promotor); Kleerebezem, R. (copromotor); Delft University of Technology (degree granting institution)","2017","Polyhydroxyalkanoates (PHA) are compounds naturally produced by microorganisms, with many industrial applications, either as bioplastics or as precursors for the production of chemicals. Until now, industrial PHA production has been conducted with pure strains of bacteria fed with well-defined feedstocks, making the overall process economically unfeasible. In recent decades, research on PHA has been devoted to producing them in open enrichment cultures using wastewater as substrate and making the process continuous, decreasing the production costs. Laboratory research with well-defined VFA-based substrates enables high accumulation of PHA, up to 90 wt% of the total biomass. After demonstrating the potential for PHA production via enrichment cultures, efforts were devoted to applying this research. Several waste streams and operational conditions were used to test PHA production on pilot scale and, possibly at short notice, on industrial scale. Until now, it has been shown that a high cellular PHA content can also be achieved when using fermented industrial wastewater (e.g. paper mill, food).
The objective of this thesis was to tackle problems associated with PHA production when operating the process using wastewater, and to make the strategy universally applicable.
In the first chapter, general information about PHA (process- and material-based) is given.
In the second chapter, leachate from the source-separated organic fraction of municipal solid waste (OFMSW) was evaluated as a substrate for polyhydroxyalkanoate (PHA) production. Initially, biomass enrichment was conducted directly on leachate in a feast-famine regime. Maximization of the cellular PHA content of the enriched biomass yielded a low PHA content (29 wt%), suggesting that the selection for PHA-producers was unsuccessful. When the substrate for the enrichment was switched to a synthetic volatile fatty acid (VFA) mixture (resembling the VFA carbon composition of the leachate), the PHA-producers gained the competitive advantage and dominated. Subsequent accumulation with leachate in nutrient-excess conditions resulted in a maximum PHA content of 78 wt%. Based on the experimental results, enriching a PHA-producing community on a “clean” VFA stream, and then accumulating PHA from a stream that does not allow for enrichment but does enable a high cellular PHA content, contributes to the economic feasibility of the process. The estimated overall PHA yield on substrate can be increased four-fold, in comparison to direct use of the complex matrix for both enrichment and accumulation.
The success of enriching PHA-producers in a feast-famine regime strongly depends on the substrate utilized. A distinction can be made between substrates that select for PHA-producers (e.g. volatile fatty acids) and substrates that select for growing organisms (e.g. methanol). In the third chapter, the feasibility of using such a mixed substrate for PHA production was evaluated. A sedimentation step was introduced in the cycle after acetate depletion, and the supernatant containing methanol was discharged. This process configuration increased the maximum PHA storage capacity of the biomass from 48 wt% to 70 wt%. A model based on the experimental results indicated that the length of the pre-settling period and the supernatant volume that is discharged play a significant role in the elimination of the side population. The difference in the kinetic properties of the two populations determines the success of the proposed strategy.
Double-limitation systems have been shown to induce polyhydroxyalkanoate (PHA) production in chemostat studies limited in e.g. carbon and phosphate. In the fourth chapter, the impact of double limitation on the enrichment of a PHA-producing community was studied in a sequencing batch process. Enrichments at different C/P concentration ratios in the influent were established, and the effect on the PHA production capacity and the enrichment community structure was investigated. Experimental results demonstrated that when a double limitation is imposed at a C/P ratio in the influent of around 150 (C-mol/mol), the P-content of the biomass and the specific substrate uptake rates decreased. Nonetheless, the PHA storage capacity remained high (with a maximum of 84 wt%). At a C/P ratio of 300, competition in the microbial community is based on phosphate uptake, and the PHA production capacity is lost. Biomass-specific substrate uptake rates are a linear function of the cellular P-content, offering advantages for scaling up the PHA production process due to lower oxygen requirements.
In the fifth chapter, PHA-accumulating microbial enrichment cultures were established in an anaerobic/aerobic sequencing batch reactor (SBR) with glucose as sole substrate. The effect of different solid retention times (SRT; 2 and 4 days) on PHA accumulation was investigated. The experimental results revealed that at both SRT conditions, glucose was first stored anaerobically as glycogen, with energy generation from lactate fermentation. Subsequently, lactate and glycogen were fermented to acetate and propionate in the anaerobic phase. At 2 d SRT operation, the fermentation products were rapidly sequestered during the aerobic phase by aerobic PHA-accumulating microorganisms. When (limiting) nutrients were applied under aerobic conditions, PHA formation occurred under anaerobic conditions. At the longer SRT of 4 days, the fermentation products were already sequestered into PHA in the anaerobic phase by glycogen-accumulating organisms (GAO). In all systems the glucose uptake rate was very fast (-2.7 C-mol/C-mol/h), making it the primary competition factor. Under the conditions tested, direct conversion of glucose to PHA was not possible.
In the sixth chapter, some recommendations and remaining open questions are addressed. As suggested, the process could be improved by using a continuous system that would include a settling tank for the removal of carbon that is slowly consumed and leads to growth of “inert” biomass. The possibility of operating the process at lower oxygen concentrations, or completely anoxically, is also discussed.
Most of our knowledge on CRISPR immunity has come from conventional biochemical techniques that average over the population and thereby mask the underlying molecular dynamics. In this thesis, we adopt single-molecule fluorescence techniques that allow for real-time visualization of the molecular dynamics that underlie CRISPR immunity. Our single-molecule approach reveals that the various stages of CRISPR immunity in E. coli are highly dynamic and tightly coupled, leading to a robust immune response. The results presented in this thesis contribute to a new level of understanding of the molecular mechanisms behind CRISPR immunity and may aid in the development of CRISPR-based tools for engineering biology.
2. It possibly increases workability by pointing out windows of opportunity in conditions that were considered unworkable from a statistical point of view.
The chosen approach to obtain the mentioned deterministic prediction of waves and induced motion response is to use the ship’s navigation radar as a remote wave sensor. The spatial domain that can be covered by a navigation radar to observe the sea surface is of course limited: both its minimum and maximum range are limited. Besides, it is obvious that the wave observation will only be available in the past, and by no means in the future. Therefore, the first chapter answers the theoretical question of where in the spatio-temporal domain waves can be accurately predicted, given a perfect spatio-temporal observation of the waves. An indicator is proposed that specifies predictability in space and time based on the spatio-temporal observation and on a given wave condition. It is confirmed that the group velocity of the waves is governing in this question. In the remaining chapters, two different approaches are proposed and investigated to solve a linear wave representation based on input from synthesized radar images of sea waves. Finally, the methods are applied to real radar data acquired during a sea trial. Based on the solution of the linear wave field, ship motions were predicted using pre-computed linear motion transfer functions. Correlation coefficients up to 0.86 were obtained for the heave motion predicted 60 seconds in advance.","wave prediction; decision support; wave radar; deterministic ship motion prediction","en","doctoral thesis","","978-94-6233-754-1","","","","","","2022-10-06","","","Ship Hydromechanics and Structures","","",""
"uuid:9b3f9bb6-ef1b-41ed-803a-7e7976784b85","http://resolver.tudelft.nl/uuid:9b3f9bb6-ef1b-41ed-803a-7e7976784b85","Visualizing Rules, Regulations, and Procedures in Ecological Information Systems","Comans, J. (TU Delft Control & Simulation)","Mulder, Max (promotor); van Paassen, M.M. (copromotor); Delft University of Technology (degree granting institution)","2017","Increasing automation in aviation has played a key role in the rapid development of the aviation industry in the last decades. Without any doubt, it has vastly increased safety and efficiency. Automation will continue to play an important role in the future. Consensus exists that this does not necessarily mean only increasing the level of automation; it also means developing automation in such a way that the human operator keeps a central role in the system.","EID","en","doctoral thesis","","978-94-028-0751-6","","","","","","","","","Control & Simulation","","",""
"uuid:4c1fca32-534f-464b-9035-a6a622ca1679","http://resolver.tudelft.nl/uuid:4c1fca32-534f-464b-9035-a6a622ca1679","Probing Li-ion transport in Sulfide-based solid-state batteries","Yu, C. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Brück, E.H. (promotor); Wagemaker, M. (copromotor); Delft University of Technology (degree granting institution)","2017","The presented thesis aims at understanding the relationship between the synthesis procedure of argyrodite Li6PS5X (X=Br, Cl) solid electrolytes and the resulting structure, morphology, and solid-state battery performance. Specifically, argyrodite solid electrolytes in combination with Li2S positive electrodes were investigated using X-ray and neutron diffraction, impedance spectroscopy and solid state NMR.","","en","doctoral thesis","","978-94-6186-853-4","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:46a4329f-5d76-41ca-aecd-71c855908bde","http://resolver.tudelft.nl/uuid:46a4329f-5d76-41ca-aecd-71c855908bde","Empirical Modelling of Inter-organizational Knowledge Collaboration","Haghighi Talab, A. (TU Delft Economics of Technology and Innovation)","van Beers, Cees (promotor); Scholten, V.E. (copromotor); Delft University of Technology (degree granting institution)","2017","Open innovation, knowledge co-creation, and research joint ventures, unified under the term 'inter-organizational knowledge collaboration', are discussed in various fields of innovation management to ultimately shape the innovation strategy of organizations and innovation policy.
Several ongoing debates are crucial in the allocation of resources and division of labor with regard to the innovation system: industries vs. universities, who are the salient actors of the innovation system? Death of distance vs. geographical boundedness, does distance matter? Network cohesion vs. structural holes, which part of the network is more fertile for innovation?
This book, discussing these debates, intends to inform innovation strategy and policy.
Based on principal component analysis and saturation index calculations, this thesis highlights that the predominant mechanisms controlling the fluoride enrichment in the groundwater probably include calcite precipitation and Na/Ca exchange processes, both of which deplete Ca from the groundwater and promote dissolution of fluorite. The mechanisms also include F-/OH- anion exchange as well as evapotranspiration, which concentrate fluoride ions, hence increasing their concentration in the groundwater. Spatial mapping showed that the high-fluoride groundwaters occur predominantly in the Saboba, Cheriponi and Yendi districts in the Northern region of Ghana.
The thesis further highlights that modifying the surface properties of locally available materials by aluminium coating is a very promising approach to develop a novel fluoride adsorbent. Aluminium oxide coated media reduced fluoride in water from 5.0 ± 0.2 mg/L to ≤ 1.5 mg/L (WHO guideline), in both batch and continuous flow column experiments in the laboratory. Kinetic and isotherm adsorption modelling, thermodynamic calculations, as well as Fourier Transform Infrared and Raman spectroscopy studies, suggest that the mechanism of fluoride adsorption onto aluminium oxide coated media involved both physisorption and chemisorption processes.
Field testing in Northern Ghana showed that the adsorbent is also capable of treating fluoride-contaminated groundwater in field conditions. The adsorbent also showed good regeneration potential making it a promising defluoridation adsorbent in practical applications in developing countries.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-8153-9207-1","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Sanitary Engineering","","",""
"uuid:d3f4e239-a40d-4857-8781-160851ea8b51","http://resolver.tudelft.nl/uuid:d3f4e239-a40d-4857-8781-160851ea8b51","Exploring next-generation scintillation materials","Awater, R.H.P. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Dorenbos, P. (promotor); Delft University of Technology (degree granting institution)","2017","Scintillation materials convert high-energy radiation into many visible photons and, in combination with a photodetector, are used as ionizing radiation detectors. Since the discovery of ionizing radiation, there have been intensive research efforts in finding new, better performing scintillators, resulting in the development of a large variety of scintillation materials. Each scintillation material is tailored to a specific application to have the best performance. With ever-increasing material demands set by the various applications of scintillators, the search for even better performing scintillation materials remains an active field. This thesis explores new avenues of scintillation materials research in order to find the next generation of scintillation materials.","Scintillator; CeBr3; VRBE; Bi3+; Bi2+; Pb2+; Tl+","en","doctoral thesis","","978-94-6295-750-3","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:1fcc6ab4-daf5-416d-819a-2a7b0594c369","http://resolver.tudelft.nl/uuid:1fcc6ab4-daf5-416d-819a-2a7b0594c369","GPU-accelerated CFD Simulations for Turbomachinery Design Optimization","Aissa, M.H. (TU Delft Numerical Analysis; von Karman Institute for Fluid Dynamics)","Vuik, Cornelis (promotor); Verstraete, Tom (copromotor); Delft University of Technology (degree granting institution)","2017","Design optimization relies heavily on time-consuming simulations, especially when using gradient-free optimization methods. These methods require a large number of simulations in order to achieve a notable improvement over reference designs, which, being based on accumulated engineering knowledge, are nowadays already quite optimal.
High-Performance Computing (HPC) is essential to reduce the execution time
of the simulations. While parallel programming on the CPU has been established for more than two decades, the use of accelerators, such as the Graphics Processing Unit (GPU), is relatively recent in design optimization. The GPU actually has huge computational power, comparable to a many-core cluster but concentrated in one device. This raw power is not easy to utilize, as entire code parts have to be rewritten in a GPU programming language. Even though high-level standards (e.g. OpenACC) can deliver basic acceleration with low development effort, it is not simple to obtain large speedups with these methods. Low-level programming languages are more efficient, but widely differing speedups are reported, and there is a need for a deep analysis to make the GPU's potential more transparent to scientists, especially non-experts in HPC.
In order to study GPU acceleration for steady CFD simulations, two in-house CFD solvers have been ported to the GPU: one with explicit and the second with implicit time-stepping. After the porting and validation of the GPU solvers, the GPU code optimization led to the identification of a set of key parameters affecting GPU efficiency. At the same time, both methods were compared, resulting in a performance model and a classification of the GPU acceleration of some CFD operations. The purpose is to enable scientists to make an informed decision concerning the GPU porting of their CPU applications by providing an expected GPU speedup.
In addition to the two GPU CFD solvers that are now integrated into the in-
house design optimization software package, this research provided key elements to reduce the ambiguity about the GPU's potential, namely a qualitative analysis and a classification. These tools can help select the best candidate for a breakthrough in CFD acceleration. At the same time, this work identified serious limitations in the preconditioning of linear systems of equations and the limits of today's iterative matrix factorization methods in terms of stability and convergence. There is a need for a paradigm shift toward inherently parallel preconditioners. The developed tools have been used for the optimization of a compressor and a turbine cascade, resulting in a faster optimization process on the GPU.","CFD RANS; GPU acceleration; Turbomachinery application; CUDA","en","doctoral thesis","","978-2-87516-123-9","","","","","","","","","Numerical Analysis","","",""
"uuid:b6017c93-bc47-4ea8-95e9-4b346648567a","http://resolver.tudelft.nl/uuid:b6017c93-bc47-4ea8-95e9-4b346648567a","Socio-spatial change in Lithuania: Depopulation and increasing spatial inequalities","Ubareviciene, Ruta (TU Delft OLD Urban Renewal and Housing)","van Ham, M. (promotor); Delft University of Technology (degree granting institution)","2017","Since the 1990s Lithuania’s geopolitical and economic position has radically shifted from being a relatively affluent and prosperous region in the Soviet Union to a relatively poor country on the periphery of the European Union. The transition period was accompanied by a sharp population decline, which makes Lithuania one of the fastest shrinking countries in the world today. Furthermore, in the socialist period, planning policy focused on decentralisation and sought to limit the growth of the major Lithuanian cities. Now most of the economic growth and demographic potential is concentrated in a few metropolitan regions, particularly in Vilnius. Extreme population decline and uneven spatial development can be seen as a threat to the economic and social stability of Lithuania.
The aim of this thesis is to gain more insight into the recent socio-spatial transformation processes and their consequences in Lithuania. This thesis investigates the main features and drivers of socio-spatial change. It shows why we should be concerned, despite the growing economy and improvements in the standard of living, as Lithuania is facing major challenges related to extreme population decline and increasing socio-spatial inequality. The results of this study provide a better understanding of the development processes and reveal how the Soviet-designed socio-spatial structures adapted to a market economy environment.","Socio-spatial change; Lithuania; Depopulation; Segregation","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-75-6","","","","A+BE | Architecture and the Built Environment No. 9 (2017)","","","","","OLD Urban Renewal and Housing","","",""
"uuid:993e98ca-3c91-4591-9fbf-26bd6eea2354","http://resolver.tudelft.nl/uuid:993e98ca-3c91-4591-9fbf-26bd6eea2354","The interplay between polymerase organization and nucleosome occupancy along DNA: How dynamic roadblocks on the DNA induce the formation of RNA polymerase pelotons","van den Berg, A.A. (TU Delft BN/Martin Depken Lab)","Dekker, N.H. (promotor); Depken, S.M. (copromotor); Delft University of Technology (degree granting institution)","2017","During transcription RNA polymerase (RNAP) moves along a DNA molecule to copy the information on the DNA to an RNA molecule. Many textbook pictures show an RNAP sliding along empty DNA, but in reality it is crowded on the DNA and RNAP competes for space with many proteins such as other RNAPs and histones. Coverage of DNA by histones is essential for DNA protection and signaling. However, during transcription RNAP evicts histones, which then rebind quickly or are replaced by other proteins. How does crowding of RNAP and histones on the DNA affect transcription dynamics on the one hand, and how does transcription activity change the density and exchange of histones along the DNA on the other hand? These are the central questions of this thesis.","Transcription; transcriptional bursts; TASEP; Crowding; Nucleosomes; Bus route model","en","doctoral thesis","","978-90-8593-308-3","","","","Casimir PhD series, Delft-Leiden 2017-24","","","","","BN/Martin Depken Lab","","",""
"uuid:748b66b7-0f95-4978-8ce8-2ebf4bd5ee0b","http://resolver.tudelft.nl/uuid:748b66b7-0f95-4978-8ce8-2ebf4bd5ee0b","Root for rain: Towards understanding land-use change impacts on the water cycle","Wang-Erlandsson, L. (TU Delft Water Resources)","Savenije, Hubert (promotor); Rockström, J. (promotor); Gordon, L.J. (copromotor); Delft University of Technology (degree granting institution)","2017","We live today on a human-dominated planet under unprecedented pressure on both land and water. The water cycle is intrinsically linked to vegetation and land use, and anticipating the consequences of simultaneous changes in land and water systems requires a thorough understanding of their interactions. This thesis aims to advance our knowledge of how land-use change influences the water cycle, i.e., focussing on the role of land-use in mediating water’s journey from land evaporation, to atmospheric moisture, and to precipitation on land.
This thesis first presents the development (Chapter 2) and evaluation (Chapter 3) of the process-based water balance model STEAM (Simple Terrestrial Evaporation to Atmosphere Model). STEAM simulates five different evaporation fluxes, based on land-use representation with only a limited number of parameters. Comparison with independent data shows that STEAM produces realistic evaporative partitioning and hydrological fluxes over different locations, seasons and land-use types.
Chapter 4 investigates the temporal characteristics of partitioned evaporation, and shows that terrestrial residence timescale of transpiration (days to months) is substantially longer than that of interception (hours). The vegetation’s ability to transpire by retaining and accessing soil moisture at great depth is critical for dry season evaporation, and the substantial differences in temporal characteristics between evaporation fluxes can create contrasting moisture recycling patterns.
In response to the importance of root zone storage capacity for transpiration and moisture recycling simulation, Chapter 5 sets out to present an ’earth observation-based’ method for estimating this critical parameter in land surface modelling. By assuming that vegetation does not root deeper than necessary to bridge critical dry periods, satellite-based evaporation was used to derive root zone storage capacity. The new estimate improved evaporation simulation overall, and in particular during the least evaporating months in sub-humid to humid regions with moderate to high seasonality. The results suggest that several forest types are able to create a large storage to buffer for severe droughts, in contrast to e.g., grasslands and croplands.
Based on the new insights, Chapter 6 analyses the effects of land-use change on river flows. In some of the world’s largest basins, precipitation was found to be more influenced by extra-basin than by within-basin land-use change. In fact, in several non-transboundary basins, river flows were considerably influenced by land-use changes in foreign countries, suggesting new transboundary water relationships in international politics.
This thesis addressed different domains in the water cycle to improve our understanding of land-water interactions. Every water flux and stock requires our examination, whether they flow visibly in rivers, travel invisibly in the air, or hide deep in soil and roots. Because of the terrestrial water cycle’s interaction with land, and therefore human activities, we are in an extraordinary position to shape its path and pace.","water resources; moisture recycling; land-use change; land-atmosphere interactions","en","doctoral thesis","","","","","","","","","","","Water Resources","","",""
"uuid:04b4caa3-1d27-4b97-85ff-2036deb70be8","http://resolver.tudelft.nl/uuid:04b4caa3-1d27-4b97-85ff-2036deb70be8","Characterizing Cortical Responses Evoked by Robotic Joint Manipulation after Stroke","Vlaar, M.P. (TU Delft Biomechatronics & Human-Machine Control)","van der Helm, F.C.T. (promotor); Schouten, A.C. (copromotor); Delft University of Technology (degree granting institution)","2017","Cortical damage after a stroke often affects movement control, resulting in impairments such as paresis and synergies. Although some recover, most stroke survivors are left with reduced function of the upper limb, which has a severe impact on their activities of daily living. People who have suffered a stroke demonstrate heterogeneous impairments due to large variability in lesion location and extent; thus, rehabilitation should be tailored to each individual. Design and evaluation of rehabilitation programs requires a thorough understanding of the healthy and impaired sensorimotor system. Impairments to the motor system have been extensively investigated. On the contrary, the sensory aspects of impaired motor control have received less attention. This thesis intends to characterize the relation between somatosensory information from the periphery and the corresponding cortical responses using electroencephalography (EEG).","","en","doctoral thesis","","978-94-028-0750-9","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:ea192937-0abc-49ba-9dff-da93d8c6b278","http://resolver.tudelft.nl/uuid:ea192937-0abc-49ba-9dff-da93d8c6b278","Water supply and demand management in the Galápagos: A case study of Santa Cruz Island","Reyes Perez, M.F. (TU Delft Sanitary Engineering)","Kennedy, M.D. (promotor); Trifunović, Nemanja (copromotor); Delft University of Technology (degree granting institution)","2017","Water resources in tourist islands have been severely threatened, especially in the Galápagos Islands, where the increased local population has generated attractive income from the tourist services. In addition, the data regarding water supply and demand are scarce. This study investigates water supply and demand in Santa Cruz, the most populated island of Galápagos. The research encompasses a thorough assessment of the water supply crisis, as well as the quantification of water demand from different categories (domestic, tourist, restaurants and laundries) through surveys, in the absence of water metering. Also, specific water demand was assessed by installing 18 water meters. The results yield a wide range of water consumption, questioning the current assumption of water scarcity. Furthermore, a prognosis of water supply and demand was carried out, and several intervention strategies were proposed, such as rainwater harvesting, greywater recycling, leakage reduction, water meter installation, water demand reduction, as well as seawater desalination to cope with the future population growth. Due to the fragility of the ecosystem, these strategies were assessed through a Multi-Criteria Decision Analysis, considering environmental, technical, economic and social aspects, as well as relevant stakeholders’ perspectives. Finally, the water supply network of Puerto Ayora was evaluated in order to understand the need for the current intermittent supply regime. A methodology was developed to estimate the overflow of the domestic roof tanks (a common occurrence among the local population).
The results question the practicality of individual household storage. The final results show that the current situation in terms of lack of water quantity may not be real, contrary to what has been thought over the last decades. The water issues relate more to water quality, as well as to the lack of proper water management practices.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-0-8153-7247-9","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Sanitary Engineering","","",""
"uuid:9280a620-e2a7-4280-8d7d-18aeb02b43c7","http://resolver.tudelft.nl/uuid:9280a620-e2a7-4280-8d7d-18aeb02b43c7","Normative Social Applications: User-centered Models for Sharing Location in the Family Life Domain","Kayal, A. (TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); van Riemsdijk, M.B. (copromotor); Brinkman, W.P. (copromotor); Delft University of Technology (degree granting institution)","2017","Social media platforms are used by a massive, growing number of users, who use these platforms to share content such as text, photos, videos, and location information. As the spread of social media is playing an increasingly important role in our world, literature has shown that while aiming to promote a number of human values (e.g. friendship, social recognition, and safety), this type of technology may pose risks to other values (e.g. privacy and independence), creating what has been defined as value tensions. This thesis proposes the norm-based, Social Commitment (SC) models as a solution that could potentially provide tailored support for user values. As research shows that norms can fulfill (or pose risks to) values, SC models could utilize their normative core, as well as their ability to contain key relevant information that complements the missing features in social applications’ preference settings, to give users a rich, flexible, and adaptive structure that improves their social application experience. Location sharing in family life (i.e. within families with children of elementary school age) was selected as an application domain, as it provided potential use cases that are abundant with value tensions (e.g. a child’s safety vs. their independence), while embodying the essential elements of data sharing using social platforms.
The research followed a Situated Cognitive Engineering approach, and an exploratory investigation into the social context of the application domain was conducted: focus groups and cultural probing studies with parents and children, and the collected data was analyzed using grounded theory. The result was a grounded model that showed (1) how activities, concerns, and limitations related to family life are connected through specific user values, and (2) that norms can support these values by promoting activities, alleviating concerns and overcoming limitations. Further on, a conceptual model was built, and subsequently an SC grammar (and a semantic lifecycle) were developed for this domain: the SC model allowed users to construct commitments amongst each other for sharing and receiving social data, harmonized to their values via normative statements. A location-sharing application was developed so that, in addition to location-sharing features found in familiar commercial platforms, it also contained an implementation of our SC grammar. The SC model’s expressivity was validated through a qualitative user study with parents and children, where nearly all participants’ normative statements were found to be expressible through the proposed model. The SC grammar’s usefulness (within the application domain) as well as its ease of use were validated through a crowd-sourced, online user study. The SC model’s ability to provide improved human value support was validated through a user study conducted with elementary school children using the location-sharing application we developed, as well as a questionnaire constructed to measure fulfillment of children’s values relevant to the domain. Results demonstrated that enhancing the app with the SC model improved its support for a number of children’s values while posing no risk to the remaining measured values in the process. In the thesis’s final user study, we demonstrated that contextual information (e.g. a user’s value profile) and commitment attributes (e.g. recency and norm type) can be used to create predictive models that are capable of automatically resolving the vast majority of conflicts that may occur amongst location-sharing commitments. In conclusion, this thesis demonstrates that SC models possess the potential to provide an easy to use, flexible tool that allows social applications to work better in users’ favor.","","en","doctoral thesis","","","","","","SIKS Dissertation Series No. 2017-38. The research reported in this thesis has been carried out under the auspices of SIKS, the Dutch Research School for Information and Knowledge Systems.","","","","","Interactive Intelligence","","",""
"uuid:d28d754d-661a-4e37-962d-438f604240c8","http://resolver.tudelft.nl/uuid:d28d754d-661a-4e37-962d-438f604240c8","Nanostructured lime-based materials for the conservation of calcareous substrates","Borsoi, G. (TU Delft Heritage & Technology)","van Hees, R.P.J. (promotor); Lubelli, B. (copromotor); Delft University of Technology (degree granting institution)","2017","Nanolimes, i.e. dispersions of lime (Ca(OH)2) nanoparticles in alcohol, have been extensively investigated over the last two decades as consolidation products for calcareous substrates.
The use of nanolimes for consolidation of mural paintings arises from the lack of effective and compatible consolidants for this type of substrate; the use of nanolimes was later extended also to limestone and lime-based mortars, as an alternative to silica-precursor consolidants (e.g. tetraethoxysilane - TEOS), which had been shown to have limited effectiveness and compatibility with calcareous substrates.
Nanolime dispersions are characterized by a very small size of the lime particles, which should provide a proper penetration within the porous network of most building materials. In fact, a homogeneous and in-depth penetration of the consolidant is a crucial requirement when dealing with decayed stones and plasters/renders.
The effectiveness of nanolime dispersions reported in literature appears controversial. Some authors observed a proper penetration and moderate consolidating action, whereas others report poor penetration, poor consolidation action and sometimes the formation of a white haze on the treated surface. There is no agreement concerning the factors affecting the transport and deposition of the lime nanoparticles within a porous network, and the causes of the observed drawbacks are not well understood.
Therefore, the main research question is:
Is nanolime a suitable alternative to silica-precursor consolidants (e.g. TEOS) for the consolidation of calcareous substrates?
More specifically, the following research questions can be formulated:
– How and to what extent can the effectiveness and compatibility of nanolime be improved? How can in-depth deposition of nanolime be improved and the appearance of a white haze on the surface be avoided?
– How can nanolime properties be fine-tuned to improve the effectiveness and compatibility of the treatment?
– What is the effect of different application methods on the effectiveness of nanolime consolidation?
This research investigates and elucidates the behaviour of nanolime products for consolidation of calcareous substrates. Based on the developed knowledge, it proposes and validates a methodology (including solvent modification and application protocol) for improving the consolidation effectiveness of nanolime dispersions, making these a suitable alternative to TEOS products.
Firstly, an experimental campaign was carried out in order to understand the penetration and deposition of commercial nanolimes on coarse porous calcareous substrates (Maastricht limestone). The main cause of the poor in-depth deposition of nanolime was identified as the back-transport of the nanoparticles towards the drying surface, a consequence of the high volatility and low kinetic stability of the dispersions.
The modification of the nanolime properties, through the optimization of the solvent, thus appears a feasible strategy to improve the in-depth deposition of the lime nanoparticles. New nanolimes were synthesized and dispersed in a selection of solvents conferring different stability and drying rates to the obtained nanolime dispersions. A conceptual model was conceived, correlating the properties (i.e. drying rate and kinetic stability) of nanolimes dispersed in different solvents to the moisture transport behaviour of the substrates to be treated. The model was experimentally validated on coarse porous (Maastricht) and fine porous (Migné) limestones.
Experimental results confirmed the predictions of the model that nanolimes dispersed in solvents with lower volatility and stability (e.g. water or butanol) achieve a good in-depth deposition within coarse porous networks. On the other hand, solvents with higher volatility that confer higher kinetic stability (e.g. ethanol or isopropanol) on the dispersions should be preferred for substrates with fine porous networks. Fine-tuning the properties of the nanolime dispersion (by modification of the solvent) to the moisture transport behaviour of the substrate is shown to be a successful strategy for improving in-depth deposition of lime nanoparticles.
On the basis of the obtained results, the solvent mixture was further fine-tuned using ethanol-water mixtures. Results proved that ethanol-based nanolime, mixed with a minor amount of water (5%), can provide better in-depth deposition of nanoparticles within coarse porous substrates (e.g. Maastricht limestone), when compared to dispersions in pure ethanol.
The application procedure of nanolime dispersions was also studied and optimized, this step being a crucial aspect for a successful consolidation; nanolimes were applied both by capillary absorption (a method commonly used for laboratory tests) and by nebulization (a method widely used in situ) on a coarse porous limestone and a mortar.
The research showed that results obtained by application through capillary absorption do not always correspond to those obtained by nebulization.
The effectiveness and compatibility of nanolimes with improved properties and a fine-tuned application protocol were finally verified. Fresh and weathered Maastricht limestone, as well as lime-based mortars, were treated. Results showed that nanolime dispersions can guarantee an in-depth consolidation both in laboratory mortar specimens and weathered limestone, with only a moderate alteration of the total porosity and of the moisture transport properties of the investigated substrates.
Therefore, nanolime dispersions, provided that they are properly formulated and applied, can be a suitable and compatible alternative to TEOS for the consolidation of coarse porous substrates.
This dissertation contributes to defining guidelines to support restorers and professionals in the choice and application of nanolime dispersions for consolidation of calcareous substrates.","Nanolime; Consolidation; Calcareous Materials; Limestone; Mortar; Solvent Modification","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6186-847-3","","","","A+BE | Architecture and the Built Environment No. 8 (2017)","","","","","Heritage & Technology","","",""
"uuid:74925f40-1efc-469f-88ee-e871c720047e","http://resolver.tudelft.nl/uuid:74925f40-1efc-469f-88ee-e871c720047e","Aeroelastic Modelling and Design of Aeroelastically Tailored and Morphing Wings","Werter, N.P.M. (TU Delft Aerospace Structures & Computational Mechanics)","Bisagni, C. (promotor); De Breuker, R. (copromotor); Delft University of Technology (degree granting institution)","2017","In order to accommodate the growth in air traffic whilst reducing the impact on the environment, operational efficiency is becoming more and more important in the design of the aircraft of the future. A possible approach to increase the operational efficiency of aircraft wings is the use of aeroelastic tailoring, by taking advantage of the directional stiffness properties of composite materials to control the aeroelastic deformations of the wing in a beneficial way, or of morphing, by actively changing the wing shape in flight to optimise performance across a range of flight conditions.
In order to investigate the benefits of aeroelastic tailoring and morphing, this dissertation presents a dynamic aeroelastic analysis and optimisation framework suitable for the design of aeroelastically tailored and morphing wings. The framework is sufficiently efficient to explore the design space, as well as sufficiently comprehensive to account for all factors relevant in the design of aircraft wings.
In order to illustrate the advantages of the framework, it has been applied to two design studies: the optimisation of a morphing wing designed for a 25 kg UAV and the optimisation of the NASA Common Research Model, a contemporary transonic supercritical wing, resulting in wing designs that take advantage of the aeroelastic response of the wing, ensuring optimal performance at cruise flight conditions, while showing significant improvements at off-cruise conditions.
The forward gravity field modelling method that I improve upon in this dissertation is mostly used for topographic/isostatic mass reduction of gravity data. The methodology is able to transform density models into gravitational potential fields using a spherical harmonic representation. I show that this methodology in its existing form is not suited for density layers in lower crustal and upper mantle regions. The binomial series inherent to this methodology do not converge when applied to deep mass structures, and therefore it is not possible to truncate the series at a low degree to approximate the mass. This approximation is crucial for the computational efficiency of the methodology. I propose a correction that mitigates this erroneous behaviour, which enables this methodology to efficiently compute the potential field of deeply situated masses. I benchmark the improved methodology against a tesseroid-based gravity-field modelling software, and I show that my software is accurate to within ±4 mGal when modelling the Moho density interface (with a range in signal of ±500 mGal). The improved methodology is used in the studies described in this thesis.
With an efficient and accurate forward modelling methodology, I am able to use global gravity field data in studies of the solid Earth. In the central part of Fennoscandia the crust is currently uplifting because of the delayed response of the viscous mantle to the melting of the regional Late Pleistocene ice sheet. This process, called glacial isostatic adjustment (GIA), causes a negative anomaly in the present-day static gravity field, as isostatic equilibrium has not yet been reached. Several studies have used this anomaly as a constraint on models of GIA, but the uncertainty in crustal and upper mantle structures had not been properly taken into account. In revisiting this problem, I show that the GIA gravity signal overlaps with mantle convection signals, such that a simple spherical harmonic truncation is not sufficient to separate these two phenomena. Furthermore, I find that, in contrast to the other studies, the effect of crustal anomalies on the gravity field cannot be effectively removed, because of the relatively large uncertainties in the crustal density models. Therefore, I propose to correct the observed gravity field for GIA with numerical modelling results when constructing geophysical models that assume isostatic equilibrium. I show that correcting for GIA results in a significant vertical readjustment of 5-10 percent in the geometry of structural layers in the modelled crust. Correcting the gravity field for GIA prior to assuming isostatic equilibrium might be relevant in other areas with ongoing post-glacial rebound, such as North America and the polar regions.
Uncertainty in lithospheric density models is still the limiting factor in solid Earth studies and needs to be reduced. Lithospheric density anomalies can, among other methods, be estimated from seismic tomography, gravity studies, or joint studies using both datasets. I compare different gravity-based density models of the lithosphere to a tomography-derived solution and characterise the sources that introduce large uncertainties in the density models of the lithosphere. To study the uncertainty between global and regional crustal models, I select a region where the crust has been extensively explored with seismic profiles, namely the British Isles and surrounding areas, where I use three crustal models to quantify the crustal uncertainty: CRUST1.0, EUCrust-07, and a high-resolution regional P-wave velocity model of the region. The crustal models contribute to the uncertainty of the density of the lithosphere with ±110 kg/m3. Furthermore, I study various P-wave velocity-to-density conversions to quantify the uncertainty introduced by these conversion methods (±10 kg/m3). All the different crustal density models are forward modelled into gravity anomalies using the improved methodology of Chapter 2, and these gravity anomalies are subsequently removed from the gravity observations. The unmodelled long-wavelength signal in the gravity field, representing mass anomalies in the deep mantle, is removed from the observations by spherical harmonic truncation, introducing an uncertainty of ±5 kg/m3. The choice of density background model (±20 kg/m3) and the lithosphere-asthenosphere boundary uncertainty (±30 kg/m3) also have a small but significant effect on the estimated lithosphere densities. However, the inhomogeneous spatial distribution of profiles of controlled-source seismic exploration of the crustal thickness and density distribution proves to be the largest source of uncertainty (±110 kg/m3).
The gravity-based lithospheric density solutions, with a variation of ±100 kg/m3, are completely different in magnitude and spatial signature from the densities (±35 kg/m3) derived from a shear wave velocity model. This demonstrates that the tomographic model has a limited resolution, which can be related to the regularisation used in the construction of global tomographic models. To account for this spectral imbalance, I spatially filter the gravity-based density models, resulting in similar spatial patterns and magnitudes for the gravity-based and tomography-derived densities. With the filtered gravity-based density I am able to estimate laterally varying conversion values between shear wave velocity and density for the lithosphere, which show a correlation with major tectonic regions. This correlation shows that the independent gravity-based solutions, despite being filtered, can help in identifying different compositional domains in the lithosphere.
Satellite observations also provide global data on the temporal variations of the gravity field. In the last study, I show that global gravity-change observations from the GRACE satellite mission can be used to study GIA in the Barents Sea region. The Barents Sea has been subject to ongoing postglacial uplift since the melting of the Weichselian ice sheet that covered this region. The deglaciation history is not well known because there are only data from locations close to the boundary of the former ice sheet, in Franz Joseph Land, Svalbard, and Novaya Zemlya. At these locations the magnitude of the GIA uplift is limited, reducing the signal-to-noise ratio of the data. The GRACE mission measures the gravity-change due to GIA at the center of the Barents Sea, where the maximum uplift and ongoing gravity-change are situated. I show that the linear trend in the gravity-change derived from a decade of observations from the GRACE satellite mission can constrain the volume of the ice sheet after correcting for current ice melt, hydrology and far-field gravitational effects. Regional ice loading models based on new geologically inferred ice margin chronologies show a significantly better fit to the GRACE data than the global ice models ICE-5G and ICE-6G_C. The regional ice models in this study contain less ice mass during the LGM in the Barents Sea than ICE-5G (5-6.3 m equivalent sea level vs. 8.5 m). I also show that the GRACE gravity-change is sensitive to the upper mantle viscosity underneath the Barents Sea, for which I found a minimum value of 4×10^20 Pa·s, regardless of the ice loading history. The GRACE gravity-change should be used as a constraint in any future GIA modelling of the Barents Sea, because it is the only measurement that captures the signal of maximum GIA.
The high-resolution, accurate global gravity field models do give new insights into the structure and density distribution of the upper mantle. The studies presented in this dissertation demonstrate that analysing the spectral signature of gravity data is very useful. Medium-to-short-scale features, like lateral density variation in the lithosphere and the GIA gravity-change in the Barents Sea, can be separated from other gravity-change sources by applying spectral filters. For longer-wavelength signals, such as the static GIA gravity signal in Fennoscandia, this proves to be more difficult due to the overlap in the long-wavelength region with mantle convection signals and other deep mantle signals. On the whole, global gravity field models and their spectral signature play an important part in building a global density model of the Earth, in which lithosphere and GIA, but also mantle convection and core-mantle boundary effects, need to be combined to explain the gravity field.
Urban wastewater systems are defined as the combination of combined sewer systems and wastewater treatment plants (WWTPs). Urban wastewater systems discharge, through combined sewer overflows (CSOs) and WWTP effluent, into the receiving waters. Receiving waters are thus closely linked to urban wastewater systems but are not a part of them.","","en","doctoral thesis","","978-94-6233-707-7","","","","","","","","","Sanitary Engineering","","",""
"uuid:8736e6df-f2fb-48ac-aa28-7d61a214507e","http://resolver.tudelft.nl/uuid:8736e6df-f2fb-48ac-aa28-7d61a214507e","Reduction of uncertainty in stability calculations for slopes under seepage","Liu, K. (TU Delft Geo-engineering)","Hicks, M.A. (promotor); Vardon, P.J. (copromotor); Delft University of Technology (degree granting institution)","2017","br","Data assimilation; ensemble Kalman filter; heterogeneity; inverse analysis; seepage; slope stability; uncertainty reduction; unsaturated soil","en","doctoral thesis","","9789461868442","","","","","","2017-09-22","","","Geo-engineering","","",""
"uuid:e47bfa54-4d58-4c82-829f-3cb2ceb6cfc7","http://resolver.tudelft.nl/uuid:e47bfa54-4d58-4c82-829f-3cb2ceb6cfc7","Exploring the Structure, Properties, and Applications of Highly Ordered Bionanocomposites","Zlopasa, J. (TU Delft BT/Environmental Biotechnology)","Picken, S.J. (promotor); van Breugel, K. (promotor); Koenders, Eduardus (promotor); Delft University of Technology (degree granting institution)","2017","Nature displays a multitude of fascinating materials, from beautiful colors of butterfly wings to the toughness of mullosc shells, which are formed in mild enviornmental conditions with commonly occuring materials, such as chitosan or calcium carbonate. These composite materials display an intricate interplay of biopolymers and minerals forming highly ordered structure. The function of these materials is primarily determined by the selective pressure of the enviornment that certain organisms are placed in. It varies from the “delicate” signaling colors to a robust impact resistence.","","en","doctoral thesis","","978-94-6186-849-7","","","","","","","","","BT/Environmental Biotechnology","","",""
"uuid:82981ce2-e734-440f-91e6-061e568125dc","http://resolver.tudelft.nl/uuid:82981ce2-e734-440f-91e6-061e568125dc","What lies beneath: Bounded manageability in complex underground infrastructure projects","Leijten, M. (TU Delft Organisation & Governance)","de Bruijn, J.A. (promotor); Veeneman, Wijnand (copromotor); Delft University of Technology (degree granting institution)","2017","Complex underground infrastructure construction projects tend to develop in a state of “bounded manageability”. Various types of uncertainties are inherent to these projects and put the project manager in front of serious challenges, risking budget overruns, delays and sometimes even technical failure. Managing such a project is a matter of considering the options to keep ambitions and means in balance, without losing control. In practice this means that project managers, when confronted with these uncertainties in their projects, have to make trade-offs that have strong “double-bind” characteristics. With every advantage come serious downsides. This disables them to make optimal decisions. Moreover, a chosen path pre-defines conditions for later trade-offs that were often not considered explicitly when choosing this path. This research provides a new framework for understanding uncertainty in the management of projects. It maps out the typical manageability dilemmas that evolve in complex underground infrastructure projects and comes with suggestions to improve this manageability.","infrastructure; project management; uncertainty","en","doctoral thesis","","978-94-6233-694-0","","","","","","","","","Organisation & Governance","","",""
"uuid:d518b379-462a-448f-83ef-5ba0e761c578","http://resolver.tudelft.nl/uuid:d518b379-462a-448f-83ef-5ba0e761c578","Synthesis of mechanisms with prescribed elastic load-displacement characteristics","Radaelli, G. (TU Delft Mechatronic Systems Design)","Herder, J.L. (promotor); Delft University of Technology (degree granting institution)","2017","In this dissertation a collection of concepts to synthesise nonlinear springs is presented. Such springs can be useful in various application domains where, e.g., multi-stability or static balancing is desired. These behaviors are often sought to alleviate the effort required for actuation. The explored concepts are presented by showing the design methods, numerical or analytical models, and assessing their viability with experimental evaluations.
In part~I two concepts show how the linear moment characteristic of torsion bars can be reshaped into a nonlinear one. Torsion bars are often suitable energy storage elements because they can be conveniently integrated within the hinge of a mechanism. In both examples the synthesised nonlinear characteristic is determined such that it counteracts the moment of a turning pendulum. The way the characteristic is reshaped is, however, very different in the two concepts. In the first concept multiple springs are employed, activated or deactivated by mechanical stops in order to create a piecewise linear characteristic. In the second concept the characteristic is reshaped by a set of non-circular gears. These gears are arranged in a planetary way to obtain a compact transmission.
In part~II the focus is on planar compliant mechanisms that by virtue of their optimized shape exhibit the desired behavior. A few examples demonstrate that, even with relatively simple topologies, complex characteristics can be synthesised accurately. For example, a single beam clamped at one end and pivoted at the other end is able to match a sinusoidal moment characteristic for a half period. In a second example we were able to produce a constant force with a doubly clamped, optimally shaped beam. The constant force of this minimalistic design can be applied to balance a weight over a range of motion approximately equal to the largest dimension of the design. In another example it is shown that an optimized beam shape can emulate the behavior of zero free-length springs. These springs have ideal properties but are in practice difficult to make. We also show that a meta-material constituted by a lattice of zero free-length springs exhibits very peculiar properties, such as a zero Poisson's ratio, isotropy, and a constant Young's modulus, up to large strains. Obtaining the required spring behaviour at such a small scale would become possible through the use of optimally shaped beam springs.
In the last example of part~II a design consisting of four symmetric beams that move over a straight line of continuous static equilibrium is shown. As an aid to the design process, a representation of the elastokinematic behavior is introduced, based on the potential energy field (PEF). The PEFs characterise the behavior of compliant systems not only instantaneously, but over an area of possible displacement locations of the endpoint of the system.
Part~III of this dissertation is dedicated to compliant shell mechanisms. The design of compliant mechanisms as spatial, thin-walled, and possibly doubly curved structures has some interesting and promising aspects. Because of their inherent nonlinear behavior, for example, they lend themselves well to synthesising nonlinear equilibrium paths. With compliant shell mechanisms it is also possible to conveniently create anisotropic stiffness, such that some motion directions are travelled much more easily than others. This type of effect can be tailored to create a desired kinematic function. In applications such as wearable devices and interactive structures, compliant shell mechanisms can yield slender, lightweight, aesthetically pleasing, and highly functional solutions. In this dissertation some progress is made in this nascent field of research. As a showcase, in the first chapter of this part, a self-balanced shell is designed. The optimized doubly curved shape of this shell is in continuous equilibrium with its own weight over a fairly large range of motion. In the subsequent two chapters, a tailored moment-angle characteristic is realized by optimizing the parameters of a basic origami mechanism. In the last chapter of this part a spiral spring with various cross-sections is analyzed to understand the anisotropic stiffness behaviors that can be achieved. In particular, the out-of-plane spatial behavior is studied. This is done by using the PEFs, for the first time in three dimensions.
In part~IV two application examples are shown. First, a shell mechanism designed to provide a constant force is applied to the tip of a heart ablation catheter. The constant force at the tip of the catheter helps maintain contact with the heart wall while preventing dangerously high forces. The second example shows the concept of a large-scale collapsible wall, consisting of a doubly curved shell that balances its own weight. Such a wall, employed e.g. as a sound barrier, could lie hidden flat when not in use and be lifted upright when needed.
The concepts presented in this dissertation are applied to selected examples. However, they can be applied to synthesise a broader scope of desired characteristics. The ideas can also be generalised by moving from springs to mechanisms, i.e. where input and output have distinct locations. A step even further is to apply distributed actuation, sensing, and control on the deforming bodies, so as to obtain real automata that take advantage of the synthesised elastic behavior. It is also advisable to direct future research into the use of composites as spring material. It can be expected that their high strength, their tailorable anisotropy, and the possibility to deliberately introduce prestress will lead to springs with increased performance and improved control of the behavior. Future research should also be directed towards improving the available design aids, including PEFs, for compliant mechanism designers. Furthermore, it is expected that the developments of this dissertation can be beneficially applied in an increasing number of application areas.","Nonlinear spring synthesis; Compliant shell mechanisms; Static balancing; Potential energy fields","en","doctoral thesis","","978-94-6186-840-4","","","","","","2017-09-15","","","Mechatronic Systems Design","","",""
"uuid:72824e9e-cb4e-4161-bd80-6a665b634739","http://resolver.tudelft.nl/uuid:72824e9e-cb4e-4161-bd80-6a665b634739","Hybrid Organic - Inorganic Polymer Electrolyte Membranes for Low to Medium Temperature Fuel Cells","Cordova Chavez, M.E. (TU Delft OLD ChemE/Organic Materials and Interfaces)","Picken, S.J. (promotor); Kelder, E.M. (copromotor); Delft University of Technology (degree granting institution)","2017","Crude oil, coal and gas are currently the main resources of energy in the world. The World Energy Outlook claimed in 2007 that the major source of energy (about 84%) would still be generated from fossil fuels in 2030. By these projections, the world's fossil fuel reserves will be consumed within a few decades, making it necessary to have a well stablished replacement for fossil fuels to fulfil our energy demands. Furthermore, the environmental impacts of fossil fuels are becoming clearer to scientists and governments. Among the population, environmental awareness is increasing as well, which leads to an increase in the demand for energy that does not harm the environment.
Fuel cells are one of the most promising clean energy technologies and are in clear consideration to replace fossil fuels in the future. They work as electrochemical energy conversion devices, similar to batteries, but do not require recharging, since they depend only on the presence of fuel to keep producing electricity. In most fuel cells, hydrogen is supplied to the anode and oxygen to the cathode, which results in the production of water, heat and, most importantly, electricity. Unfortunately, several drawbacks of fuel cells have been identified. Probably the most important one is the very high cost, which is caused by the use of an expensive electrolyte membrane and catalyst...","Fuel Cells; Electrolyte; sPEEK; Hybrid; BDS; Inner phase; Conductivity; LiBPO4","en","doctoral thesis","","978 94 028 0728 8","","","","","","2017-12-31","","","OLD ChemE/Organic Materials and Interfaces","","",""
"uuid:ee9b5495-fde2-4bb1-807d-e7547f2a393d","http://resolver.tudelft.nl/uuid:ee9b5495-fde2-4bb1-807d-e7547f2a393d","A study on the near-surface flow and acoustic emissions of trailing edge serrations: For the purpose of noise reduction of wind turbine blades","Arce Leon, C.A. (TU Delft Aerodynamics)","Scarano, F. (promotor); Ragni, D. (copromotor); Delft University of Technology (degree granting institution)","2017","The flow near the surface, and the acoustic emissions of trailing edge serrations are investigated in this work. The use of this family of aerodynamic devices on airfoils is intended for the reduction of turbulent boundary layer-trailing edge noise (TBL-TE noise). This purpose has been well demonstrated in wind tunnel and numerical experiments. Particularly, their use in the wind turbine industry has been of great interest in recent years. A growing number of field measurements have shown that a noticeable noise reduction of TBL-TE noise in state-of-the-art blades is also obtained. A full explanation on the mechanism of how noise is reduced is nevertheless lacking. Existing experimental research on serrations offers only a limited characterization of the relevant flow parameters. Fundamental concerns pertaining to the conditions at which that data has been previously gathered are furthermore recurrent. The persistent use of flow-misaligned serrations creates a situation in which flow structures may be observed and misinterpreted as necessary for the attainment of noise reduction. This circumstance complicates the discussion and isolation of the relevant noise reduction mechanism...","Aeroacoustics; Trailing Edge Serrations; Wind Turbine Blades; Particle Image Velocimetry; Acoustic Array Beamforming","en","doctoral thesis","","978-94-92516-68-8","","","","","","","","","Aerodynamics","","",""
"uuid:8b16b984-197d-4486-a139-02cbf9b80e69","http://resolver.tudelft.nl/uuid:8b16b984-197d-4486-a139-02cbf9b80e69","Selective Electrocatalytic CO2 Conversion on Metal Surfaces","Ma, M. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Dam, B. (promotor); Smith, W.A. (copromotor); Delft University of Technology (degree granting institution)","2017","The electrocatalytic conversion of CO2 into fuel and valuable chemicals by coupling with a renewable electricity source is a promising strategy for closing the anthropogenic carbon cycle. The critical step for achieving this practical CO2 utilization is to find a suitable electrocatalyst that is capable of reducing CO2 in a cost-effective process with high efficiency, stability and selectivity for a desired product...","","en","doctoral thesis","","978-94-92516-67-1","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:4f752da9-ccb7-462a-a5f8-a75b6532fa11","http://resolver.tudelft.nl/uuid:4f752da9-ccb7-462a-a5f8-a75b6532fa11","One Step Membrane Filtration: A fundamental study","Haidari, A.H. (TU Delft Sanitary Engineering)","van der Meer, W.G.J. (promotor); Heijman, Sebastiaan (copromotor); Delft University of Technology (degree granting institution)","2017","This study focuses on spiral-wound membrane (SWM) modules, which are the most common commercially available membrane modules for reverse osmosis (RO) and nanofiltration (NF). While RO membranes can remove almost all kinds of substances from the feed water, they are usually equipped with pretreatment steps for conditioning and modifying the feed water to prevent clogging and fouling of these modules. Energy consumption, fouling and concentration polarization are considered as the primary challenges in these types of RO modules. These challenging factors depend of the feed water quality and they are related directly or indirectly to the design of applied feed spacer in SWM modules. A feed spacer provides a channel between two envelopes from entrance to outlet of a module to let the water flows tangentially over the membrane surfaces from the feed to the concentrate side. Additionally, feed spacers are designed to destabilize the concentration polarization layer; and thereby increase the mass transfer in SWM modules of RO. However, application of feed spacers to efficiently mix the flow comes at the expense of higher energy consumption and fouling formation. Therefore, it is important to understand the hydraulic conditions inside the spacer-filled channels such as those encountered in SWM-modules of RO.
Previous RO studies related to the production of drinking water were primarily performed under the assumption that RO elements are used for desalination. In contrast to this assumption, the most common commercially available RO element configuration, the SWM module, has recently gained attention for the purification of freshwater resources. The main reason for using RO for freshwater purification is that it provides an effective barrier against continuously emerging micro- and nano-contaminants, which cannot be (easily) removed by conventional treatment technologies. The energy use, scaling and retention in RO are influenced by concentration polarization. When RO is applied to freshwater, the effects of concentration polarization on the energy use become significantly smaller because the osmotic pressure difference is negligible. This thesis is divided into two parts. In the first part, this thesis describes a unique brackish water pilot study, which operates without chemical pretreatment. Such a pilot study is referred to as a one step membrane filtration (OSMF) system. An OSMF system is an NF/RO system in which membranes are applied directly to the feed water and operate without chemical pretreatment. In the second part, this thesis focuses on the hydraulic conditions of spacer-filled channels and the role of spacer geometry and orientation therein. The effect of feed spacers on hydraulic conditions is investigated by studying the actual velocity profiles occurring in the spacer-filled channel and the relation between energy losses and spacer geometry. The particle image velocimetry (PIV) technique is used to determine the effects of spacer orientation and configuration on the actual velocity profiles. PIV is a non-invasive and powerful tool for obtaining high-resolution velocity profiles experimentally, which can be used for verification and validation of numerical studies of spacer-filled channels.
2D and 3D numerical studies have contributed significantly to our understanding of the hydraulic conditions in SWM modules of RO. However, these numerical simulations are usually validated with low-resolution experimental methods. The PIV measurements from this study can thus provide high-resolution experimental data (7.4×7.4 µm²) for the validation of these numerical studies.
The first result of the PIV technique is a simultaneous velocity profile, which is obtained from each pair of frames taken at a set time interval. In this thesis, the simultaneous velocity profiles are used to investigate the variation of the temporal velocity at certain locations (points) inside a mesh of a spacer. The spatial velocity profiles discussed in this thesis are obtained by computing the average of the related simultaneous velocity profiles. Chapter 3 gives a detailed explanation of the experimental methods used in this thesis. A summary of Chapter 3 is repeated in the experimental sections of Chapters 5, 6 and 7. The effect of feed spacers on hydraulic conditions is investigated by comparing an empty channel with a spacer-filled channel, which was filled with a commercial feed spacer (Chapter 5). The flow in the empty channel ran in a straight line from inlet to outlet and was steady compared to the flow in the spacer-filled channel. The spatial velocity profiles of spacer-filled channels showed a bimodal shape, with one peak at low-velocity ranges and one at high-velocity ranges. The low-velocity regimes occurred mostly in regions close to the filaments, and the high-velocity regimes occurred at the narrowed parts of the channel or directly after them. The difference between the low- and high-velocity regimes, with regard to both velocity magnitude and frequency, was larger at a higher flow. Although the increase in velocity magnitude generates higher shear, it is not necessarily beneficial, because the optimal flux resulting from destabilization of the concentration polarization layer is achieved at a specific velocity. A further increase of the velocity beyond this optimum has only a marginal effect on destabilizing the concentration polarization layer, enhancing the flux and consequently increasing production.
The hydraulic conditions inside a spacer-filled channel are influenced by the configuration as well as the orientation of the spacer. The pressure drop measured in this thesis inside channels with commercial feed spacers was in good agreement with the pressure drop from previous mathematical models. Previous studies reported a lower pressure inside channels with cavity spacers than in channels with zigzag spacers. The zigzag or net-type configuration is the common configuration used in SWM modules of RO. In the zigzag configuration, two layers of filaments with equal average diameter lie on top of each other and make an angle of 45° with each other (hydraulic angle) and with the flow (flow attack angle). In the cavity configuration, the diameter of the transverse filaments is smaller than that of the longitudinal filaments. The cavity spacers in this study had a flow attack angle of 135°. The largest ratio of transverse filament diameter to channel height was about 0.6 for the cavity spacers. With the cavity spacer used in this thesis, the flow ran mainly in a straight line from inlet to outlet. The flow disturbance in channels with cavity spacers occurred downstream and upstream of the transverse filaments. The flow acceleration over the transverse filaments of the examined spacers was greater for cavity spacers with larger transverse filaments. In channels with cavity spacers, the velocity was clearly higher at the channel side without transverse filaments than at the side with them. In channels with zigzag spacers, the flow close to the membrane was in the direction of the filaments attached to the membrane, i.e. the direction of flow at the top of the channel was perpendicular to the direction of flow at the bottom. The flow pattern in the middle of the channel was a combination of the flow patterns at the top and bottom.
The greater friction losses found for spacers with a larger relative height and a smaller aspect ratio indicate that the orientation and geometry of the transverse filaments contribute to pressure losses. However, it was not possible to find a reliable correlation for predicting the pressure losses based on the geometric characteristics of feed spacers under experimental conditions. The effect of feed spacer orientation on the flow was investigated by using the same commercial spacer at two different flow attack angles. For this purpose, three feed spacers with different thicknesses were used. The thickness of the top and bottom filaments was the same for each spacer. The pressure drop in the channel with the thinnest spacer was clearly higher in the normal orientation (flow attack angle of 45°) than in the ladder orientation (flow attack angle of 90°). The difference between the two orientations in the channels with thicker spacers was insignificant. The difference between the lowest and highest velocity was greater in the ladder orientation than in the zigzag orientation. Commercial ladder spacers with the characteristics described in this thesis are more sensitive to fouling than zigzag spacers, because in ladder spacers the velocity becomes virtually zero at the side of the channel where the transverse filaments are attached to the membrane.
This Ph.D. thesis answers two questions to master these challenges. First, what is the impact of the operation and control of a, possibly multi-terminal, offshore grid based on VSC-HVDC on the transient stability of the onshore power system? Second, how can we model and simulate these impacts while maintaining the desired simulation accuracy and speed? The results of this thesis facilitate fast and accurate assessment of stability impacts of large transmission systems with a significant proportion of converter-interfaced generation.","","en","doctoral thesis","","978-94-6299-652-6","","","","","","","","","Intelligent Electrical Power Grids","","",""
"uuid:4634305a-ac8f-4193-aef7-15927da1c272","http://resolver.tudelft.nl/uuid:4634305a-ac8f-4193-aef7-15927da1c272","Numerical Determination of Permeability in Unsaturated Cementitious Materials","Li, K. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Stroeven, P. (copromotor); Delft University of Technology (degree granting institution)","2017","To assess the durability of cement-based materials, permeability is commonly considered an important indicator. It is defined as the rate of movement of an agent (liquid or gas) through the porous medium under an applied pressure. Although permeability can be directly measured in laboratories, these experimental tests generally require specialized equipment and long periods of time to be completed, so they are laborious and expensive. For economic and ecological reasons, numerical models are considered an attractive alternative. Up until now, however, the permeability of virtual cement seems to exceed experimental data by several orders of magnitude. Full saturation, however, as generally assumed in numerical evaluations, does not realistically represent the experiments. Modelling fluid flow through unsaturated cement-based materials constitutes the focal point of this thesis. It is shown that the saturation degree has a significant effect on the permeability.","","en","doctoral thesis","","978-94-6186-839-8","","","","","","","","","Applied Mechanics","","",""
"uuid:e8590c25-5cfc-43a9-989e-e98b1ea9a8d8","http://resolver.tudelft.nl/uuid:e8590c25-5cfc-43a9-989e-e98b1ea9a8d8","Integrated 6-DoF Lorentz Actuator with Gravity Compensation for Vibration Isolation in In-Line Surface Metrology","Deng, R. (TU Delft Mechatronic Systems Design)","Munnig Schmidt, R.H. (promotor); Spronck, J.W. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6186-833-6","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:2e99ea9a-6a88-415b-bb38-7e5e4c70741e","http://resolver.tudelft.nl/uuid:2e99ea9a-6a88-415b-bb38-7e5e4c70741e","Controlled-source seismic reflection interferometry: Virtual-source retrieval, survey infill and identification of surface multiples","Boullenger, B. (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Draganov, D.S. (copromotor); Delft University of Technology (degree granting institution)","2017","The theory of seismic interferometry predicts that the cross-correlation (and possibly summation) between seismic recordings at two separate receivers allows the retrieval of an estimate of the inter-receiver response, or Green's function, from a virtual source at one of the receiver positions. Ideally, the recordings must consist of the responses from a homogeneous distribution of seismic sources that effectively surround the receivers. This principle has successfully been exploited to retrieve from recorded passive data more easily usable and interpretable responses. In fact, the retrieval of virtual-source responses has led to a wide range of applications, including for controlled-source seismic surveys. The latter is the case for data-driven methods for redatuming reflection data to a receiver level below the source-acquisition surface, or methods to suppress surface waves in land seismic data.
In this thesis, I studied the application of seismic interferometry to surface reflection data, that is, reflection data acquired with both sources and receivers at or near the earth's surface. This is a typical configuration for seismic exploration, either in land or marine surveys. The retrieval of additional virtual sources at receiver locations in that configuration would result in having effectively more shot points. Depending on whether the virtual-source responses contain relevant information, the combined source and virtual-source coverage could allow more complete illumination of the subsurface and so better imaging of its structures. This could be particularly the case for surveys with areas or directions poorly sampled by sources, including large gaps, but with receivers present. The main research questions are under which conditions useful virtual-source reflection responses can be retrieved, and with what accuracy.
As I first show from mathematical derivations, the retrieval of virtual-source reflection responses from the application of seismic interferometry to exploration-type reflection data does not comply with several theoretical requirements. A major requirement is that the two considered receivers would need to be enclosed by a boundary of sources. This condition is obviously not fulfilled by the single-sided illumination as in exploration surveys. Consequently, as I show using modelled reflection data, the virtual-source reflection responses are retrieved with several distortions, including the presence of undesired non-physical reflection events.
Yet, in spite of the non-ideal single-sided configuration, cross-correlating the reflection records at receiver pairs and summing over source profiles allows retrieving virtual-source responses with relevant reflection signals. These virtual reflection signals are referred to as pseudo-physical reflections, as they share the same kinematics as physical reflections but contain distortions due to the cross-correlation process. By studying the numerical examples further, I determine the influence of several acquisition-related parameters and subsurface characteristics on the accuracy of the virtual-source reflection responses.
Then, based on a theoretical approach using the convolution-type reciprocity theorems, instead of the cross-correlation type, I show how part of the distortions in the virtual-source responses retrieved by cross-correlation could be reduced by performing a multidimensional-deconvolution operation. The potential benefits of multidimensional deconvolution are verified with a numerical example, showing that the obtained virtual-source reflection responses better match the reference physical responses.
In addition, I highlight the essential role of the surface-related multiples in the retrieval process of pseudo-physical reflections. In turn, the retrieved pseudo-physical reflections provide usable feedback about the surface multiples. In particular, I present a method based on the stationary-phase analysis of the retrieved pseudo-physical reflection arrivals for detecting surface-related multiple reflections in the acquired data. The results from tests on numerically modelled data show that this interferometric method allows identifying prominent surface multiples in a wide range of source-receiver offsets. Also, I determine that this correlation-based method still performs well even in the case of missing near-offset reflection data. This interesting property suggests that for robust prediction of multiples, the method could be further developed to complement convolution-based schemes, which often suffer from missing near-offset data.
Still, the main objective in retrieving virtual-source responses is to obtain additional desirable shot points for improving processing or imaging. In general, interpolation techniques are applied to the seismic data to compensate for the irregularities of the acquisition geometry. However, most of the direct interpolation techniques do not allow retrieval of the missing data if the gap is larger than the Nyquist criterion allows. I show, using numerically modelled datasets, that in these challenging cases, decisive information for imaging may be obtained from the retrieval of virtual sources as long as surface-multiple energy is present in the shot records. In particular, I show that virtual images (obtained from retrieved virtual data) can reveal initially invisible structures in the images obtained from the incomplete reflection data.
Finally, I apply seismic reflection interferometry on a 3D land seismic dataset to further test the practical feasibility of retrieving relevant virtual-source reflection responses. The survey was acquired at a mining site in a hard rock environment with recorded reflections characterized by a relatively poor signal-to-noise ratio. The first results presented in this thesis show evidence of retrieved pseudo-physical reflections. By testing different source contributions, these investigations also show that the retrieval of these desirable events may largely depend on the location and extent of the considered source patch with respect to the virtual source and receiver geometry.","Seismic; Interferometry; Cross-correlation; reflection; Interpolation; Survey","en","doctoral thesis","","978-94-92516-72-5","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:3e74dea2-c01b-4b23-8194-2faec501a3c7","http://resolver.tudelft.nl/uuid:3e74dea2-c01b-4b23-8194-2faec501a3c7","Improving the competitiveness of green ship recycling","Jain, K.P. (TU Delft Ship Design, Production and Operations)","Hopman, J.J. (promotor); Pruyn, J.F.J. (copromotor); Delft University of Technology (degree granting institution)","2017","The end of life of a ship is determined by its owner on the basis of various commercial and technical factors. Once the decision to scrap a ship is made, almost all end-of-life (EOL) ships are sold to recycling yards for dismantling, except for a few which are converted into museums, hotels, storage, and artificial reefs. As the decision is a commercial one, the selection of a yard is predominantly based on the offer price, which depends on the location of the yard and the recycling process employed.
Amongst major recycling centres, generally the yards located in the Indian subcontinent offer more than the Chinese yards, and Turkish yards offer the lowest of the three. Also, within these countries, the yards compliant with the international regulations and safety standards (green), and non-compliant yards (substandard/non-green) co-exist. The contrasting difference in offer price between the two makes the non-green yards more lucrative. Since the regional difference in price is due to perpetual local factors, this research focuses mainly on improving the competitiveness of green yards, irrespective of the region. The aim is to reduce the economic incentive to use substandard yards.
The concept of ‘cleaner production’ is applied to solve the research problem, leading to the identification of three main measures. The first is material flow analysis (MFA) to improve planning and awareness at the yard. The second is the use of a waste-to-energy (WtE) technology to improve the valorisation of waste. The third is the use of the design-for-recycling (DfR) concept to improve the recyclability of new ships. The quantification of material streams of EOL ships is also suggested to support these measures.
A ‘material quantification model’ based on the ship’s lightweight distribution is developed to enable yards to quantify the material streams of EOL ships. Standardizing the format of lightweight distribution will ensure the speedy determination of material streams of EOL ships. The classification societies could play a leading role in implementing this simple yet effective solution.
An MFA model driven by the output of the first model (quantified material streams of the ship) is suggested to conduct analyses on recycling yards. An MFA can effectively be used by yards to conduct several planning related tasks; most importantly, to determine the amounts of materials generated for disposal (waste) and recycling. Therefore, yards are recommended to plan the ship recycling process using the MFA results.
For the WtE technology, the use of a plasma gasification plant on a large recycling yard (capacity of at least 1 million LDT per year) is estimated to increase the offer price in the range of $0.24 to $7.31 per LDT, depending on the recycling rate and plant size. The application of a plasma gasification plant is limited to the large size yards located predominantly in China as against the small to medium size yards in the subcontinent.
While comparing the industry in the Indian subcontinent with China/Turkey, upgrading the non-green yards is also a possibility to bridge the price gap. The upgrade of an existing pier-breaking facility up to the Hong Kong convention standards is estimated to reduce the offer price in the range of $4 to $9 per LDT. For other facility types, the reduction is likely to be in the range of $10 to $35 per LDT, depending on the facility type, recycling capacity and the upgrade cost.
For the DfR concept, ship design features useful for reverse production, such as modular accommodation and lifting supports, amongst others, are suggested. A new format of the ship’s lightweight distribution is also proposed as a documental change to the ship design. Although these features will not reduce the offer price gap between the green and non-green yards, as both yard types benefit equally from the new design features, the recycling operations will be streamlined and offer prices in general will improve.
When all four improvements are combined and applied to the three major regions, it is clear that a gap of about 20 $/LDT and 30 $/LDT will remain between green and non-green yards in Turkey and the Indian subcontinent respectively. However, in China, the gap can be reduced to well within 5 $/LDT. What also becomes clear is that the availability of a much better developed downstream market in the Indian subcontinent will still ensure that prices offered here are about 25 $/LDT and 100 $/LDT higher than in China and Turkey respectively. The fact that components can be sold instead of just scrap materials is an important factor in this.
This thesis discusses certain aspects of SHJ solar cells, with a main focus on the surface passivation of the c-Si wafer. In this way it aims to contribute to the understanding of SHJ solar cell fabrication and operation, helping to improve SHJ solar cell performance.","","en","doctoral thesis","","978-94-028-0714-1","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:5b950e57-3704-43b2-ab03-f0da1dd79cc9","http://resolver.tudelft.nl/uuid:5b950e57-3704-43b2-ab03-f0da1dd79cc9","Field Experiments and Reactive Transport Modeling of Subsurface Arsenic Removal in Bangladesh","Rahman, M.M. (TU Delft Water Resources)","Bakker, M. (promotor); van Breukelen, B.M. (copromotor); Delft University of Technology (degree granting institution)","2017","The principle of Subsurface Arsenic (As) Removal (SAR) is to extract anoxic groundwater, aerate it and reinject it. Oxygen in the injected water reacts with iron in the resident groundwater to form hydrous ferric oxide (HFO). Dissolved As sorbs onto the HFO, which allows for the extraction of groundwater with lower As concentrations.","Groundwater; Arsenic; Bangladesh; Reactive transport modeling; Subsurface arsenic removal","en","doctoral thesis","","978-94-028-0701-1","","","","","","","","","Water Resources","","",""
"uuid:24516d5d-94ae-4cc6-a644-ed1b0343b8d6","http://resolver.tudelft.nl/uuid:24516d5d-94ae-4cc6-a644-ed1b0343b8d6","Self-management support system for renal transplant patients: Understanding adherence and acceptance","Wang, W. (TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); Brinkman, W.P. (promotor); Delft University of Technology (degree granting institution)","2017","Computer-based support for disease self-management has been proposed for chronic patients to stimulate early awareness of disease changes, facilitate patients’ autonomy, and reduce demands on health care resources. Renal transplant patients need lifelong care and can be viewed as chronically ill: they visit the hospital regularly to monitor their blood creatinine level and blood pressure. They should also benefit from self-management as other chronic patients do. For the renal transplant patients, a self-management support system (SMSS) was designed and tested, with which they could conduct self-measuring regularly to check the renal function and get corresponding feedback. In the study, there were three feedback categories: (1) alright, and therefore patients did not have to take an extra action; (2) mild concern, and therefore patients were requested to measure again; and (3) concern, and therefore patients were advised to contact the hospital. To conduct self-management safely, it is important for the patients to follow the protocol and the system feedback. Therefore, to understand these patients’ self-management behaviour, preferences, and adherence, this thesis investigates possible influencing factors for them to adhere to and accept the SMSS.
The study entailed two related research lines: a lab study line that focused on the user interface design of an SMSS, and a clinical trial line that focused on patients’ acceptance of and adherence to an SMSS…","self-management support system; user interface; renal transplant patient; adherence; acceptance; attitude","en","doctoral thesis","","978-94-6186-831-2","","","","","","","","","Interactive Intelligence","","",""
"uuid:384bf6be-42df-4fba-bba0-0648c7a52e24","http://resolver.tudelft.nl/uuid:384bf6be-42df-4fba-bba0-0648c7a52e24","Advantages of Electromagnetic Interferometry Applied to Ground-Penetrating Radar: Non-Destructive Inspection and Characterization of the Subsurface Without Transmitting Anything","Feld, R. (TU Delft Applied Geophysics and Petrophysics)","Slob, E.C. (promotor); Delft University of Technology (degree granting institution)","2017","Ground-penetrating radar (GPR) is a non-destructive method that images the subsurface using radar. A transmitter generates a radar pulse. This signal propagates into the ground, where it reflects against subsurface heterogeneities and travels back to the surface. A receiver records the reflected signal. The reflected signal contains information about the subsurface. GPR is useful for pavement and structure inspection, object detection, and characterization of the subsurface. For example, many forms of pavement damage on highways originate in the bottom layers and are invisible until the pavement cracks come to the surface. GPR can indicate pavement damage before it is visible at the surface, so that preventive actions can be performed where necessary.
We work towards developing GPR without the need to transmit any signal. Instead, we use signals that are already available in the air, such as mobile phone signals. A technique called electromagnetic interferometry selects those signals that are measured before they enter the ground and after they reflect. It extracts the path from receiver to subsurface and back to the receiver. The result looks as if the receiver has transmitted a signal, while no signal was transmitted by that receiver. This receiver is called a virtual source. By repeating this step for many receiver combinations we create a virtual dataset. This virtual data provides a well-interpretable image of the subsurface.","auto-correlation; advanced pavement model; cross-correlation; deconvolution; 2.5D line-configuration; electromagnetic interferometry; ground-penetrating radar (GPR); mono-static GPR; passive interferometry; pavement damage","en","doctoral thesis","","978-94-6295-693-3","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:786608c2-38ce-4dbc-97d2-8a26080829ba","http://resolver.tudelft.nl/uuid:786608c2-38ce-4dbc-97d2-8a26080829ba","Online optimization-based predictive flight control using first-order methods","Ferranti, L. (TU Delft Team Tamas Keviczky)","Verhaegen, M.H.G. (promotor); Keviczky, T. (copromotor); Delft University of Technology (degree granting institution)","2017","In fields such as aerospace or automotive, the use of classical control methods such as PID is still significant. The presence of constraints, however, impacts the performance of these controllers, which are usually designed to avoid constraint saturation. MPC techniques are the obvious alternative to handle constraint saturation and fully exploit the operative range of the system. Furthermore, MPC can be used as a fault-tolerant controller to handle actuator faults and control reallocation in a modular and systematic way.
In this dissertation, we rely on MPC techniques to handle constraints and actuator faults, motivated by an aerospace application (i.e., the longitudinal control of an Airbus passenger aircraft). The proposed fault-tolerant strategy strongly relies on the MPC capability of handling constraints and directly controlling each actuator independently. The advantages of MPC in terms of performance and fault tolerance, however, are overshadowed by the computational requirements of this technique. The presence of an optimizer compromises the performance of the controller in terms of computation time and increases its requirements in terms of hardware and software. In this dissertation, we design MPC-tailored optimizers suitable for online optimization. The proposed contributions aim to bring MPC closer to actual implementation on the next generation of aircraft.","Model predictive control (MPC); constrained optimization; Flight Control","en","doctoral thesis","","978-94-6186-838-1","","","","","","","","","Team Tamas Keviczky","","",""
"uuid:43dd673d-e80c-4e52-b8eb-f9a20df79646","http://resolver.tudelft.nl/uuid:43dd673d-e80c-4e52-b8eb-f9a20df79646","A cost-effective bacteria-based self-healing cementitious composite for low-temperature marine applications","Palin, D. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Jonkers, H.M. (copromotor); Delft University of Technology (degree granting institution)","2017","Bacteria-based self-healing concrete is an innovative self-healing materials approach, whereby bacteria embedded in concrete can form a crack-healing mineral precipitate. Structures made from self-healing concrete promise longer service lives, with associated economic benefits [1]. Despite concrete’s susceptibility to marine-based degradation phenomena [2], and much of the world’s marine infrastructure being located in cool to freezing climatic zones (annual average temperature < 10°C and average summer temperature generally < 20°C) [3], research on the development of bacteria-based self-healing concrete has been largely restricted to room-temperature freshwater studies [4-14]. The objective of the current project was, therefore, to develop a cost-effective bacteria-based self-healing cementitious composite for application in low-temperature marine environments. The current thesis charts the development of this composite. In Chapter 2 the autogenous healing capacities of ordinary Portland cement (OPC) and blast-furnace slag (BFS) cement mortar specimens submerged in fresh and seawater are visually quantified and characterised. The BFS cement specimens healed all crack widths up to 104 µm, and OPC specimens healed all crack widths up to 592 µm, after 56 days in seawater. BFS cement specimens healed all crack widths up to 408 µm, and OPC specimens healed all crack widths up to 168 µm, after 56 days in freshwater. OPC specimens in seawater, displaying the higher crack healing capacity, also demonstrated considerable losses in compressive strength.
Differences in performance are attributable to the amount of calcium hydroxide in these mortars and the specific ions present in seawater. Chapter 3 reports on the crack healing capacity of seawater-submerged mortar specimens with the aid of a crack permeability test. Cracks of defined widths were created in BFS cement specimens, allowing reference crack permeability values to be generated for unhealed specimens against which healed specimens were quantified. Specimens with 0.2 mm wide cracks demonstrated no water flow after 28 days submersion. Specimens with 0.4 mm cracks demonstrated decreases in water flow of 66% after 28 days submersion and 50 to 53% after 56 days submersion. Chapter 4 presents a modified permeability test for generating crack permeability data for cementitious materials. To gauge any improvement, both the modified and unmodified tests were run and compared. Cracks were generated in mortar specimens using both tests, the accuracy of these cracks was analysed through stereomicroscopy and computed tomography (CT), and the water flow through the cracks determined. Reduction factors and crack flow models were generated, and the accuracy and reliability of the predictions assessed. All of the models had high predictive accuracies (r2 = 0.97-0.98), while the reliability of these predictions was higher for the models generated with the crack width analysis through stereomicroscopy. The cracks generated by the modified test were more accurate (within 20 µm of the desired crack widths) than those of the unmodified test. The modified test was 30% quicker (10 hours for twenty-one specimens) than the unmodified test at generating the crack permeability data. Further, crack width analysis through stereomicroscopy is generally quicker than analysis through CT. Chapter 5 presents a bacterial isolate and an organic mineral precursor compound, as part of a cost-effective healing agent for low-temperature marine concrete applications.
Organic compounds were screened based on their cost and concrete compatibility, and bacterial isolates based on their ability to metabolise concrete-compatible organic compounds and to function in a low-temperature marine concrete crack. Magnesium acetate was the cheapest organic compound screened, and when incorporated (1% of cement weight) in mortar specimens had one of the lowest impacts on compressive strength. The bacterial isolate designated psychrophile (PSY) 5 demonstrated very good growth under saline (3%), high-pH (9.2), low-temperature (8ºC) conditions, with sodium lactate as an organic carbon source, and good growth at room temperature using magnesium acetate as an organic carbon source. Further, PSY 5 also demonstrated good spore production when grown on monosodium glutamate at room temperature. Chapter 6 presents a bacteria-based bead for realising self-healing concrete in low-temperature marine environments. The bead, consisting of calcium alginate encapsulating bacterial spores and mineral precursor compounds, was assessed for: oxygen consumption, swelling, and its ability to form an organic-inorganic composite in a simulative marine concrete crack solution (SMCCS) at 8ºC. After six days in the SMCCS, the bacteria-based beads formed a calcite crust on their surface and calcite inclusions in their network, resulting in a calcite-alginate organic-inorganic composite. The beads swelled by 300% to a maximum diameter of 3 mm, while theoretical calculations estimate that 0.1 g of the beads is able to produce ~1 mm3 of calcite after 14 days submersion. Swelling and the formation of bacteria-induced mineral precipitation provide the bead with considerable crack healing potential. It is estimated, based on the bacteria-based beads costing roughly 0.7 €/kg, that bacteria-based self-healing concrete made using these beads would cost 135 €/m3.
Chapter 7 presents a bacteria-based self-healing cementitious composite for application in low-temperature marine environments. The composite was tested for its crack healing capacity with the water permeability test presented in Chapter 4, and for its strength development through compression testing. The composite displayed an excellent crack healing capacity, reducing the permeability of cracks 0.4 mm wide by 95%, and cracks 0.6 mm wide by 93%, following 56 days submersion in artificial seawater at 8ºC. Some conclusions were drawn based on the results obtained during the development of the bacteria-based self-healing cementitious composite: • Visual crack closure is not a measure of the regain of functional properties such as strength. Visual crack closure, therefore, should only be used as a complementary method when measuring the regain of such a property. • The capacity of a cementitious material to heal a crack depends on the width of the crack, thermodynamic considerations, the presence of water and the amount of ions available in the crack. Autogenous crack healing for seawater-submerged cementitious materials is principally attributable to the precipitation of aragonite and brucite in the cracks. • The crack healing capacity of a bacteria-based cementitious composite is directly related to the amount of organic carbon available to the bacteria, and so the cheaper the organic mineral precursor compound, the cheaper the bacteria-based self-healing technology in general. Further, the compound must not have an adverse effect on concrete properties when included and must be readily metabolised by the bacteria as part of the healing agent. Magnesium acetate, in the current study, best balanced these criteria, making it a good candidate as the organic mineral precursor compound for the healing agent.
• A large number of specimen replicates (≥ 7) are required to generate reliable crack permeability data, and hence to quantify the crack healing capacity of cementitious materials through their functional water tightness. • The bacteria-based self-healing cementitious composite displayed an excellent crack healing capacity, reducing the permeability of cracks 0.4 mm wide by 95% and cracks 0.6 mm wide by 93%, following 56 days submersion in artificial seawater at 8ºC. This crack healing capacity was attributable to: mineral precipitation as a result of chemical interactions between the cement paste and seawater; bead swelling; magnesium-based precipitates as a result of chemical interactions between the magnesium of the beads and hydroxide ions of the cement paste; and bacteria-induced mineral precipitation. • The 28-day compressive strength of mortar specimens incorporating beads was 55% of that of plain mortar specimens. Reducing the amount of bacteria-based beads will likely increase the compressive strength of the bacteria-based self-healing cementitious composite. Such a reduction, given the swellability of the beads, may have relatively little impact on the healing capacity of the composite. • The bacteria-based self-healing cementitious composite shows great potential for realising self-healing concrete in low-temperature marine environments, while the organic-inorganic healing material formed by the composite represents an exciting avenue for self-healing concrete research. I hope that the work presented herein provides a valuable reference for those interested in bacteria-based self-healing concrete, particularly for application in marine environments, and more generally for those interested in the wider field of self-healing materials research.","self-healing concrete; bacteria; marine; low-temperature; cost-effective; organic-inorganic composite","en","doctoral thesis","","978-94-92516-77-0","","","","","","2018-03-02","","","Materials and Environment","","",""
"uuid:275dbc0a-db3d-4b53-90ee-6fe25de24f7e","http://resolver.tudelft.nl/uuid:275dbc0a-db3d-4b53-90ee-6fe25de24f7e","Elastic Instabilities in Polymer-Solution Flow Through Porous Media","Kawale, D.","Kreutzer, M.T. (copromotor); Rossen, W.R. (copromotor); Delft University of Technology (degree granting institution)","2017","Crude oil has been an important source of energy for several decades now. From the discovery of a crude oil reservoir until its abandonment, the oil recovery process can typically be classified into three stages. The primary and secondary stages involve utilizing and maintaining reservoir pressure, respectively, to recover oil. The tertiary, or Enhanced Oil Recovery (EOR), stage involves employing various chemical or thermal methods to recover oil. Polymer flooding is the most promising enhanced oil recovery method that has shown potential to increase the oil recovery. Averaged over all the proven oil reserves from which mankind has recovered oil until now, the oil recovery is less than 40%. EOR technologies have the potential to tap into the ∼ 60% of crude oil that is still present in the oil reservoirs. One of the biggest technical challenges in perfecting a polymer EOR process is predicting the polymer injectivity. The viscoelastic behaviour of polymer solutions coupled with the complex pore-scale flow geometry of geological porous media gives rise to non-linear resistance to flow. Consequently, during injection of polymer solutions into an oil reservoir, the pressure drop can increase dramatically beyond certain flow rates. This behaviour is known as apparent shear-thickening. In this thesis, we focus on understanding the apparent shear-thickening behaviour of polymer solutions as they flow through porous media. We identify the mechanism causing the non-linear resistance to flow of polymer solutions through porous media.
Our experiments reveal the mechanisms at both the pore scale and the molecular scale.","polymer rheology; porous media; elastic instabilities; microfluidics; enhanced oil recovery","en","doctoral thesis","","978-94-92516-74-9","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:4341f2e2-232a-4361-a4ee-d21d91476e1b","http://resolver.tudelft.nl/uuid:4341f2e2-232a-4361-a4ee-d21d91476e1b","Resilient Industrial Systems: A Complex System Perspective to Support Business Decisions","Bas, G. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Nikolic, I. (copromotor); van der Lei, T.E. (copromotor); Delft University of Technology (degree granting institution)","2017","Industrial systems increasingly need to become more resilient to developments in their environment. To take the right decisions and improve their resilience, those companies need insight into the effects of resilience-enhancing actions. A substantial part of those actions' effects follow from the adaptation of the focal company's environment in response to its actions. The current, predominantly inward focused, perspective used to assess actions cannot be used to capture those indirect effects of an action. Therefore, this thesis addresses how we can conduct a more comprehensive assessment of a company's actions that can enhance its resilience. This research develops and tests a novel combination of theoretical perspectives to execute such a comprehensive assessment. In five case studies, with increasing complexity along several variables, we develop simulation models to assess a variety of possible resilience-enhancing actions. The outcomes of the case studies indicate that our combination of theoretical perspectives, operationalized in our models, can indeed capture the indirect effects of the assessed actions, and that including those indirect effects substantially influences the performance of the focal company. 
With this approach, companies can assess their proposed actions more comprehensively, enabling them to take actions that improve their resilience to the increasing volatility in industrial systems.","adaptation; agent-based modelling; business decision assessment; complex adaptive systems; industrial systems; market dynamics; resilience; system perspective","en","doctoral thesis","","978-94-6186-834-3","","","","","","","","","Energie and Industrie","","",""
"uuid:7a4be37b-350a-4fd6-9e16-944cb02a312a","http://resolver.tudelft.nl/uuid:7a4be37b-350a-4fd6-9e16-944cb02a312a","Investigation on Alkali-Surfactant-Foam (ASF) for Enhanced Oil Recovery, Experimentally and Theoretically","Hosseini Nasab, S.M. (TU Delft Reservoir Engineering)","Zitha, P.L.J. (promotor); Delft University of Technology (degree granting institution)","2017","This thesis presented an extensive study on various aspects of ASF flooding process for EOR. It provides insight into hybrid EOR processes that are of a combination of immiscible gas and chemicals injection in sandstone reservoir. We wanted to discover the mechanism of oil displacement by ASF flooding in terms of 1) formation of oil bank, 2) transport of dispersed oil, and 3) movement and pushing of oil bank and dispersed oil by foam. The main premise of this thesis is whether immiscible foam flooding as an EOR technique can be improved by ASF flooding by a combination of the mechanisms of ASP EOR and Foam EOR methods? The first part of the thesis, chapters two and three, is devoted to numerical simulation and mechanistic modelling of the Foam flooding EOR process and the ASP flooding EOR process. Knowledge obtained from these two chapters formed the basis for further study of the behavior of foam in bulk and porous media in the presence of oil. The second part, chapters four to six, is based on the systematic laboratory experimental study of ASF EOR in the bulk and in the consolidated porous media conditions, and subsequently proposing a novel chemical EOR approach. Below we will give a summary of the main findings obtained in this thesis.","","en","doctoral thesis","","","","","","","","2018-12-31","","","Reservoir Engineering","","",""
"uuid:b0cec363-1527-4e9a-98de-e85bb94389d8","http://resolver.tudelft.nl/uuid:b0cec363-1527-4e9a-98de-e85bb94389d8","Dynamic processes on complex networks: The role of heterogeneity","Qu, B. (TU Delft Multimedia Computing)","Hanjalic, A. (promotor); Wang, H. (copromotor); Delft University of Technology (degree granting institution)","2017","In the recent decades, various dynamic process models on complex networks have been built to study the mechanisms by which an opinion, a disease or generally the information spreads in real-world networks. For example, opinion models are developed to illustrate the competition of opinions in a population, and epidemic models are used to describe, e.g. how an epidemic spreads in a social contact network or how information propagates in an online social network. Classic models always assume the homogeneous interactions. For example, the infection rates are the same for all pairs of nodes. However, the infection rates between different pairs of nodes which may depend on e.g. interaction frequencies are usually different , thus heterogeneous. In this thesis, we aim to explore the influence of heterogeneity on dynamic processes especially on the prevalence of an epidemic or opinion. We consider two types of dynamic processes: the Non- Consensus Opinion (NCO) model and the Susceptible-Infected-Susceptible (SIS) model. This thesis is mainly devoted to the latter one. We investigate the heterogeneity in both network topology models, e.g. directed networks, and dynamic process models, such as heterogeneous infection rates.","","en","doctoral thesis","","978-94-6186-837-4","","","","","","","","","Multimedia Computing","","",""
"uuid:d23770ce-51ad-43d3-960b-3fa2ad7623f1","http://resolver.tudelft.nl/uuid:d23770ce-51ad-43d3-960b-3fa2ad7623f1","Feature-Oriented Evolution of Variant-rich software systems","Dintzner, N.J.R. (TU Delft Software Engineering)","van Deursen, A. (promotor); Pinzger, M. (promotor); Delft University of Technology (degree granting institution)","2017","Most modern software systems can be adjusted to satisfy sets of conflicting requirements issued by different groups of users, based on their intended usage or execution context. For systems where configurations are a core concern, specific implementation mechanisms are put in place to allow the instantiation of sets of tailored components. Among those, we find selection processes for code artefacts, variability-related annotation in the code, variability models representing the available features and their allowed combina- tions.
In such systems, features, or units of variability, are scattered across the aforementioned types of artefacts. Maintenance and enhancement of existing systems remain a challenge today, for all types of software systems. But in the case of variant-rich systems, engineers face an additional challenge due to the complexity of the product instantiation mechanisms: the maintenance of the variability model of the system, the complex build mechanisms, and fine-grained variability in the source code. The evolution of the system should be performed such that the information contained within the various artefacts remains consistent. In practice, this means that as the implementation of the system evolves, so should the mechanisms put in place to generate tailored products.
Little information is available regarding changes occurring in such systems. To efficiently support developers' tasks and ease maintenance and enhancement activities, we need a deep understanding of the changes that take place in such systems. The state of the art provides trends over long periods of time, highlighting system growth, such as the number of added or removed features in each release, or the evolution of cross-tree constraints in a variability model. While important to describe the core dynamics behind the evolution of a system, this does not provide information on the changes performed by developers leading to such trends. Similarly, this global information cannot be leveraged to facilitate developers' activities.
The focus of this thesis is the acquisition and usage of change information regarding variant-rich system evolution. We show how the information lacking from today's state of the art can be obtained from variant-rich system change history. We propose a set of tool-supported approaches designed to gather such information and show how we leverage change information for change impact analysis, or to derive knowledge on developer practices and the challenges they face during such operations. With this work, we shed new light on change scenarios involving heterogeneous artefacts regarding their nature as well as their prevalence in the evolution of such complex systems, and change impact analysis in variant-rich systems.
We designed a model-based approach to extract feature-related changes in heterogeneous artefacts. With such an approach, we can gather detailed information on feature evolution in all relevant artefacts. We created an approach for multi-product-line modelling for impact computation. We leverage variability information to produce a collection of inter-related variability models, and show how to use it for targeted feature-change impact analysis on available capabilities of the product family. By applying our change extraction approaches to the Linux kernel, we were able to empirically characterise the evolution of the variability model of this system. We showed that the variability model of that system evolves mostly through modification of existing features, rather than through additions and removals. Similarly, we studied co-evolution of artefacts during feature evolution in the Linux kernel. Our study revealed that, in this system, most features evolve mostly through their implementation, and that complex changes involving heterogeneous artefacts are not the most frequent.
Through this work, we provide detailed information on the evolution of a system, namely the Linux kernel, and the means used to obtain this information. We show that the gathered data allow us to reflect on the evolution of such a system, and we argue that gathering such information on any system is a source of valuable information regarding that system's architecture. To this end, all tools developed in the context of this study were made available to the public.
In this work, we provide key information on the evolution of the Linux kernel, as well as the means to obtain the same information from other variant-rich systems. The knowledge gained on common evolution scenarios is critical for tool developers focusing on the support of development of variant-rich systems. A better understanding of common evolution scenarios also allows engineers to design systems that will be better equipped to elegantly evolve through such scenarios. While a number of challenges will still have to be addressed in this domain, this work constitutes a step toward a better understanding of variant-rich system evolution and therefore toward better variant-rich system designs.
To conclude whether a reliability value is sufficient, it is necessary to calculate its value before and after reliability improvement. Such calculations can be done analytically or by a simulation approach. Usually, the simulation approach is time-consuming for a large number of simulations, while a small number of simulations leads to errors in the results. Therefore, analytical methods are often welcomed by both scientists and practitioners.
This thesis investigates analytical methods of reliability calculation, focusing on systems with degradation. Analytical formulas for reliability calculation have limitations for systems with degradation due to non-constant failure rates (in this thesis modelled by a Weibull distribution). These limitations have been shown in the example of a braking system of moving walks in Chapter 3: analytical methods are mainly applicable only to systems with constant failure rates, especially in the case of redundant systems.","reliability; safety systems; failure rate function; degradation; redundancy","en","doctoral thesis","","978-94-6233-662-9","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:b0407549-ef78-4c29-a07f-fb3f02ec9f30","http://resolver.tudelft.nl/uuid:b0407549-ef78-4c29-a07f-fb3f02ec9f30","Carbon nanotube based solutions for on-chip thermal management","Silvestri, C.","Sarro, Pasqualina M (promotor); Zhang, Kouchi (promotor); Delft University of Technology (degree granting institution)","2017","The performances of microelectronic and optoelectronic devices are often severely limited by high temperatures and insufficient heat management. Therefore, when considering device fabrication and packaging, it is important to select materials based on their thermal performance. The increasing demand for more integrated functionality and miniaturization of microelectronic systems is pushing the limits of traditional cooling and packaging approaches. In fact, thermal management may well be the major bottleneck of the next electronics revolution. Efficient thermal management solutions are required at chip level as well as at system level. For example, heat dissipation is fundamental in microprocessor and integrated circuits (ICs) as in current mobile electronic or in server farms. Moreover, self-heating in applications like high power light-emitting diodes and solar cells affects their long-termstability. Therefore, novel cooling solutions are being developed based on nanotechnologies and functional nanomaterials. In particular, nanomaterials aremainly used as localized on-chip cooling solutions. They span from harvesting thermal energy, by using piezoelectric nanowires and super-lattice thin films, to heat spreading through graphene layers or nanocrystalline diamond, towards carbon nanotubes (CNTs) as thermal interface material (TIM) and heat sinks...","carbon nanotube; thermal management; on-chip cooling; Microfabrication; thermal analysis","en","doctoral thesis","","978-94-028-0694-6","","","","","","2018-12-31","","","Electronic Components, Technology and Materials","","",""
"uuid:81d9473e-667e-4301-bd48-f7f0218974af","http://resolver.tudelft.nl/uuid:81d9473e-667e-4301-bd48-f7f0218974af","Scalable information extraction from point cloud data obtained by mobile laser scanner","Wang, J. (TU Delft Optical and Laser Remote Sensing)","Menenti, M. (promotor); Lindenbergh, R.C. (copromotor); Delft University of Technology (degree granting institution)","2017","The rise of intelligent transportation, autonomous driving and 3D virtual cities demands highly accurate and regularly updated 2D and 3D maps. However, traditional surveying andmapping techniques are inadequate as they are labor intensive and cost inefficient. Mobile Laser Scanning (MLS) systems, which combine Light Detection and Ranging (LiDAR) with navigation techniques, are able to acquire highly accurate 3D measurements of road environments.","Mobile Laser Scanning; voxels; octrees; geometric information; individual tree separation; object recognition","en","doctoral thesis","","978-94-92683-65-6","","","","","","","","","Optical and Laser Remote Sensing","","",""
"uuid:96053b94-0f1a-45cd-9e45-1f39e78c55de","http://resolver.tudelft.nl/uuid:96053b94-0f1a-45cd-9e45-1f39e78c55de","Preventing Loss of Aircraft Control: Aiding pilots in manual recovery from roll-limited situations","Koolstra, H.J. (TU Delft Control & Simulation)","Mulder, Max (promotor); van Paassen, M.M. (copromotor); Delft University of Technology (degree granting institution)","2017","Loss of aircraft lateral control can be problem, specifically when multi engine propeller aircraft are faced with an engine failure. Another, less frequent phenomena is the loss of lateral control in case of aircraft damage. In this thesis, a method is developed to determine the minimum required aircraft velocity as a function of aircraft state and pilot inputs. This led to a new display that was evaluated with pilots in the loop in the SIMONA research simulator. Test revealed that recovery from roll limited situations is a very demanding task for pilots. The added controllability indications can help but only after extensive training. The handling of unknown damage could, however, improve considerable by using tyhe new indications in combination with a controllability check.","Aircraft lateral-directional control, minimum control speed, damaged aircraft.","en","doctoral thesis","","978-94-6186-816-9","","","","Herman J. Koolstra is a retired experimental test pilot of the RNLAF. After his retirment he worked for the Military Aviation Authority and started his research at TUDelft at the department of Control and Simulation.","","","","","Control & Simulation","","",""
"uuid:b44de3d4-25dd-453c-988c-07fca027a612","http://resolver.tudelft.nl/uuid:b44de3d4-25dd-453c-988c-07fca027a612","Integrated process and solvent design for CO2 capture using Continuous Molecular Targeting - Computer Aided Molecular Design (CoMT-CAMD)","Stavrou, M. (TU Delft Engineering Thermodynamics)","Gross, J. (promotor); Bardow, A. (promotor); Delft University of Technology (degree granting institution)","2017","The cost of currently available technologies for CO2 capture should be further reduced to allow for large scale implementation of Carbon Capture and Storage. Solvents for CO2 capture systems with physical absorption are usually selected based on heuristics, engineering expertise and experimental trials. The performance of the separation system is, however, defined by both the properties of the selected solvent and the process conditions, which should be considered simultaneously. In this thesis, the Continuous Molecular Targeting - Computer Aided Molecular Design (CoMT-CAMD) framework is extended and applied to the simultaneous optimization of process and solvent for CO2 capture systems with physical absorption.","","en","doctoral thesis","","","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:cf99aded-77c9-42da-b0d4-009372620a2e","http://resolver.tudelft.nl/uuid:cf99aded-77c9-42da-b0d4-009372620a2e","Analysis of mass variations in Greenland by a novel variant of the mascon approach","Ran, J. (TU Delft Physical and Space Geodesy)","Klees, R. (promotor); Ditmar, P.G. (promotor); Delft University of Technology (degree granting institution)","2017","The Greenland ice sheet (GrIS) is currently losing mass, as a result of complex mechanisms of ice-climate interaction that need to be understood for reliable projections of future sea level rise. The thesis focuses on the estimation of mass anomalies in Greenland using data from the GRACE satellite gravity mission. Monthly GRACE gravity field solutions are post-processed using a new variant of the ""mascon approach''. Greenland is covered with multiple distinctive ""mascons'', assuming the mass anomalies within each one are laterally-homogeneous.
Gravity disturbances at mean satellite altitude are synthesized from the GRACE spherical harmonic coefficients. They are used as pseudo-observations to estimate the mascon mass anomalies using weighted least-squares techniques. No regularization is applied. The full noise covariance matrix of gravity disturbances is propagated from the full noise covariance matrix of spherical harmonic coefficients using the law of covariance propagation. Those matrices represent a complete stochastic description of random noise in the data, provided that it is Gaussian. The inverse noise covariance matrix is used as a weight matrix in the weighted least-squares estimate of the mascon mass anomalies. The limited spectral content of the gravity disturbances is accounted for by applying a low-pass filter to the design matrix providing a spectrally consistent functional model.
Using numerical experiments with simulated signal and data, we demonstrate the importance of the data weighting and of the spectral consistency between the mascon model and the pseudo-observations. The developed methodology is applied to process real GRACE data using CSR RL05 monthly gravity field solutions with full noise covariance matrices. We distinguish five GrIS drainage systems. The obtained mass anomaly estimates per mascon are integrated over individual drainage systems, as well as over entire Greenland. We find that using a weighted least-squares estimator reduces random noise in the estimates by factors ranging from 1.5 to 3.0, depending on the drainage system. Furthermore, we compare the de-trended mascon mass anomaly time-series with similar time-series from the Regional Atmospheric Climate Model (RACMO 2.3), which describes the Surface Mass Balance (SMB). We show that the weighted least-squares estimate reduces the discrepancies between the time-series by 24%-47%.
Then, we combine GRACE mass anomaly estimates, SMB model outputs, and ice discharge data to systematically analyze the mass budget of Greenland at various temporal and spatial scales. Among others, we reveal a substantial seasonal meltwater storage, which peaks in July, reaching in total 100 ± 20 Gt. Meltwater storage is particularly intense in the northern, northwestern and southeastern drainage systems. An analysis of outlet glacier velocities shows that the contribution of ice discharge to the seasonal mass variations is minor, at a level of only a few Gt. In addition, we propose a simple way to use GRACE data for validating SMB model outputs in winter, based on the fact that ice discharge cannot be negative.
Finally, we use numerical simulations and real data to identify the optimal GRACE data processing strategy (primarily the size of the mascons) for three temporal scales of interest: monthly mass anomalies, mean mass anomalies per calendar month, and long-term linear trends. We show that the two major contributors to the error budgets are random errors and parameterization (model) errors; the latter are caused by a spatial variability of actual mass anomalies within individual mascons. We find that the errors in long-term linear trend estimates are mainly caused by the parameterization errors, and that accurate estimates require small size mascons in combination with the ordinary least-squares estimator. The error budget of mean mass anomalies per calendar month is dominated by the parameterization error when the size of mascons is large and by random errors otherwise. Hence, accurate estimates require mascons of intermediate size in combination with a weighted least-squares estimator. Finally, we find that random errors are the dominant error source in monthly mass anomalies. We advise to use in this case large mascons and a weighted least-squares estimator.
Our new variant of the mascon approach and the results of this thesis can be used in support of future research on GrIS hydrology, glacier dynamics, and surface mass balance, as well as their mutual interactions.","Greenland Ice Sheet; GRACE; Ice discharge; melt water; Variance co-variance matrix; mascon; Surface mass balance","en","doctoral thesis","","978-94-92683-64-9","","","","","","","","","Physical and Space Geodesy","","",""
"uuid:3746fbe1-fd8a-4b5e-aa0b-a09c69ac0abc","http://resolver.tudelft.nl/uuid:3746fbe1-fd8a-4b5e-aa0b-a09c69ac0abc","Anisotropic Joint Migration Inversion: Automatic estimation of reflectivities and anisotropic velocities","Alshuhail, A.A. (TU Delft ImPhys/Acoustical Wavefield Imaging)","van Vliet, L.J. (promotor); Verschuur, D.J. (copromotor); Delft University of Technology (degree granting institution)","2017","One of the most crucial estimates obtained from reflection seismology is the seismic image. It provides a map of the subsurface reflectivities. However, in order to construct an accurate map an accurate propagation velocity model is needed. For simple geologic environments an isotropic velocity model is sufficient, however, for complex geologic environments an anisotropic velocity model is more appropriate and more realistic in describing wave propagation. Ignoring the anisotropic kinematics in these geologic environments will most definitely lead to sub-optimal or even poor imaging results, especially with the tendency of today's acquisition geometries that include measurements at large source-receiver offsets...","seismic imaging; Anisotropy","en","doctoral thesis","","978-94-6186-830-5","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:f1498c73-5caa-4858-904b-7cbbf9c04d9b","http://resolver.tudelft.nl/uuid:f1498c73-5caa-4858-904b-7cbbf9c04d9b","Experimental modeling of sloshing at small-scale: Relevance at full-scale through analysis of the physics of impacts","Karimi, M.R. (TU Delft Ship Hydromechanics and Structures)","Kaminski, M.L. (promotor); Ghidaglia, JM (promotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","","","","","","","","","","Ship Hydromechanics and Structures","","",""
"uuid:a989069c-54e4-4d80-a30a-a6fb9b333287","http://resolver.tudelft.nl/uuid:a989069c-54e4-4d80-a30a-a6fb9b333287","Design aspects of pipe belt conveyors","Zamiralova, M. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Delft University of Technology (degree granting institution)","2017","This dissertation investigates how to design a pipe belt conveyor system in a more effective way, considering two aspects: to ensure conveyor belt has sufficient bending stiffness to form an enclosed pipe shape without a contact loss with the idler rolls; and to reduce energy consumption of the system from the indentation rolling resistance.
The conveyor belt bending stiffness is quantified from the troughability test. To detect the occurrence of contact loss with the idler rolls, contact forces are determined using three approaches: experimental testing; a newly introduced analytical approach based on the Displacement Method of Superposition with Maxwell-Mohr integrals; and FEM analysis. To determine the indentation rolling resistance, a 3D Maxwell model is used with multiple Maxwell parameters and a Winkler foundation.
The use of specialized devices, in specialized server designs optimized for a certain class of workloads, is gaining momentum. Data movement has been demonstrated to be a significant drain of energy, and is furthermore a performance bottleneck when data is moved over an interconnect with limited bandwidth. With data becoming an increasingly important asset for governments, companies, and individuals, the development of systems optimized at the device and server level for data-intensive workloads is necessary. In this work, we explore some of the fundamentals required for such a system, as well as key use-cases...","Square Kilometre Array; computer architecture; near-data processing; high-performance computing","en","doctoral thesis","","978-94-6186-821-3","","","","","","","","","Computer Engineering","","",""
"uuid:9083a9cc-64a1-4676-9134-9f8652d629e0","http://resolver.tudelft.nl/uuid:9083a9cc-64a1-4676-9134-9f8652d629e0","Integrated capacity assessment and timetabling models for dense railway networks","Bešinović, Nikola (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Goverde, R.M.P. (copromotor); Delft University of Technology (degree granting institution)","2017","Mainline railways in Europe are experiencing increasing use as the worldwide demand for passenger and freight transport is growing across all transport modes. At the same time, much of the existing railway network is reaching its capacity and has become susceptible to disturbances. This thesis creates, optimizes, and evaluates railway timetables to promote more reliable, attractive and sustainable railway transport systems. In essence, we demonstrate that optimization, simulation and data analysis can be successfully applied to improving railway traffic planning and account for better infrastructure capacity use and increased level of service for passengers and freight operators.","","en","doctoral thesis","TRAIL Research School","978-90-5584-226-1","","","","TRAIL Thesis Series no. T2017/9, the Netherlands TRAIL Research School","","","","","Transport and Planning","","",""
"uuid:3997066a-0ad6-4de2-9c79-e5e474bae20f","http://resolver.tudelft.nl/uuid:3997066a-0ad6-4de2-9c79-e5e474bae20f","Bio-based ground improvement through Microbial Induced Desaturation and Precipitation (MIDP)","Pham, P.V. (TU Delft Geo-engineering)","van Paassen, L.A. (promotor); Heimovaara, T.J. (promotor); Delft University of Technology (degree granting institution)","2017","Improving and alternating soil foundation conditions is a common task in construction and civil engineering. Besides conventional ground improvement methods, there are several biological processes that can improve ground properties by precipitating calcium carbonate. Denitrification is one of the biological processes that can be used. The process generates besides carbonate precipitation also a gas phase in the soil. Therefore, the denitrification based method, or Microbially Induced Desaturation and Precipitation, MIDP, expands the potential of biological processes to improve the ground conditions for different applications.
Liquid batch experiments show that denitrification-based MICP is a coupled process, in which the denitrification and calcium carbonate precipitation processes influence and are beneficial for each other. To minimize accumulation of the denitrification intermediates, both the substrate ratio and the concentration to be used are important. When using a relatively low substrate concentration at the ratio that favours microbial growth, there was no nitrite at the end of the experiments, and the precipitation rate was up to 0.26 w%/day. This value is higher than the values observed in literature and improves the potential of using this process for ground improvement applications. After one batch treatment, sufficient gas was produced within 1 or 2 days to desaturate the sand to the gas percolation threshold, which ranged from 21 to 50% depending on pore size. The gas stability appeared to depend on the proportion of the produced gas volume relative to the gas percolation threshold of the soil. When the treated sand was subjected to monotonic loading in triaxial tests, the soil stiffness and dilatancy increased, and the pore pressure response in undrained loading was dampened. The treatments also decreased the soil hydraulic conductivity, and even led to clogging in the experiment using a low substrate concentration whereby microbial growth was favoured.
Overall, the thesis shows that MIDP is capable of altering the hydro-mechanical behaviour of sandy soils at laboratory scale, and can be applied for a wide range of ground improvement applications. Upscaling the investigation and optimizing it toward different specific applications remain for future work.
The bronzes have been studied from an interdisciplinary point of view, which has allowed the extraction of information that is also applicable to other corroded bronzes in general. It is argued that:
- corrosion products and inclusions may reflect original microstructure
- corrosion inhibitor BTAH binds to Sn and SnO2
- technical investigations before conservation increase the information value of an artefact.","bronze; corrosion; archaeology; conservation; biography; materials science; microstructure; BTAH; Oss-Zevenbergen; Early Iron Age","en","doctoral thesis","","","","","","","","","","","Team Joris Dik","","",""
"uuid:a1bec569-8d24-45ea-9e7e-63c0b900504e","http://resolver.tudelft.nl/uuid:a1bec569-8d24-45ea-9e7e-63c0b900504e","Models in Science and Engineering: Imagining, Designing and Evaluating Representations","Poznic, M. (TU Delft Ethics & Philosophy of Technology)","Kroes, P.A. (promotor); Hillerbrand, R.C. (promotor); Delft University of Technology (degree granting institution)","2017","How can one learn about particular phenomena by using models? This is the central question of the present book. One brief answer is that one can learn about phenomena by using models if these models represent the phenomena. A longer answer will be presented in the individual chapters. Answering this question involves not only (partially) explaining what representation is, but also how the notions of representation and evaluation are connected in the context of modeling. The thesis includes a fresh look at so-called similarity views on representation and a discussion of fictionalist accounts of modeling, while expanding on the general framework of indirect representation. A case study in bioengineering is used to show that the indirect view of representation must acknowledge a distinction between two directions of fit in relations between vehicles and targets. In this context the notion of design is interpreted as a relation between a vehicle and a target, thereby connecting ideas from philosophy of science with ideas from philosophy of technology. In the concluding chapters fictionalist accounts of modeling are discussed. These accounts are criticized from an epistemological point of view but the accounts’ foundational theory of make-believe is constructively applied to a case study in climate modeling.","","en","doctoral thesis","","978-90-386-4315-1","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:d825f0b8-c6a7-4aac-91f3-25fea6f72046","http://resolver.tudelft.nl/uuid:d825f0b8-c6a7-4aac-91f3-25fea6f72046","Fostering Engagement in Knowledge Sharing: The Role of Perceived Benefits, Costs and Visibility","Sedighi, M. (TU Delft Economics of Technology and Innovation)","Brazier, F.M. (promotor); van Beers, Cees (promotor); Lukosch, S.G. (copromotor); Delft University of Technology (degree granting institution)","2017","Knowledge sharing is an organisational process that is essential to create and maintain sustained competitive advantage by organisations to react quickly to changing external circumstances. Participation of employees is a major challenge to facilitate knowledge sharing within organisations. Knowledge exchange channels have been developed in knowledge management (KM) systems to support the emergent social nature of knowledge sharing. Establishing communication channels between participants does not assure that knowledge sharing will actually take place within or between organisations. Knowledge sharing performance depends onhow participants use the technologies provided...","","en","doctoral thesis","","","","","","SIKS Dissertation Series No. 2017-20","","2017-06-30","","","Economics of Technology and Innovation","","",""
"uuid:f0f06ff5-53b9-4a79-87c9-f49aee5fd405","http://resolver.tudelft.nl/uuid:f0f06ff5-53b9-4a79-87c9-f49aee5fd405","Electron density studies on magnetic systems","Boeije, M.F.J. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Brück, E.H. (promotor); van Dijk, N.H. (copromotor); Delft University of Technology (degree granting institution)","2017","In this thesis, the boundary conditions for the development of giant magnetocaloric materials are investigated. The magnetocaloric effect is found in magnetic materials, when they are subjected to an external magnetic field. This leads to abrupt magnetization changes that cause a temperature change in the material. Materials based on Fe2P show giant temperature changes around room temperature and are especially suited for cooling applications. This is due to the large magnetization change that can be realized with the application of a relatively small magnetic field around the ferro- to paramagnetic phase transition. This transition is of a first order, giving rise to latent heat and rise to
‘‘giant’’ effects. After earlier studies investigated the relation between microscopic and macroscopic properties of these materials, the focus of this thesis is on the electronic factors that play a role in the stability and phase transitions of these compounds. After all, when the mechanism behind these phase transitions is clear, is it easier to search for new materials that show similar phase transitions. Two strategies are possible: elucidating the mechanism of Fe2P-based materials or investigating materials that show similar phase transitions. The latter is described in the next paragraph...","","en","doctoral thesis","","978-94-6295-635-3","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:ed994629-1130-4c92-a6ec-1d03f5f5f211","http://resolver.tudelft.nl/uuid:ed994629-1130-4c92-a6ec-1d03f5f5f211","Carbonated water flooding: Process overview in the frame of co2 flooding","Peksa, A.E. (TU Delft Reservoir Engineering)","Zitha, P.L.J. (promotor); Wolf, K.H.A.A. (copromotor); Delft University of Technology (degree granting institution)","2017","The main scope of the work related to the physical and dynamical processes associated with the injection of carbonated water in porous media. Carbonated water flooding is an alternative for traditional CO2 flooding. Both methods have the potential to recover any oil left behind after primary and secondary recovery while storing CO2 at the same time. The advantage of Carbonated Water flooding as compared to CO2 flooding is related to the high buoyancy of CO2 that results in gravity separation if the CO2 is not dissolved in water. This results in sub-optimal flooding of the reservoir and possible leakage of CO2.","Enhanced oil recovery; carbonated water flooding; carbon capture and storage; Bentheimer sandstone; mineralogy; molecular diffusion; dielectric behavior; zeta potential; stagnant zone","en","doctoral thesis","","978-94-028-0687-8","","","","","","","","","Reservoir Engineering","","",""
"uuid:1ed4a443-3934-436a-ad8b-0e496b0f7be7","http://resolver.tudelft.nl/uuid:1ed4a443-3934-436a-ad8b-0e496b0f7be7","Form Follows Feeling: The Acquisition of Design Expertise and the function of Aesthesis in the Design Process","Curry, T.M. (TU Delft OLD Urban Compositions)","Bekkering, H.C. (promotor); Delft University of Technology (degree granting institution)","2017","While the consideration of functional and technical criteria, as well as a sense of coherence are basic requirements for solving a design problem; it is the ability to induce an intended quality of aesthetic experience that is the hallmark of design expertise. Expert designers possess a highly developed sense of design, or what in this research is called aesthesis. Reflection on 25 years teaching design in the USA, Hungary, and China led to the observation that most successful design students, more than intellectual ability, drawing, model making or drive, all seemed to possess what may be called an intuitive sense of good design. It is not that they already know how to design, or that they are natural designers, it is that they have a more developed sense aesthesis. This research takes a multi-disciplinary approach to build a theory that describes what is involved in acquiring design expertise,identifies how aesthesis functions in the design process, and determines if what appears to be an intuitive sense of design is just natural talent or an acquired ability.
The research started with topics related to design methodology, which led to questions related to cognitive psychology, especially theories of problem-solving. An in-depth review of research in embodied cognition challenged the disembodied concept of the mind and related presuppositions, and reintroduced the body as an essential aspect of human cognition. This led to related topics including: pre-noetic (pre-verbal) knowledge, the cognitive architecture of the brain, sense mechanisms and perception, limitations and types of memory as well as the processing capacity of the brain, and especially how emotions/feelings function in human cognition, offering insight into how designing functions as a cognitive process.
The research provides evidence that more than technical rationality, expert designers rely heavily on a highly developed embodied way of knowing (tacit knowledge) throughout the design process that allows them to know more than they can say. Indeed, this is the hallmark of expert performers in many fields. However, this ability is not to be understood as natural talent, but as a result of an intense developmental process that includes years of deliberate practice necessary to restructure the brain and adapt the body in a manner that facilitates exceptional performance. For expert designers it is aesthesis (a kind of body knowledge), functioning as a meta-heuristic, that allows them to solve a complex problem situation in a manner that appears effortless. Aesthesis is an ability that everyone possesses, but that expert designers have highly developed and adapted to allow them to produce buildings and built environments that induce an intended quality of aesthetic experience in the user. It is a cognitive ability that functions to both (re)structure the design problem and evaluate the solution; and allows the designer to inhabit the design world feelingly while seeking aesthetic resonance that anticipates the quality of atmosphere another is likely to experience. This ability is critical to the acquisition of design expertise.","Architecture; Cognitive Psychology; Problem-solving; Embodied Cognition; Tacit Knowledge; Design Process","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-63-3","","","","A+BE | Architecture and the Built Environment No 6 (2017)","","","","","OLD Urban Compositions","","",""
"uuid:0b67a0dd-a951-46f3-bbaa-86270e546c4e","http://resolver.tudelft.nl/uuid:0b67a0dd-a951-46f3-bbaa-86270e546c4e","Design of pattern-placed Revetments","Peters, D.J. (TU Delft Hydraulic Structures and Flood Risk)","Vrijling, J.K. (promotor); Verhagen, H.J. (copromotor); Delft University of Technology (degree granting institution)","2017","Revetment systems prevent erosion of dikes. The systems need to be stable under wave attack. The size and the weight of revetment elements is the main contribution to their stability. Pattern-placed revetments consist of relatively small blocks or column-shaped natural stones or concrete elements, placed in a regular grid. The pattern creates a regular distribution of joints and voids which limit the build-up of water pressure in the system. The pattern also contributes to the mechanical resistance against wave pressures. When applied on a slope gravity induces a down-slope force in the revetment and provides coherence through frictional interlocking. In this thesis this mechanism is studied in detail with model simulations and field measurements. The contribution of frictional interlocking makes the observed high resistance of pattern-placed revetment structure against concentrated loads comprehensible. This phenomenon can be used in the development of new revetment systems and in optimization of the design of revetment slopes and dikes.","","en","doctoral thesis","","978-946-295-665-0","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:fc4d5fc5-cee9-44ef-bca1-e69250c1480f","http://resolver.tudelft.nl/uuid:fc4d5fc5-cee9-44ef-bca1-e69250c1480f","Hybrid Monte Carlo methods in computational finance","Leitao Rodriguez, A. (TU Delft Numerical Analysis)","Oosterlee, C.W. (promotor); Delft University of Technology (degree granting institution)","2017","Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the dimensionality of the financial problem can be increased without reducing the efficiency significantly. The latter feature of Monte Carlo methods is important since it represents a clear advantage over other competing numerical methods. Furthermore, in the case of option valuation problems in multiple dimensions (typically more than five), theMonte Carlo method and its variants become the only possible choices. Basically, theMonte Carlo method is based on the simulation of possible scenarios of an underlying process and by then aggregating their values for a final solution. Pricing derivatives on equity and interest rates, risk assessment or portfolio valuation are some of the representative examples in finance, where Monte Carlo methods perform very satisfactorily. The main drawback attributed to these methods is the rather poor balance between computational cost and accuracy, according to the theoretical
rate ofMonte Carlo convergence. Based on the central limit theorem, theMonte
Carlo method requires hundred times more scenarios to reduce the error by one order...","","en","doctoral thesis","","978-94-028-0681-6","","","","","","","","","Numerical Analysis","","",""
"uuid:b60cb231-222d-434d-b987-cf36605bc719","http://resolver.tudelft.nl/uuid:b60cb231-222d-434d-b987-cf36605bc719","Surface wear reduction of bulk solids handling equipment using bionic design","Chen, G. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Schott, D.L. (copromotor); Delft University of Technology (degree granting institution)","2017","Bulk solids handling continues to play an important role in a number of industries. One of the issues during bulk solids handling processes is equipment surface wear. Wear results in high economic loss and increases downtime. Current wear reduction methods such as optimizing transfer conditions or using wear-resistant materials, have brought notable progress. Nevertheless, the wear loss is still significant. Therefore, new solutions for reducing the surface wear must be investigated.
Because wear also occurs to the surfaces of many biological organisms, inspirations for wear reduction can be obtained from biology. In this research, the bionic design method is explored to reduce the surface wear of bulk solids handling equipment.
This thesis first illustrates the analytical wear models in bulk solids handling. Next, the wear phenomena in biology are investigated. Based on the analogies between biology and bulk solids handling, a bionic design method for wear reduction of bulk solids handling equipment surfaces is developed. Furthermore, two bionic models, for reducing abrasive and erosive wear respectively, are proposed for application to bulk solids handling equipment surfaces.
To model the effects of applying bionic models on the surface wear of bulk solids handling equipment, the discrete element method (DEM) is utilized. Using the parameter values obtained from experiments, the wear of bionic surfaces and conventional smooth surfaces is successfully modeled.
By comparing the predicted wear loss from bionic surfaces and smooth surfaces, the effectiveness of reducing wear by application of bionic models is successfully demonstrated. Moreover, parametric studies on the geometrical parameters of the bionic models were carried out. The results demonstrate that when biological wear reduction mechanisms are implemented, wear reduction of bulk solids handling equipment surfaces can be achieved. It is shown that abrasive wear loss can be reduced by up to 63% whilst erosive wear loss can be reduced by up to 26%.","wear prediction; discrete element method; bulk solids handling; bionic design","en","doctoral thesis","TRAIL Research School","978-90-5584-227-8","","","","TRAIL Thesis Series T2017/8, the Netherlands TRAIL Research School","","","","","Transport Engineering and Logistics","","",""
"uuid:31ec6c27-2f53-4322-ac2f-2852d58dfa05","http://resolver.tudelft.nl/uuid:31ec6c27-2f53-4322-ac2f-2852d58dfa05","Design principles of multifunctional flood defences","Voorendt, M.Z. (TU Delft Hydraulic Structures and Flood Risk)","Vrijling, J.K. (promotor); Delft University of Technology (degree granting institution)","2017","Multifunctional flood defences are structures that primarily protect land from being covered by water and simultaneously serve other purposes. The present dissertation focuses on the combination of flood protection with functions that are fulfilled by means of buildings and objects, with a high degree of structural integration. This can typically be found in the urban context, where the combination of long-term flood protection and spatial quality is considered crucial for the viability of cities along rivers and seas.
The design of multifunctional flood defences is more complicated than that of plain flood defences, as two incompatible design cultures are involved: spatial design and hydraulic engineering. Moreover, the structural composition of multifunctional flood defences is more complex and more diverse than that of plain flood defences. Therefore, an overall, integrated design method was developed that maintains the strengths of both existing approaches but avoids their weaknesses. In addition, a method is proposed for verifying the flood protection function during the design phase. The methods were validated by design teams of students and by additional case studies.","flood defences; multidisciplinary design; hydraulic engineering; integrated design","en","doctoral thesis","","978-94-028-0678-6","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:2c54c1c6-3f5f-43cd-897c-78d59db28e04","http://resolver.tudelft.nl/uuid:2c54c1c6-3f5f-43cd-897c-78d59db28e04","Solid oxide fuel cell (SOFC) integrated power plants: System and kinetic studies","Thallam Thattai, A. (TU Delft Energy Technology)","Boersma, B.J. (promotor); Geerlings, J.J.C. (promotor); Aravind, P.V. (copromotor); Delft University of Technology (degree granting institution)","2017","Increased climate change over past decades has resulted in an increase in the average temperature (also called global warming) of Earth’s climate system. At the recent Paris climate conference (COP21) in 2015, 195 countries in the world have agreed upon a stringent plan to limit global warming below 2oC. This demands a significant reduction in the industrial emission of greenhouse gases, predominantly carbon dioxide (CO2). Existing fossil fuel (coal, natural gas) fired power plants account for the majority share in global carbon dioxide (CO2) and other harmful (SOx , NOx) emissions. Therefore clean, efficient and flexible power plant concepts need to be developed towards upgrading existing power plants and to meet the strict CO2 emission targets. Combined cycle power plants like the integrated gasification combined cycle, IGCC (coal based) and integrated reforming combined cycle, IRCC (natural gas based) can be utilized to produce electricity using fossil fuels at relatively high efficiencies compared to conventional single cycle plants.
Possible approaches to make IGCC/IRCC power plants cleaner, efficient and more flexible include biomass utilization (renewable energy source), application of CO2 capture technologies, retrofitting with highly efficient fuel conversion technologies like solid oxide fuel cells (SOFCs) and energy/fuel storage. This dissertation primarily aims to provide design concepts and thermodynamic system analysis for large scale IGCC and IRCC power plants with a focus on achieving high electrical efficiencies, low CO2 emissions and high operational flexibility. SOFCs have been explored as an efficiency augmenting technology and metal hydride based hydrogen storage as a flexibility option. Furthermore, future development of safe and optimally operating hydrocarbon (like natural gas (methane)) fuelled SOFC units on the basis of system and numerical models, requires reliable experimental data and understanding in the underlying reaction kinetics. Thereupon, an extended experimental study has been carried out in this work on methane steam reforming (MSR) kinetics in single operating SOFCs.","","en","doctoral thesis","","978-94-6186-823-7","","","","","","","","","Energy Technology","","",""
"uuid:e6fc3865-531f-4ea9-aeff-e2ef923ae36f","http://resolver.tudelft.nl/uuid:e6fc3865-531f-4ea9-aeff-e2ef923ae36f","Modeling, design and optimization of flapping wings for efficient hovering flighth","Wang, Q. (TU Delft Computational Design and Mechanics)","van Keulen, A. (promotor); Goosen, J.F.L. (copromotor); Delft University of Technology (degree granting institution)","2017","Inspired by insect flights, flapping wing micro air vehicles (FWMAVs) keep attracting attention from the scientific community. One of the design objectives is to reproduce the high power efficiency of insect flight. However, there is no clear answer yet to the question of how to design flapping wings and their kinematics for power-efficient hovering flight. In this thesis, we aim to answer this research question from the perspectives of wing modeling, design and optimization.
Quasi-steady aerodynamic models play an important role in evaluating aerodynamic performance and designing and optimizing flapping wings. In Chapter 2, we present a predictive quasi-steady model by including four aerodynamic loading terms. The loads result from the wing's translation, rotation, their coupling as well as the added-mass effect. The necessity of including all four of these terms in a quasi-steady model to predict both the aerodynamic force and torque is demonstrated. Validations indicate a good accuracy of predicting the center of pressure, the aerodynamic loads and the passive pitching motion for various Reynolds numbers. Moreover, compared to the existing quasi-steady models, the proposed model does not rely on any empirical parameters and, thus, is more predictive, which enables application to the shape and kinematics optimization of flapping wings.
For flapping wings with passive pitching motion, a shift in the pitching axis location alters the aerodynamic loads, which in turn change the passive pitching motion and the flight efficiency. Therefore, in Chapter 3, we investigate the optimal pitching axis location for flapping wings to maximize the power efficiency during hovering flight. Optimization results show that the optimal pitching axis is located between the leading edge and the mid-chord line, which shows a close resemblance to insect wings. An optimal pitching axis can save up to 33% of power during hovering flight when compared to optimized traditional wings used by most of the flapping wing micro air vehicles (FWMAVs). Traditional wings typically use the straight leading edge as the pitching axis. In addition, the optimized pitching axis enables the drive system to recycle more energy during the deceleration phases as compared to their counterparts. This observation underlines the particular importance of the wing pitching axis location for energy-efficient FWMAVs when using kinetic energy recovery drive systems.
The presence of wing twist can alter the aerodynamic performance and power efficiency of flapping wings by changing the angle of attack. In order to study the optimal twist of flapping wings for hovering flight, we propose a computationally efficient fluid-structure interaction (FSI) model in Chapter 4. The model uses an analytical twist model and the quasi-steady aerodynamic model introduced in Chapter 2 for the structural and aerodynamic analysis, respectively. Based on the FSI model, we optimize the twist of a rectangular wing by minimizing the power consumption during hovering flight. The power efficiency of the optimized twistable wings is compared with corresponding optimized rigid wings. It is shown that the optimized twistable wings can not dramatically outperform the optimized rigid wings in terms of power efficiency, unless the pitching amplitude at the wing root is limited. When this amplitude decreases, the optimized twistable wings can always maintain high power efficiency by introducing certain twist while the optimized rigid wings need more power for hovering.
Considering the high impact of the root stiffness on flapping kinematics and power consumption, we present an active hinge design which uses electrostatic force to change the hinge stiffness in Chapter 5. The hinge is realized by stacking three conducting spring steel layers which are separated by dielectric Mylar films. The theoretical model shows that the stacked layers can switch from slipping with respect to each other to sticking together when the resultant electrostatic force between layers, which can be controlled by the applied voltage, is above a threshold value. The switch from slipping to sticking will result in a dramatic increase of the hinge stiffness (about 9x). Therefore, a short duration of the sticking can still lead to a considerable change in the passive pitching motion. Experimental results successfully show the decrease of the pitching amplitude with the increase of the applied voltage. Flight control based on the electrostatic force can be very power-efficient since there is ideally no power consumption due to the control operations.
In Chapter 6, we retrospect and discuss the most important aspects related to the modeling, design and optimization of flapping wings for efficient hovering flight. In Chapter 7, the overall conclusions are drawn and recommendations for further study are provided.","flapping wing; passive pitching; pitching axis; aerodynamic model; power efficiency; optimization","en","doctoral thesis","","978-94-92516-57-2","","","","","","","","","Computational Design and Mechanics","","",""
"uuid:c9667e59-a7f9-45f1-a975-32c6dfe2d95d","http://resolver.tudelft.nl/uuid:c9667e59-a7f9-45f1-a975-32c6dfe2d95d","Quantum Transport Phenomena in Magnetic Molecules","Gaudenzi, R. (TU Delft QN/van der Zant Lab)","van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2017","In this work, we investigate a series of physical phenomena resulting from the confinement of electrons in those localised charge and spin quantum boxes generically called molecules.","Molecular magnetism; quantum transport; molecular electronics; radicals; Landauer principle; superconductivity; proximity effect","en","doctoral thesis","","9789085933052","","","","","","","","","QN/van der Zant Lab","","",""
"uuid:0f9fe428-baa0-4e8c-948f-e30a1c289727","http://resolver.tudelft.nl/uuid:0f9fe428-baa0-4e8c-948f-e30a1c289727","Situation Awareness for Socio Technical Systems: A simulation gaming study in intermodal transport operations","Kurapati, S. (TU Delft Policy Analysis)","Verbraeck, A. (promotor); Lukosch, H.K. (copromotor); Delft University of Technology (degree granting institution)","2017","Operating socio technical systems such as energy distribution networks, power plants, container terminals, and healthcare systems is a grand challenge. Decision making in these systems is complex due to their size, diversity, dynamism, social component, distributed nature, uncertainty, and vulnerability to disruptions. Human actors in these systems have to channel their pre-decision time to assess and classify current situation based on their individual or organizational goals rather than analyse possible alternatives for an optimal
outcome. In this effect, Situation Awareness, a human factor required to perceive, comprehend and project the future of a current situation is considered to be an essential prerequisite for decision making in socio technical systems. Although the importance of Situation Awareness is well established it has not
been studied extensively in socio technical systems. Therefore the key objective of this dissertation was to study the role of Situation Awareness on decision making and performance of individuals and teams in socio technical systems
within the context of intermodal transport operations in container terminals.","","en","doctoral thesis","TRAIL Research School","978-90-5584-225-4","","","","TRAIL Thesis Series no. 2017/7, the Netherlands Research School TRAIL","","","","","Policy Analysis","","",""
"uuid:54ea2136-ade7-4c0e-bf85-6d03a1a7af36","http://resolver.tudelft.nl/uuid:54ea2136-ade7-4c0e-bf85-6d03a1a7af36","Control of the doubly-fed induction machine for wind turbine applications","Nguyen Tien, H. (TU Delft Delft Center for Systems and Control)","Scherpen, J.M.A. (promotor); Scherer, C.W. (promotor); Hellendoorn, J. (promotor); Delft University of Technology (degree granting institution)","2017","The linear control of doubly-fed induction machines in wind power systems normally encounters the problems in variations of the rotor mechanical angular speed and other time-varying parameters. However, better performance requirements against changes in the machine parameters and exogenous inputs are desired. This can be achieved by appropriate controller design. Furthermore, the robustness of the controlled system under the effects of grid voltage dips is an important aspect as well. This work focuses on the use of linear matrix inequalities for analysis and synthesis of current controllers for doubly fed induction machines in wind power systems. The design is aimed at improving the robust dynamic performance of the controlled system in a wide range of mechanical rotor speed variations and reducing the effect of stator voltage dips when the grid undergoes a fault. A rigorous robust analysis based on the integral quadratic constraints approach is presented for evaluating and comparing the robustness of the controlled system with respect to the changes in the machine inductances and the rotor angular speed for the linear parameter varying approach and a conventional design.","","en","doctoral thesis","","978-94-6186-825-1","","","","","","","","Delft Center for Systems and Control","","","",""
"uuid:f0f9be64-5c8f-4432-8bae-61d121d32fcc","http://resolver.tudelft.nl/uuid:f0f9be64-5c8f-4432-8bae-61d121d32fcc","Under Pressure: Explorations on the dynamics of prioritization in dual-task driving","Jansen, R.J. (TU Delft Human Information Communication Design)","de Ridder, H. (promotor); van Egmond, R. (copromotor); Delft University of Technology (degree granting institution)","2017","Monitoring radio messages while driving is an omnipresent dual-task combination in police work, but it is also one that is considered unsafe for regular drivers. Whereas regular drivers are expected to fully prioritize the driving task, police officers typically do not have the option to stop their car to attend important incoming messages, nor can they afford an uninformed arrival at the scene. A novel method for the visualization of observational data shows that police work is highly fragmented, and suggests that frequent reports on work overload are related to dual- task involvement in this fragmented workflow. Therefore, a series of experimental studies have been conducted to understand the mechanics that underlie and result from task prioritization in a dynamic complex socio-technical system, such as the police context. Methodological implications are presented for the interpretation of tradeoffs between task performance and mental effort as function of task prioritization. Furthermore, practical implications are presented for the development of information technology in police vehicles. Finally, recommendations for future research include the validation of an integrated model on coping strategies, task prioritization, and dual-task switching.","","en","doctoral thesis","","97890065624086","","","","","","","","","Human Information Communication Design","","",""
"uuid:27f84d52-31cb-47ec-bc57-c4f30b6b1c5b","http://resolver.tudelft.nl/uuid:27f84d52-31cb-47ec-bc57-c4f30b6b1c5b","Evolving biocomplexity: Experimental studies on the evolution of protein complexes and ecological diversity","Flohr, R.C.E. (TU Delft BN/Bertus Beaumont Lab)","Beaumont, H.J.E. (promotor); Dogterom, A.M. (promotor); Delft University of Technology (degree granting institution)","2017","","compositional evolution; bacterial flagellar motor; adaptive radiation","en","doctoral thesis","","","","","","","","2021-12-16","","","BN/Bertus Beaumont Lab","","",""
"uuid:efd65279-4561-4733-9537-79bf6f4b2dfa","http://resolver.tudelft.nl/uuid:efd65279-4561-4733-9537-79bf6f4b2dfa","Advancing the manufacture of complex geometry GFRC for today's building envelopes","Henriksen, T.N. (TU Delft Design of Constrution)","Knaack, U. (promotor); Lo, S.N.G. (copromotor); Delft University of Technology (degree granting institution)","2017","Thin-walled glass fibre reinforced concrete (GFRC) panels are being used as the primary cladding material on many landmark buildings especially in the last decade. GFRC is an ideal material for building envelopes because it is durable, it can resist fire and the environmental impact is low compared to other materials, because the base materials used in the production of GFRC are widely available throughout the world. Thin-walled GFRC was initially developed as a cladding material in the 1970s and 1980s where the majority of the available research lies.
The introduction of 3D CAD software has enabled the design of buildings with complex shapes that, in the past, would have been rationalised to meet budget and time constraints. However, when GFRC has been proposed for buildings with a complex freeform geometry, it has been replaced with alternative materials such as glass fibre reinforced plastic (GFRP) due to the high cost and time required to fabricate suitable GFRC panels using conventional manufacturing methods. The literature showed that the empirical performance characterization of GFRC had not been researched in detail with regard to the limits of functionality, nor had any systematic approach to understanding its use in complex geometry building envelopes been developed.
As a first step, the key architectural demands and the main barriers and limitations in the manufacture of complex geometry thin-walled GFRC were identified through interviews with manufacturers and designers and visits to key buildings. This identified the key barrier to be the process of producing the mould for casting the complex geometry GFRC panels. Solutions to resolve this barrier were tested over several stages for each of the main production methods best suited to the manufacture of thin-walled GFRC, namely: the automated premixed method, the premixed method and the sprayed method. The results from the laboratory testing over all the stages, and the prototype structure manufactured with the solution identified from the testing, answered the main research question: How can the manufacture of complex geometry thin-walled GFRC be advanced to meet today’s architectural demands?","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-62-6","","","","A+BE | Architecture and the Built Environment No 5 (2017)","","","","","Design of Construction","","",""
"uuid:1e7e1b0e-727c-4703-b4f0-60432d739b9b","http://resolver.tudelft.nl/uuid:1e7e1b0e-727c-4703-b4f0-60432d739b9b","EMERGO: The Dutch flood risk system since 1986","Rijcken, T. (TU Delft Hydraulic Structures and Flood Risk)","Vrijling, J.K. (promotor); Kok, M. (promotor); Voûte, M.A. (copromotor); Delft University of Technology (degree granting institution)","2017","PART I | A RESEARCH AND DESIGN PROJECT ABOUT FLOOD RISK POLICY SINCE 1986 The period between the Dutch flood disaster of 1953 and the year 2016 can be divided into two eras, separated by the year 1986, when the famous Eastern Scheldt barrier was completed. The perspective of water professionals on flood risk policymaking during the three decades before 1986 was dominated by the reconstruction approach of the Delta Works and has frequently been studied. The three decades after 1986 have a less obvious general approach, which has not yet been studied in depth as a whole. This dissertation attempts to develop a coherent perspective on flood risk policy during the last 30 years. This thesis’s research objective is to develop a comprehensive flood risk and water systems analysis framework, to be used for two purposes. The first is to provide a historical interpretation of flood risk policy by answering the main research question: how can the development of the Dutch flood risk system since 1986 be characterised fundamentally? At the core of the thesis, three main historical trends are identified. The first trend results from a study of systematic approaches to flood risk through the years, the second addresses the relevance to flood risk of additional water system objectives (freshwater conveyance, shipping, nature/ecotopes and landscape quality), and the third involves additional new ideas or narratives that have been influential during the studied period.
The second purpose of the water systems analysis framework is to meet the design objective of the thesis: to design an internet platform that represents the systems analysis framework and illustrates the historical and future development of the Dutch flood risk system. The aim of the platform is to systematically organize and visualize the available studies and design projects, to educate about water systems, to inspire users to add contributions, and to monitor user behaviour in order to help indicate new research and design opportunities and support policy decisions. Acknowledged criteria for scientific and societal relevance guide the design throughout the thesis. Chapter 2 introduces the platform, which was called SimDelta in 2012 and renamed Flowz in 2017. A brief survey of approaches to water system planning and ‘serious games’ concludes that a graphic interface to visualise technical-physical complexity and socio-political complexity (or: supply and demand of analyses and ideas) is increasingly recognised to contribute to effective policymaking. A structure for the platform is proposed, consisting of six stackable software blocks: the base block contains interactive maps generated in a systems model, while the top block involves communication between stakeholders to make choices in a virtual problem-solution space. Usage over the internet makes it possible to record preferences and to ‘crowdsource’ corrections, improvements and new ideas. The extent to which the concept can contribute to policymaking can only be tested by developing it step by step. Chances for success will depend on how the platform relates to existing ways information is obtained and existing types of decision support.
PART II | AN INTEGRATED FLOOD RISK AND WATER SYSTEMS ANALYSIS FRAMEWORK (not added here…) PART III | STRUGGLING IN ‘MASLOW’S HIERARCHY’ FOR WATER INFRASTRUCTURE (not added here…).","Dutch flood risk policy; Integrated flood risk systems analysis framework; Design of a graphic language to represent the development of national water systems","en","doctoral thesis","","978-94-6233-669-8","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:1ec23f45-abd6-4310-b0e5-73338e655974","http://resolver.tudelft.nl/uuid:1ec23f45-abd6-4310-b0e5-73338e655974","Efficient predictive model-based and fuzzy control for green urban mobility","Jamshidnejad, A. (TU Delft Delft Center for Systems and Control)","Hellendoorn, J. (promotor); Papageorgiou, M. (promotor); Delft University of Technology (degree granting institution)","2017","In this thesis, we develop efficient predictive model-based control approaches, including model-predictive control (MPC) and model-based fuzzy control, for application in urban traffic networks with the aim of reducing a combination of the total time spent by the vehicles within the network and the total emissions. The thesis includes three main parts. In the first part, the main focus is on accurate approaches for estimating the macroscopic traffic variables, such as the temporal-spatial averages, from a microscopic point of view. The second part includes efficient approaches for solving the optimization problem of the nonlinear MPC controller. The third and last part of the thesis proposes an adaptive and predictive model-based type-2 fuzzy control scheme that can be implemented within a multi-agent control architecture.","","en","doctoral thesis","TRAIL Research School","978-90-5584-224-7","","","","TRAIL Thesis Series T2017/6, the Netherlands TRAIL Research School","","","","Delft Center for Systems and Control","","","",""
"uuid:1f384cf1-01c3-4484-a4ca-1ea1227e8bc0","http://resolver.tudelft.nl/uuid:1f384cf1-01c3-4484-a4ca-1ea1227e8bc0","Live cell studies of bacterial DNA replication, recombination, and degradation","Wiktor, J.M. (TU Delft BN/Cees Dekker Lab)","Dekker, C. (promotor); Delft University of Technology (degree granting institution)","2017","It seems that the evolution of life on this planet repeatedly acknowledges the value of
the genetic information, by providing an abundant variety of elegant mechanisms for the repair of damage occurring to DNA. The need for such mechanisms tells us that the state of cellular DNA is under constant threat of disintegration and decay. This may be obvious now, but it was previously neglected, and it took almost two decades after the discovery of the double-helix structure of DNA to realize that DNA is subject to a range of distinct forms of damage. Double-stranded breaks (DSBs) are a particularly dangerous form of damage, occurring when both of the DNA strands are broken at the same position along the DNA. In order to recover the integrity of the genome, the broken strands must undergo an elaborate process to find a repair template elsewhere in the cell where the same genetic code is imprinted. In this thesis we focus on aspects of the repair of such lesions, and we approach this problem from multiple angles to obtain insights into the processing of breaks and the relation between the homology search and the spatial organization of the bacterial genome.","recombination; end-resection; replication; double stranded breaks; DNA repair","en","doctoral thesis","","978-90-8593-306-9","","","","","","","","","BN/Cees Dekker Lab","","",""
"uuid:c7dedc60-45e1-4c58-86da-418b9b389ad4","http://resolver.tudelft.nl/uuid:c7dedc60-45e1-4c58-86da-418b9b389ad4","Teaching and learning science through design activities: A revision of design-based learning","van Breukelen, D.H.J. (TU Delft Science Education and Communication)","de Vries, M.J. (promotor); Delft University of Technology (degree granting institution)","2017","","science; technology; design; learning; teaching; concepts","en","doctoral thesis","","978-94-92679-02-4","","","","","","","","","Science Education and Communication","","",""
"uuid:7efd562f-82fa-468f-b074-bfa7d640a9ee","http://resolver.tudelft.nl/uuid:7efd562f-82fa-468f-b074-bfa7d640a9ee","Autonomous landing of Micro Air Vehicles through bio-inspired monocular vision","Ho, H.W. (TU Delft Control & Simulation)","Mulder, Max (promotor); de Croon, G.C.H.E. (copromotor); Delft University of Technology (degree granting institution)","2017","","monocular vision; bioinspiration; micro air vehicle; autonomous landing; optical flow; efference copies; appearance cues; distance estimation; adaptive control; visual servoing; obstacle detection; self-supervised learning","en","doctoral thesis","","978-94-6186-818-3","","","","","","","","","Control & Simulation","","",""
"uuid:7ee5405a-85f1-4bd2-b776-2013715c8783","http://resolver.tudelft.nl/uuid:7ee5405a-85f1-4bd2-b776-2013715c8783","Long-Term Behaviour of Railway Crossings: Wheel-Rail Interaction and Rail Fatigue Life Prediction","Xin, L. (TU Delft Railway Engineering)","Dollevoet, R.P.B.J. (promotor); Markine, V.L. (copromotor); Delft University of Technology (degree granting institution)","2017","Railway turnouts are important components of railway infrastructure as they provide flexibility and guidance to rail traffic. Because of geometrical discontinuities in the crossing area of the turnouts, high impact forces can occur as passing wheels act on the crossing nose. In the field, severe rail damage problems are found in crossing areas. Statistical evidence shows that turnout failures cause major operational disturbances in a railway network, which lead to higher maintenance costs compared with other track components.
The research presented here was motivated by the short service life of the turnout crossings observed in the Dutch railway network and by the need to improve the performance of railway turnouts. Moreover, there is a lack of advanced numerical tools, such as dynamic three-dimensional (3-D) models, to analyse wheel–rail interactions in crossings at the stress and strain level, particularly models that are coupled with life estimation of the crossing.
Therefore, the goal of this study is to develop numerical tools for the analysis of the dynamic interaction between the wheel and turnout crossing, and the prediction of fatigue life of crossings, aiming to improve the crossing performance and prolong its service life.","railway crossing; wheel-rail contact; finite element modeling; fatigue life prediction","en","doctoral thesis","","9789462956315","","","","","","","","","Railway Engineering","","",""
"uuid:ad3f23f8-5bb7-47d1-a42b-bd1043fed661","http://resolver.tudelft.nl/uuid:ad3f23f8-5bb7-47d1-a42b-bd1043fed661","Use of Affordances for Efficient Robot Learning","Wang, C. (TU Delft Interactive Intelligence)","Babuska, R. (promotor); Hindriks, K.V. (copromotor); Delft University of Technology (degree granting institution)","2017","","Robot Learning; Affordance; Reinforcement Learning; Developmental Robotics","en","doctoral thesis","","978-94-6186-814-5","","","","","","","","","Interactive Intelligence","","",""
"uuid:1686e932-2df7-41df-80af-643d5a34fb2f","http://resolver.tudelft.nl/uuid:1686e932-2df7-41df-80af-643d5a34fb2f","Conformance Control in Heterogeneous Oil Reservoirs with Polymer Gels and Nano-Spheres","Lenchenkov, N. (TU Delft Reservoir Engineering; TU Delft Lab Geoscience and Engineering)","van Kruijsdijk, C.P.J.W. (promotor); Delft University of Technology (degree granting institution)","2017","In many oil fields, water is injected into a reservoir to displace oil to the production wells. During the injection process, oil is pushed by water towards production wells, which have a lower pressure than the rest of the reservoir. If the reservoir is homogeneous, a good sweep efficiency of the waterflood process can be expected. However, most oil reservoirs are stratified, which creates a permeability contrast over the whole height. Highly permeable layers take most of the injected water, resulting in a lower sweep efficiency of the other layers. The water breaks through the highly permeable zones, significantly increasing the water cut of the produced fluid. The excess produced water has to be treated in surface facilities, which increases the costs of the extraction process. Another disadvantage of a low sweep of a reservoir is the significant amount of oil remaining behind the displacement front...","polymers; conformance control; improved oil recovery; nano-spheres; cross-linked polymers; flow in porous media; electron microscopy; dynamic light scattering","en","doctoral thesis","","978-94-6233-668-1","","","","","","","","","Reservoir Engineering","","",""
"uuid:074d4f96-e7bf-4dde-a7b4-86c9dd2e214f","http://resolver.tudelft.nl/uuid:074d4f96-e7bf-4dde-a7b4-86c9dd2e214f","Increasing the Feasibility of Superconducting Generators for 10 MW Direct-Drive Wind Turbines","Liu, D. (TU Delft DC systems, Energy conversion & Storage)","Ferreira, Jan Abraham (promotor); Polinder, H. (copromotor); Abrahamsen, Asger Bech (copromotor); Delft University of Technology (degree granting institution)","2017","In recent years, superconducting synchronous generators (SCSGs) have been proposed as an alternative to permanent magnet synchronous generators (PMSGs). They are expected to reduce the top head mass and the nacelle size of large direct-drive wind turbines. In 2012, the INNWIND.EU project initiated this research to investigate SCSGs for 10-20 MW direct-drive offshore wind turbines. However, the feasibility of SCSGs was limited by a few critical issues, such as high costs, AC losses in the superconducting winding and excessive short-circuit torque. Furthermore, the SCSG designs proposed in the literature were varied, but all were less competitive than PMSGs.","Feasibility; superconducting generator; direct-drive; wind turbine","en","doctoral thesis","","978-94-6299-627-4","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:f2d98b1a-59f4-406d-9e0a-2dc011159e6b","http://resolver.tudelft.nl/uuid:f2d98b1a-59f4-406d-9e0a-2dc011159e6b","Universal characterization of wall turbulence for fluids with strong property variations","Patel, A. (TU Delft Energy Technology)","Boersma, B.J. (promotor); Pecnik, Rene (copromotor); Delft University of Technology (degree granting institution)","2017","Wall-bounded turbulence involving the mixing of scalars, such as temperature or concentration fields, plays an important role in many engineering applications. In applications with large temperature or concentration differences, the variation of scalar-dependent thermophysical properties can be strong. In such cases, the strong coupling between energy and momentum alters the conventional behavior of turbulence. This alteration results in peculiar momentum and heat transfer characteristics, for which the conventional scaling laws for constant-property flows fail and cannot be applied. The aim of this work is to characterize wall-bounded turbulence for fluids that have large near-wall gradients in thermophysical properties. The focus is on variable-inertia effects in the low-Mach-number limit, without the influence of buoyancy.","Turbulent boundary layer; Direct numerical simulation; Variable density effects; Scalar dependent properties; Turbulence modeling","en","doctoral thesis","","978-94-6233-661-2","","","","","","","","","Energy Technology","","",""
"uuid:8feebaa3-3dde-4502-b15d-6057127b0bcd","http://resolver.tudelft.nl/uuid:8feebaa3-3dde-4502-b15d-6057127b0bcd","Modeling and Design of Brushless Doubly-Fed Induction Machines","Wang, X. (TU Delft DC systems, Energy conversion & Storage)","Ferreira, Jan Abraham (promotor); Lahaye, D.J.P. (copromotor); Polinder, H. (copromotor); Delft University of Technology (degree granting institution)","2017","The rapid increase of wind power in the power grid results in high grid-connection requirements for wind turbines. Moreover, the reliability of wind turbines is becoming more and more important, especially in offshore applications. One potential solution for these demands is a wind turbine drive-train based on the brushless doubly-fed induction machine (DFIM). This machine type has no brushes or slip rings on the rotor side, which makes it an attractive alternative to the DFIM commonly employed in the current market. However, the brushless DFIM has not yet been commercialized. Therefore, the primary objective of this thesis is the ‘modeling and design of the brushless DFIM, to advance the development of this machine type for wind turbine applications’. A computationally efficient FE model is proposed to evaluate the performance of the brushless DFIM. An efficient, flexible and accurate optimization approach is then developed by combining the computationally efficient FE model with the NSGA-II multi-objective optimization algorithm. Compared with normal induction machines, the brushless DFIM is expected to have more severe noise and vibrations and lower power quality due to many undesired space harmonics. The 2D multi-slice FE model is applied to investigate whether skewing the rotor slots is useful to overcome these drawbacks of brushless DFIMs. Based on the study of the space- and time-harmonics in brushless DFIMs, a computationally efficient method is proposed to investigate the effects of skew at the initial design stage. 
Sixteen constructions are evaluated to gain more design guidelines for nested-loop rotors. The complicated space- and time-harmonics, the influence of the rotor skew and the influence of the nested-loop configurations are studied and validated by carrying out measurements on a small-scale prototype with four different rotors. The 3D magneto-static FE model is applied to investigate the axial flux due to the skewed slots, which is neglected in the 2D multi-slice FE model. Finally, all the modeling methods and the design guidelines are brought together to optimize the design of a 3.2 MW brushless DFIM. The results show that increasing the magnetic loading of the brushless DFIM improves the design in terms of active material cost and machine efficiency. However, the brushless DFIM does not show advantages over normal DFIGs and PM generators in terms of efficiency and shear stress. Nevertheless, considering the additional advantages in maintenance and reliability, the brushless DFIM may provide a feasible option for wind turbine drive-trains.","","en","doctoral thesis","","978-94-6299-625-0","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:0e7978f3-18a1-4f34-8fce-965a304953dd","http://resolver.tudelft.nl/uuid:0e7978f3-18a1-4f34-8fce-965a304953dd","Computational Modelling of Compaction in Asphaltic Mixtures and Geomaterials","Alipour, A. (TU Delft Pavement Engineering)","Scarpas, Athanasios (promotor); Delft University of Technology (degree granting institution)","2017","Asphaltic mixtures are heterogeneous composite materials consisting of aggregates coated and bound by asphalt binder. The long term performance of asphaltic pavements is highly dependent on the mechanical behaviour of the asphaltic mixture during construction (mixing and compaction) and operation; inadequate mixture compaction leads to faster moisture and oxygen diffusion, ravelling, rutting and poor fatigue life.","","en","doctoral thesis","","978-94-6186-819-0","","","","","","","","","Pavement Engineering","","",""
"uuid:67b99fa5-cb8d-4230-b683-5a045323e772","http://resolver.tudelft.nl/uuid:67b99fa5-cb8d-4230-b683-5a045323e772","Characterizing Dominant Processes in Landfills to Quantify the Emission Potential","van Turnhout, A.G. (TU Delft Geo-engineering)","Heimovaara, T.J. (promotor); Kleerebezem, R. (copromotor); Delft University of Technology (degree granting institution)","2017","Our ever-growing amount of solid waste puts a burden on future generations and the environment due to emissions of contaminants such as CO2, CH4, Cl- and heavy metals for hundreds of years. It is therefore essential that landfill after-care methods are developed that reduce the emission potential of landfills to acceptable levels within the time-span of one generation. Several treatment methods, such as aeration and leachate recirculation, have shown promising results in reducing concentrations of problematic compounds in leachate and landfill gas emissions. However, for application as full-scale technologies, long-term evidence of a sustainable reduction in emission potential has yet to be provided in practice. It is not possible to measure emission potential directly. Predictions of future emissions from landfills require emission modeling, in which emission potential is a crucial parameter. The aim of the research presented in this thesis is to present a conceptual modeling approach which increases the confidence in such long-term predictions by reducing
the parameter and model uncertainty in a systematic way. As such, the approach allows us to quantify the emission potential. Chapters 2 and 3 of this thesis present an approach to develop and select biochemical and physical process networks in a generic conceptual model that allows us to optimally describe measured emissions from lysimeter experiments under anaerobic and aerobic conditions. These networks give a detailed description of the mass balances of contaminants and bacteria in the solid, liquid and gas phases. As a consequence, the main emission pathways and rate-limiting processes are identified. Our results give strong indications that only a relatively small amount of the solid waste material present contributes to the measured emissions. The toolbox developed for this thesis integrates information from different databases with approaches to obtain and couple thermodynamic/kinetic parameters and processes, in order to efficiently evaluate a wide variety of networks via Bayesian inference using quantitative criteria. In chapter 4, the optimal biochemical and physical process networks, calibrated at the lysimeter and column scale, are applied to predict the emissions at landfill scale. This is achieved by coupling the process networks to a water balance model that calculates the leachate production using a stochastic residence time distribution of water within the waste body. The parameters of the stochastic residence time model are obtained by optimization using daily leachate production, rainfall and evaporation measurements. After calibration, the decrease in mass of different contaminants present in the waste body gives a quantitative estimate of the full-scale emission potential as a function of time. Results are shown for measured time series of leachate quantity and leachate quality (e.g. Cl–, Na+ and NH4+), but can easily be extended to other parameters. 
In chapter 5, the effectiveness of different aeration strategies is investigated based on modeled distributions of oxygen throughout a waste-body. The model is based on Darcy’s law for two-phase flow with parameters measured in laboratory experiments. Modeled gas extraction rates are in reasonable agreement with extraction rates measured at landfills. The results present optimal well configurations and aeration strategies for effective treatment. The thesis concludes with a list of the most important research steps for reducing the uncertainty in the approaches for quantification of full scale emission potential in the near future.","Municipal solid waste; Quantification; Emission potential; Biogeochemical modeling toolbox; Aeration; Recirculation; Stochastic; Hydrology","en","doctoral thesis","","978-94-028-0680-9","","","","","","","","","Geo-engineering","","",""
"uuid:916c5762-2a08-4a60-957f-bbc99e416ea9","http://resolver.tudelft.nl/uuid:916c5762-2a08-4a60-957f-bbc99e416ea9","Spin-orbit interaction in ballistic nanowire devices","Kammhuber, J. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Delft University of Technology (degree granting institution)","2017","Similar to their charge, electrons also possess an intrinsic magnetic moment called spin. When moving through an electric field, electrons experience an effective magnetic field in their rest frame, which interacts with the spin and influences its direction. This spin-orbit interaction creates a measurable shift in the splitting of atomic energy levels and in the energy bands of solid-state systems. Recently, it has been proposed that systems with strong spin-orbit interaction can be used to engineer novel topological states of matter, which are predicted to host non-Abelian quasiparticles. These could generate robust quantum states that are protected against decoherence. The research in this thesis focuses on indium antimonide (InSb) nanowires, which combine exceptionally strong spin-orbit interaction with large g-factors and high electron mobilities. This makes them one of the most promising systems for realizing topological qubits based on Majorana zero modes (MZMs).","","en","doctoral thesis","","978-90-8593-302-1","","","","Casimir PhD Series, Delft-Leiden 2017-18","","2018-07-01","","","QRD/Kouwenhoven Lab","","",""
"uuid:40f6b033-0e6a-460b-9501-30cf35a99b8d","http://resolver.tudelft.nl/uuid:40f6b033-0e6a-460b-9501-30cf35a99b8d","Experimental Investigation on the Desiccation and Fracturing of Clay","Tollenaar Gonzalez, R.N. (TU Delft Geo-engineering)","Jommi, C. (promotor); van Paassen, L.A. (copromotor); Delft University of Technology (degree granting institution)","2017","Waterways and lakes in low-lying delta areas require regular dredging for maintenance. Reuse of dredged sediments has been proposed for various applications. However, directly after dredging, the physical characteristics of these sediments make them generally unsuited for immediate reuse. They are often placed on land, where they are allowed to ripen through a combination of drainage, consolidation and evaporation. At a certain stage during the desiccation process, cracks will develop, affecting the physical properties of the material, including its strength, stiffness and hydraulic conductivity, as well as drainage, water infiltration and evaporation. The soil composition can likewise be altered by biochemical processes, which are stimulated by the ingress of oxygen through the cracked surface. Consequently, identifying how cracks develop in the soil is essential for understanding the behavior of the material.
Much research has been performed on desiccation cracks, but the underlying mechanisms are still not fully understood. It is known that the drying speed affects the final amount of cracks in a soil, which points to the potential impact of rate effects in soil cracking. The effects of the drying rate might be related to variations in the tensile strength influenced by different shrinkage rates, as well as to its impact on the suction development in the soil.
In this thesis, different sets of experiments were carried out to study the phenomenon of desiccation fracturing in soil. The first series of tests was carried out in a controlled laboratory environment to study the crack development in drying clay slurries under different initial and boundary conditions. The outcomes of those tests indicated that cracks can propagate in a different way than commonly assumed (namely, below the surface) and that fracture characteristics strongly depend on the initial and boundary conditions.
A second series of tests examined the combined effects of pull rates and high water contents on the tensile strength. Particle Image Velocimetry analysis was also carried out on pictures taken during the tests to examine the strains generated. It was found that the effect of the pull rate on the tensile strength of the clay was negligible compared to the effect of the water content. The pull rate did, however, affect the stiffness response of the soil. The findings revealed that the influence of the evaporation rate on soil fracturing might be related more to the rate dependency of the stiffness than to significant changes in tensile strength.
The last series of experiments investigated the drying behaviour of a clay with different initial water contents and under different evaporative conditions. The small-scale evaporation experiments were carried out using commercially available suction-measuring equipment with an adjusted test procedure. The results showed that the initial conditions have a great influence on the drying performance of a soil, which can be partly attributed to the influence of the surface texture and the pore structure. It was observed that, under certain circumstances, the evaporation from a soil surface can be higher than that from open water. The different evaporation rates had a marked effect on the water distribution with depth within the soil. The evaporation rate also produced a dynamic response of the soil water retention curve.
Finally, a simple one-dimensional model was set up to capture the behaviour observed during all the laboratory tests. It also served to analyze the consequences of different hypotheses about the material behaviour on the crack onset in a homogeneous soil layer undergoing surface drying.","Soil Cracking; Soil Fracturing; Desiccation; Tensile Strength; Pull Rate; Tensile Test; Geo-PIV; Drying Rate; Suction; Evaporation","en","doctoral thesis","","978-94-92516-59-6","","","","","","","","","Geo-engineering","","",""
"uuid:b1b5332a-33d5-4fd4-b386-3acc48e12003","http://resolver.tudelft.nl/uuid:b1b5332a-33d5-4fd4-b386-3acc48e12003","Mechanisms of boundary layer transition induced by isolated roughness","Ye, Q. (TU Delft Aerodynamics)","Scarano, F. (promotor); Schrijer, F.F.J. (copromotor); Delft University of Technology (degree granting institution)","2017","Boundary layer transition is a relevant phenomenon in many aerodynamic and aero-thermodynamic problems and has been investigated extensively from the past century to recent times. Among the factors affecting the transition process, surface roughness plays a key role. When a roughness element with a sufficiently large height (h) compared to the boundary layer thickness (δ) is immersed in a laminar boundary layer, it will produce spanwise-varying disturbances with the potential to accelerate the transition process. In this thesis, a fundamental study is carried out to understand the physical mechanism of transition induced by an isolated roughness element. Experiments are performed in the incompressible flow regime, covering both critical and supercritical conditions. Tomographic particle image velocimetry (PIV) is employed as the main experimental diagnostic technique, returning the three-dimensional velocity and vorticity fields of the flow.
The three-dimensional wake flow behaviour is first identified behind a roughness element of micro-ramp geometry. The micro-ramp produces a pair of counter-rotating streamwise vortices in the wake, transporting low-momentum fluid away from the wall through the central upwash motion and sweeping high-momentum flow sideways towards the near-wall region. The shear layer around the central low-speed region is related to the growth of the Kelvin-Helmholtz (K-H) instability. The streamwise extent over which the primary vortices and the central low-speed region remain active is associated with the selection of the dominant instability mechanism, and decreases with increasing roughness-height-based Reynolds number (Reh).
The instantaneous flow field reveals that the earliest unstable structures, featuring a hairpin shape, are caused by the K-H instability at the separated shear layer. The evolution of the K-H vortices is strongly influenced by Reh. At Reh = 1170, the K-H vortices are lifted up by the upwash motion of the quasi-streamwise vortices, followed by pairing, distortion and finally breakdown. The active region of the K-H vortices is separated from the inception of the turbulent wedge, where early-stage transition occurs. When Reh decreases towards the critical value, the K-H vortices progress gradually until the overall shear layer is destabilized, indicating the correlation between the K-H instability and transition. The POD analysis yields symmetric (K-H) and asymmetric modes. The disturbance energy associated with the symmetric modes changes with Reh. At higher Reh, the disturbance energy of the symmetric modes quickly decays, yielding a contribution comparable to that of the asymmetric modes. When Reh < 1000, the symmetric modes produce a remarkably higher level of disturbance energy until the onset of transition, indicating their dominance.
The effectiveness of a roughness element in promoting transition is strongly influenced by its geometry. Bluff-front roughness elements induce horseshoe vortices due to upstream separation. The different rotation direction of these vortices compared to the micro-ramp leads to an earlier inception of the sideward growth of fluctuations and a more rapid transition process. For the slender micro-ramp, in contrast, a significantly longer distance is required for the onset of transition.
In addition, the restoration of dermal wounds is often perturbed during the initial period post-wounding, which might result in the development of, for instance, contractures and hypertrophic scar tissue. Unfortunately, the causal pathways that lead to the formation of contractures and hypertrophic scar tissue are unknown at present. Furthermore, even in the absence of complications, it is very difficult to influence the material properties of developing scar tissue. A better understanding of the mechanisms underlying the (aberrant) healing of dermal wounds will probably improve the treatment of dermal wounds and will, consequently, reduce the probability of the occurrence of sequelae, such that the newly generated tissue in a recovered wounded area is more akin to the original tissue. Therefore, considerable resources have been allocated to researching these mechanisms with in vivo and in vitro experiments. This has yielded much knowledge about these mechanisms. However, much still remains incompletely understood. This is partly due to the intrinsic complexity of the wound healing process, but it is also a consequence of the fact that it is very difficult to study the interactions between different components of the wound healing cascade with experimental studies.
A way to deal with this latter issue is to use mathematical models. With these models it is possible to simulate components of the wound healing cascade and to investigate the interactions between these components. The results obtained with these models might aid in disentangling which components of the wound healing cascade influence the material properties of the scar tissue. Furthermore, these results might provide insights into which components of the wound healing response are disrupted during the formation of contractures and hypertrophic scar tissue. For these reasons, several mathematical models were developed during this investigation.
In Chapter 3, a hybrid model is presented that was used to study wound contraction and the development of the distribution of the collagen bundles in relatively small, deep dermal wounds. In this model, cells are modeled as discrete, inelastic spheres, while the other components are modeled as continuous entities. After obtaining baseline simulation results, the impact of macrophage depletion and of the application of a transforming growth factor-beta receptor antagonist on both the degree of wound contraction and the overall distribution of the collagen bundles was investigated. Depletion of the macrophages during the execution of the wound healing cascade results in delayed healing of the wound. Furthermore, the depletion of the macrophages hardly influences the geometrical distribution of the collagen bundles in the recovering wounded area. However, the depletion does result in an increase of the final surface area of the recovered wounded area. Imitating the application of a transforming growth factor-beta receptor antagonist also results in an increase of the surface area of the recovering wounded area. In addition, the application of the antagonist results in a more uniform distribution of the collagen bundles in the recovered wounded area.
In Chapter 4, a continuum hypothesis-based model is presented that was used to investigate how certain components of the wound environment and the wound healing response might influence the contraction of the wound and the development of the geometrical distribution of collagen bundles in relatively large wounds. In this model, all components are modeled as continuous entities. The dermis is modeled as an orthotropic continuous solid with bulk mechanical properties that depend locally on both the local concentration and the local geometrical distribution of the collagen bundles. The simulation results show that the distribution of the collagen bundles influences the evolution over time of both the shape of the recovering wounded area and the degree of overall contraction of the wounded area. Interestingly, these effects are solely a consequence of alterations in the initial overall distribution of the collagen bundles, and not a consequence of alterations in the evolution over time of the different cell densities and concentrations of the modeled constituents. In addition, the evolution over time of the shape of the wound is also influenced by the orientation of the collagen bundles relative to the wound, while this relative orientation does not influence the evolution over time of the relative surface area of the wound. Furthermore, the simulation results show that ultimately the majority of the collagen molecules end up permanently oriented toward the center of the wound and in the plane that runs parallel to the surface of the skin when the dependence of the direction of deposition/reorientation of collagen molecules on the direction of movement of cells is included in the model. If this dependence is not included, the newly generated tissue ultimately has a collagen bundle distribution that is exactly equal to that of the surrounding uninjured tissue.
In Chapter 5, a continuum hypothesis-based model is presented that was used to investigate in more detail which elements of the healing response might have a substantial influence on the contraction of burns. That is, a factorial design combined with a regression analysis was used to quantify the individual contributions of variations in the values of certain model parameters to the dispersion in the surface area of healing burns. Only a portion of the dermal layer was included explicitly in the model. The dermal layer is modeled as an isotropic compressible neo-Hookean solid. Wound contraction is caused in the model by temporary pulling forces. These pulling forces are generated by the myofibroblasts present in the recovering wounded area. Based on the outcomes of the sensitivity analysis, it was concluded that most of the variability in the evolution of the surface area of healing burns over time might be attributed to variability in the apoptosis rate of myofibroblasts and, to a lesser extent, the secretion rate of collagen molecules.
In Chapter 6, a continuum hypothesis-based model is presented that was used to investigate what might cause the formation of hypertrophic scar tissue. All components of the model are modeled as continuous entities. Only a portion of the dermal layer of the skin is modeled explicitly, and this portion is modeled as an isotropic compressible neo-Hookean solid. In the model, pulling forces are generated by the myofibroblasts present in the recovering wounded area. These pulling forces are responsible for both the compaction and the increased thickness of the recovering wounded area. A comparison between the outcomes of the computer simulations obtained in this study and clinical measurements shows that a relatively high apoptosis rate of myofibroblasts results in scar tissue that behaves like normal scar tissue with respect to the evolution of its thickness over time, while a relatively low apoptosis rate results in scar tissue that behaves like hypertrophic scar tissue in this respect. Interestingly, this result agrees with the suggestion that disruption of apoptosis (i.e., a low apoptosis rate) during wound healing might be an important factor in the development of pathological scarring.
In Chapter 7, a continuum hypothesis-based model is presented that was used to simulate contracture formation in skin grafts covering excised burns, in order to obtain suggestions regarding the ideal length of splinting therapy and when to start this therapy such that it is optimally effective. All components of the model are modeled as continuous entities. Only a portion of the dermal layer is modeled explicitly, and this portion is modeled as an isotropic morphoelastic solid. In the model, pulling forces are generated by the myofibroblasts present in the skin graft. These pulling forces are responsible for the compaction of the skin graft. Based on the simulation results obtained with the presented model, it is suggested that the optimal point in time to start splinting therapy is directly after placement of the skin graft on its recipient bed. Furthermore, the simulation results suggest that it is desirable to continue splinting therapy until the concentration of the signaling molecules in the grafted area has become negligible, such that the formation of contractures can be prevented.","Dermal wound healing; Fibroblasts; Collagen bundles; Wound contraction; Hypertrophic scar tissue; Contracture formation; Biomechanics; neo-Hookean solid; Morphoelasticity; Sensitivity analysis; Moving-grid finite-element method; Element resolution adaptation; Flux-corrected transport limiter; Adaptive time-stepping","en","doctoral thesis","","978-94-6295-661-2","","","","","","","","","Numerical Analysis","","",""
"uuid:f8112b0f-d697-4e5c-bbff-ea7eae5ab50c","http://resolver.tudelft.nl/uuid:f8112b0f-d697-4e5c-bbff-ea7eae5ab50c","Wind turbine rotor aerodynamics: The IEA MEXICO rotor explained","Zhang, Y. (TU Delft Wind Energy)","van Zuijlen, A.H. (promotor); van Bussel, G.J.W. (promotor); Delft University of Technology (degree granting institution)","2017","Wind turbines operate under very complex and uncontrolled environmental conditions, including atmospheric turbulence, atmospheric boundary layer effects, and directional and spatial variations in wind shear. Over the past decades, the size of commercial wind turbines has increased considerably. All the complex and uncontrolled conditions mentioned above result in uncertainties in the calculation of aerodynamic loads on very large wind turbine blades, and thus better numerical codes are needed for predicting the loads in the design phase. With the aim of eliminating these uncontrolled effects and improving the aerodynamic models, several important experimental campaigns on different wind turbine models have been performed in large wind tunnels in recent decades. The objective of such experiments (e.g. using the NREL wind turbine and the MEXICO rotor) is to provide high-quality measurement data which can be used to validate numerical models and improve numerical codes of different fidelity, particularly for predicting wind turbine aerodynamic loads.","MEXICO rotor; rotor aerodynamics; CFD; OpenFOAM; ZigZag effects; loads overprediction; transition modeling; turbulence modeling; detached eddy simulation; PIV","en","doctoral thesis","","978-94-6186-815-2","","","","","","","","","Wind Energy","","",""
"uuid:764f8b6b-4fea-4f64-9915-8afe97d54179","http://resolver.tudelft.nl/uuid:764f8b6b-4fea-4f64-9915-8afe97d54179","Static and Dynamic properties of Cubic Chiral Magnets","Qian, F. (TU Delft RST/Neutron and Positron Methods in Materials)","Pappas, C. (promotor); Delft University of Technology (degree granting institution)","2017","The research presented in this thesis focuses on chiral magnets, where skyrmion lattices are stabilised by magnetic fields. Neutron scattering, magnetisation and magnetic susceptibility measurements have been performed on several typical chiral magnets such as the multiferroic insulator Cu2OSeO3 and the metallic MnSi and FeGe...","","en","doctoral thesis","","978-94-92516-56-5","","","","","","","","","RST/Neutron and Positron Methods in Materials","","",""
"uuid:839b7277-4c34-4c8b-8869-a0beb896cbd6","http://resolver.tudelft.nl/uuid:839b7277-4c34-4c8b-8869-a0beb896cbd6","Point cloud data fusion for enhancing 2d urban flood modelling","Meesuk, V. (TU Delft Environmental Fluid Mechanics)","Mynett, A.E. (promotor); Vojinovic, Zoran (promotor); Delft University of Technology (degree granting institution)","2017","Modelling urban flood dynamics requires proper handling of a number of complex urban features. Although high-resolution topographic data can nowadays be obtained from aerial LiDAR surveys, such top-view LiDAR data still have difficulty representing some key components of urban features. Incorrectly representing features like underpasses through buildings, or apparent blockage of flow by sky trains, may lead to misrepresentation of actual flood propagation, which could easily result in inadequate flood protection measures. Hence, proper handling of urban features plays an important role in enhancing urban flood modelling. This research explores present-day capabilities of using computer-based environments to merge side-view Structure-from-Motion data acquisition with top-view LiDAR data to create a novel Multi-Source Views (MSV) topographic representation for enhancing 2D model schematizations. A new MSV topographic data environment was explored for the city of Delft and compared with the conventional top-view LiDAR approach. Based on the experience gained, the effects of different topographic descriptions were explored for 2D urban flood models of (i) Kuala Lumpur, Malaysia for the 2003 flood event; and (ii) Ayutthaya, Thailand for the 2011 flood event.
It was observed that, by adopting the new MSV data as the basis for describing the urban topography, the numerical simulations provide a more realistic representation of complex urban flood dynamics, thus enhancing conventional approaches, revealing specific features like flood watermark identification, and helping to develop improved flood-protection measures.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-138-30617-2","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Environmental Fluid Mechanics","","",""
"uuid:1b75db0a-60da-4a6c-8c02-6cc2550c90d0","http://resolver.tudelft.nl/uuid:1b75db0a-60da-4a6c-8c02-6cc2550c90d0","Scalable Video Coding","Choupani, R. (TU Delft Computer Engineering)","Bertels, K.L.M. (promotor); Wong, J.S.S.M. (copromotor); Delft University of Technology (degree granting institution)","2017","With the rapid improvements in digital communication technologies, distributing high-definition visual information has become more widespread. However, the available technologies were not sufficient to support the rising demand for high-definition video. This situation is further complicated when network resources such as the available bandwidth fluctuate, or packet losses occur during transmission. In this dissertation we present several video compression techniques which are capable of adapting to varying network conditions. We address both challenges, namely the fluctuations in available resources such as bandwidth and processing power, and packet losses.
These problems in turn translate into degradation of the perceived video playback in the form of jitter, and delay before video playback starts. Hence, we concentrate on developing the robust and fast adaptive video coding schemes necessary for handling changes in the physical characteristics of communication networks. We present a new multi-layer scalable video coding (SVC) method for optimizing the bit-per-pixel rate of the video which is robust against packet losses. The method reduces the quality degradation in the presence of data loss by re-organizing the frames in a hierarchical structure and improving the video quality by decomposing each frame suitably to restrict error propagation.
Moreover, we present a solution for the quality degradation in video reconstruction when the video is scrambled for privacy protection. We also present two methods based on multiple description video coding (MDC) to handle packet losses in networks with a high rate of transmission errors.
The proposed methods are based on combining SVC with MDC by decomposing the video into spatial sub-streams in the first method, and SNR sub-streams in the second method. In both proposed methods, the error resilience of the video is increased. The proposed methods can be used as SVC methods in which any data loss or corruption reduces the quality of the video only minimally and, except for the case when all descriptions are lost, the video streams do not experience jitter at playback. The proposed methods also make it feasible to reduce the data rate by scaling down the video whenever the connection suffers from low bandwidth. We also propose Discrete Wavelet Transform (DWT)-based optimizations for MDC. A major drawback of MDC methods is their inefficiency in terms of bits per pixel, which is a consequence of preserving correlation between the decomposed video segments. We propose a method based on the self-similarity between DWT coefficients at different frequency levels to improve the coding efficiency of DWT-based MDC. In the proposed method, whenever a description is lost, the coefficients in the delivered descriptions are utilized to estimate the missing data using this self-similarity property.","Multimedia communication; Scalable Video Coding; Multiple Description Coding","en","doctoral thesis","","978-94-6186-798-8","","","","","","","","","Computer Engineering","","",""
"uuid:4e8f8026-0d1e-4d61-9f61-d20c34685e80","http://resolver.tudelft.nl/uuid:4e8f8026-0d1e-4d61-9f61-d20c34685e80","Impact of High Levels of Wind Penetration on the Exercise of Market Power in the Multi-Area Systems","Moiseeva, E. (TU Delft Energie and Industrie)","Hesamzadeh, M.R. (promotor); Söder, L. (copromotor); Wogrin, S. (copromotor); Delft University of Technology (degree granting institution)","2017","New European energy policies have set a goal of a high share of renewable energy in electricity markets. In the presence of high levels of renewable generation, and especially wind, there is more uncertainty in the supply. It is natural that volatility in energy production induces volatility in energy prices. This can create incentives for generators to exercise market power by traditional means: withholding output by conventional generators, not bidding their true marginal costs, or using locational market power. In addition, a new type of market power has recently been observed: the exercise of market power on ramp rate.
This dissertation focuses on modeling the exercise of market power in power systems with high penetration of wind power. The models consider a single or multiple profit-maximizing generators. Flexibility is identified as one of the major issues in wind-integrated power systems. Therefore, part of the research studies the behavior of strategic hydropower producers as the main providers of flexibility in systems where hydropower is available.
The developed models are formulated as mathematical programs and equilibrium problems with equilibrium constraints (MPECs and EPECs). The models are recast as mixed-integer linear programs (MILPs) using discretization. The resulting MILPs can be solved directly by commercially available MILP solvers, or by applying decomposition. The proposed Modified Benders Decomposition Algorithm (MBDA) significantly improves the computational efficiency.","wind integration; market power; game theory; mathematical programming","en","doctoral thesis","","978-91-7729-434-4","","","","The doctoral research has been carried out in the context of an agreement on joint doctoral supervision (SETS) between Comillas Pontifical University, Madrid, Spain, KTH Royal Institute of Technology, Stockholm, Sweden and Delft University of Technology, the Netherlands.","","","","","Energie and Industrie","","",""
"uuid:3cb7ded7-4308-4199-8afa-7027d2991076","http://resolver.tudelft.nl/uuid:3cb7ded7-4308-4199-8afa-7027d2991076","Analysis and Design of Low-Power Receivers: Exploiting Non-50 Ω Antenna Impedance and Phase-Only Quantization","Liu, Y. (TU Delft Bio-Electronics)","Serdijn, W.A. (promotor); Delft University of Technology (degree granting institution)","2017","Reducing the power consumption of low-power short-range receivers is of critical importance for biomedical and Internet-of-Things applications. Two interesting degrees of freedom (or properties) that have not been fully exploited in the pursuit of low power consumption are the antenna impedance and the phase-only modulation property of FSK/PSK signals. This dissertation explores the possibility of reducing the power consumption of the receiver by utilizing these two degrees of freedom.","","en","doctoral thesis","","978-94-028-0627-4","","","","","","","","","Bio-Electronics","","",""
"uuid:8217e2d7-6e04-496b-bcda-7c7367d8e2bc","http://resolver.tudelft.nl/uuid:8217e2d7-6e04-496b-bcda-7c7367d8e2bc","Aggregation of Plug-in Electric Vehicles in Power Systems for Primary Frequency Control","Izadkhast, S. (TU Delft DC systems, Energy conversion & Storage)","Herder, P.M. (promotor); Garcia-Gonzalez, Pablo (promotor); Delft University of Technology (degree granting institution)","2017","The number of plug-in electric vehicles (PEVs) is likely to increase in the near future, and these vehicles will probably be connected to the electric grid for most of the day. PEVs are interesting options for providing a wide variety of services, such as primary frequency control (PFC), because they are able to quickly control their active power using electronic power converters. However, to evaluate the impact of PEVs on PFC, one should either carry out complex and time-consuming simulations involving a large number of PEVs, or formulate and develop aggregate models which could efficiently reduce simulation complexity and time while maintaining accuracy.
This thesis proposes aggregate models of PEVs for PFC. The final aggregate model has been developed gradually through the following steps. First of all, an aggregate model of PEVs for PFC has been developed in which various technical characteristics of PEVs, such as the operating modes (i.e., idle, disconnected, and charging) and the PEV's state of charge, have been formulated and incorporated. Secondly, some technical characteristics of distribution networks have been added to this aggregate model. For this purpose, the power consumed in the network during PFC, as well as the maximum allowed current of the lines and transformers, have been taken into account. Thirdly, the frequency stability margins of power systems including PEVs have been evaluated, and a strategy to design the frequency-droop controller of PEVs for PFC has been described. The designed controller guarantees, in the worst-case scenario, stability margins similar to those of the system without PEVs. Finally, a method to evaluate the positive economic impact of PEV participation in PFC has been proposed.","Aggregation; Plug-in Electric Vehicles; Primary Frequency Control","en","doctoral thesis","","978-84-617-9946-6","","","","The doctoral research has been carried out in the context of an agreement on joint doctoral supervision between KTH Royal Institute of Technology, Stockholm, Sweden, Universidad Pontifical de Comillas, Madrid, Spain and Delft University of Technology, the Netherlands. Erasmus Mundus Sustainable Energy Technologies and Strategies (SETS) joint-doctorate programme. Thesis advisors at TU Delft: 1. Pavol Bauer 2. Laura Ramírez Elizondo","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:2318fe76-2f0f-4fcd-ad04-5c678217683d","http://resolver.tudelft.nl/uuid:2318fe76-2f0f-4fcd-ad04-5c678217683d","Passive seismic interferometry for reflection imaging & monitoring","Almagro Vidal, C. (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Delft University of Technology (degree granting institution)","2017","Passive seismics is the set of applications that endeavours to explore the Earth's mechanical properties using naturally occurring sources in the subsurface. Conventional imaging of the subsurface is achieved with the aid of reflection surveys of body waves from the surface. Passive seismics offers the possibility to retrieve these reflection surveys from recordings of ambient noise and seismic tremors, without the use of active sources. This thesis explores novel applications in passive seismics with the purpose of obtaining an improved subsurface image compared to those obtained with conventional passive seismic imaging.","","en","doctoral thesis","","978-94-6186-817-6","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:832bb754-82f9-4558-b5fe-1b3bc9ec8358","http://resolver.tudelft.nl/uuid:832bb754-82f9-4558-b5fe-1b3bc9ec8358","Reclaiming Context: Architectural Theory, Pedagogy and Practice since 1950","Komez-Daglioglu, E. (TU Delft OLD Public Buiding)","Riedijk, M. (promotor); Avermaete, T.L.P. (promotor); Delft University of Technology (degree granting institution)","2017","Context is a crucial concept in architecture, despite the frequent ambiguity around its use. It is present in many architectural thoughts and discussions, while a critical discursive reflection is absent from contemporary architectural theory and practice. Situated within this schizophrenic condition in which the notion is both absent and present, this study aims at creating a historical and theoretical basis for a contemporary discussion on context. Discussions on context or similar notions have always existed in the field of architecture, but the debate intensified and developed into a multi-layered body of knowledge in the 1950s, when various architects, theorists and teachers cultivated several perspectives on context so as to address some of the ill effects of modern architectural orthodoxy and the destructive effects of post-war reconstructions. Despite being a topic of layered and productive debate in the post-war years, context lost popularity in the critical architectural discourse of the 1980s, when it was absorbed by postmodern historicism and eclecticism, co-opted by traditionalists and conservationists, and consequently attacked by the neo-avant-gardes for its blinkered understanding. This research presents a critical archaeology of the context debate, aiming to reclaim the notion by uncovering its erased, forgotten and abandoned dimensions. To do so, it challenges the governing paradigm of 1980s postmodern architecture by making inquiries into the history and genealogy of its particular trajectories, with a criticism from within.
Taking 1980, which coincides with the first Venice Architecture Biennale, as a starting point, the research traces the debate on context back to the 1950s through an in-depth study and interpretation of the ideas and works of Aldo Rossi, Robert Venturi and Denise Scott Brown, and Colin Rowe. This reverse chronology reveals that in the works of these protagonists the understanding of context shifted from “place to memory”, from “spatial to iconographic” and from “layers to object”, where the former categories still hold the capacity to recover the notion as a critical concept that is intrinsic to the architectural design process. In brief, by drawing upon the vast resources available in different media, such as exhibitions, archival materials, student projects, publications and buildings, the study constructs an outline of “context thinking” as it was articulated in architectural culture between the 1950s and the 1980s.","context; contextualism; postmodern architecture; aldo rossi; robert venturi; denise scott brown; colin rowe","en","doctoral thesis","","978-94-6186-812-1","","","","","","","","","OLD Public Buiding","","",""
"uuid:3850fd4d-9256-4925-88bb-9679da5f3aaf","http://resolver.tudelft.nl/uuid:3850fd4d-9256-4925-88bb-9679da5f3aaf","Towards the Engineering of Pulsed Photoconductive Antennas","Garufo, A. (TU Delft Tera-Hertz Sensing)","Neto, A. (promotor); Llombart, Nuria (copromotor); Delft University of Technology (degree granting institution)","2017","In recent years, Terahertz technology has attracted the interest of researchers for its potential applications in a variety of domains. In particular, THz sensing has found application in security screening, medical imaging, spectroscopy, and non-destructive testing. The emergence of all these applications has been driven by the availability of photoconductive antennas, which have made bandwidth in the THz spectrum available at relatively low cost, thanks to several breakthroughs in photonics and semiconductor technology. Photoconductive antennas are optoelectronic electromagnetic sources that resort to optically pumped semiconductor materials. Such devices exploit the photoconductivity phenomenon to generate and radiate power over a broad band up to THz frequencies. However, nowadays the use of photoconductive antennas is confined to niche short-range applications because of the bottleneck of the low emitted power. Early in this research work, it was understood that this bottleneck stemmed from the lack of a clear description of the coupling between the photoconductive source and the antenna. For this reason, this work has focused on developing a Thévenin or Norton equivalent circuit for the photoconductor generators of photoconductive antennas.
A Norton equivalent circuit for pulsed photoconductive antennas has been derived, starting from the electrodynamic model of the photogeneration of free carriers in a laser-pumped semiconductor material. This equivalent circuit allows the radiated power to be maximized as a function of the geometry of the gap, the properties of the semiconductor material, and the features of the laser pump, providing a clear description of the coupling between the photoconductor generator and the antenna over the operating bandwidth.
An electromagnetic model of the quasi-optical (source-to-detector) channel, typically used for measuring the power and spectrum radiated by photoconductive antennas, has been proposed. This model, together with the developed Norton equivalent circuit, allows a complete characterization of the power budget from the source to the detector, providing for the first time a complete description of the dispersion introduced by the quasi-optical channel on the energy spectrum radiated by photoconductive antennas. The entire proposed model (equivalent circuit and channel) has been validated by spectrum and power measurements of photoconductive antenna prototypes.
The proposed equivalent circuit and the electromagnetic model of the quasi-optical channel provide a powerful engineering tool for the design of photoconductive antennas, opening the way to more standard engineering optimization of wideband laser-pumped sources by resorting to the vast heritage of wideband microwave engineering tools that have been developed mostly for analyzing detectors in the radiometric domain.
The radiation performance of logarithmic spiral antennas as feeds for dense dielectric lenses has been analyzed intensively. The results of the investigation have demonstrated the presence of leaky-wave radiation when the spiral antenna is printed at the air-dielectric interface, leading to the design of a logarithmic spiral lens antenna that provides a high aperture efficiency over a decade frequency bandwidth. However, only an extremely thin substrate allows this design to be fed with a planar feeding system without limiting the bandwidth. A new design of a logarithmic spiral lens antenna has been proposed to relax this limitation, introducing a small air gap between the spiral feed and the bottom lens interface, which enhances the leaky-wave radiation. This new design, coupled with a synthesized elliptical lens, achieves directive patterns without sidelobes over a decade frequency bandwidth. Moreover, the new spiral design can also be used as the feed of a hemispherical lens with a low extension height when the dispersion of the radiated pulses has to be minimized.
A novel design for photoconductive sources has been proposed, aiming to dramatically increase the radiated power with respect to current photoconductive antennas. The new source is based on the connected array, a concept well established in the microwave community. Thanks to the intrinsic wideband behavior of the connected array, the proposed solution is able to efficiently radiate the wideband energy spectrum generated by the photoconductive source. The design is also suitable as a receiver of ultra-wideband radiation, increasing the sensitivity with respect to current photoconductive receivers. To implement the photoconductive connected array, an ad hoc biasing network has been proposed to properly bias all the array cells while preserving the connected structure of the elements. Moreover, an optical system has been proposed to optically excite all the elements of the photoconductive array coherently. Using the proposed Norton equivalent circuit for the photoconductive generator, a photoconductive connected array generating an average power of 2.35 mW over a bandwidth from 0.1 THz to 0.4 THz has been designed. A demonstrator of the proposed photoconductive source design will be realized, and a complete characterization of the prototype will be performed by means of power and spectrum measurements, proving the validity of the concept.","photoconductivity; photoconductive antenna; THz source; THz technology; equivalent circuit; dispersion; leaky wave antenna; lens antenna; ultra-wideband antenna; ultra-wideband array","en","doctoral thesis","","978-94-028-0657-1","","","","","","2018-09-01","","","Tera-Hertz Sensing","","",""
"uuid:ac9cac89-36a8-457d-8c3e-cee73091aa93","http://resolver.tudelft.nl/uuid:ac9cac89-36a8-457d-8c3e-cee73091aa93","A process-based, idealized study of salt and sediment dynamics in well-mixed estuaries","Wei, X. (TU Delft Mathematical Physics)","Heemink, A.W. (promotor); Schuttelaars, H.M. (copromotor); Delft University of Technology (degree granting institution)","2017","Estuaries are important ecosystems accommodating a large variety of living species. Estuaries are also important to people by their demand of freshwater for drinking, irrigation, and industry. Due to natural changes and human activities, the estuarine water quality, influenced by both salinity and turbidity (the cloudiness or haziness of water), has been greatly changed in many estuaries and may continue to change in the future. To predict and control the salt intrusion and the occurrence of high turbidity levels, it is essential to understand the physical mechanisms governing the estuarine dynamics. To that end, this thesis provides a systematical investigation of the dominant physical processes which result in salt intrusion and the formation of the Estuarine Turbidity Maxima (ETM’s) in well-mixed estuaries.","well-mixed; salt dynamics; sediment transport; Idealized model; Estuarine turbidity maxima; gravitational circulation; lateral processes; tidal advection","en","doctoral thesis","","978-94-6186-828-2","","","","","","","","","Mathematical Physics","","",""
"uuid:77471930-c823-4aa9-8c4e-08c16950ef6e","http://resolver.tudelft.nl/uuid:77471930-c823-4aa9-8c4e-08c16950ef6e","Self-designing networks and structural influences on safety: Developing a theory on the relation between organizational design and safety in temporary organizations that operate in a dynamic environment","Moorkamp, M. (TU Delft Safety and Security Science)","Ale, B.J.M. (promotor); Kramer, F.J. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-028-0635-9","","","","","","","","","Safety and Security Science","","",""
"uuid:22d46f1e-9061-46b0-9726-760c41404b6f","http://resolver.tudelft.nl/uuid:22d46f1e-9061-46b0-9726-760c41404b6f","Exploitation of distributed scatterers in synthetic aperture radar interferometry","Samiei Esfahany, S. (TU Delft Mathematical Geodesy and Positioning)","Hanssen, R.F. (promotor); Delft University of Technology (degree granting institution)","2017","During the last decades, time-series interferometric synthetic aperture radar (InSAR) has emerged as a powerful technique to measure various surface deformation phenomena of the earth. Early generations of time-series InSAR methodologies, i.e. Persistent Scatterer Interferometry (PSI), focused on point targets, which are mainly man-made features with a high density in urban areas and associated infrastructure. Later, methodologies were introduced aiming to extract information from other targets known as distributed scatterers (DS), which are associated with ground resolution cells occurring mainly in rural areas. Unfortunately, the underlying properties and assumptions behind various DS-phase estimation methodologies are sometimes subjective and incomparable, which hampers the objective application of the different methods. Moreover, for some terrain types, such as agricultural terrain or pastures, the feasibility of DS-methodologies is not straightforward.
In view of these challenges, the two main objectives of this study are (i) to formulate and implement the estimation methodology of DS-pixels in a standard geodetic framework and to compare it with other existing methods, and (ii) to assess the feasibility of exploiting distributed scatterers for deformation monitoring over agricultural and pasture areas.
We review state-of-the-art time-series InSAR methodologies with special attention to processing aspects related to distributed scatterers. From an estimation theory perspective, the key processing step to extract information from DS-pixels is the equivalent single-master (ESM) phase estimation. To situate this estimation in a geodetic framework, a mathematical model is proposed in the form of a Gauss-Markov model. To evaluate the stochastic part of the model, a numerical Monte-Carlo methodology as well as an analytical approach are introduced. Regarding the functional part, the ESM-phase estimation is formulated in the form of a hybrid linear system of observation equations with both real-valued and integer unknowns. The solution of the proposed model is given by the integer least-squares (ILS) estimator. The properties of this estimator for ESM-phase estimation are described and demonstrated using synthetic and real datasets. Furthermore, to provide a theoretical comparison between the proposed ILS estimator and other existing ESM-phase estimators, a unified mathematical model in the form of a system of observation equations is proposed. Evaluating all the existing DS-methods shows that, although they all provide specific solutions, their fundamental difference lies in how they assign weights to the interferometric observations.
The feasibility of exploiting PS, DS, and their combination over agricultural and rural landscapes is assessed via a case study on a subsidence area near the city of Veendam, the Netherlands, based on the coherence behavior of different types of land use. It is shown that, under the condition of using the entire time-series, agricultural and pasture areas show only limited improvement in point density compared to the results of PS-only processing. This is due to the seasonal behavior of the temporal coherence, which causes an almost complete drop in coherence during summer periods, mainly as a result of tillage, crop growth and harvesting.
To model this periodicity, a new analytical model is introduced. In this model, the hypothetical movements of elementary scatterers within DS resolution cells are modeled as a stochastic process with non-stationary but periodic increments. The parameters of this model are estimated for pasture areas, and are subsequently used to assess the feasibility of exploiting DS-pixels in agricultural areas by different satellite missions. The results confirm that, assuming a three-year stack of data, the information content in DS-pixels from current C-band and X-band missions is not enough for the successful utilization of their entire time-series. However, by using intermittent series, e.g., by processing individual coherent periods, the results indicate that DS-pixels can be exploited: based on the proposed decorrelation model, the short repeat times of Sentinel-1 (6 or 12 days) result in a sufficient number of coherent interferograms over each winter period, enabling DS exploitation even over agricultural and pasture areas.","satellite radar interferometry; InSAR; surface deformation; geodetic estimation; distributed scatterers","en","doctoral thesis","","978-94-6186-806-0","","","","Samiei-Esfahany, S. (2017), Exploitation of distributed scatterers in synthetic aperture radar interferometry. PhD thesis, Delft University of Technology.","","","","","Mathematical Geodesy and Positioning","","",""
"uuid:bdf9afef-48b6-460a-bcd3-91c2e29a196c","http://resolver.tudelft.nl/uuid:bdf9afef-48b6-460a-bcd3-91c2e29a196c","Computational modeling of failure in composites under fatigue loading conditions","Latifi, M. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); van der Meer, F.P. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-92516-52-7","","","","","","","","","Applied Mechanics","","",""
"uuid:147f6475-1b33-4a5a-9e65-abe63a3865ff","http://resolver.tudelft.nl/uuid:147f6475-1b33-4a5a-9e65-abe63a3865ff","Experimental Observation of Non-Ideal Compressible Fluid Dynamics: with Application in Organic Rankine Cycle Power Systems","Mathijssen, T. (TU Delft Flight Performance and Propulsion; TU Delft Energy Technology)","Colonna, Piero (promotor); Guardone, Alberto (promotor); Delft University of Technology (degree granting institution)","2017","","Non-ideal compressible fluid dynamics; nonclassical gasdynamics; dense gas dynamics; rarefaction shock wave; shock tube; liquid-vapour critical point; dynamic modeling","en","doctoral thesis","","978-94-92516-53-4","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:f943d71a-148d-4bb4-804c-a7a9904ed4bb","http://resolver.tudelft.nl/uuid:f943d71a-148d-4bb4-804c-a7a9904ed4bb","Consumer Heterogeneity, Transport and the Environment","Araghi, Y. (TU Delft Energie and Industrie)","van Wee, G.P. (promotor); Kroesen, M. (copromotor); Delft University of Technology (degree granting institution)","2017","While transport is essential for the functioning of the economy of each country, it is also contributing to CO2 emissions and other externalities, like safety risks and noise exposure. According to the Internal Energy Agency, around 23% of global CO2 emissions is related to the transport sector in 2015, making it second largest emitter after the energy sector (IEA, 2015). The energy sector has long started to stabilize its emissions through the large scale introduction of renewable and clean energy sources. If the transport sector continues to develop as before, this will make this sector perform even worse in terms of its relative emission contribution. Although top-down emission policies have been successful (for example, regulations regarding particulate filters), the increasing transport related emissions worldwide indicates that there is a need for more action. While regulations and technological innovations may decreased emissions, but not enough to reduce emissions to acceptable levels; behavioural change is also necessary (Bristow et al., 2008; Hickman & Banister, 2007). However, imposing behavioural restrictions may be associated with economic costs. Therefore, the existing dilemma is how to reduce the share of transport in global emissions while minimizing unfavourable economic implications.","","en","doctoral thesis","TRAIL Research School","978-90-5584-223-0","","","","TRAIL Thesis Series no. T2017/5","","","","","Energie and Industrie","","",""
"uuid:a17fa324-8783-4578-a838-0f53c8061ddf","http://resolver.tudelft.nl/uuid:a17fa324-8783-4578-a838-0f53c8061ddf","Personalized Energy Services: A Data-Driven Methodology towards Sustainable, Smart Energy Systems","Uttama Nambi, Akshay (TU Delft Embedded Systems)","Langendoen, K.G. (promotor); Venkatesha Prasad, Ranga Rao (copromotor); Delft University of Technology (degree granting institution)","2017","The rapid pace of urbanization has an impact on climate change and other environmental issues. Currently, 54% of the global population lives in cities accounting for two-thirds of global energy demand. Sustainable energy generation and consumption is the top humanity’s problem for the next 50 years. Faced with rising urban population and the need to achieve energy efficiency, urban planners are focusing on sustainable, smart energy systems. This has led to the development of Smart Grids (SG) that employs intelligent monitoring, control and communication technologies to enhance efficiency, reliability and sustainability of power generation and distribution networks.
While energy utilities are optimizing energy generation and distribution, consumers play a key role in sustainable energy usage. Several energy services are provided to consumers, informing them of a household's hourly energy consumption, estimating the monthly electricity cost, and recommending ways to reduce energy consumption. Furthermore, advanced services such as demand response can now control and influence energy demand at the consumer end to reduce the overall peak demand and re-shape demand profiles. The effectiveness and adoption of these services highly depend on consumers’ awareness, participation and engagement. Current energy services seldom consider consumer preferences such as daily behavior, comfort level and energy-consumption patterns. In this thesis, we investigate the development of personalized energy services that strive to achieve a balance between efficient energy consumption and user comfort.
Personalization refers to tailoring energy services to individual consumers’ characteristics, preferences and behavior. To develop effective personalized energy services, a set of challenges needs to be tackled. First, fine-grained data collection at the user and appliance level is required (data collection challenge). Mechanisms should be devised to collect fine-grained data at various levels in a non-intrusive way with minimal sensors. Second, personalized energy services require detailed user preferences such as thermal comfort level, appliance usage behavior and daily habits (user preference challenge). Accurate learning models to derive user preferences with minimal training and intrusion are required. Third, the energy services developed need to be easily scalable, from one household to tens of thousands of households (scalability challenge). Mechanisms should be developed to tackle the deluge of data and support distributed storage and processing. Fourth, energy services should deliver real-time feedback or recommendations so that users can promptly act upon them (real-time challenge). This calls for the development of distributed, low-complexity algorithms.
This thesis moves away from traditional SG services -- which hardly consider consumer preferences and comfort -- and proposes a novel approach to develop effective personalized energy services. The proposed energy services provide actionable feedback, raise awareness and promote energy-saving behavior among consumers.
In this thesis, we follow a bottom-up data-driven methodology to develop personalized energy services at various scales -- (i) nano: individual households, (ii) micro: buildings and spaces, and (iii) macro: neighborhoods and cities. To this end, we present our approach -- physical analytics for sustainable, smart energy systems -- which combines IoT data, physical modeling and data analytics to develop intelligent, personalized energy services. Physical analytics fuses data from various Internet of Things (IoT) devices such as smart meters, smart phones and smart watches with physical information such as household type, demographics and occupancy to infer energy-usage patterns and user behavior, and to discover hidden patterns. This approach is used to learn and model user preferences and energy usage, which are subsequently employed to develop personalized energy services.
This thesis is organized into three parts. Part I describes how to derive fine-grained information with minimal sensors and intrusion. We present two novel algorithms, viz. LocED and PEAT, which derive fine-grained information at the appliance and user level, respectively. This real-time information is used to raise awareness of energy-usage behavior among occupants. Part II presents personalized energy services targeted at households and buildings. We develop services that shift and/or reduce energy consumption and cost by considering individual consumers’ preferences and comfort. These energy services are aimed at providing actionable feedback to occupants towards sustainable energy usage. Part III presents energy services targeted at the neighborhood and city level. These energy services aim to identify target consumers in a neighborhood, based on their energy-usage patterns and preferences, for various DR programs. Finally, we present data-processing architectures that investigate how to cope with the overwhelming data generated from smart meters towards the design and development of sustainable, smart energy systems.
This thesis advocates that the design and development of energy services should follow a personalized approach, with consumer preferences and comfort given paramount importance. Results show that the personalized energy services developed have significant potential to raise awareness, reduce energy consumption and improve user comfort in smart homes, buildings and neighborhoods.","Energy Services; Smart Grids; smart buildings; Energy Disaggregation; User modeling; Personalization; Data Science","en","doctoral thesis","","978-94-6186-813-8","","","","","","","","","Embedded Systems","","",""
"uuid:ebeef0fa-46fe-4947-86c1-c765a583770a","http://resolver.tudelft.nl/uuid:ebeef0fa-46fe-4947-86c1-c765a583770a","Playful Design for Activation: Co-designing serious games for people with moderate to severe dementia to reduce apathy","Anderiesen, H. (TU Delft Applied Ergonomics and Design)","Goossens, R.H.M. (promotor); Sonneveld, M.H. (promotor); Delft University of Technology (degree granting institution)","2017","Research finds that 90% of nursing home residents with dementia suffer from apathy, which negatively influences their physical, cognitive, and emotional well-being. The goal of this project-grounded research is to develop a product-service system that stimulates nursing homes residents, living with moderate to severe dementia, to reduce their apathy. This thesis entails three preceding studies to inform the design project. A systematic literature review that addressed empirical studies that measured the effects of environmental stimuli on the level of physical activity of nursing home residents living with dementia. A qualitative exploration of the social environment of residents in nursing homes to explore the effect of the social aspects on their participation in daily and leisure activities. And a literature review to determine which play experiences can be expected to be suitable for persons in different stages of Alzheimer’s disease. During a co-design process the Active Cues Tovertafel was designed together with the people with dementia, their relatives and carers making use of a ‘Wizard of Oz’ prototype. We developed six serious games for the Tovertafel, which projects playful interactive light animations on existing dining tables in the nursing home environment. The games were evaluated on their effect on the apathy of the residents with moderate to severe dementia during a small-scale study and the results show significant improvement in physical activity. 
Moreover, the results also indicate improvements in social interaction and happiness, and reductions in anger, fear and sadness. In sum, the present study shows that co-designed serious games can play a beneficial role in the dementia care context.","","en","doctoral thesis","","","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:31b25b38-45e9-4469-810b-79fe19905a4d","http://resolver.tudelft.nl/uuid:31b25b38-45e9-4469-810b-79fe19905a4d","The Relation Between Structure and Function in Brain Networks: A network science perspective","Meier, J.M. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Stam, Jan (promotor); Delft University of Technology (degree granting institution)","2017","Over the last two decades the field of network science has been evolving fast. Many useful applications in a wide variety of disciplines have been found. The application of network science to the brain initiated the interdisciplinary field of complex brain networks. On a macroscopic level, brain regions are taken as nodes in a network. The analysis of pairwise connections between the brain regions as links has provided a new perspective on many problems. The application of network science to neuroscience data helped, for example, to identify the disruptions due to different neurological disorders when comparing healthy and abnormal brain networks. In this dissertation, we focus on the macroscopic level of brain regions and analyze their pairwise connections from a network science perspective. We address different general research questions from network science and exploit their application possibilities towards brain networks. Due to different measurement techniques, one can construct many different representations of brain networks. We thereby distinguish between the structural and functional brain network. Structural brain networks map the anatomical connections between the regions, which we could interpret as the ’streets’ of the brain. On top of these streets, we can measure the traffic with techniques like e.g. magnetoencephalography (MEG) or functional Magnetic Resonance Imaging (fMRI) resulting in so-called functional brain networks. However, the relation between the structural and the functional brain networks is still insufficiently understood. 
The first main research question of this dissertation focuses on the functional network layer and tries to identify the most important links and motifs of these networks. For this purpose, we propose the union of shortest path trees (USPT) as a new sampling method extracting all the shortest paths of a network (Chapter 2 and 3). After constructing the USPT, we compare the individual functional brain networks of multiple sclerosis patients and healthy controls (Chapter 2). Furthermore, we generalize this sampling method and present a new ranking of all the links based on the USPT (Chapter 3). Regarding the higher-order building blocks of the functional brain networks, we analyze the so-called information flow motifs based on MEG data from different frequency bands (Chapter 4). After researching the local properties of the functional brain networks, we analyze the influence of the underlying structural connections on the emerging information flow. Thus, the second main research question concerns the relationship between the functional and the underlying structural connectivity. Specifically, we analyze which topological properties of the structural networks drive the functional interactions. First, this question is approached in a mathematical and straightforward manner by assuming that an analytic function between the two networks exists (Chapter 5). We investigate this mapping function and its reverse by evaluating empirical individual and group-averaged multimodal data sets. A second approach towards the structure-function relationship employs a simple model of activity spread. The epidemic spreading model is applied on the human connectome to investigate the global patterns of directional information flow in brain networks (Chapter 6). The main focus here lies on the pairwise measure of transfer entropy to investigate the influence of one brain region on another. 
We present the results for the local and global outcomes of the dynamic spreading process aiming to identify the driving structural properties behind the observed global patterns.","structural brain networks; functional brain networks; effective connectivity; information flow; network motifs; shortest paths; epidemic spreading model","en","doctoral thesis","","978-94-028-0638-0","","","","","","2017-05-24","","","Network Architectures and Services","","",""
"uuid:5293031c-63c2-43bb-a53f-750955a5c91f","http://resolver.tudelft.nl/uuid:5293031c-63c2-43bb-a53f-750955a5c91f","Transport Networks, Land Use and Travel Behaviour: a Long Term Investigation","Kasraian Moghaddam, D. (TU Delft OLD Urban and Regional Development)","van Wee, G.P. (promotor); Maat, C. (copromotor); Delft University of Technology (degree granting institution)","2017","This thesis unravels the long-term relationships between transport networks, land use and travel behaviour at a regional scale. It investigates these relationships by applying various methods to an extensive long-term geo-referenced database, in the case of the Greater Randstad Area in the Netherlands. Its findings shed light on the roles of rail and road networks, land use and spatial policies on the development of cities and the travel behaviour of their inhabitants over time.","","en","doctoral thesis","TRAIL Research School","978-90-5584-221-6","","","","TRAIL Thesis Series no. T2017/4, the Netherlands Research School TRAIL","","","","","OLD Urban and Regional Development","","",""
"uuid:c2833bb2-0662-4208-a013-0c084f05f12e","http://resolver.tudelft.nl/uuid:c2833bb2-0662-4208-a013-0c084f05f12e","Crowdsourcing: Fast Abundant Flexible User Research for Design","Tidball, B.E. (TU Delft Design Conceptualization and Communication)","Stappers, P.J. (promotor); Mulder, I. (promotor); Delft University of Technology (degree granting institution)","2017","User centered design of products, services, and systems is widely accepted. Numerous user research tools and methods exist to engage users and gather users’ insights to inform the design process. Unfortunately, time, effort, and expense of many tools often delay the availability of user insights. To address this concern, a series of studies investigates how designers can use existing crowdsourcing applications to advance their user research techniques, as a means to inform early design decisions with end-user perspectives. The studies examine the utility of crowdsourcing for informing the design process, illuminating possibilities and limitations. The results make the case for crowdsourcing as a new, fast, abundant, and flexible tool for user research. A conceptual framework defining the process and a set of guidelines have been developed to make crowdsourcing accessible, enabling designers to connect with users early and often, as they pursue user centered design solutions.","Crowdsourcing; User Research","en","doctoral thesis","","","","","","","","","","","Design Conceptualization and Communication","","",""
"uuid:8d59fd51-cbac-408d-a3e0-6a15891c2695","http://resolver.tudelft.nl/uuid:8d59fd51-cbac-408d-a3e0-6a15891c2695","Seismic Waveform Inversion: Bump functional, parameterization analysis and imaging ahead of a tunnel-boring machine","Bharadwaj, Pawan (TU Delft Applied Geophysics and Petrophysics)","Mulder, W.A. (promotor); Drijkoningen, G.G. (copromotor); Delft University of Technology (degree granting institution)","2017","During a seismic experiment, mechanical waves are usually generated by various manmade sources. These waves propagate in the subsurface and are recorded at receivers. Modern seismic exploration methods analyze them to infer the mechanical properties of the subsurface; this is commonly referred as quantitative imaging. These properties assist in the determination of the subsurface rock type and structure. Exploration methods are not only useful while looking for the deposits such as crude oil, natural gas and minerals but also for near-surface geotechnical investigation. A motive of this thesis is to adopt these methods to image the subsurface ahead of a tunnel-boring machine for hazard assessment during excavation. Full-waveform inversion (FWI) is a gradient-based optimization problem that is employed in seismic exploration for quantitative imaging of the recorded waves. During FWI, seismic waves are simulated in a computer by using certain physical laws that govern the wave propagation. After inversion, output subsurface properties simulate waves that fit the recorded waves in a least-squares sense. In other words, the gradient-based optimization aims to find the minimum of the least-squares misfit between the simulated and the recorded waves. Finding such a minimum is not straight forward due to the existence of multiple local minima when using the least-squares objective. As a result, it might often happen that the optimizer converges to local minima, where the simulated waves only partially explain the recorded waves. 
The presence of local minima is associated with the strongly non-linear dependence of the recorded waves on the subsurface properties. In this thesis, we attempt to overcome this difficulty. We propose a new measure of misfit between the recorded and the simulated waves. This measure compares the waveforms in a simplified form, after taking the absolute value and blurring. We show that the new misfit measure suffers less from the local-minima problem. For robust inversion, we use a multi-objective inversion scheme, in which the new measure serves as an auxiliary objective to pull the trapped solution out of a least-squares local minimum whenever necessary. In multi-parameter FWI, more than one kind of subsurface property is estimated simultaneously. When only the first-order derivatives of the misfit are used during minimization, different choices of subsurface parameterization are not equivalent; they can be interpreted as different preconditioners. Therefore, the choice of parameterization affects the rate of convergence in multi-parameter FWI, and the best choice of parameterization is the one with the highest rate. In this thesis, we also analyse various choices of subsurface parameterization in search of the best one. It is well known that the local-minima problem in FWI can easily be resolved by reliably generating and recording low-frequency waves in the subsurface. Recently, a seismic source capable of generating such low frequencies has been developed based on linear synchronous motor technology. Finally, we demonstrate a shear-wave seismic ground prediction system using these sources to enable imaging ahead of a tunnel-boring machine (TBM).","seismic imaging; full-waveform inversion; bump functional; parameterization; local minima; tunnel-boring machine","en","doctoral thesis","","978-94-6295-662-9","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:8e3dd0ce-5269-4b53-8cdf-00cd3cb87eed","http://resolver.tudelft.nl/uuid:8e3dd0ce-5269-4b53-8cdf-00cd3cb87eed","Synthesis, Characterization and Properties of Semi-aromatic Polyamide Thermosets","LI, M. (TU Delft Novel Aerospace Materials)","Dingemans, T.J. (promotor); Delft University of Technology (degree granting institution)","2017","The work presented in this thesis describes a route towards semi-crystalline polyamide (PA 10T) thermosets. A mild temperature solution polymerization method was developed to synthesize melt-processable PA 10T precursors with crosslinkable functionalities. These functionalities can be placed at the polymer chain-ends or as pedent groups on the polymer backbone. The reactive precursors were thermally cured into polyamide thermosets and their morphology and thermo-mechanical properties of the final thermosets. Finally, semi-crystalline PA 10T thermoset films were evaluated as single-component high-temperature shape memory materials.","","en","doctoral thesis","","978-94-6186-805-3","","","","","","","","","Novel Aerospace Materials","","",""
"uuid:ee8a035d-95ad-472c-9111-7044e058f9ee","http://resolver.tudelft.nl/uuid:ee8a035d-95ad-472c-9111-7044e058f9ee","Observing surface properties of glaciers: A case study in the Nyainqêntanglha Range, Tibetan Plateau","Shi, J. (TU Delft Optical and Laser Remote Sensing)","Menenti, M. (promotor); Lindenbergh, R.C. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","","","","","","","","","","Optical and Laser Remote Sensing","","",""
"uuid:342da318-4a11-4989-8c79-df0f0a11468f","http://resolver.tudelft.nl/uuid:342da318-4a11-4989-8c79-df0f0a11468f","Electronic Transport in Helium Beam Modified Graphene and Ballistic Josephson Junctions","Nanda, G. (TU Delft QN/Kavli Nanolab Delft)","Vandersypen, L.M.K. (promotor); Alkemade, P.F.A. (copromotor); Delft University of Technology (degree granting institution)","2017","This thesis describes the capabilities of the helium ion microscope (HIM) and that of graphene to explore fundamental physics and novel applications. While graphene offers superior electronic properties, the helium ion microscope allows us to combine imaging and modification of materials at the nanoscale. We used the capabilities of HIM to grow 3D-AFM probes, which can be used in the critical dimension semiconductor metrology. Moreover, we studied the ion-material interactions, needed to enable the fabrication of functional graphene nanoribbons. Similarly, we used the superior electronic properties of graphene to make ballistic Josephson junctions and studied the current-phase relation (CPR) of these junctions.
The core of this thesis is focused on the fabrication and electronic characterization of He+ beam modified graphene, He+ beam etched graphene nanoribbons, and graphene-based Josephson junctions (JJs). The graphene devices were prepared by a new polymer-free transfer ""van der Waals pick-up"" technique. The fabricated devices comprise graphene encapsulated in hexagonal boron nitride (BN) and contacted along the edge by either a normal metal (Cr/Au) or by a superconductor. The encapsulation in BN keeps the graphene clean and the edge contacting technique provides transparent interfaces. The thesis is divided into two main topics. In particular, the first three studies are dedicated to the research based on the helium ion microscope, and the next three are dedicated to the research based on boron nitride encapsulated graphene Josephson junctions.","helium ion microscopy; graphene; beam-induced deposition; ion-induced defects; graphene nanoribbons; Josephson junctions; current-phase relation; superconducting quantum interference devices (SQUIDs)","en","doctoral thesis","","978-90-8593-295-6","","","","","","","","","QN/Kavli Nanolab Delft","","",""
"uuid:4feb2454-7d0a-4481-b8c2-4bae411d2e4a","http://resolver.tudelft.nl/uuid:4feb2454-7d0a-4481-b8c2-4bae411d2e4a","Strategic Network Modelling for Passenger Transport Pricing","Smits, E.-S. (TU Delft Transport and Planning)","van Arem, B. (promotor); Bliemer, M.C.J. (promotor); Pel, A.J. (copromotor); Delft University of Technology (degree granting institution)","2017","In the last decade the Netherlands has experienced an economic recession. Now, in 2017, the economy is picking up again. This growth does not only come with advantages because economic growth demands more from the transport system. Congestion is increasing again, the capacity of the train system is now insufficient during peak hours, and the world faces environmental challenges that are partly due to emissions caused by travellers. These negative effects worsen as travellers make rational choices, which could be undesirable from a system, or social welfare, perspective. For example, car drivers do often not choose public transport options, because it costs them more effort; however, if they choose public transport options, then the system improves since congestion and emissions will reduce. Or another example, if travellers choose to avoid peak hours, they might not arrive at their desired time, but then they do not contribute to peak hour congestion or crowding. In addition, the capacity of the transport system is more effectively used if travellers spread out over the day.
Passenger transport pricing can be an incentive for travellers to change their choices, and can therefore be used to mitigate congestion, emissions, and other undesirable effects. Passenger transport pricing is the umbrella term for measures that make passengers pay for their travels. Traditional pricing measures are, for example, fuel excise taxes, public transport fares, and periodical registration fees for vehicles. More innovative measures are cordon charges (e.g., in London, Stockholm, and Singapore), special tolling lanes, and peak avoidance projects. When such an innovative measure has different prices for times of the day, and for different locations (i.e., it is time- and space-differentiated), travellers’ choices related to route, mode and departure time can be influenced. By changing these choices, the overall performance of the transport system can improve. Travellers differ in time valuation, preferred departure or arrival times, and car ownership. Therefore, a measure can become even more effective if it also allows differentiation based on traveller characteristics.
However, innovative pricing measures have not been implemented widely across the globe, despite their potential to reduce congestion and emissions. This is primarily due to a lack of public and political support. The Netherlands has experienced decades of political discourse and many failed proposals. Low public support did not contribute to (political) agreement either, because it has always fuelled the discussion with dissenting opinions. In the process of designing policies and making decisions, strategic planning models usually estimate (or forecast) the effects of the policy. The preferences of travellers and the transport system are captured by mathematical equations. Such models are always a simplified representation of reality. To apply them to assess pricing measures, they should capture the underlying mechanisms that are important for transport pricing as realistically as possible.
This dissertation identifies disadvantages of current strategic network models for passenger transport pricing and provides methodological advances to resolve them. This is done with a holistic approach that combines game theory, discrete choice analysis, traffic flow theory, and transport economics into one modelling framework. This framework has many sub-models and provides a toolbox for analysts to determine the effects of innovative pricing schemes. The basic principle for each tool is to make it realistic (so that the results are credible for decision makers) and computationally efficient. The latter means that many different pricing schemes can be computed within reasonable time. By providing the methodological advances that are briefly discussed in the next sections, this dissertation aims to improve public and political support. For example, the preferences of multiple stakeholders can be considered, the possible conflicts between them can be identified, and solutions based on concepts that aim to resolve these conflicts can be computed.","","en","doctoral thesis","TRAIL Research School","978-90-5584-222-3","","","","TRAIL Thesis Series no. T2017/3, the Netherlands TRAIL Research School","","","","","Transport and Planning","","",""
"uuid:19182ec3-bffe-4b2c-a366-ef58c11d2e4f","http://resolver.tudelft.nl/uuid:19182ec3-bffe-4b2c-a366-ef58c11d2e4f","Surgical lighting","Knulst, A.J. (TU Delft Medical Instruments & Bio-Inspired Technology)","Dankelman, J. (promotor); Delft University of Technology (degree granting institution)","2017","The surgical light is an important tool for surgeons to create and maintain good visibility on the surgical task. Chapter 1 gives background to the field of (surgical) lighting and related terminology. Although the surgical light has been developed strongly since its introduction a long time ago, the last decades only minor developments have been made. This lack of significant development suggests that the current state of surgical lighting is perfectly developed and functions without any flaws. However, literature might give a different perspective. Apparently, despite the lack of significant developments in surgical illumination, the current surgical lighting systems are not good enough yet. This thesis aims to identify problems associated with the use of surgical lights and to improve surgical illumination.","","en","doctoral thesis","","978-94-028-0594-9","","","","","","","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:b40864fa-cc4b-43b2-816d-98f810296a24","http://resolver.tudelft.nl/uuid:b40864fa-cc4b-43b2-816d-98f810296a24","Influence of compositions and size on the giant magnetocaloric effect in (Mn,Fe)2(P,Si)-based compounds","Nguyên, V.T. (TU Delft RST/Fundamental Aspects of Materials and Energy)","Brück, E.H. (promotor); van Dijk, N.H. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-92516-48-0","","","","","","","","","RST/Fundamental Aspects of Materials and Energy","","",""
"uuid:adc2054d-a2cd-451e-b2f6-1e2f87b3409b","http://resolver.tudelft.nl/uuid:adc2054d-a2cd-451e-b2f6-1e2f87b3409b","Biodiesel production from heterotrophic microalgae","Sano Coelho, R. (TU Delft BT/Bioprocess Engineering)","van der Wielen, L.A.M. (promotor); Teixeira Franco, Telma (promotor); Delft University of Technology (degree granting institution)","2017","This thesis summarizes the results of a doctoral research executed in the State
University of Campinas and in the Technical University of Delft as part of the PhD Dual Degree Program between the two universities. The research project was designed in partnership with Petrobras S. A. (Brazilian Petroleum Corporation), which provided most of the financial support as well as technical cooperation, with the goal of evaluating the potential of heterotrophic microalgae for biofuels production.","","en","doctoral thesis","","978-94-6186-810-7","","","","The doctoral research has been carried out in the context of an agreement on joint doctoral supervision between Universidade Estadual de Campinas, Brazil and Delft University of Technology, the Netherlands. This is a PhD thesis in the dual degree program as agreed between UNICAMP and TU Delft. Esta é uma tese de doutorado no programa de co-tutela conforme acordado entre UNICAMP e TU Delft.","","","","","BT/Bioprocess Engineering","","",""
"uuid:28e0fdfa-72f2-4feb-9fd0-fcc807bb3593","http://resolver.tudelft.nl/uuid:28e0fdfa-72f2-4feb-9fd0-fcc807bb3593","Microsecond reaction kinetics and catalytic mechanism of bacterial cytochrome oxidases","Paulus, A. (TU Delft BT/Enzymology)","de Vries, S. (promotor); Hagen, W.R. (promotor); Delft University of Technology (degree granting institution)","2017","Fundamental biochemical research is of crucial importance for a complete and detailed understanding of what drives enzyme activity and how enzyme kinetic properties are optimized towards survival of the host organism. When cells fail to produce a fully functional enzyme, the organism’s ability to survive or thrive is impacted. In humans, for example, low levels or absence of lactase causes lactose intolerance, while decreased performance of the proton-pumping enzyme cytochrome aa3 oxidase in the mithochondrial electron transport chain in brain cells is linked to Alzheimer’s disease. Direct observation of all steps of the catalytic cycle of bacterial oxidoreductases is challenging, since a full turnover of these enzymes typically takes only ~1 ms. Through targeted mutagenesis of enzymes it is possible to create variants of an enzyme that can onlycatalyze part of the reaction, or that will perform the entire reaction, yet with different kinetics of the individual reaction steps, providing clues as to what drives or limits the enzymatic reaction. Hypotheses based on observations with mutated enzyme variants can be proven or disproven by studying the wildtype uncorrupted enzyme under mild conditions, minimizing artefacts introduced by working in vitro. The stopped-flow spectrophotometer is a valuable tool for kinetic analysis of enzyme reactions, but does not offer the time resolutionrequired to resolve early pre-steady state kinetics. The microsecond freeze-hyperquenching setup (MHQ), on the other hand, is able to create ‘snapshot’ samples of enzymes during catalytic turnover at reaction times down to 74 μs. 
The quenched samples can be subjected to further analysis by UV-vis or EPR spectroscopy. This thesis describes the kinetic study of three bacterial oxidoreductases and makes the comparison between the catalytic mechanisms of oxygen reduction (and proton pumping) of each of the three enzymes.","","en","doctoral thesis","","978-94-028-0630-4","","","","","","","","","BT/Enzymology","","",""
"uuid:8cc4134d-1456-45ea-b9f0-b023f7d39630","http://resolver.tudelft.nl/uuid:8cc4134d-1456-45ea-b9f0-b023f7d39630","Directionality of damage growth in fibre metal laminates and hybrid structures","Gupta, M. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Alderliesten, R.C. (copromotor); Delft University of Technology (degree granting institution)","2017","Fibre-metal laminates (FMLs) have been studied intensively for the past three decades because of their enhanced fatigue properties compared to monolithic metals. Most of these studies have focused on the fatigue damage under in-axis loading. These studies led to the application of FMLs in the aircraft structure in the early 21st century. However, the main application remains limited to the aircraft fuselage where the loading direction remains mostly constant. The few studies in the damage directionality of FMLs show that crack paths in FMLs under off-axis loading can undergo small deflections in biaxial GLAss REinforced aluminium (Glare) grades but show a significant amount of deflection in uniaxial Glare grades. In order to extend FML application to other parts of the aircraft structure where the loading direction is not constant or where uniaxial Glare is required – like aircraft wings - more understanding is required about the directionality of damage in FMLs under off-axis loading. To this effect the present research in damage directionality of FMLs under off-axis loading was undertaken.","Fibre Metal Lamintes; Airbus A380; Mixed mode theory; T-stress; Analytical modelling","en","doctoral thesis","","978-94-6295-609-4","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:ef2e1c2c-f88b-4cca-b52d-b06e06822a22","http://resolver.tudelft.nl/uuid:ef2e1c2c-f88b-4cca-b52d-b06e06822a22","Self-healing Mn+1AXn-phase ceramics","Farle, A.M. (TU Delft (OLD) MSE-1)","van der Zwaag, S. (promotor); Sloof, W.G. (copromotor); Delft University of Technology (degree granting institution)","2017","Damage management and the development of new materials come together in selfhealing Mn+1AXn phase ceramics. These ternary layered carbides and nitrides exhibit a multitude of properties, such as high temperature strength, fracture toughness, thermal and electrical conductivity and machinability, which have been discovered over the past 20 years. In addition, intrinsic crack-gap filling and strength recovery by high temperature oxidation have been demonstrated for Ti2AlC, Cr2AlC and Ti3AlC2. The selective oxidation of the A-element, Aluminium in all known cases, leads to almost full crack gap closure by Al2O3 filling. The dense, strong and well adhering oxide is formed at temperatures above 1000 ±C in atmospheric air and can restore the integrity of a sample even formultiple successive crack-healing cycles.","Ceramics; Self-healing; Oxidation","en","doctoral thesis","","978-94-028-0626-7","","","","","","","","","(OLD) MSE-1","","",""
"uuid:7a46bca3-4105-4cdc-952d-a6d9fcfced76","http://resolver.tudelft.nl/uuid:7a46bca3-4105-4cdc-952d-a6d9fcfced76","Excavation of hard deposits and rocks: On the cutting of saturated rock","Helmons, R.L.J. (TU Delft Offshore and Dredging Engineering)","van Rhee, C. (promotor); Miedema, S.A. (copromotor); Delft University of Technology (degree granting institution)","2017","As a result of the worldwide population and welfare growth, the demand for energy (oil, gas and renewable sources) and raw materials increases. In the last decades, oil and gas are produced from more and more offshore sites and deeper waters. Besides energy, the demand for diverse metals and rare earth elements increases as well. These raw materials are often at the basis of new sustainable technologies e.g. permanent magnets for wind energy and battery packs for electric cars. The availability of these raw materials is essential for a stable development of the world economy. Unfortunately, for some of the crucial raw materials, the availability is sometimes very local and in various cases there is a monopoly forming. To reduce this economic risk, investments are needed to search and extract minerals from new locations. Large, metal-rich fields are found at the bottom of the sea, such as phosphate nodules, manganese nodules, cobalt-rich crusts and vulcanic sulphide deposits (often referred to as Seafloor Massive Sulphide, SMS). These deposits are mainly located in the deep sea, at depths ranging from several hundreds of meters to several kilometers. One of the technical challenges to enable production from these locations is the cutting or excavation process. Experiments have shown that the energy needed to excavate the material increases with water depth. Besides that, it is demonstrated that rock that fails brittle in atmospheric conditions can fail more or less in a plastic fashion when present in a high pressure environment, as would be the case at large water depths. 
The goal of this research is to identify the physics of the cutting process and to develop this into a model in which the effect of hydrostatic and pore pressures is included. The cutting of rock is initiated by pressing a tool into the rock. As a result, at the tip of the tool a high compressive pressure occurs, which leads to the formation of a crushed zone. Depending on the shape of the tool and the cutting depth, shear failures might emanate from the crushed zone, which will eventually expand as tensile fractures that can reach to the free rock surface. Through this process intact rock will be disintegrated to a granular medium. Additionally, the presence of water in the pores of and surrounding the rock influences the cutting process through drainage effects. The most relevant effects are weakening when compaction occurs and hardening when dilation occurs in shearing and tension. Deformation of the rock causes the pore volume to change, resulting in an under- or overpressure. As a result, the pore fluid needs to flow. The magnitude of the potential underpressure is limited by cavitation of the pore fluid, limiting further reduction of the pore pressure. The drainage effects cause the rock cutting process in a submerged environment to show a stronger dependency on both the hydrostatic pressure and the deformation rate. The numerical simulations are performed with a 2D DEM (Discrete Element Method). In DEM, the mechanical behavior of a rock is mimicked by gluing loose particles together with brittle bonds. Such a method shows strong resemblance to sedimentary rock. In order to include the effect of an ambient pressure as a result of the water depth and to include the presence of a fluid in the pores of the rock, a pore pressure diffusion equation is added to the model. The discontinuous results obtained with DEM are interpolated to a continuum field through the use of an SP method (Smoothed Particles). 
Additionally, SP is used to solve the pore pressure diffusion equation. For that reason, the methodology used in this dissertation is referred to as DEM-SP. Thus far no direct coupling has been found between the input microscopic parameters that define the properties of and interactions between the particles in DEM, and the resulting bulk properties of the particle assembly. For that reason, a sensitivity analysis is performed in which the effect of the micro-properties on the macroscopic behavior is investigated. Additionally, it is proven that the addition of the pore pressure diffusion process to the DEM-SP model corresponds with the effective stress theory. It is also proven that when air is used as a medium in the pores, no significant changes compared to simulations without pore pressure coupling occur. Comparison of the numerical model with a set of tri-axial experiments on shale, in which the deformation rate is varied, shows that the model is well capable of describing both compaction weakening and dilatant hardening. In order to further validate DEM-SP, several experiments from literature are simulated. A comparison of 2D cutting experiments on tiles shows a good match for the chip size, chip shape and the required cutting force. DEM-SP is used to simulate drilling experiments on marble, in which the hydrostatic pressure is varied. These results show that the simulated behavior of the cutting process matches qualitatively with the experiments, i.e. the trend of increasing cutting force with increasing hydrostatic pressure. Furthermore, a series of cutting experiments for the purpose of deep sea mining has been simulated. These results match both qualitatively and quantitatively. Additionally, both the experiments and simulations show the existence of a hyperbaric effect. 
This means that at a hydrostatic pressure significantly larger than the tensile strength of the rock, the cutting process is dominated by shear and cataclastic failure, while at hydrostatic pressures significantly smaller than the tensile strength, the cutting process is dominated by tensile failure and chip forming. Finally, DEM-SP is used to simulate the full cutting motion of a pick point on a rotating cutterhead, in order to investigate the applicability of the method to shallow water depths (<30 m) and to investigate the use of the method for the dredging practice. Even at shallow water depths the effect of an increased hydrostatic pressure shows significant differences. Furthermore, the simulations show a transition from a cataclastic towards a ductile cutting process, depending on the cutting depth. Additionally, a transition to stick-slip friction of the cut material along the tool is observed, which is an indication for different wear processes. It is proven that DEM-SP is capable of solving drainage related effects in deformation of saturated rock. A range of rock cutting experiments are simulated and the results match well both qualitatively and quantitatively with respect to cutting force and hydrostatic pressure. Further improvement of the model can be achieved by extending the model towards 3D.","Rock Mechanics; Rock cutting; Discrete Element Method (DEM); Smoothed particle","en","doctoral thesis","","978-94-6186-790-2","","","","","","","","","Offshore and Dredging Engineering","","",""
"uuid:5d53817c-0956-4ed7-8716-a8e79eb8c86f","http://resolver.tudelft.nl/uuid:5d53817c-0956-4ed7-8716-a8e79eb8c86f","Constrained and reconfigurable flight control","Joosten, D.A. (TU Delft Team Bart De Schutter)","Verhaegen, M.H.G. (promotor); van den Boom, A.J.J. (copromotor); Delft University of Technology (degree granting institution)","2017","Current jet fighters and modern airliners are hugely complex pieces of machinery. The drawback of this complexity lies in the number of systems and subsystems that may fail for one reason or another. Given the systems complexity of aircraft it is no longer easily possible for the crew to establish what exactly has happened when these fail. This motivates the need to provide means for the diagnosis of failures and automated recovery, i.e. fault tolerant flight control. In general the procedure to make a system fault-tolerant consists of two steps: 1) Fault diagnosis: the existence of faults has to be detected and the faults have to be identified, 2) Control re-design: the controller has to be adapted to the faulty situation so that the overall system continues to satisfy its goal. This thesis focuses on the latter through the application of modern control methods towards reconfigurable flight control. This thesis investigates fault tolerant flight control in the event of actuator or plant faults. A literature survey suggests that model predictive control (MPC) is very suitable for use as fault tolerant flight control method due to its ability to incorporate various constraints. Application of MPC in this setting is the central topic in this thesis. MPC is applied in two different ways in the text. First, a method is presented for finding both a state-observer and the cost function associated with a model predictive controller, based on an already existing output feedback controller. The goal of this exercise is to retain the properties of the existing controller, while adding the constraint handling capabilities of MPC. 
The second way features the combination of model-based predictive control and the inversion of the dynamics of the system under control into a constrained and globally valid control method for fault-tolerant flight-control purposes. Because the approach allows the incorporation of constraints, additional constraints can be added in case of a failure. Such failures range from relatively straightforward actuator failures to more complicated structural breakdowns where, through the addition of constraints, the aircraft can be kept within its remaining flight envelope. Furthermore, the method is model-based, which allows adaptation of the system model in case of a failure. Both of these properties lead to the fault-tolerant qualities of the method presented. Projection of a polytope onto a lower dimensional polytope is an important element in the combination of MPC and dynamic inversion. A method is presented that avoids the computation of the polytope’s vertices and the application of linear programming methods. The theory presented in this thesis is applied to a benchmark model which constitutes a detailed simulation model of a Boeing 747-200 aircraft, like the aircraft that crashed in the Amsterdam Bijlmer area in 1992.","","en","doctoral thesis","","","","","","This research has been supported financially by technology foundation STW under project number dmr6515.","","","","","Team Bart De Schutter","","",""
"uuid:2d9ac8e0-b922-4fcc-a33d-44a67f7bffad","http://resolver.tudelft.nl/uuid:2d9ac8e0-b922-4fcc-a33d-44a67f7bffad","Optimizing the Performance of Data Analytics Frameworks","Ghit, B.I. (TU Delft Dataintensive Systems)","Epema, D.H.J. (promotor); Delft University of Technology (degree granting institution)","2017","Data analytics frameworks enable users to process large datasets while hiding the complexity of scaling out their computations on large clusters of thousands of machines. Such frameworks parallelize the computations, distribute the data, and tolerate server failures by deploying their own runtime systems and distributed filesystems on subsets of the datacenter resources. Most of the computations required by data analytics applications are conceptually straight-forward and can be performed through massive parallelization of jobs into many fine-grained tasks. Providing efficient and fault-tolerant execution of these tasks in datacenters is ever more challenging and a variety of opportunities for performance optimization still exist. In this thesis we optimize the job performance of data analytics frameworks by addressing several fundamental challenges that arise in datacenters. The first challenge is multi-tenancy: having a large number of users may require isolating their workloads across multiple frameworks. Nevertheless, achieving performance isolation is difficult, because different frameworks may deliver very unbalanced service levels to their users. Second, users have become very demanding from these frameworks, thus expecting timely results for jobs that require only limited resources. However, even with a few long jobs that consume large fractions of the datacenter resources, short jobs may be delayed significantly. Third, improving the job performance in the face of failures is harder still, as we need to allocate extra resources to recompute work which was already done. 
In order to address these challenges we design, implement, and test several scheduling policies for the evolving usage trends that are derived from the analysis of basic theoretical models. We take an experimental approach and we evaluate the performance of our policies with real-world experiments in a datacenter, using representative workloads and standard benchmarks. Furthermore, we bridge the gap between those experiments and prior theoretical work by performing large-scale simulations of scheduling policies.","","en","doctoral thesis","","978-94-6295-640-7","","","","","","","","","Dataintensive Systems","","",""
"uuid:76fc37fb-a33e-4c17-a48f-01bbaac72377","http://resolver.tudelft.nl/uuid:76fc37fb-a33e-4c17-a48f-01bbaac72377","Hydrate slurry as cold energy storage and distribution medium: Enhancing the performance of refrigeration systems","Zhou, H. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Infante Ferreira, C.A. (copromotor); Delft University of Technology (degree granting institution)","2017","The research presented in this thesis focuses on the use of hydrate slurries in the air conditioning and refrigeration areas. Both experimental and mathematical methods have been used. Hydrate slurries have been suggested as promising cold storage materials that can be used in air conditioning systems due to their high latent heat (193 kJ/kg and 387 kJ/kg for the hydrates studied in this thesis) and positive phase change temperature (12.5 °C and 8.0 °C for the hydrates studied in this thesis). However, large scale industrial applications of hydrate slurries are still very limited. This suggests that more research efforts should be devoted to the demonstration of its advantages.","Hydrate slurry; Air conditioning; Growth model; Energy efficiency","en","doctoral thesis","","978-94-6299-595-6","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:a8c53bab-75d6-460f-ba9e-3d81614c51f1","http://resolver.tudelft.nl/uuid:a8c53bab-75d6-460f-ba9e-3d81614c51f1","The Spatial Dimension of House Prices","Gong, Y. (TU Delft OLD Housing Systems)","Boelhouwer, P.J. (promotor); de Haan, J. (promotor); Delft University of Technology (degree granting institution)","2017","The economic reform in China, launched in the late 1970s, gradually promotes the free mobility of capital and labour between rural and urban areas, and between cities. The following housing market reform in the late 1990s thoroughly terminates the socialist allocation of housing and introduces market forces into the housing sector. Such institutional shifts have profound effects on the evolution of the Chinese interurban housing market. Yet, little is known about the spatial behaviour of house prices across cities in the post-reform era. How do the housing markets of different cities organise across space? What is the relationship between the house price dynamics of different cities? To answer these questions, this research performs economic and econometric analysis of the spatial dimension of the Chinese interurban housing market. In addition, this research also concerns the construction of a reliable house price index in the presence of spatial heterogeneity and dependence in the urban housing market of China. A reliable house price index is essential to the analysis of house price dynamic behaviour. However, owing to the data problem, this part is conducted based on the housing market of a Dutch city....","housing prices; urban housing market","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-51-0","","","","A+BE | Architecture and the Built Environment No 4 (2017)","","","","","OLD Housing Systems","","",""
"uuid:e228623a-7844-48b7-97ed-0beda4d4c293","http://resolver.tudelft.nl/uuid:e228623a-7844-48b7-97ed-0beda4d4c293","Guidance control and dynamics of a new generation of geostationary satellites","de Bruijn, F.J. (TU Delft Space Systems Egineering)","Gill, E.K.A. (promotor); Delft University of Technology (degree granting institution)","2017","","geostationary satellites; station-keeping; collocation; geometric constraints; modeling; dynamics; guidance; control","en","doctoral thesis","","978-94-028-0606-9","","","","","","","","","Space Systems Egineering","","",""
"uuid:2149da75-ca29-4804-8672-549efb004048","http://resolver.tudelft.nl/uuid:2149da75-ca29-4804-8672-549efb004048","Doublet deployment strategies for geothermal Hot Sedimentary Aquifer exploitation: Application to the Lower Cretaceous Nieuwerkerk Formation in the West Netherlands Basin","Willems, C.J.L. (TU Delft Reservoir Engineering)","Bruhn, D.F. (promotor); Weltje, G.J. (promotor); Donselaar, M.E. (copromotor); Delft University of Technology (degree granting institution)","2017","Huge amounts of heat are stored in sedimentary aquifers in the Dutch subsurface. The amount of heat would be sufficient to provide our national heat demand for decades without any greenhouse gas emissions. Exploitation of this type of resource started some 10 years ago in the Netherlands. In 2016, 16 geothermal doublet systems had been installed that produce geothermal heat, and each year 2 to 3 new systems are realised. A doublet system consists of a production well that extracts hot formation water from kilometer deep aquifers. After the heat is extracted from the water in heat exchangers, the cooled water is reinjected into the same aquifer at approximately 1 to 1.5 km distance from the production well. Most of the current Dutch doublet systems provide heat for the horticulture sector. These systems have an average net energy production of approximately 10 MWth and therefore hundreds of additional systems are required to significant amounts of our heat consumption with geothermal energy. This PhD thesis investigated doublet system design and deployment strategies to optimise exploitation and increase the possible number of doublet systems exploiting the same aquifer. Based on detailed geological models, subsurface flow simulations are used to evaluate parameters such as required injector-producer distance, the preferred orientation of a well pair with respect to geological trends and required doublet distance to avoid negative interference. 
Based on the results, regional doublet deployment strategies can be developed to make optimal use of geothermal heat from sedimentary resources.","Hot Sedimentary Aquifers; Geothermal Field development; Direct-use; Low enthalpy geothermal; sustainable energy","en","doctoral thesis","","978-94-92516-49-7","","","","","","","","","Reservoir Engineering","","",""
"uuid:dd9ea945-136c-4b74-bae2-f1a8cf9a6ed9","http://resolver.tudelft.nl/uuid:dd9ea945-136c-4b74-bae2-f1a8cf9a6ed9","Sequentially linear analysis for simulating brittle failure","van de Graaf, A.V. (TU Delft Applied Mechanics)","Rots, J.G. (promotor); Hendriks, M.A.N. (promotor); Delft University of Technology (degree granting institution)","2017","The numerical simulation of brittle failure at structural level with nonlinear finite
element analysis (NLFEA) remains a challenge due to robustness issues. We attribute these problems to the dimensions of real-world structures combined with softening behavior and negative tangent stiffness at the local level, which may lead to non-convergence, i.e. the applied external loads are not in equilibrium with the internal forces. Multiple cracks that compete to “survive” and the possibility of bifurcations, i.e. the existence of multiple equilibrium paths, also contribute to these problems. In engineering practice, however, robust numerical methods are becoming increasingly important. For example, NLFEA may be used to determine the actual load-bearing capacity of existing concrete bridges in order to assess whether these meet current regulations. NLFEA may also be employed to predict building damage due to underground construction or seismic action.","Sequentially linear analysis; Brittle failure; Finite element analysis; Non-proportional loading; Coulomb friction; Saw-tooth law","en","doctoral thesis","","978-94-6186-799-5","","","","","","","","","Applied Mechanics","","",""
"uuid:5b78d71b-708f-405f-b3b3-ca664b141ce0","http://resolver.tudelft.nl/uuid:5b78d71b-708f-405f-b3b3-ca664b141ce0","The spalling mechanism of fire exposed concrete","Lottman, B.B.G. (TU Delft Steel & Composite Structures)","Walraven, J.C. (promotor); Koenders, E.A.B. (promotor); Delft University of Technology (degree granting institution)","2017","The spalling damage observed to concrete structures after severe fire exposure has been the topic of scientific research for the past decades. This phenomenon is commonly characterised by the sudden and in some cases violent breaking off of concrete pieces from the cross-section. In this thesis the derivation and the numerical results of a finite element based model are presented, using a coupled pore pressure and fracture mechanics approach. The crack patterns obtained are found to be sufficient to reduce the pore pressure to a level representing only a minor contribution. The fracture behaviour during severe fire exposure also revealed that the continued compression of the heated surface layer promotes the formation of thermal instabilities. Based on simulated results, observations from full-scale tests and a conceptual model, a philosophy is argued proposing thermal buckling as the spalling mechanism of fire exposed concrete. --- NEDERLANDSE VERSIE --- De na een ernstige brand waargenomen spatschade aan betonnen constructies is het onderwerp van wetenschappelijk onderzoek in de afgelopen decennia geweest. Dit fenomeen kenmerkt zich algemeen door het plotseling en in sommige gevallen heftige afbreken van betonnen stukken van de doorsnede. In dit proefschrift worden de afleiding en de numerieke resultaten van een op de eindige elementen gebaseerd model gepresenteerd dat gebruik maakt van een gekoppelde benadering tussen de poriëndruk en de breukmechanica. De verkregen scheurpatronen blijken voldoende te zijn om de gasdruk te reduceren tot een niveau hetgeen alleen een geringe toevoeging vertegenwoordigd. 
Het scheurgedrag gedurende brandbelasting onthulde ook dat de zich continuerende samendrukking van de verhitte oppervlaktelaag de vorming van thermische instabiliteiten bevordert. Gebaseerd op simulatieresultaten, waarnemingen tijdens grootschalige testen en een conceptueel model wordt een filosofie verdedigd die thermische knik voorstelt als het spatmechanisme van aan brand blootgesteld beton.","concrete; fire; (explosive) spalling; pore pressure; fracture mechanics; finite element method; thermal buckling mechanism","en","doctoral thesis","","978-94-028-0623-6","","","","","","","","","Steel & Composite Structures","","",""
"uuid:6fe1dea8-53b3-4734-9e0c-ff01ed393d79","http://resolver.tudelft.nl/uuid:6fe1dea8-53b3-4734-9e0c-ff01ed393d79","Level of detail in 3D city models","Biljecki, F. (TU Delft Urban Data Science)","Stoter, J.E. (promotor); Ledoux, H. (copromotor); Delft University of Technology (degree granting institution)","2017","The concept of level of detail (LOD) describes the content of 3D city models and it plays an essential role during their life cycle. On one hand it comes akin to the concepts of scale in cartography and LOD in computer graphics, on the other hand it is a standalone concept that requires attention. LOD has an influence on tendering and acquisition, and it has a hand in storage, maintenance, and application aspects. However, it has not been significantly researched, and this PhD thesis fills this void.
This thesis reviews dozens of current LOD standards, revealing that most practitioners consider the LOD to be comprised solely of the geometric detail of data and there are disparate views on the concept as a whole. However, the research suggests that the LOD encompasses additional metrics, such as semantics and texture. The thesis formalises the concept, enabling integration and comparison of current LOD standards. The established framework may be applied to cartography and to different forms of 3D geoinformation such as point clouds.
Following the formalised concept, a new LOD specification is presented improving the LOD concept in the current OGC CityGML 2.0 standard, a prominent norm in the 3D GIS industry. The specification introduces 16 LODs for buildings that are shaped after analysing the capabilities of acquisition techniques and a large number of real-world datasets. The improved LOD specification may be integrated in product portfolios and tenders, preventing misunderstandings between stakeholders, and as a better language for communicating the specifics of a dataset to be acquired. The specification also considers different approaches to realise the data. Such geometric references result in dozens of different variants of the same LOD.
3D data according to the LOD specification was generated using a procedural modelling engine that was developed over the course of the research. The engine is capable of producing 3D city models in a large number of different variants and according to the CityGML standard.
The thesis also catalogues the many different ways to create 3D city models. A prominent technique for producing data in a different LOD is generalisation, i.e. simplifying a 3D city model. The inverse---augmenting the LOD of a dataset---has not been researched to a great extent, and this thesis gives an overview of the topic. This research demonstrates that it is possible to generate 3D city models without elevation measurements, inherently augmenting the LOD of coarser data (2D footprints). The method relies on machine learning: several attributes found in 2D datasets may hint at the height of a building, thus enabling extrusion and creating 3D city models suited for several applications.
Some acquisition techniques may result in multi-LOD datasets, and nowadays there are some regions represented in different, independent datasets. However, it was found that possibilities to link such data are deficient. The lack of linking mechanisms inhibits acquisition, storage, and maintenance of multi-LOD data. Two methods for linking features across two or more LODs have been developed resulting in an increased consistency of multi-LOD datasets. The first method links matching geometries across multiple LODs, while the second method establishes a 4D data structure in which the LOD is modelled as the fourth (spatial) dimension.
It is often believed that the more detailed the 3D data, the better. However, as in computer graphics, dealing with data at fine LODs comes at a cost: such datasets are harder to obtain, their storage footprint is large, and their usage within a spatial analysis may be slow. Little research has been dedicated to investigating whether an increase in the LOD of the data brings a comparably significant increase in benefits when the data is used in a spatial analysis.
First, an analysis using real-world multi-LOD data was carried out. Different LODs of spatial data covering the Netherlands were used in a spatial analysis to refine population maps, obtaining different results for each LOD. However, several problems were exposed, revealing that using real data for such investigations is not optimal.
The remainder of the research focuses on using procedurally generated data for such experiments. Synthetic data in several different LODs has been generated and employed for four spatial analyses (estimation of the building shadow, envelope area, volume, and solar irradiation). The experiments lead to different conclusions. Finer LODs usually bring some improvement to the quality of the spatial analysis, but not always, and the improvement may be negligible. The results of the experiments ultimately depend on the spatial analysis that is considered; the varying results between different spatial analyses make each of them unique. Furthermore, the benefit a finer LOD brings to a spatial analysis is not always clear and easily measurable. In short, striving to produce data at finer LODs may please the eye, but this is not always counterbalanced by the benefit it brings to a spatial analysis.
A further complication is that, once realised, 3D city models are unavoidably burdened with acquisition errors. An error propagation analysis was performed by disturbing the procedurally generated datasets with a range of simulated positional errors. Comparisons were made between the intentionally degraded datasets and their error-free counterparts, thus obtaining the magnitude of uncertainty the positional errors cause in a spatial analysis. These experiments led to several findings, most importantly:
1. How the LODs are realised (which geometric references are used) has a larger influence than the LOD. A coarse LOD produced with a favourable geometric reference may yield better results than a finer LOD realised with an unfavourable reference.
2. Positional errors considerably affect spatial analyses. The effect is comparable across similar LODs. Simpler LODs are slightly less affected by positional errors, but they may contain a large systematic error.
3. Errors induced in the acquisition process generally cancel out the improvement provided by finer LODs. The main conclusion is that in the considered spatial analyses the positional error has a significantly higher impact than the LOD. As a consequence, it is suggested that it is pointless to acquire geoinformation at a fine LOD if the acquisition method is not accurate, and instead it is advised to focus on the improvement of accuracy of the data.
The thesis proposes additional research for future work. For example, since this research focuses specifically on 3D building models, it would be worth extending the research to other urban features such as roads and vegetation. Furthermore, quality control in 3D GIS does not encompass the evaluation of the LOD of data. Hence integration of the LOD in quality standards should be a priority for future work.
Nowadays, news is abundantly available online, allowing users to discover and follow news events. However, online news is often highly redundant: most sources base their stories on previously published works and add only limited new information. Thus, a user often ends up spending a significant amount of effort re-reading the same parts of a story before finding relevant and novel information. In Chapter 2 and Chapter 3, we present a novel approach to construct an online news summary for a given topic. Salient sentences are identified by clustering the sentences in the news stream based on the relative proximity of the sentences and the temporal proximity of their publication times. To improve the coherence of a long summary that describes a news topic, we propose to automatically cluster sentences by subtopic in Chapter 4. In Chapter 5, we show how new topics can be detected in the news stream using the same clustering technique.
In real-life decision making, people are often faced with an overload of choices. A recommender system aids the user by reducing the available choices to a shortlist of items that are of interest to the user. In Chapter 6, we learn high-dimensional representations for movies that allow movies to be recommended effectively based on a user’s most recently rated movies.","Information retrieval; retrieval algorithms; clustering; recommender systems","en","doctoral thesis","","978-94-6186-803-9","","","","SIKS Dissertation Series No. 2017-19. The research reported in this thesis has been carried out under the auspices of SIKS, the Dutch Research School for Information and Knowledge Systems.","","","","","Multimedia Computing","","",""
"uuid:a75df963-aede-44a7-9216-274cdc7c4278","http://resolver.tudelft.nl/uuid:a75df963-aede-44a7-9216-274cdc7c4278","System-level feature-based modeling of cyber-physical systems: A theoretical framework and methodological fundamentals","Pourtalebi Hendehkhaleh, S. (TU Delft Cyber-Physical Systems)","Horvath, I. (promotor); Delft University of Technology (degree granting institution)","2017","Cyber-physical systems are complex trans-disciplinary systems. Designing this kind of systems requires cooperation of several groups of experts with various backgrounds such as mechatronics and robotics, software engineering, data management, knowledge engineering, system on a chip, embedded systems, humans and systems interaction, and social and cognitive engineering. Design of a new CPS is typically initiated by system designers, who focus on system-level architecting and functional design (pre-embodiment design), and completed by embodiment/detail designers (who focus on components and their interoperation). Although there are numerous software tools to assist designers in detail design, it is hard to find dedicated tools that can sufficiently support CPS designers in pre-embodiment design phase. The tools available for this purpose are mostly based on logical, analytical, and mathematical modeling. They usually apply various levels of abstraction and simplification in modeling. As a result, many important chunks of information about attributes and parameterization of components are ignored or lost in system-level conceptualization. To overcome this problem, we introduced the framework and computational methodology of a system-level feature-based modeling, which can be used as a basis for developing next-generation CPSs modeling tools.
As a general underpinning, we have developed the mereo-operandi theory (MOT), which combines several classic theories in a uniform body and captures information about architectural and operational relations in a system in an integral manner. As a complement to MOT, the theory of system manifestation features (SMFs) has been proposed, which introduces the methodological fundamentals. SMFs have been defined as multi-stage, knowledge-intensive, compound modeling entities. The developed knowledge frames have been implemented as relational tables of the modeling warehouses. The computational procedures required for modeling of CPSs have also been elaborated. For validation, the developed modeling framework was benchmarked against multiple available modeling tools. It has been concluded that SMF-based modeling offers many new opportunities for compositional modeling of CPSs. Some of the novelties are: (i) imposing a strictly physical view, (ii) concurrent modeling of architecture and operation aspects, (iii) a uniform information structure for all kinds of components, and (iv) benefiting from active ontologies.","Cyber-Physical Systems; System manifestation features (SMFs); system-level features; system-level modeling; Pre-embodiment design; Mass customization; Feature technology","en","doctoral thesis","","978-94-028-0621-2","","","","","","","","","Cyber-Physical Systems","","",""
"uuid:66553334-94e2-4b82-8a94-8286cc72cf09","http://resolver.tudelft.nl/uuid:66553334-94e2-4b82-8a94-8286cc72cf09","Making Better Batteries: Following Electrochemistry at the Nano Scale with Electron Microscopy","Basak, S. (TU Delft QN/Zandbergen Lab)","Zandbergen, H.W. (promotor); Delft University of Technology (degree granting institution)","2017","With the focus in automobile industry to switch from petroleum-based vehicles to all electric vehicles, the increasing demand on harvesting energy from renewable sources for a safer and greener future and the ever-increasing demand of the portable electronics systems, the need for better batteries is eminent. The ultimate aim of battery research is to develop a low cost, light and small battery that can deliver high-capacity and/or high power. Lithium and sodium batteries are the frontrunners in achieving this ultimate battery. A macro battery is composed of thousands of millions of nanoparticles. Thus, to prepare a better battery we must determine the respective effects of electrode nanoparticle size, shape, structure, grain–grain boundary, defects and doping on the battery performance. To do so electrode nanoparticles need to be probed at the nano-scale to find out the correlation between their morphology, structure and chemical properties and their evolution due to the battery charging-discharging with battery performance. In this thesis we have utilized the unique capability of electron microscope to resolve the microstructural and chemical information at the (sub)nanometer scale to probe the electrode nanoparticles for making better batteries.","Li-ion battery; Li-O2 battery; electrochemistry; transmission electron microscopy; In-situ; MEMS","en","doctoral thesis","","978-90-8593-293-2","","","","Casimir PhD series, Delft-Leiden 2017-09","","2019-10-02","","","QN/Zandbergen Lab","","",""
"uuid:27a5546b-f1c1-4ff7-b0b0-30172a75cc16","http://resolver.tudelft.nl/uuid:27a5546b-f1c1-4ff7-b0b0-30172a75cc16","Engineering Selective and Stable Methanol to Olefins Catalysts","Yarulina, I. (TU Delft ChemE/Catalysis Engineering)","Kapteijn, F. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-028-0613-7","","","","","","2017-12-31","","","ChemE/Catalysis Engineering","","",""
"uuid:1d2dc82e-685b-4a80-963b-3c6a3d0d165f","http://resolver.tudelft.nl/uuid:1d2dc82e-685b-4a80-963b-3c6a3d0d165f","Turbulent axisymmetric base flows: Symmetry and long-term behavior","Gentile, V. (TU Delft Aerodynamics)","Scarano, F. (promotor); van Oudheusden, B.W. (copromotor); Schrijer, F.F.J. (copromotor); Delft University of Technology (degree granting institution)","2017","This thesis deals with the flow around truncated bodies of revolution. Such flows are encountered in a variety of engineering applications relevant to the aerospace transportation industry, notably to space launcher vehicles. The work focuses on the unsteady behavior of the wake and particularly on the dynamics of the recirculation region behind the base.
The manuscript starts with a survey of the past literature on the topic of turbulent axisymmetric wake flows. Salient aspects are discussed mainly in relation to flow topology and dynamical behavior. The vortex shedding process is examined along with the associated instabilities, namely the large-scale wake oscillations, the backflow azimuthal meandering and the transition scenarios exhibited by the wake across the different flow regimes.
Chapter 3 illustrates the current methodology of investigation. The flow facility and the geometrical models used in the experiments are described. The operating principles of the Particle Image Velocimetry (PIV) technique are summarized. The main contributions of uncertainty affecting the present results are defined. Details are provided of the Proper Orthogonal Decomposition (POD) procedure adopted in the analysis of the large-scale fluctuations.
The influence of base geometry and symmetry on the behavior of a turbulent incompressible reattaching flow is addressed in Chapter 4. Afterbody geometries with varying diameter ratios are discussed so as to model axisymmetric backward-facing step (BFS) flows of varying step heights. Any increase in the afterbody diameter induces earlier shear layer reattachment and inhibits the large-scale shear layer fluctuations. Comparison with equivalent planar BFS flows reveals an opposite scaling of the reattachment distance for the axisymmetric and the two-dimensional flow case, with convergence towards small values of the step height.
The large-scale fluctuations of the turbulent wake behind a circular base are spatio-temporally characterized in Chapter 5. It is found that the wake dynamics are dominated by very-low-frequency backflow fluctuations in proximity to the stagnation point on the base, while the wake undergoes a global radial displacement closer to the rear stagnation point.
The very-low-frequency turbulent wake unsteadiness is examined in Chapter 6 under the effects of a small pitch angle. It is found that the reversed-flow region tends to stabilize away from the body axis of symmetry with increasing angles between the body and the freestream flow. Analysis of the instantaneous velocity field and POD of the velocity fluctuations gives evidence of a backflow large-scale unsteadiness only within 0.1° deviations from axisymmetric inflow conditions.
The near-wake azimuthal organization in the presence of an afterbody is analyzed in Chapter 7 within different azimuthal-radial planes behind the base and for different diameter ratios. The afterbody is found not to alter the shear layer behavior significantly, but it interferes with the inner backflow meandering. It is shown that the wake unsteadiness of an afterbody flow is dominated by the shear layer development.
The main findings from the preceding chapters are summarized at the end of the manuscript. The conclusions of the present research are drawn and possible directions for future research on the topic of turbulent wake dynamics are outlined.","","en","doctoral thesis","","","","","","","","","","","Aerodynamics","","",""
"uuid:5657a63d-1549-4080-8805-a122679cb707","http://resolver.tudelft.nl/uuid:5657a63d-1549-4080-8805-a122679cb707","Conceptual Design Study for In-flight Refueling of Passenger Aircraft","Mo, L. (TU Delft Flight Performance and Propulsion)","Veldhuis, L.L.M. (promotor); la Rocca, G. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6295-630-8","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:23580f57-9ed1-4fa3-95ad-b8fe7cc8ba05","http://resolver.tudelft.nl/uuid:23580f57-9ed1-4fa3-95ad-b8fe7cc8ba05","Time-lapse microscopy study of noise in development","Gritti, N. (TU Delft BN/Sander Tans Lab)","Tans, S.J. (promotor); van Zon, J.S. (copromotor); Delft University of Technology (degree granting institution)","2017","","developmental biology; systems biology; microfabrication; fluorescence microscopy; timelapse microscopy","en","doctoral thesis","","978-94-92323-13-2","","","","","","2019-04-01","","","BN/Sander Tans Lab","","",""
"uuid:e7d8b0d5-8d7a-4d68-b106-c37f30d455a4","http://resolver.tudelft.nl/uuid:e7d8b0d5-8d7a-4d68-b106-c37f30d455a4","Systematic Framework for Teleoperation with Haptic Shared Control","Smisek, J. (TU Delft Control & Simulation)","Mulder, Max (promotor); Schiele, A. (copromotor); van Paassen, M.M. (copromotor); Delft University of Technology (degree granting institution)","2017","Teleoperation – performing tasks remotely by controlling a robot – permits the execution of many important tasks that would otherwise be infeasible for people to carry out directly. Nuclear accident recovery, deep water operations, and remote satellite servicing are just three examples. Remote task execution principally offers two extremes for control of the teleoperated robot: direct tele manipulation, which provides flexible task execution, but requires continuous operator attention, and automation, which lacks flexibility but offers superior performance in predictable and repetitive tasks (where the human assumes a supervisory role). This dissertation explores a third option, termed hap- tic shared control, which lies in-between these two extremes, and in which the control forces exerted by the human operator are continuously merged with ‘guidance’ forces generated by the automation. In a haptic shared control system, the operators continually contribute to the task execution, keeping their skills and situational awareness. It is common practice to design the haptic shared control systems heuristically, by iteratively adjusting them to the satisfaction of the system designer, primarily based on human-in- the-loop experiments. In this dissertation, we aim to improve this design and evaluation process. Our goal is to follow a system-theoretic approach and formalize the design procedures of haptic shared control systems applied to teleoperation. 
Such a formalization should provide designers of future HSC systems with a better understanding and more control over the design process, with the ultimate goal of making the HSC systems safer, easier and more intuitive to use, and overall to perform better. The research goal of this dissertation has been divided into three parts.","Teleoperation; shared control; communication delay; haptics","en","doctoral thesis","","978-94-028-0612-0","","","","","","","","","Control & Simulation","","",""
"uuid:e786ee1f-8fea-4ef2-a67b-08fad87ae0f1","http://resolver.tudelft.nl/uuid:e786ee1f-8fea-4ef2-a67b-08fad87ae0f1","Multiple-site damage crack growth behaviour in Fibre Metal Laminate structures","Wang, W. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Rans, C.D. (copromotor); Delft University of Technology (degree granting institution)","2017","Fibre metal laminates (FMLs)were developed and refined for their superior crack growth resistance and critical damage size that complimented the damage tolerance design philosophy utilized in the aerospace sector. Robust damage tolerance tools have been developed for FMLs. However, they tend to focus on the evolution of an isolated crack. There is also a risk that they will be invalidated overtime as a result of the occurrence of multiple cracks within one structure (one form of widespread fatigue damage). To combat another failure due to widespread fatigue damage, the airworthiness regulations were revised to include the concept of a Limit of Validity (LOV) of the damage tolerance analyses. Consequently, it is crucial to examine fatigue crack growth (FCG) in FMLs containing Multiple-site Damage (MSD) cracks despite their superior damage tolerance merits. The focus of this thesis therefore is to analyse MSD crack growth in FML structures. Mechanically fastened FML joints are potentially weak structural designs that are susceptible to MSD due to the stress rising contributors such as secondary bending, pin loading and open holes subjected to bypass loading. In this thesis, predictive models were developed to address several key mechanisms that affect FCG in FML joints containing MSD, and validated with corresponding experimental work. Then the predictive models were systematically integrated and implemented for FML joints. It was identified that the nature of fatigue in FMLs led to the load redistribution mechanism as the key factor to be modelled in predicting MSD growth in FMLs. 
The structural stiffness reductions caused by the presence of multiple cracks resulted in load redistribution from the other cracks to the single crack to be analysed, exacerbating the total stress intensity factor (SIF) experienced at the tips of the single crack, increasing the crack growth rate (CGR). The load redistribution mechanism was first substantiated by investigating FCG in FMLs containing discretely notched layers. The prediction model fairly captured the load redistribution mechanism by idealizing the notches in the metal layers as removals of metal strips. The crack acceleration over a major portion of the crack propagation was well predicted with the model; however, the surge in CGR over roughly 3 mm crack length prior to the link-up was underestimated since the plasticity interaction was not accounted for. The capability of modelling the load redistribution mechanism allows the states of multiple cracks to be analysed one by one. It was found that the load redistribution could not be symmetric for every crack and non-symmetric crack configurations therefore developed in FMLs with finite width. Hence, non-symmetric crack growth in FMLs was also investigated in this work. It was also found that both crack tip non-symmetry and delamination shape non-symmetry affected the crack growth in the metal layers. The model for non-symmetric crack growth in FMLs was validated with experimental data. Good correlation was observed. The model for MSD growth in FML panels sequentially analyses each crack state. The other cracks are idealized as removals of metal strips when analyzing the state of a single crack. This non-physical idealization of the cracks led to consistently conservative prediction results in comparison with the test data. Nevertheless, the prediction model provided good predictions of the evolution of MSD configurations. 
Additionally, it was proven that a very non-conservative predicted fatigue life could be obtained if the load redistribution mechanism was not considered. The effects of pin loading on FCG in FMLs were also investigated. The test data showed very rapid growth of the crack in the vicinity of the pin loading. The CGR decreased with increasing crack length. The model applied the principle of superposition to split the non-symmetric tension-pin loading into simpler tensile loading and a pair of point loads acting on the crack flanks. The SIFs for the simpler loading cases were derived and superposed to obtain the total SIF as a result of the tension-pin loading. The predicted CGR and equivalent delamination shape correlated with the measurements very well, but the model failed to predict the crack path and the measured delamination shape, which were trivial issues for this work. The relevance and applicability of the developed models in this thesis for predicting the MSD behaviour in mechanically fastened FML joints were examined. The predicted results captured the trends of the measured CGR in FML joints containing MSD cracks, although there were some discrepancies. The discrepancies are mainly due to two major shortcomings of the model: neglecting the load redistribution over multiple fastener rows and neglecting the effects of secondary bending stresses.","Fatigue Crack Growth; fibre metal laminates; Multiple site damage; load redistribution","en","doctoral thesis","","978-94-6295-642-1","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:cc2f32f8-6c6f-4675-a47c-37b70ed4e30c","http://resolver.tudelft.nl/uuid:cc2f32f8-6c6f-4675-a47c-37b70ed4e30c","Aeroelastic Limit-Cycle Oscillations resulting from Aerodynamic Non-Linearities","van Rooij, A.C.L.M. (TU Delft Aerodynamics)","Bijl, H. (promotor); Dwight, R.P. (copromotor); Delft University of Technology (degree granting institution)","2017","Aerodynamic non-linearities, such as shock waves, boundary layer separation or boundary layer transition, may cause an amplitude limitation of the oscillations induced by the fluid flow around a structure. These aeroelastic limit-cycle oscillations (LCOs) resulting from aerodynamic non-linearities have been studied numerically in this PhD thesis. A frequency domain-based method called ADePK has been developed to study the bifurcation behaviour of the LCO solutions. From several extensive (structural) parameter studies, it was found that aerodynamic non-linearities, especially shock wave motions, can lead to limit-cycle oscillations that occur already below the linearly predicted flutter boundary.","Aeroelasticity; Limit-cycle oscillations; Unsteady aerodynamics; Bifurcation behaviour; Structural parameter variations","en","doctoral thesis","","978-94-6186-794-0","","","","","","","","","Aerodynamics","","",""
"uuid:4f1b06bc-3d01-4107-8c85-21c0e278c4bd","http://resolver.tudelft.nl/uuid:4f1b06bc-3d01-4107-8c85-21c0e278c4bd","Dielectric Coatings for High Voltage Gas Insulated Switchgear","van der Born, D. (TU Delft DC systems, Energy conversion & Storage)","Smit, J.J. (promotor); Delft University of Technology (degree granting institution)","2017","","GIS; Switchgear; High Voltage; Coating","en","doctoral thesis","","978-94-6299-576-5","","","","","","","","","DC systems, Energy conversion & Storage","","",""
"uuid:27802bfc-c459-4c71-acc8-7678ca3cc5d4","http://resolver.tudelft.nl/uuid:27802bfc-c459-4c71-acc8-7678ca3cc5d4","Porous Organic Frameworks in Catalysis","Bavykina, A.V. (TU Delft ChemE/Catalysis Engineering)","Kapteijn, F. (promotor); Makkee, M. (promotor); Gascon, Jorge (promotor); Delft University of Technology (degree granting institution)","2017","This thesis focuses on the development of functional Porous Organic Frameworks (POFs) for various catalytic applications. POF supported molecular catalysts allow to combine the advantages of both homogeneous and heterogeneous catalysts, offering excellent prospects in the quest to heterogenize homogeneous catalysts. The framework tunability of POFs can be used to obtain an optimal catalytic performance, while the fully heterogeneous character of the POFs allow easy handling and recycling. In some cases, they may even directly participate in the catalytic cycle by activating substrates.","porous materials; porous organic framework; porous aromatic framework; Covalent organic framework; Catalysis","en","doctoral thesis","","978-94-028-0594-9","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:0f0259f1-0c33-442f-b851-86a846e736fc","http://resolver.tudelft.nl/uuid:0f0259f1-0c33-442f-b851-86a846e736fc","HCI in interactive segmentation: Human-computer interaction in interactive segmentation of CT images for radiotherapy","Ramkumar, A. (TU Delft Mechatronic Design)","Niessen, W.J. (promotor); Stappers, P.J. (promotor); Song, Y. (copromotor); Delft University of Technology (degree granting institution)","2017","","User Interfaces and Human Computer Interaction","en","doctoral thesis","","978-94-92516-47-3","","","","","","","","","Mechatronic Design","","",""
"uuid:be493008-78cc-46fa-937e-ee7de4559d98","http://resolver.tudelft.nl/uuid:be493008-78cc-46fa-937e-ee7de4559d98","Large-scale copyright enforcement and human rights safeguards in online markets: A comparative study of 22 sanctioning mechanisms from eight enforcement strategies in six countries between 2004 and 2014","Kreiken, F.H. (TU Delft Organisation & Governance)","van Eeten, M.J.G. (promotor); Delft University of Technology (degree granting institution)","2017","The Internet has facilitated large-scale copyright infringement. Fighting this one case at a time via the standard civil law procedures is costly in terms of time and money. In response, copyright holders have adopted new strategies that they hoped would be more effective at large-scale enforcement. The question is how these large-scale enforcement procedures impact procedural safeguards, most notably due process and fair trial. Empirical research into large-scale recent enforcement strategies has been limited and tended to focus on individual strategies, rather than on comparative analysis across different strategies and jurisdictions. This dissertation sets out to fill this gap. It presents a comparative empirical study of 22 sanctioning mechanisms from eight enforcement strategies in six countries between 2004 and 2014. It adds to the discussion on the regulation of copyrights and can help policymakers by illustrating the effect of choices made in different countries. For researchers in the field of information policy and law, it provides a detailed description of different enforcement initiatives and adds to the studies on human rights. This study shows that copyright enforcement procedures are able to scale-up only by offering fewer procedural safeguards to sanctioned parties. Similarly, procedures that impact on a larger scale provide less severe sanctions. 
The research has also shown that infringement levels are by and large unchanged, and that enforcement procedures create substantial costs, a significant portion of which are externalized to the state and to third parties.","copyright; enforcement; safeguards; due process; fair trial; privacy; internet; governance","en","doctoral thesis","","978-90-79787-69-2","","","","NGInfra PhD Thesis Series on Infrastructures; 81 Advisor: David Koepsell","","","","","Organisation & Governance","","",""
"uuid:9a8812d1-d152-4a68-bd17-c88261f06481","http://resolver.tudelft.nl/uuid:9a8812d1-d152-4a68-bd17-c88261f06481","Centralized electricity generation in offshore wind farms using hydraulic networks","Jarquin Laguna, A. (TU Delft Offshore Engineering; TU Delft Support Delft Center for Systems and Control)","Metrikine, A. (promotor); van Bussel, G.J.W. (promotor); van Wingerden, J.W. (copromotor); Delft University of Technology (degree granting institution)","2017","The work presented in this thesis explores a new way of generation, collection and transmission of wind energy inside a wind farm, in which the electrical conversion does not occur during any intermediate conversion step before the energy has reached the offshore central platform. A centralized approach for electricity generation is considered through the use of fluid power technology. In the proposed concept the conventional geared or direct-drive power drivetrain is replaced by a positive displacement pump. In this manner the rotor nacelle assemblies are dedicated to pressurize water into a hydraulic network. The high pressure water is then collected from the wind turbines of the farm and redirected into a central offshore platform where electricity is generated through a Pelton turbine. A numerical model is developed to describe the energy conversion process as well as the main dynamic behaviour of the proposed hydraulic wind power plant. The model is able to capture the relevant physics from the dynamic interaction between different turbines coupled to a common hydraulic network and controller. Two case studies are considered in the time-domain simulations for a hypothetical hydraulic wind farm subject to turbulent wind conditions. The performance and operational parameters of individual turbines are compared with those of a reference wind farm with conventional technology turbines, using the same wind farm layout and environmental conditions. 
For the presented case study, results indicate that the individual wind turbines are able to operate within the operational limits with the current pressure control concept. Despite the stochastic turbulent wind conditions and wake effects, the hydraulic wind farm is able to produce electricity with reasonable performance in both below and above rated conditions.
Consequently, the aim of this thesis is to provide designers and design managers with guidelines and insights, which can aid the design and implementation of Smart Product-Service Systems (Smart PSSs) with increased and lasting value for companies and consumers. This information is of relevance for designers because the role that they play in the development of Smart PSSs is likely to increase, just as the presence of these offerings in the market continues to grow. Designers ought to be well prepared for such relatively new design scenarios. It is of great importance that designers understand the particularities of Smart PSSs design, its opportunities and challenges, and the likely contribution of their activities to the development of meaningful value propositions. By doing so designers can contribute to the efficient development of Smart PSSs, and the design of value propositions that are cherished by consumers over time.
To achieve our research aim, two particular perspectives were followed. First, we investigated the aspects influencing the design and definition of Smart PSSs during the development phase. Regarding this perspective, two topics were addressed: the ‘characteristics of Smart PSSs’, and ‘the Smart PSS design process’. These topics were further translated into two specific research questions: What set of design characteristics can designers use while defining Smart PSS value propositions? And, How can designers support the design process of Smart PSSs? The second defined perspective is the effect of design decisions on consumers’ experiences with Smart PSSs. Concerning this perspective, one topic and one research question were addressed. The topic was defined as ‘consumers’ reactions to Smart PSSs’, and the research question stated as follows: How can designers trigger positive consumer responses with Smart PSSs?
The thesis follows a multidisciplinary research approach, building from theories of different fields, such as operations management, design management, service design, and traditional PSS design. Furthermore, the three research questions outlined above were investigated by means of four qualitative studies and one quantitative study, reported in the empirical chapters: Chapter 3, Chapter 4, and Chapter 5.
Chapter 3 focuses on the first research question: What set of design characteristics can designers use while defining Smart PSS value propositions? The research question was investigated by means of two qualitative studies: Study #1-a and Study #1-b. Seven characteristics of Smart PSSs were identified: 1) consumer empowerment, 2) individualization of services, 3) community feeling, 4) individual/shared experience, 5) product ownership, 6) service involvement, and 7) continuous growth. These characteristics can be shaped in various ways, through various features. Importantly, the characteristics of Smart PSSs can be used when defining Smart PSSs at different levels of abstraction, and for different goals during the design process. For example, to define the specifics of individual elements in the system (e.g., features in the e-service), or during co-creation sessions among stakeholders on strategic aspects that can influence the system and its implementation.
Chapter 4 addressed the second research question: How can designers support the design process of Smart PSSs? Three sub-questions were further defined, which guided our research efforts. All these sub-questions were investigated by means of a qualitative approach reported as Study #2.
The first sub-question was the following: What are the elements of the Smart PSS design process? In this regard, we found the design process of Smart PSSs to have much in common with that of traditional PSSs, but also to display distinct differences. Smart PSS design can be described as involving a large number of stakeholders with varying needs and goals towards value propositions. Smart PSSs are highly context dependent, where context helps to define the value propositions for different users. Smart PSS design provides designers with broadened design options on how to define and implement the Smart PSS value proposition due to its multi-touchpoint nature. Furthermore, Smart PSSs are ever-growing, ever-evolving, and this dynamism is translated into a design process that is ongoing.
The second sub-question was stated as follows: What are the challenges of Smart PSS design? In this regard, we found the elements of Smart PSS design to lead to seven challenges of Smart PSS design: 1) defining the value proposition, 2) maintaining the value proposition over time, 3) creating high-quality interactions, 4) creating coherence in the Smart PSS, 5) stakeholder management, 6) the clear communication of design goals, and 7) the selection of means and tools in the design process. Importantly, these challenges are rooted in one or more elements of Smart PSS design outlined above. However, we found the broadened design options of Smart PSS design, and the ever-growing nature of Smart PSSs, to be particularly distinctive of this development context, and to create a complexity in the design process that can be overwhelming for designers.
The third and last sub-question reported in Chapter 4 was the following: What are the designer roles/contributions that help tackle design challenges? Our findings point to five roles/contributions that are being used by designers to tackle design challenges while supporting the Smart PSS design process. Namely, designers were described as: 1) guardians of user experiences, 2) foreseers of future scenarios, 3) integrators of stakeholders’ needs, 4) problem solvers, and 5) visualizers of goals. We found the identified roles/contributions to belong to the set of design skills long discussed by the design community, and to be effective in dealing with the above challenges. Based on these insights, we conclude that the current skill set of designers contributes to dealing with the complexity of the Smart PSS design process. However, designers should be made aware of the distinct elements of Smart PSS design and the design challenges likely to be encountered, so that they can be better prepared and use their skills more effectively.
Chapter 5 reports on the third research question investigated in the research project: How can designers trigger positive consumer responses with Smart PSSs? This question was investigated by means of two distinct studies, namely, Study #3 and Study #4.
The aim of Study #3 was to address the following sub-question: What is the effect of coherence between product and service elements on consumers’ evaluations of Smart PSSs? To this end, an experimental study with consumers was conducted. The effect of coherence was studied by manipulating the symbolic meaning ‘professionalism’ of the product and service elements of a fictional rental car solution. Importantly, potential incoherencies between product and service elements were anticipated to look unreliable in the eyes of consumers and to negatively affect their evaluations of the Smart PSS. Our results validate this assumption and indicate that consumers value the coherence in Smart PSSs. By creating coherence between the elements of the Smart PSS, designers can help evoke a sense of assurance in consumers, which results in a more positive evaluation of the overall offering.
The aim of Study #4 was to address the following two sub-questions: 1) How do consumers’ experiences with Smart PSSs develop over time, and 2) What factors should designers consider when defining user experiences with Smart PSSs? To answer these sub-questions, a longitudinal, qualitative research approach was followed. Overall, users’ experiences with Smart PSSs were found to be complex and cyclic. The multi-touchpoint nature of Smart PSSs was found to be an influential element in how users’ experiences develop. The variety of elements in the system can complicate the understanding of the value proposition of each touchpoint, but also of the Smart PSS as a whole. Furthermore, users’ experiences are cyclic because Smart PSSs offer users the unique possibility to renew their value propositions over time, by means of new elements in the system, features, and content. However, every time the system changes, and users implement changes in their value propositions, they enter an orientation cycle that influences their continued engagement with the Smart PSS.
Finally, we identified four main factors that affect the transition from orientation to incorporation in users’ experiences with Smart PSSs: 1) quality of information, 2) number of options in the system, 3) coherence of functionality, and 4) product attributes. Several features in the Smart PSSs can influence these factors. For example, accuracy of data, and the format in which information is presented, are different features that can influence the quality of information in the system. Furthermore, identified factors and features have been associated with different steps in the temporality of users’ experiences with Smart PSSs.
Overall, it can be concluded that Smart PSSs are complex solutions, for designers and consumers alike. The design of Smart PSSs poses several important challenges, outlined through the empirical studies reported in this thesis. Challenges are rooted in several elements of the Smart PSS design process, and of these, two produce particularly pronounced design complexities: the multi-touchpoint nature and the ever-growing, ever-evolving nature of Smart PSSs. For designers, these elements complicate the definition of the value proposition during the design process. For consumers, they complicate the understanding of the Smart PSS and their interaction with it. Importantly, designers can play important roles and make important contributions to the design process, which tackle specific design challenges and aid in the development of meaningful Smart PSS value propositions for consumers.
In terms of the relevance of our research, Chapter 7 discusses the theoretical contribution and practical implications of our findings. Particularly, research findings are translated into ten design guidelines (practical Do’s and Don’ts) for Smart PSS design. In line with the two perspectives followed in this thesis, these guidelines point to two distinct areas where designers’ roles/contributions gain relevance: the efficacy of the design process, and the creation of meaningful value propositions. Such information is relevant because it can help designers to gauge the need to adapt their best practices (i.e., tools, skills) to the design of Smart PSSs. Furthermore, the guidelines and insights presented in Chapter 7 can help designers to manage and maximize the experience of users, and trigger positive responses, at specific stages of the user experience.
With a plethora of declarations, initiatives, marketing and assessments, there is a need to assess what the stakeholders want in order to make decisions regarding an institution’s sustainability. Ultimately, students are the ones using these sustainability marketing materials to assist in deciding at which institution they will pursue their studies. The sheer volume of interpretations of the word sustainability with regard to higher education institutions leaves ample room for potentially misguided initiatives or marketing.
A universal system for assessing a higher educational institution’s sustainability has not been translated into a measurable reality. It is proposed that a universal system would help create a common understanding of sustainability within higher education institutions and would help in stakeholder understanding, institutional accountability and impactful application of sustainable initiatives.
This research sought to answer whether a holistic framework could be created that would aid stakeholders in reviewing a university’s level of sustainability and, if so, whether this vision of a fully sustainable university could be translated into a measurable reality.
The research was approached in a structured way. Each chapter represents a published and peer-reviewed step towards addressing whether a holistic framework could be created that would aid students in reviewing sustainability tools, assessments and marketing. The qualitative and quantitative conclusions from each chapter influenced the subsequent chapters, eventually leading to the creation and testing of two digital tools. The interpretations of these published chapters are found in the conclusion of this dissertation.
To assist the reader in effectively navigating this dissertation, an overview of the research questions, the methodology, and the summarized results are outlined below in Figure 0.1. A more detailed summary of each of the chapters follows.","sustainability; education institutions; assessment systems","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-50-3","","","","A+BE | Architecture and the Built Environment No 3 (2017)","","","","","Climate Design and Sustainability","","",""
"uuid:5257000e-2c42-4d4e-9fac-2177eae3d6ac","http://resolver.tudelft.nl/uuid:5257000e-2c42-4d4e-9fac-2177eae3d6ac","Maximal regularity for parabolic equations with measurable dependence on time and applications","Gallarati, C. (TU Delft Analysis)","van Neerven, J.M.A.M. (promotor); Veraar, M.C. (copromotor); Delft University of Technology (degree granting institution)","2017","The subject of this thesis is the study of maximal Lp-regularity of the Cauchy problem u'(t)+A(t)u(t)=f(t), t∈ (0,T), u(0)=x.
We assume (A(t))_{t∈ (0,T)} to be a family of closed operators on a Banach space X0, with constant domain D(A(t))=X1 for every t∈ (0,T). Maximal Lp-regularity means that for all f∈ Lp(0,T;X0), the solution of the above evolution problem is such that u', Au are both in Lp(0,T;X0). In the first part of the thesis, we introduce a new operator theoretic approach to maximal Lp-regularity in the case the dependence t→A(t) is just measurable. The abstract method is then applied to concrete parabolic PDEs: we consider equations and systems of elliptic differential operators of even order, with coefficients measurable in the time variable and continuous in the space variables, and we show that they have maximal Lp-regularity on Lq(ℝ^d), for every p,q∈(1,∞). These results give an alternative approach to several PDE results in the literature, where only the cases p=q or q≤p were considered. As a further example, we apply our abstract approach also to higher order differential operators with general boundary conditions, on the half space, under the same assumptions.
The last part of this thesis is based on a different approach and is devoted to the study of maximal Lp-regularity on Lq(ℝ^d_+) of an elliptic differential operator of higher order with coefficients in the class of vanishing mean oscillation both in the time and the space variables, and general boundary conditions of Lopatinskii-Shapiro type.","Integral operators; maximal Lp-regularity; functional calculus; elliptic and parabolic equations; Ap-weights; R-boundedness; extrapolation; quasi-linear PDE; Fourier multipliers; the Lopatinskii–Shapiro condition; mixed-norm","en","doctoral thesis","","978-94-028-0554-3","","","","","","","","","Analysis","","",""
"uuid:5b36ba74-d629-4ee2-9f08-edeb33d5ca59","http://resolver.tudelft.nl/uuid:5b36ba74-d629-4ee2-9f08-edeb33d5ca59","Me against myself: Addressing personal dilemmas through design","Ozkaramanli, D. (TU Delft Design Aesthetics)","Desmet, P.M.A. (promotor); Ozcan Vieira, E. (copromotor); Delft University of Technology (degree granting institution)","2017","You have bought a bag of candy to keep yourself entertained while watching movies in the comfort of your home. Your intention is to keep the candy bag in your cabinet for several weeks, and to only treat yourself with some candy when watching movies. However, you somehow find the bag emptied while watching your first movie. And although eating the delicious candy by the handful was certainly enjoyable, you also feel guilty for finishing the entire bag at once. This is only one example of many dilemmas we encounter in everyday life. In this thesis, dilemmas are defined as experiences with three main ingredients: (1) mutually exclusive choices, (2) conflicting concerns, and (3) mixed emotions. Figure 1 shows the framework of dilemmas, which illustrates these ingredients related to the conflict between enjoying candy while watching a movie versus eating moderately to maintain good health (see Chapter 5). The articulation of these three ingredients enables us to provide a more elaborate definition of dilemmas:
People experience a dilemma when they are faced with two mutually exclusive choices, both of which touch upon their personal concerns, and the simultaneous fulfillment of both choices is challenging, if not impossible, to obtain or achieve. Because of this challenge, people experience both positive and negative emotions toward each alternative.","","en","doctoral thesis","","978-94-6186-789-6","","","","","","","","","Design Aesthetics","","",""
"uuid:ea709745-7238-47e0-90d1-c8381fd34f39","http://resolver.tudelft.nl/uuid:ea709745-7238-47e0-90d1-c8381fd34f39","Computational aeroacoustic approaches for wind turbine blade noise prediction","van der Velden, W.C.P. (TU Delft Aerodynamics)","Bijl, H. (promotor); van Zuijlen, A.H. (copromotor); Delft University of Technology (degree granting institution)","2017","","Wind turbine noise; Aeroacoustics; CAA; CFD","en","doctoral thesis","","978-94-6186-756-8","","","","","","","","","Aerodynamics","","",""
"uuid:39ffac87-c07b-42ae-b706-f3afe69ba21b","http://resolver.tudelft.nl/uuid:39ffac87-c07b-42ae-b706-f3afe69ba21b","A theory of thermodynamics for nanoscale quantum systems","Ng, N.H.Y. (TU Delft Quantum Information and Software)","Wehner, S.D.C. (promotor); Delft University of Technology (degree granting institution)","2017","Thermodynamics is one of the main pillars of theoretical physics, and it has a special appeal of having wide applicability to a large variety of different physical systems. However, many assumptions in thermodynamics apply only to systems which are bulk material, i.e. consisting a large number of microscopic classical particles. Due to the advancement of designing nanoscale engines, especially in the light of devices that are used today in the processing of quantum information, is thermodynamics still applicable? Can we refine the core principles of thermodynamics to suit such nanoscale quantum systems as well? The central aim of this thesis is to construct a theory of thermodynamics that holds for nanoscale quantum systems, even those as small and simple as a single qubit. We do this by starting out from the core basics of quantum theory: unitary dynamics on closed quantum systems. We adapt a resource theoretic approach inspired by quantum information theory, which defines the quantum states and operations allowed to be used in a thermodynamic evolution. With this framework that naturally adopts the first law as an energy preserving condition, we show the refinement of both the zeroeth and second law of thermodynamics. The zeroeth law explains the physical significance of the Gibbs thermal state. On the other hand, we show that the second law sees refinement in the quantum nanoregime: instead of having the free energy as the sole quantity dictating the possibility of a thermodynamic state transition, we derive a family of generalized free energies that also constitute necessary conditions for a transition to occur. 
Moreover, these conditions become sufficient for states which are block-diagonal in the energy eigenbasis. In this thesis, we also take our approach to thermodynamics a step further: we apply our findings on the second laws to analyze the maximum achievable efficiency of quantum heat engines. In classical thermodynamics, the Carnot efficiency has long been known as the theoretical maximum, which does not depend on the specific structure of the thermal baths used, but only on their temperatures. With the additional free energies we discover, we show that although quantum heat engines may achieve the Carnot efficiency, this achievability is no longer independent of the Hamiltonians of the thermal baths. In other words, we find additional restrictions that surface in the study of quantum nanoscale heat engines, which are a direct consequence of the generalized second laws. This provides a deeper understanding of the fundamental limits on how efficient devices can be made in the realm of microscopic quantum systems.","quantum thermodynamics; quantum information theory; resource theories; single-shot work extraction; quantum heat engines","en","doctoral thesis","","978-94-6295-588-2","","","","","","","","","Quantum Information and Software","","",""
"uuid:088f8e60-3e6f-4fcf-85af-be8555beb635","http://resolver.tudelft.nl/uuid:088f8e60-3e6f-4fcf-85af-be8555beb635","The Nano-Aperture Ion Source","van Kouwen, L. (TU Delft ImPhys/Charged Particle Optics)","Kruit, P. (promotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6186-791-9","","","","","","2018-03-27","","","ImPhys/Charged Particle Optics","","",""
"uuid:5ada7892-3836-440e-ac58-451a45524ae0","http://resolver.tudelft.nl/uuid:5ada7892-3836-440e-ac58-451a45524ae0","Metal Organic Frameworks for Gas-phase Capacitive Sensing","Sachdeva, S. (TU Delft OLD ChemE/Organic Materials and Interfaces)","Sudhölter, Ernst J. R. (promotor); Gascon, Jorge (promotor); de Smet, L.C.P.M. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6186-797-1","","","","","","","","","OLD ChemE/Organic Materials and Interfaces","","",""
"uuid:4f467feb-7155-4465-9508-650f3f507847","http://resolver.tudelft.nl/uuid:4f467feb-7155-4465-9508-650f3f507847","Efficient numerical methods for partitioned fluid-structure interaction simulations","Blom, D.S. (TU Delft Aerodynamics)","Bijl, H. (promotor); van Zuijlen, A.H. (copromotor); Delft University of Technology (degree granting institution)","2017","Fluid-structure interaction simulations are crucial for many engineering problems. For example, the blood flow around new heart valves or the deployment of airbags during a car crash are often modeled with fluidstructure interaction simulations. Also, to design safe parachutes, simulations are carried out to model the unsteady deformations of the parachute during a jump. Thus, there is an apparent need for multi-physics software codes which can model fluid-structure interaction problems.
However, current state-of-the-art solvers cannot be used for design or optimization studies of, for example, aircraft structures due to long simulation times. This is mainly caused by the large number of coupling iterations needed to reach convergence within each time step for a strongly coupled fluid-structure interaction simulation. Also, a large number of time steps is required to reach an acceptable accuracy in time for unsteady simulations. Hence, there is an urgent need for efficiency improvements of fluid-structure interaction solvers.
In this thesis, two approaches are investigated to decrease the computational times for a fluid-structure interaction simulation: multi-level acceleration of the coupled problem, and the use of higher order time integration schemes.","fluid-structure interaction; manifold mapping; strongly coupled; spectral deferred correction; higher order time integration","en","doctoral thesis","","","","","","","","","","","Aerodynamics","","",""
"uuid:117aac8c-bd12-48fb-ae93-834f1ee62417","http://resolver.tudelft.nl/uuid:117aac8c-bd12-48fb-ae93-834f1ee62417","Superconducting InSb nanowire devices","Szombati, D.B. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Delft University of Technology (degree granting institution)","2017","Josephson junctions form a two-level system which is used as a building block for many types of superconducting qubits. Junctions fabricated from semiconducting nanowires are gate-tunable and offer electrostatically adjustable Josephson energy, highly desirable in qubit architecture. Studying nanowire weak links is therefore important for future quantum computing applications. The inherent spin-orbit interaction and high g-factor of InSb nanowires promise rich physics when combined with superconductivity, especially when an external magnetic field is applied. In particular, it can give rise to topological state of matter including Majorana bound states, paving the way for a novel type of fault-tolerant topological qubit. Such quantum computation can be realized when Majorana bound states are braided through a network of topological wires. Probing the magnitude and phase of the supercurrent through InSb nanowires provides insight on the feasibility of realizing topological states in these wires. This thesis describes experiments measuring the critical current and density of states of InSb nanowire Josephson junctions which are either voltage- current- or phase-biased, as the chemical potential or magnetic field inside the wire is changed.
In Chapter 3, the critical current through an InSb nanowire with NbTiN electrodes is measured. The critical current can be as high as ∼100 nA but decays rapidly with magnetic field, followed by an aperiodic oscillation. Numerical simulations of the supercurrent through the nanowire show that this supercurrent profile is caused mostly by the interference between the transverse modes carrying the supercurrent inside the nanowire. This so-called orbital effect becomes significant beyond 100 mT, while the spin-orbit and Zeeman interactions become substantial at magnetic fields of the order of 1 T.
The Josephson energy through cross-shaped nanowires, grown by merging individual InSb nanowires, is investigated in Chapter 4. A finite Josephson coupling is measured through all branches of the nanocross, even when the length of the weak link extends beyond 1 μm. This is a requirement for braiding Majorana bound states hosted in such nanowire networks.
In Chapter 5 we build a quantum dot with two superconducting contacts and one normal contact using the three legs of a nanowire cross. The superconducting terminals are joined in a loop such that superconducting interference can be probed by threading a flux. The density of states as a function of voltage bias, dot chemical potential and flux is probed through the quantum dot via the normal lead acting as a tunnel probe. It is revealed that the proximity effect can be turned on and off via both the bias and gate voltage. The pairing amplitude on the dot remains finite for in-plane magnetic field values up to 600 mT, suggesting that the nanowire cross platform is suitable for braiding, since a topological state can be reached at 100-200 mT. As the conductance through the dot is sensitive to the flux through the loop, the device may also be used as a magnetometer converting flux to current with a sensitivity of 1 nA/Φ0.
The superconducting phase across a nanowire quantum dot as a function of the magnitude and direction of a large in-plane magnetic field is investigated in Chapter 6. The nanowire is embedded in a DC-SQUID where one arm consists of a gate-defined quantum dot in the nanowire and the other is a nanowire reference junction, also gate-tunable. By measuring the critical current through the SQUID as a function of the flux and the chemical potential of the dot, we can detect the change of phase through the ground state of the dot. At zero field we measure the 0-π transition of the quantum dot Josephson junction as the ground state parity of the dot changes. When the magnetic field exceeds 100 mT a 0-φ transition is measured, indicating the presence of an anomalous supercurrent flow at vanishing phase difference across the quantum dot. This anomalous current is enabled by the breaking of the chiral symmetry due to spin-orbit interaction in the nanowire and the breaking of time-reversal symmetry by the magnetic field. The phase of the 0-φ transition, or equivalently the magnitude of the anomalous current, can be tuned continuously via the gate underneath the dot. Such a φ0 junction may serve as a phase bias element and have applications in superconducting spintronics.
Chapter 7 focuses on future experiments aiming to detect and control Majorana bound states in a superconducting InSb nanowire. Such devices can be expanded to a braiding circuit, realizing a topological quantum computer.","","en","doctoral thesis","","978-90-8593-291-8","","","","","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:e84894d6-87d2-4006-a8c2-d9fbfacabddc","http://resolver.tudelft.nl/uuid:e84894d6-87d2-4006-a8c2-d9fbfacabddc","Aeolian Sediment Availability and Transport","Hoonhout, B.M. (TU Delft Coastal Engineering)","Stive, M.J.F. (promotor); de Vries, S. (copromotor); Delft University of Technology (degree granting institution)","2017","This thesis explores the nature of aeolian sediment availability and its influence on aeolian sediment transport. The aim is to improve large scale and long term aeolian sediment transport estimates in (nourished) coastal environments. The generally poor performance of aeolian sediment transport models with respect to measurements in coastal environments is often accredited to limitations in sediment availability. Sediment availability can be limited by particular properties of the bed surface. For example, if the beach is moist or covered with non-erodible elements, like shells. If sediment availability is limited, the aeolian sediment transport rate is governed by the sediment availability rather than the wind transport capacity. Aeolian sediment availability is rather intangible as sediment availability is not only affected by aeolian processes, but also by marine and meteorological processes that act on a variety of spatial and temporal scales. The Sand Motor 21 Mm3 mega nourishment is used to quantify the spatiotemporal variations in aeolian sediment availability and its effect on aeolian sediment transport. The Sand Motor was constructed in 2011 along the Dutch coast. Aeolian sediment accumulation in the Sand Motor region is low compared to the wind transport capacity, while the Sand Motor itself is virtually permanently exposed to wind and accommodates large fetches. Aeolian sediment availability is therefore likely to dominate aeolian sediment accumulation. Multi-annual bi-monthly measurements of the Sand Motor's topography are used for a large scale aeolian sediment budget analysis. 
The analysis revealed that aeolian sediment supply from the dry beach area, which is almost permanently exposed to wind, diminished half a year after construction of the Sand Motor. The reduction in aeolian sediment supply is likely due to the development of a beach armor layer. In the subsequent years, two-thirds of the aeolian sediment deposits originate from the low-lying beaches that are frequently flooded and therefore often moist. The importance of the low-lying beaches in the Sand Motor region was tested during a six-week field campaign. Gradients in aeolian sediment transport were measured during the field campaign so as to localize aeolian sediment source and sink areas. A consistent supply from the intertidal beach area was measured that was temporarily deposited at the higher dry beach. The temporary deposits were transported further during high water, when sediment supply from the intertidal beach ceased, resulting in a continuous sediment supply to the dunes. The temporary deposition of sediment at the dry beach was likely promoted by the presence of a berm that affects the local wind shear. Moreover, the berm edge coincided with the onset of the beach armor layer, which might have further promoted deposition of sediment. The measurements on spatiotemporal variations in aeolian sediment availability and supply inspired an attempt to capture the characteristics of aeolian sediment availability in coastal environments in a comprehensive model approach. The resulting model simulates spatiotemporal variations in bed surface properties and their combined influence on aeolian sediment availability and transport. The implementation of multi-fraction aeolian sediment transport in the model introduces the recurrence relation between aeolian sediment availability and transport through self-grading of sediment. The model was applied in a four-year hindcast of the Sand Motor mega nourishment as a first field validation. 
The model reproduces well the multi-annual aeolian sediment erosion and deposition volumes, as well as the relative importance of the intertidal beach area as a source of aeolian sediment. Seasonal variations in aeolian sediment transport are occasionally missed by the model. The model accuracy is reflected in an R2 value of 0.93 when comparing time series of measured and modeled total aeolian sediment transport volumes in the four years since construction of the Sand Motor. The results suggest that significant limitations in sediment availability, due to soil moisture content and beach armoring, indeed govern aeolian sediment transport in the Sand Motor region. A comparison with a simulation without limitation in sediment availability suggests that aeolian sediment availability in the Sand Motor region is limited to about 25% of the wind transport capacity. Moreover, both spatial and temporal variations in aeolian sediment availability as well as the recurrence relation between aeolian sediment availability and transport are essential for accurate long term and large scale aeolian sediment transport estimates.","aeolian; sediment availability; sediment transport; sand motor; field measurements; numerical modeling; aeolis","en","doctoral thesis","","978-94-6332-152-5","","","","","","","","","Coastal Engineering","","",""
"uuid:29a2b04a-a38f-4919-bf58-50d40b4ebaed","http://resolver.tudelft.nl/uuid:29a2b04a-a38f-4919-bf58-50d40b4ebaed","Simulating co-diffusion of innovations: Feedback technology & behavioral change","Jensen, T. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Chappin, E.J.L. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6186-785-8","","","","","","","","","Energie and Industrie","","",""
"uuid:6e798054-211f-4396-8a76-eeaf41aba53e","http://resolver.tudelft.nl/uuid:6e798054-211f-4396-8a76-eeaf41aba53e","A Methodical Approach on Conceptual Structural Design","Horikx, M.P. (TU Delft Integral Design & Management)","van der Horst, A.Q.C. (promotor); de Ridder, H.A.J. (promotor); Delft University of Technology (degree granting institution)","2017","The subject of this academic research thesis is a methodical approach on the complex problem-solving process of structural conceptual design.
For this relatively unexplored problem, exploratory research is conducted by systematically zooming in from the whole to the part and from coarse to fine. The methodical approach to conceptual structural design explored in this way leads to a controlled build-up of insight into the behaviour of the structure and supports the successive design decisions during the conceptual design phase.","Structural engineering; Conceptual design; Control","en","doctoral thesis","","978-94-6186-763-6","","","","","","","","","Integral Design & Management","","",""
"uuid:064f0a35-5d76-42e8-a1ad-3afb5916dd3c","http://resolver.tudelft.nl/uuid:064f0a35-5d76-42e8-a1ad-3afb5916dd3c","Data-driven modelling of protein synthesis: A sequence perspective","Gritsenko, A. (TU Delft Pattern Recognition and Bioinformatics)","de Ridder, D. (promotor); Reinders, M.J.T. (promotor); Delft University of Technology (degree granting institution)","2017","Recent advances in DNA sequencing, synthesis and genetic engineering have enabled the introduction of choice DNA sequences into living cells. This is an exciting prospect for the field of industrial biotechnology, which aims at using microorganisms to produce foods, beverages, pharmaceuticals and fine- and bulk chemicals in a sustainable fashion. Biotechnologists often achieve this by genetically engineering these microorganisms to introduce novel production pathways using genes found in other strains or species. However, detailed understanding of gene expression regulation remains elusive, especially at the level of translation; thus, when it comes to writing DNA to express proteins at user-specified levels, we are still miles away.
Second generation DNA sequencing technologies have made it easy and affordable to reconstruct the genomes of industrially relevant microbes, thus providing better reference sequences for genetic engineering. However, technological limitations allow for reconstructing only parts of the entire genomes unambiguously, thus requiring additional scaffolding steps to obtain genome-length reconstructions. We propose a method that improves genome scaffolding by integrating heterogeneous sources of information on genome contiguity. This method improves the quality of genome reconstructions at the cost of a limited number of additional errors.
The ease and affordability of DNA sequencing has also led to the development of a number of biological assays which exploit sequencing, among which the ribosome profiling assay. This assay allows for unprecedented examination of the process of protein synthesis by recording positions of actively translating ribosomes across thousands of living cells. We employed these data to develop data-driven models of Saccharomyces cerevisiae protein synthesis. A relatively simple model was used to re-design genes for heterologous expression; a second, more complex model yielded insights into the process of translation. Our models suggest that protein synthesis is limited at the stage of initiation, and that codon translation rates are not determined by tRNA levels alone but appear to be sequence context-dependent.
Finally, the combination of DNA synthesis and sequencing offers the possibility to perform high-throughput in vivo assays to study the effect of user-designed sequences. We used this approach to study translation initiation at Internal Ribosome Entry Sites (IRESs). We identified short sequence elements predictive of IRES activity in viruses and humans, and obtained insights into the effect of element sequence, multiplicity and position on IRES activity. We propose a high-level architecture of viral and cellular IRESs, and offer a mechanistic explanation for differences between IRES architectures of different virus types.","translation; protein synthesis; sequence modelling; sequence analysis; cap-independent translation","en","doctoral thesis","","978-94-028-0559-8","","","","","","","","","Pattern Recognition and Bioinformatics","","",""
"uuid:5f094e4b-fef9-4216-abbe-3277adc90b28","http://resolver.tudelft.nl/uuid:5f094e4b-fef9-4216-abbe-3277adc90b28","Sediment dynamics on intertidal mudflats: A study based on in situ measurements and numerical modelling","Zhu, Q. (TU Delft Coastal Engineering)","Wang, Zhengbing (promotor); Yang, S.L. (promotor); van Prooijen, Bram (copromotor); Delft University of Technology (degree granting institution)","2017","Tidal flats provide essential ecosystem services (e.g. coastal protection and function as habitats sustaining coastal food webs). They are also under pressure due to climate change and human interventions. To investigate the sediment dynamic processes on tidal flats for a better understanding of the morphological development, in situ measurements were carried out on three tidal flats in the Yangtze Estuary (China) and in the Westerschelde Estuary (the Netherlands). The results show that erosion and deposition stages alternate due to the competition between hydrodynamic forces and bed strength, and due to variability of suspended sediment concentration. A bed-level change (BLC) model has been developed based on this understanding. Sediment dynamic processes on tidal flats are governed by tidal forcing exhibiting neap-spring variation, as well as by random wind events. The impacts from wind events were investigated by an integrated approach based on in situ measurements and the BLC model. This integrated approach also helped to reveal the spatial variability of bed erosional characteristics of tidal flats under influence of diatoms. This work not only improves the prediction of morphological changes of tidal flats, but also has implications for coastal engineering and for studies of coastal ecology and the coastal environment.","","en","doctoral thesis","","978-94-6295-607-0","","","","","","","","","Coastal Engineering","","",""
"uuid:6d1d906e-f37e-4e3f-85de-bf8fe68bd8d8","http://resolver.tudelft.nl/uuid:6d1d906e-f37e-4e3f-85de-bf8fe68bd8d8","Transversal waves and vibrations in axially moving continua","Gaiko, N. (TU Delft Mathematical Physics)","van Horssen, W.T. (promotor); Delft University of Technology (degree granting institution)","2017","","Axially moving continua; boundary damping; dynamic stability analysis; singular perturbation; perturbation methods; resonance","en","doctoral thesis","","","","","","","","","","","Mathematical Physics","","",""
"uuid:5dac3509-019c-40c8-a1bf-7db9b7b41f6d","http://resolver.tudelft.nl/uuid:5dac3509-019c-40c8-a1bf-7db9b7b41f6d","Modular molecular gels: Control over design, formation and properties","Poolman, J.M. (TU Delft ChemE/Advanced Soft Matter)","van Esch, J.H. (promotor); Eelkema, R. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","","","","","","","","","","ChemE/Advanced Soft Matter","","",""
"uuid:9c25df0e-2df0-4d30-b9aa-d95a31fcaafd","http://resolver.tudelft.nl/uuid:9c25df0e-2df0-4d30-b9aa-d95a31fcaafd","Moisture damage susceptibility of asphalt mixtures: Experimental characterization and modelling","Varveri, Aikaterini (TU Delft Pavement Engineering)","Scarpas, Athanasios (promotor); Delft University of Technology (degree granting institution)","2017","A well-functioning, long-lasting and safe highway infrastructure network ensures the mobility of people and facilitates the transport of goods, promoting thus environmental, economic, and social sustainability. The development of sustainable highway infrastructure requires, among other activities, the construction of pavement systems with enhanced durability. Moisture damage in asphalt pavements is associated with inferior performance, unexpected failures and reduced service life. All of these contribute to the increase of operational and maintenance costs in order to fulfill the intended service life of the pavement system. Moreover, global warming and climate change events such as temperature extremes, high mean precipitation and rainfall intensity may further increase the probability and rate of pavement deterioration.
This dissertation aims to obtain an advanced understanding of the influence of moisture on pavement durability by developing a set of tools, i.e. experimental methods and computational models, which provide insight into the fundamental moisture damage processes and their impact on pavement systems. Based on this knowledge, researchers and practitioners will be able not only to design pavements with increased resiliency, thereby providing reliable services to road users, but also to minimize the risks in the face of changing climate conditions.
Moisture diffusion is well known to degrade the mechanical properties of asphalt mortars, composed of bitumen, filler and sand, thus increasing the propensity of pavements to cracking. To determine the changes in the cohesion properties of the mortar, uniaxial tension tests were performed. Mortar samples were prepared and then subjected to five combinations of moisture and thermal conditioning, in an attempt to reproduce the various conditioning states that pavements undergo in the field, before being tested. Tensile strength and fracture energy were used to evaluate the changes in mechanical properties due to the various conditioning protocols. To post-process the experimental data, a new data analysis procedure was proposed in order to obtain a more accurate calculation of fracture energy. The procedure uses nonlinear finite element analysis to specify the unloading response outside the fracture zone, and then utilizes this information to compute the fracture energy of the binders. This methodology yields a framework for the calculation of fracture energy when only force-displacement data are available and therefore the estimation of the true stress-strain curve is not feasible.
The experimental investigation revealed the deteriorating impact of moisture on the fracture characteristics of asphalt mortars, especially with regard to their low temperature properties. These effects were not reversible upon drying. On the contrary, the application of a drying cycle caused embrittlement of the mortars and indicated that continuous wetting and drying cycles in the field may result in materials with poor performance characteristics. Also, the application of freeze-thaw cycles was shown to increase the susceptibility of mortars to low temperature cracking. Nevertheless, on the whole, the effect of freeze-thaw on fracture properties was observed to depend on the conditioning state (dry or wet) and composition of the mortars. The use of additives, such as hydrated lime filler and SBS modifiers, was found to improve the wet strength and fracture energy of the mortars. On the basis of moisture uptake measurements, it was confirmed that the chemical composition significantly influences the diffusivity characteristics of the mortars. Also, the maximum moisture uptake was found to be the main parameter that dictates the intensity of mortar damage.
In addition, moisture susceptibility was studied at the mixture level. At this level, besides moisture diffusion, excess pore pressure can contribute to the degradation of mixture performance depending on the mixture type, the traffic loading and the environmental conditions. Hence, a moisture conditioning protocol that comprises two conditioning types, namely bath immersion and pore pressure application, was proposed for evaluating the susceptibility of asphalt mixtures to moisture. Also, evidence of the effect that dynamic pore pressure has on mixture degradation was collected by means of X-ray computed tomography and image analysis techniques. The two damage mechanisms were found to be relatively independent of each other, suggesting that an asphalt mixture can be more prone to one damage mode than the other, depending on its composition. The proposed protocol captures both processes that occur when water interacts with a pavement and can provide more reliable conclusions with regard to mixture sensitivity.
In order to improve our understanding of the influence of material microstructures on the moisture sensitivity of the asphalt composite, an energy-based elasto-visco-plastic model with softening was implemented to model damage due to the coupled effects of moisture diffusion and mechanical loading. The model consists of a generalized Maxwell model, with hyperelastic springs and viscous time-dependent components, in series with an inelastic component that accounts for the irreversible processes within the microstructure of the material. Then, a computational scheme was proposed by means of a staggered approach: first a three-dimensional diffusion model was applied to obtain information on the accumulation of moisture within the mixtures, and then the elasto-visco-plastic model was used to quantify mortar damage due to moisture diffusion. This method was successfully applied to study the influence of mixture morphology on moisture sensitivity. The results demonstrated that the moisture content in a mixture strongly depends on its morphology, whereas the interconnectivity of the voids network controls the rate of damage development. Also, the analysis revealed the positive effect of using binders with high resistivity against moisture and quantified the benefits that would arise from this choice, especially when designing porous mixtures, which have an intrinsic sensitivity to moisture due to their morphological characteristics.
More broadly, frost damage can be classified as part of the moisture damage related mechanisms. In the field, frost damage can be mainly attributed to the expansion of water accumulated in the pores of the pavement at sub-zero temperatures that causes additional stresses to the pavement structure. A numerical scheme to simulate frost damage was proposed. This scheme comprises a model that simulates the volume expansion of water during the water-to-ice phase-change, a thermal conduction model to simulate temperature distribution in the pavement, and the elasto-visco-plastic model to determine critical areas with a propensity to cracking on the basis of the pavement stresses.
In conclusion, this thesis contributes to establishing a relationship between the physico-mechanical properties of the constituent materials and mixture morphology on the one hand, and the moisture susceptibility of pavement structures on the other. The proposed experimental methods and computational models can serve as tools to investigate a great variety of parameters before a pavement structure is actually built. This allows new materials and mixture designs to be investigated and the risks involved with their use to be minimized.
Development of an experimentally validated numerical Fluid-Structure-Interaction model for the prediction of, operating condition induced, diaphragm deformation in piston diaphragm pumps","Fluid structure interaction; Piston diaphragm pump; Positive displacement pump","en","doctoral thesis","","978-9-49-183715-9","","","","","","","","","Offshore and Dredging Engineering","","",""
"uuid:9efb571c-0441-4690-84ca-7c5d5e8bfea6","http://resolver.tudelft.nl/uuid:9efb571c-0441-4690-84ca-7c5d5e8bfea6","Aerodynamic Interaction between Propeller and Vortex","Yang, Y. (TU Delft Flight Performance and Propulsion)","Veldhuis, L.L.M. (promotor); Eitelberg, G. (promotor); Delft University of Technology (degree granting institution)","2017","","Propeller; Vortex; ground vortex; Blade vortex interaction","en","doctoral thesis","","9789462335806","","","","","","","","","Flight Performance and Propulsion","","",""
"uuid:ffb27f86-a5c7-43ac-8f10-96fe9f43a950","http://resolver.tudelft.nl/uuid:ffb27f86-a5c7-43ac-8f10-96fe9f43a950","Fire sprinklers and water quality in domestic drinking water systems: A novel approach to improve public safety in homes","Zlatanovic, L. (TU Delft Sanitary Engineering)","van der Hoek, J.P. (promotor); Vreeburg, J.H.G. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6186-793-3","","","","","","","","","Sanitary Engineering","","",""
"uuid:f6059ab2-adb2-4647-81be-09847fa9bd9f","http://resolver.tudelft.nl/uuid:f6059ab2-adb2-4647-81be-09847fa9bd9f","Optimization-based adaptive optics for optical coherence tomography","Verstraete, H.R.G.W. (TU Delft Team Raf Van de Plas)","Verhaegen, M.H.G. (promotor); Kalkman, J. (copromotor); Wahls, S. (copromotor); Delft University of Technology (degree granting institution)","2017","Optical coherence tomography (OCT) is a technique for non-invasive imaging based on low coherence interferometry. Its main application is found in ophthalmology, where it is used for 3D in vivo imaging of the cornea and the retina. OCT has evolved over the past decade as one of the most important ancillary tests in ophthalmic practice, providing great diagnostic value for disease screening and monitoring. In retinal OCT imaging, the lateral resolution is not determined by the pupil size, but instead it is limited by optical wavefront aberrations of the cornea and lens. These aberrations reduce the OCT image resolution and lower the signal to noise ratio. To obtain high quality OCT images the optical aberrations can be removed using adaptive optics (AO).
In general, AO consists of an adaptive optical element and a wavefront sensor. The adaptive element, such as a deformable mirror, is used to reshape the wavefront and remove the undesired aberrations. The wavefront sensor measures the aberrations by reconstructing the phase of the wavefront, which is used to determine the correction applied to the wavefront by the deformable mirror. However, the use of a wavefront sensor has some disadvantages. It requires light to be directed out of the imaging path onto the wavefront sensor. This leads to a loss of signal in the imaging path and can result in non-common optical path errors in the aberration estimation procedure. Additionally, the use of a deformable mirror and a wavefront sensor leads to a bulky and expensive OCT setup.
The work presented in this thesis has the goal of reducing the cost and bulkiness of an AO-OCT system. First, we investigate the influence of optical wavefront aberrations on the OCT signal strength. Establishing the relation between aberrations and the OCT signal strength is key to estimating and correcting the aberrations based on single OCT scans. By using Fresnel optical wave propagation and determining the fiber coupling efficiency, we find that the OCT transfer function, i.e. the function that expresses the relation between the aberrations and OCT signal strength, is quasi-convex. We determine both analytically and experimentally the transfer function for both reflective and scattering media, such as a mirror and a Scotch tape sample. Additionally, if the OCT system and its optical properties are well known, we demonstrate a method to correct a defocus aberration in one step.
Second, we use the OCT transfer function to develop an efficient wavefront sensorless (WFSL) AO optimization procedure. WFSL-AO methods aim to correct the aberrations without using a wavefront sensor, instead basing the determination of the wavefront on the imaging signal itself. This eliminates the wavefront sensor, its extra cost and its disadvantages from an AO-OCT setup. To keep up with the OCT imaging rate, which is of the order of several tens of kHz, the algorithm has to be computationally efficient. Furthermore, there are no analytic derivatives available for the optimization and the OCT signal is very noisy. Finally, the derivative-free optimization algorithm also has to be able to determine the aberrations accurately when dealing with a minimum number of noisy measurements. We developed the Data-based Online Nonlinear extremum-seeker (DONE) algorithm. At every iteration, the DONE algorithm updates a surrogate function, based on random Fourier expansions (RFE) of the OCT transfer function, with a new OCT signal measurement. The optimum of the RFE surrogate function is then found with a well-known (quasi-Newton) optimization method. We demonstrate the effectiveness of the DONE algorithm compared to other optimization algorithms for WFSL-AO on biological and non-biological samples. We conclude that DONE has a smaller convergence error, while maintaining similar or faster convergence speeds compared to the other algorithms.
Third, we demonstrate a fully functional WFSL-AO OCT setup for retinal imaging. We use a state-of-the-art deformable lens with 18 actuators, rather than a deformable mirror, which leads to a smaller and more integrated WFSL-AO setup. The WFSL-AO OCT setup is successfully used for in vivo retinal OCT imaging and demonstrates that the DONE algorithm can remove the ocular wavefront aberrations with the deformable lens during in vivo OCT imaging. By developing a new algorithm and exploring the options for adaptive components, we have succeeded in realising retinal WFSL-AO OCT.
In a broader perspective, we show that the DONE algorithm is suitable for applications other than WFSL-AO OCT. We demonstrate that the DONE derivative-free optimization algorithm is robust to noisy measurements for applications in robotics, microscopy and optical beam forming networks.","Adaptive Optics; Optical coherence tomography; Optimization; wavefront sensorless adaptive optics; OCT; derivative-free optimization","en","doctoral thesis","","978-94-92516-40-4","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:69cdb100-ccda-4c96-95b1-692f17dec0dc","http://resolver.tudelft.nl/uuid:69cdb100-ccda-4c96-95b1-692f17dec0dc","Sustainable Food by Design: Co-design and Sustainable Consumption Among the Urban Middle Class of Vietnam","de Koning, J.I.J.C. (TU Delft Design for Sustainability)","Brezet, J.C. (promotor); van Engelen, J.M.L. (promotor); Crul, M.R.M. (copromotor); Delft University of Technology (degree granting institution)","2017","Growing unsustainable consumption in Vietnam is a pressing issue, especially in urban areas. The effects of rapid economic growth, industrialization and increasing wealth, in combination with a young, growing population, mean that the middle class of Vietnam is on the rise. This movement within the population makes room to form and introduce new consumption patterns; patterns that are both sustainable and adapted to the improving living standards.
This thesis points out that food is the most promising category from which to start building these new consumption patterns. In Vietnam, both consumers and producers are looking for ways to make their practices sustainable. Design can help build and give form to new behaviour patterns, products and services. However, creating more trust and understanding between the Vietnamese food consumers and producers is essential. Co-design specifically could enable the creation of trust and understanding as well as create a learning environment, ultimately leading to a better adapted, more attractive and sustainable food system in Vietnam.","Sustainability; Sustainable lifestyles; Sustainability consumption; Co-design; Co-creation; Emerging markets; Food; Vietnam; Middle class; Consumer behavior; Innovation","en","doctoral thesis","","9789065624079","","","","","","","","","Design for Sustainability","","",""
"uuid:1dbabb9d-a825-4dc0-9927-d1bf8d03ae93","http://resolver.tudelft.nl/uuid:1dbabb9d-a825-4dc0-9927-d1bf8d03ae93","Strategic Modeling of Global Container Transport Networks: Exploring the future of port-hinterland and maritime container transport networks","Halim, R.A. (TU Delft Transport and Logistics)","Tavasszy, Lorant (promotor); Kwakkel, J.H. (copromotor); Delft University of Technology (degree granting institution)","2017","Uncertainties in future global trade flows due to changes in trade agreements, transport technologies or sustainability policies, will affect the patterns of global freight transport and, as a consequence, also affect the demand for major freight transport infrastructures such as ports and hinterland networks. Policy makers face the challenge of making robust policies and investments that sustain and promote economic development amidst the various uncertainties. This thesis proposes a set of empirically grounded quantitative models of global freight transport that can support strategic decision making about investments in freight transport infrastructures. We specify, estimate and validate these models for both maritime and hinterland transport, and apply them in comprehensive analyses of the EU’s and the global container transport networks.","global container transport; freight transport modeling; port-hinterland transport networks; global maritime networks; uncertainty; scenario discovery","en","doctoral thesis","TRAIL Research School","978-90-5584-220-9","","","","TRAIL Thesis Series no. T2017/1, the Netherlands Research School TRAIL","","","","","Transport and Logistics","","",""
"uuid:8e71db73-d77d-4e53-ae13-33add0a9c5aa","http://resolver.tudelft.nl/uuid:8e71db73-d77d-4e53-ae13-33add0a9c5aa","Semiconductor Nanowire Josephson Junctions: In the search for the Majorana","van Woerkom, D.J. (TU Delft QRD/Kouwenhoven Lab)","Kouwenhoven, Leo P. (promotor); Delft University of Technology (degree granting institution)","2017","Due to the collective behavior of electrons, exotic states can appear in condensed matter systems. In this PhD thesis, we investigate semiconducting nanowire Josephson junctions that potentially have Majorana zero modes (MZM) as exotic states. MZM are expected to form a robust quantum bit and quantum operations are done by interchange, otherwise known as braiding. The presence of MZM in a Josephson junction creates a topological junction, with properties which are drastically different from a normal Josephson junction. Understanding such topological junctions is of key importance in developing circuits for MZM braiding.","Josephson junctions; Andreev bound state; InSb; InAs; Majorana; semiconductor nanowire","en","doctoral thesis","","978-90-8593-282-6","","","","","","","","","QRD/Kouwenhoven Lab","","",""
"uuid:16b23e6e-8e04-49b7-b487-90543c3bd139","http://resolver.tudelft.nl/uuid:16b23e6e-8e04-49b7-b487-90543c3bd139","Tailoring the Hydrogen Detection Properties of Metal Hydrides","Boelsma, C. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Dam, B. (promotor); Delft University of Technology (degree granting institution)","2017","Hydrogen plays an essential role in many sectors of the industry. For example, hydrogen is necessary to produce ammonia, it can be used to determine the quality of products (hydrogen is produced during food ageing), or it can be used in medical diagnostics (e.g. for lactose intolerance). In addition, hydrogen will play an important role as an energy carrier in a sustainable economy. As hydrogen is a colorless, odorless, and tasteless gas with a low ignition energy combined with a high energy output, hydrogen detection is essential.
We find that thin film metal hydrides are extremely suitable for hydrogen detection. The optical change, combined with the possibility to change the pressure range by changing the layer thickness, the interfaces, or the alloy fraction, indicates the large flexibility to adjust the sensing properties to the safety requirements. In addition, the discovery of the extraordinary hydrogen sensing properties of Mg-Zr-H and, in particular, Hf-H shows that there are still many new sensing possibilities available to explore.
Inland vessels should be designed in such a way that they are always capable of manoeuvring without significantly harming the cost-effectiveness of operations. One of the biggest differences between seagoing ships and inland vessels is the rudder configuration. Conventionally, seagoing ships have single-rudder configurations, while inland vessels have more complex multiple-rudder configurations. Although multiple-rudder configurations can have a positive effect on manoeuvrability, they often have a negative effect on resistance and, therefore, also on fuel consumption.
The quantitative impacts of the rudder configuration on ship manoeuvrability are not yet fully understood, especially for multiple-rudder configurations with complex rudder profiles. Differences in the rudder configuration may significantly change the ship manoeuvring behaviour and therefore require further research. Moreover, the existing manoeuvring tests and standards for inland vessels, which are needed to compare and evaluate the manoeuvring performance of vessels with different configurations, are less elaborate than those for seagoing ships. These considerations lead to the following main research question: What are the proper rudder configurations to achieve well manoeuvrable inland vessels without significant loss of navigation efficiency?
The main research question of this thesis can be answered through resolving four key research questions as follows:
Q1. What are the practical manoeuvres to evaluate and compare the manoeuvring performance of inland vessels?
Q2. How does the rudder configuration affect the rudder hydrodynamic characteristics?
Q3. How do changes in the rudder configuration affect the ship manoeuvrability in specific manoeuvres?
Q4. How to choose a proper rudder configuration according to the required manoeuvring performance?
An accurate estimation of rudder forces and moments is needed to quantify the impacts of the rudder configurations on ship manoeuvring performance. This thesis applied Reynolds-Averaged Navier-Stokes (RANS) simulations to obtain rudder hydrodynamic characteristics and integrated the RANS results into manoeuvring models. Additionally, new manoeuvres and criteria have been proposed for prediction and evaluation of inland vessel manoeuvrability. Simulations of ships with various rudder configurations were conducted to analyse the impacts of rudder configurations on ship manoeuvrability in different classic and proposed test manoeuvres. Accordingly, guidance on rudders for inland vessel manoeuvrability has been summarised for practical engineers to make proper design choices.
Through the research presented in this thesis, it is clear that different rudder configurations have different hydrodynamic characteristics, which are influenced by the profile, the parameters, and the type of a specific configuration. New regression formulas have been proposed for naval architects to quickly estimate the rudder-induced forces and moments in manoeuvring. Furthermore, an integrated manoeuvring model has been proposed and validated for both seagoing ships and inland vessels. Using the proposed regression formulas and manoeuvring model, the impacts of rudder configurations on inland vessel manoeuvrability have been studied.
The manoeuvring performance of a typical inland vessel can be improved by 5% to 30% by changing the rudder configuration. The rudder configuration should be capable of providing sufficient manoeuvring forces and should then be optimised to reduce the rudder-induced resistance. In general, well-streamlined profiles are good for efficiency but not as good as high-lift profiles for effectiveness. In summary, the ship manoeuvring performance can be improved by using effective profiles, enlarging the total rudder area, accelerating the rudder inflow velocity, increasing the effective rudder aspect ratios, and enlarging the spacing among multiple rudders.","inland vessels; inland vessel manoeuvrability; ship manoeuvrability; rudder configurations; manoeuvring simulations; rudder profiles; rudder parameters; rudder design; rudder hydrodynamic characteristics; Computational Fluid Dynamics (CFD)","en","doctoral thesis","","978-94-6233-5622","","","","","","2021-08-27","","","Ship Design, Production and Operations","","",""
"uuid:f46c5e6e-a21c-4bc2-b8ca-6175897e60e5","http://resolver.tudelft.nl/uuid:f46c5e6e-a21c-4bc2-b8ca-6175897e60e5","User capacities and operation forces: Requirements for body-powered upper-limb prostheses","Hichert, M. (TU Delft Biomechatronics & Human-Machine Control)","Veeger, H.E.J. (promotor); Plettenburg, D.H. (copromotor); Abbink, D.A. (copromotor); Delft University of Technology (degree granting institution)","2017","In the Netherlands, approximately 3750 persons have an arm defect: they miss (part of) their hand, forearm or even their entire arm. The majority of these people are in possession of a prosthesis. This prosthesis can be purely cosmetic, or offer the user some grasping function. The latter can be either a body-powered or a myo-electric prosthesis. A myo-electric prosthesis is controlled by electrical signals originating from the contraction of muscles of the user and is powered by electric motors. Body-powered prostheses are operated by body movements, which are captured by a harness and transmitted through a Bowden cable to the prehensor. Unfortunately, 23-45% of the users are so dissatisfied with their chosen prosthesis that they decide not to wear it. Thus, prostheses need to be improved.
This thesis focuses on the improvement of body-powered prostheses, which offer several advantages compared to myo-electric prostheses: they are much lighter, cheaper and more reliable and – perhaps most importantly – offer the user extended proprioceptive feedback about the prehensor’s movements and exerted grip force. On the downside, body-powered prostheses currently require high operation forces, causing pain and fatigue during or after use, and potentially limiting the inherent advantages in perception and control. Additionally, users complain about the comfort and outer appearance of the harness, the design of which still looks like that of the Count of Beaufort in 1860.
Lowering the operation forces will most likely increase the pinch force control accuracy and reduce fatigue and pain during or after operation, and therefore improve the prosthesis’ functionality. The level to which cable forces need to be lowered is as yet unknown; it is assumed that lowering operation forces is effective, but only up to the point where the control forces are still clearly distinguishable from noise (such as inefficiencies in the prehensor or cable friction).
The goal of this thesis is to quantify the perception and control capabilities of prosthesis users as a function of body-powered prosthesis design elements, such as mechanical properties of the prehensor, or an alternative harness. The obtained quantified understanding is intended to guide improvements in body-powered prosthesis design, to enhance the quality of life of upper-limb prosthesis users and to prevent (repetitive strain) injuries.
First, a range of maximum cable operation forces between 87 N and 538 N was established for a representative group of prosthesis users (Chapter 2). When the corrected values for fatigue-free operation (20% of the individually measured maximum force) were compared to the required operation forces of ten commercially available body-powered prostheses, it was concluded that only one of these could be operated fatigue-free. Based on the available results, cable forces should not exceed 38 N for the average female, and 66 N for the average male for most activities in daily life, to enable users to operate their prosthesis fatigue-free.
A second study investigated the effect of cable operation forces (15 N versus 51 N) on the ability to transport a test object (Chapter 3). The object was a mechanical egg: too high cable forces would ‘break’ the object; too low cable forces would cause the operator to drop it. The results indicated that the egg was transferred successfully more often at the low cable operation force setting than at the high force setting.
A third study investigated users’ perception and control abilities by utilizing a force reproduction task (Chapter 4). For successful object manipulation we desire a small difference between the intended and actually applied force on an object, as well as only minor fluctuations in the applied force level. In a force reproduction task the force reproduction error resembles the difference between the intended and actually applied force, whereas the force variability indicates the force fluctuations. The results showed a decreasing force reproduction error with increasing cable excursions for force levels of 10 and 20 N, and a decreasing force variability for decreasing operation force levels varying between 10 and 40 N. Thus, low force levels and large cable excursions contribute to improved force perception and control.
In the fourth and final study an alternative harness design, the Ipsilateral Scapular Cutaneous Anchor System, was compared with the traditional figure-of-nine harness, as comfort of the harness was identified as being an issue in body-powered prostheses (Chapter 5). In terms of perception and control capacities of users, no differences between the two systems were found for operation forces ranging from 10 to 40 N. It could thus be concluded that the Anchor system appears to be a valid alternative to the traditional harness at low operation force levels, as performance is comparable while comfort is reportedly better.
In conclusion, this thesis shows that the operation forces which prosthesis users are required to exert are an important factor in body-powered prosthesis design. For most commercially available body-powered prostheses, the control cable forces are too high to be used on a daily basis. To enable users to operate a body-powered prosthesis fatigue-free during the day – every day – with the provision of high quality feedback and adequate prehensor control, operation forces should not exceed 38 N for the average female and 66 N for the average male user. A long operation movement stroke and thus a large cable excursion does contribute to increased prehensor control. For the suggested low operation force levels the Ipsilateral Scapular Cutaneous Anchor System provides a good alternative for the traditional harness.","","en","doctoral thesis","","978-94-6186-764-3","","","","","","","","","Biomechatronics & Human-Machine Control","","",""
"uuid:24ee6615-2555-4b64-8950-a77c9d969806","http://resolver.tudelft.nl/uuid:24ee6615-2555-4b64-8950-a77c9d969806","Reliability of long heterogeneous slopes in 3d: Model performance and conditional simulation","Li, Y. (TU Delft Geo-engineering)","Hicks, M.A. (promotor); Delft University of Technology (degree granting institution)","2017","Highway embankments, river dykes and sea dykes usually have a uniform cross-section and extend for a long distance in the third dimension. These long soil structures are generally characterised by spatially varying soil properties, i.e. soil heterogeneity. Slope stability failures of these structures may have significant economic and societal consequences. Thus, it is of particular interest for engineers to investigate the influence of soil spatial variability on the stability and failure mechanisms of long ‘linear’ structures. For example, earthen levee flood protection systems can be viewed as series systems, where failure at one location, or failure of one component, can result in catastrophic failure of the entire flood protection system and lead to tragic loss of life, damage to fundamental infrastructure, and substantial economic impact to the immediate and surrounding regions. In order to ensure the desired level of flood protection system performance, standards in the Netherlands explicitly require probabilistic designs. For example, this may include the use of semi-analytical tools, such as Calle’s 2.5D method and the 3D method of Vanmarcke. However, these (semi-) analytical models make certain simplifying assumptions; in particular, that of a finite length cylindrical failure mechanism. The random finite element method (RFEM) has found increasing use for long engineered slopes in recent years, due to its conceptual simplicity and its capability to comprehensively analyse the effects of soil spatial variability.
As a simulation method, RFEM can be applied to large and complex systems, without the need to include some of the rigid idealisations and/or simplifications necessary for analytical solutions, resulting in more realistic models. Therefore, RFEM can be used as a comparative tool in investigating the performance of simpler methods. Its biggest disadvantage is that it tends to be computationally expensive. The main body of this thesis is devoted to comparative studies (in terms of statistics of the realised factor of safety, the reliability and the failure consequences with respect to potential failure length and volume) of the three above models, for a range of spatial statistics of the soil shear strength. In particular, the relative performance of RFEM and Vanmarcke’s model is investigated for a relatively short slope (of length 10 times the height) for which the length effect may be ignored. For horizontal scales of fluctuation that are large compared to the slope length, the two approaches give similar results, because most of the failure surfaces computed in the RFEM analyses are then approximately cylindrical and propagate along the entire length of the slope, thereby matching Vanmarcke’s assumption and resulting failure length for this limiting condition. In contrast, for smaller values (i.e. less than the slope length), the two approaches can give significantly different results, with the RFEM response of the slope generally being much weaker than the Vanmarcke solution, apparently due to different predicted failure lengths and the influence of the cylinder ends in the simpler model. A second comparative study using all three models involves the so-called length effect (i.e. the increase of the probability of failure as the total slope length increases) for very long slopes (of length up to 100 times the height), using HPC strategies developed in this thesis.
In contrast to the level crossing approach adopted in the two (semi-) analytical models, a simple power law equation was utilised with RFEM, which was validated based on the principles of probability of multiple independent (failure) events within the length of the slope. It is shown that RFEM predicts the smallest reliability indices for the range of cases considered. However, the solutions predicted by Vanmarcke’s and Calle’s models move closer to the RFEM results at larger horizontal scales of fluctuation. Discrete failure lengths have been quantified in RFEM and compared with predicted failure lengths using the simpler models, in order to provide a rational explanation for the differences observed. Moreover, the α factor used with Calle’s model in Dutch practice was investigated thoroughly via random fields for various degrees of spatial variability, enabling a comprehensive evaluation of its influence. While the unconditional RFEM is used as a baseline stochastic method to make the comparative studies, the conditional RFEM was implemented and applied in the last part of the thesis to two example geotechnical applications. The first example focuses on the efficient design of site investigation plans (i.e. optimum locations and sampling intensity) in a 3D soil deposit. A sampling efficiency index was defined and used as an indicator of the efficiency of a site’s plan. A ‘posterior’ distribution of the structure performance, after taking account of the spatial distribution of all the measured CPT data, was derived and showed a significant reduction in the uncertainty compared to the ‘prior’ distribution of the structure response obtained using unconditional simulation based on random field theory. An optimal sampling position for the excavation of a slope was identified, both for a single stage of site investigation and a two stage site investigation.
Moreover, an optimal sampling distance of half the horizontal scale of fluctuation was identified when an exponential correlation function is used. The second example is devoted to cost-effective designs of an excavated 3D slope. For the problem analysed, a steeper slope was found to be sufficiently reliable (i.e. in line with Eurocode 7) when conditional random fields were used. This was in contrast to the finding of the unconditional simulations, which showed greater uncertainty due to making only partial use of the available measurement data. The potential benefit of a 3D conditional simulation in geotechnical cost-effective designs has therefore been highlighted.","conditional simulation; heterogeneity; length effect; reliability; risk; slope stability","en","doctoral thesis","","978-94-92516-44-2","","","","","","","","","Geo-engineering","","",""
"uuid:9968b155-539f-4e40-9562-5996a2843aa8","http://resolver.tudelft.nl/uuid:9968b155-539f-4e40-9562-5996a2843aa8","Bayesian networks for levee system reliability: Reliability updating and model verification","Roscoe, K. (TU Delft Hydraulic Structures and Flood Risk)","Vrijling, J.K. (promotor); Vrouwenvelder, A.C.W.M. (promotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-90-6824-059-7","","","","","","","","","Hydraulic Structures and Flood Risk","","",""
"uuid:c11002b2-f033-44ac-bba9-fb3d32283d51","http://resolver.tudelft.nl/uuid:c11002b2-f033-44ac-bba9-fb3d32283d51","In-situ Transmission Electron Microscopy Studies on Graphene","Vicarelli, L. (TU Delft QN/Zandbergen Lab)","Zandbergen, H.W. (promotor); Delft University of Technology (degree granting institution)","2017","","In-situ; transmission electron microscopy; graphene; nanoribbons; direct sculpting; self-healing; MEMS heater; electron holography","en","doctoral thesis","","978-90-8593-286-4","","","","Casimir PhD Series, Delft-Leiden 2017-1 This research was financially supported by ERC: project 267922, ""NemlnTEM""","","","","","QN/Zandbergen Lab","","",""
"uuid:205f36da-9d4b-4c28-a326-864b27cb857d","http://resolver.tudelft.nl/uuid:205f36da-9d4b-4c28-a326-864b27cb857d","Pollutant dispersion in wall-bounded turbulent flows: an experimental assessment","Eisma, H.E. (TU Delft Fluid Mechanics)","Westerweel, J. (promotor); Elsinga, G.E. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6233-527-1","","","","","","","","","Fluid Mechanics","","",""
"uuid:23221904-c9c1-4537-af7f-abdbb9df06ed","http://resolver.tudelft.nl/uuid:23221904-c9c1-4537-af7f-abdbb9df06ed","Individual beam control in multi electron beam systems","Zonnevylle, A.C. (TU Delft ImPhys/Charged Particle Optics)","Kruit, P. (promotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6186-783-4","","","","","","","","","ImPhys/Charged Particle Optics","","",""
"uuid:8d7ac522-9d79-4af5-b2e3-7eeccab06055","http://resolver.tudelft.nl/uuid:8d7ac522-9d79-4af5-b2e3-7eeccab06055","Molecular Electronics: When Multiple Orbitals Matter","Koole, M. (TU Delft QN/van der Zant Lab)","van der Zant, H.S.J. (promotor); Delft University of Technology (degree granting institution)","2017","","Molecular electronics; charge transport; conjugation; Kondo effect; quantum interference; electromigration; nanotechnology","en","doctoral thesis","","978-90-8593-284-0","","","","Casimir PhD Series, Delft-Leiden 2016-41","","","","","QN/van der Zant Lab","","",""
"uuid:318d88af-e25e-4a7e-8d37-18770fe980c4","http://resolver.tudelft.nl/uuid:318d88af-e25e-4a7e-8d37-18770fe980c4","Resilience and Application Deployment in Software-Defined Networks","van Adrichem, N.L.M. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Kuipers, F.A. (copromotor); Delft University of Technology (degree granting institution)","2017","In the past century, numerous iterations of automation have changed our society significantly. From that perspective, the professional and personal availability of computing devices interconnected through the Internet has changed the way we eat, live and treat each other. Today, the Internet is a service as crucial to our society as public access to electricity, gas and water supplies. Due to its successful adoption, the Internet now serves applications that were unthinkable at the time of its initial design, when social media, online global marketplaces and video streaming were still far beyond reasonable imagination. Early research initiatives worked on realizing a global network of interconnected computers, an aim clearly realized by the successful implementation of the Internet and by the fact that the infrastructure still suffices to provide connectivity despite unforeseen growth and change in usage. The research field of future Internet aims at long-term improvements of the Internet architecture, trying to improve the network infrastructure such that it will also facilitate future growth and applications.
In this dissertation, we have contributed to the field of future Internet by proposing, implementing and evaluating infrastructure improvements. Most of our work revolves around Software-Defined Networking (SDN), a network management architecture aiming at logical centralization and softwarization of network control through the separation of data plane and control plane functionality. In particular, we have assessed the feasibility and accuracy of network monitoring through SDN (see chapter 3), as well as contributed to the robustness and recovery of such networks under topology failure by speeding up failure detection and recovery (see chapter 4) and by precomputing network-wide per-failure protection paths (see chapter 5).
In addition to SDN, we have contributed to Information-Centric Networking (ICN), a network architecture that optimizes content distribution by implementing network-layer forwarding techniques and cache-placement strategies based on content identifiers. We have contributed to this field by introducing a globally-accessible namespace that maintains a feasible global-routing-table size through separation and translation of context-related and location-aggregated name components (see chapter 6). Considering that the same demand for centralization and softwarization of network control found in SDN applies to other network architectures, we have designed a protocol-agnostic SDN scheme enabling fine-grained control of application-specific forwarding schemes. With our prototype, we evaluate an implementation of such an SDN-controlled ICN, demonstrating correct functionality in both partially and fully upgraded networks (see chapter 7).
Besides working on future Internet topics, we have also taken a step aside and looked at more recent Internet architecture improvements. Specifically, we have performed measurements on the Domain Name System’s Security Extensions (DNSSEC). From these measurements we provide insight into the level of implementation and correctness of DNSSEC configuration. Through categorization of errors we explain their main causes and find the common denominators in misconfiguration (see chapter 8).","","en","doctoral thesis","","978-94-6186-784-1","","","","","","","","","Network Architectures and Services","","",""
"uuid:a07ea6a4-be73-42a6-89b5-e92d99bb6256","http://resolver.tudelft.nl/uuid:a07ea6a4-be73-42a6-89b5-e92d99bb6256","Design Optimisation of Practical Variable Stiffness and Thickness Laminates","Peeters, D.M.J. (TU Delft Aerospace Structures & Computational Mechanics)","Bisagni, C. (promotor); Abdalla, M.M. (copromotor); Delft University of Technology (degree granting institution)","2017","The use of composite materials in airplanes has been increasing over the last decades, mainly due to the high strength-to-weight and stiffness-to-weight ratio of composites. Traditionally, the possible fibre angles are often restricted to 0°, ±45° and 90°, referred to as conventional laminates. However, with the rise of fibre placement machines, not only can any ply angle be placed, but the fibres can even be steered onto curved paths. By steering the fibres, the mechanical properties of the material are made spatially varying while maintaining material continuity; hence these laminates are called variable stiffness laminates.
This thesis proposes an optimisation approach that exploits the possibilities of variable stiffness laminates, while posing limitations to the steering radius to guarantee the optimised design is manufacturable. Furthermore, the design guidelines are interpreted for variable stiffness laminates and posed as constraints, increasing the (industrial) feasibility of the optimised design.
In addition to steering the fibres, layers can also be dropped to obtain laminates with varying mechanical properties. This is also incorporated in the optimisation algorithm, leading to variable thickness laminates. Finally, the combination of steering fibres and dropping layers is implemented as well, leading to variable stiffness, variable thickness laminates.","","en","doctoral thesis","","978-94-6299-523-9","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:3519c954-ab49-45a9-b4c7-41867e2f38cb","http://resolver.tudelft.nl/uuid:3519c954-ab49-45a9-b4c7-41867e2f38cb","Measuring and modelling salt and heat transport in low-land drainage canals: Flow and stratification effects of saline seepage","Hilgersom, K.P. (TU Delft Water Resources)","van de Giesen, N.C. (promotor); Zijlema, Marcel (copromotor); Delft University of Technology (degree granting institution)","2017","This thesis explores a new measuring approach to quantify the seepage flux from boils. Boils are preferential groundwater seeps and are a consequence of the groundwater flow that works its way through the soil matrix by creating vents of higher conductive material. In the Netherlands, boils often occur in deep polders (reclaimed lakes situated 4–7 m below sea level), transporting water directly from the deep aquifer. Because this saline aquifer is connected to the sea, the pressure difference between the sea water level and the polder water level is the main driver of the upward seepage flux. At the surface, boils seep out through canal beds and sometimes on land. This thesis focusses on boils that directly discharge into polder drainage canals. Although boils are usually highly saline compared to the fresh surface water, this research also includes an example of a relatively fresh boil.
The seeping groundwater has a fairly constant temperature throughout the year. Because the surface water temperature fluctuates over the year and over the day, temperature is an ideal tracer to measure the groundwater–surface water interaction. Previous studies applied temperature and salinity samples taken at different depths in the soil to quantify the boil seepage flux. Because the boil vents are usually not strictly vertical and can be disturbed when probing the soil, this research aims to measure the boil seepage flux from a surface water perspective. The intended measurement approach samples the surface water at a very high resolution in three dimensions, compares the temperature profiles with those in a free-surface transport model, and infers the boil flux as the bottom boundary flux of the model.
Over the past decades, fibre-optic distributed temperature sensing (DTS) has developed toward an effective means to obtain spatially distributed temperature samples. When releasing a laser signal through an optical fibre, the returning signal carries temperature information in its wavelengths. Current DTS machines allow measuring temperature down to every 25 cm along a fibre-optic cable. To obtain even higher resolutions, researchers often wrap cables into a coil. However, cable bends and the construction supporting the coil affect the measurement accuracy. Chapter 3 investigates how cable bends influence the temperature measurements in coil-wrapped DTS set-ups in order to account for this in the design of a high-resolution DTS set-up for this research. It is concluded that, with a decreasing bending radius, the cable bends increasingly affect the temperature measurements in multiple ways. The non-linearity in the bend-induced decay of the laser signal complicates compensation for these effects and requires a very careful temperature calibration approach.
To avoid continuously bent cables, the design of the three-dimensional (3-D) high-resolution DTS set-up applied a weaving pattern instead of coils (Chapter 4). This way, cables are only bent at each turnaround, separated by straight stretches of 1 m. By selecting the desired vertical spacing of the woven 'layers', one can customize the vertical resolution of the set-up. To infer the seepage flux from the stream bed, the design of the set-up required very high resolutions near the bottom boundary. The set-up proved capable of measuring very detailed temperature profiles in a water body, and even uncovered unexpected seeps in a laboratory set-up built to simulate boil seepage. In the field, the measured temperatures near the stream bed revealed an accumulation of sediment around the boil during the measurement periods. Most interestingly for the current application, the detailed temperature profiles were able to capture double-diffusive phenomena.
Double-diffusion occurs when two adjacent water layers have different temperatures and salinities, and the density gradients for the temperature and salinity are opposed. For example, when cold (denser) and fresh (lighter) water overlies a warm and saline water layer, a system of convective layers develops with a very sharp temperature and salinity interface between the convective layers (i.e., double-diffusive convection). A more curious phenomenon occurs when the warm and saline layer is on top. In this case, a finger-like pattern develops at the sharp interface between the layers (i.e., salt-fingering). These systems are different from normal diffusive interfaces, which tend to fade over time. Therefore, water bodies with salt and temperature gradients demand careful modelling of the flow processes.
To accurately model the boil-covering water body with large density gradients, a mass and momentum conservative free-surface model was selected. The model was extended with a transport module and modules accounting for temperature and salinity dependent densities, viscosities, and specific heats. Moreover, the model was extended with the option to include atmospheric heat exchange in the calculations. The performance of the model was tested on a solar pond (Chapter 5). Such ponds are double-diffusive convective water bodies with very strong density gradients, which store solar energy as heat in their bottom hypersaline layer. The model captured the flow of warm water along the sloping edge of the solar pond well and demonstrated the onset of small seiches in the pond due to the density gradients. The onset of convective layers was also captured, although their extents were not in complete agreement with measurement data. In general, the results confirmed the model's capability to simulate double-diffusive convection.
Due to the boil’s circular shape and the availability of 3-D temperature profiles, a 3-D modelling grid would be preferable for the boil seepage simulations. The dense grid needed for the transport simulation, however, yields prohibitively long computation times. Therefore, Chapter 6 investigated the potential of a quasi 3-D axisymmetric set-up for these simulations. To this end, the 2-DV model code was extended with a few additional terms, which hardly increased the computation time and kept the solution procedure mass and momentum conservative. Qualitative case studies demonstrated the model's capability to simulate salt-fingers and double-diffusive convection. An analytical benchmark was set up for the axisymmetric expansion of an unconditionally stable layer from a central cold and saline seepage inflow. For the case of laminar flow conditions, the model results were in agreement with the analytical solution. Turbulent convection dispersed heat and salt significantly more quickly.
The unexpected seeps in the laboratory set-up for boil seepage simulations complicated the comparison of these measurements with model output, because the exact flow paths were unknown and could not be modelled. Chapter 7 shows a comparison of the measurements with model results for the intended seepage flow. Although double-diffusive convective and unconditionally stable layers develop in both the model and the measurement results, the growth rates, and specifically the locations where the layers grow at a faster rate, are different. Moreover, the unexpected seeps seem to have a higher flow velocity, leading to greater mixing of heat at the interface between the layers. It is concluded that the model cannot be validated based on the laboratory data, and additional measurements are recommended.
Although the horizontal stream flow across the boil should be negligible when applying an axisymmetric modelling approach, knowledge of the stream discharge is still relevant. For this reason, this thesis starts with exploring the possibilities to modernize and potentially automate the rising bubble technique for discharge measurement (Chapter 2). The study shows that the complicated dual camera set-up and position calculations for the air bubbles in previous publications can be avoided with modern image processing algorithms. Reflected sunlight sometimes impedes the visibility of the air bubbles on the water surface. We presented an example of how a statistical tool can still uncover the signatures of air bubbles in digital images that would normally hardly be visible. Such tools could also be applied in pattern recognition algorithms that automatically find the air bubbles on the water surface. Although further research is necessary, the results seem to support the hypothesis that the rising bubble technique can be applied as an automatic discharge measurement technique.
We conclude that the boil seepage inversion from double-diffusive models is currently still very challenging (Chapter 8). The locations and extents of double-diffusive convection cells and salt-fingers are dependent on sub-grid processes. Moreover, these phenomena are very sensitive to local density gradients, which will never be modelled 'perfectly'. The importance of model boundary conditions when the layer of seepage water is still thin could also affect the inversion of the seepage flux at the bottom boundary. For all these issues, local temperature deviations can strongly influence the inversion step, yielding high noise in the outcome. Nevertheless, we see potential in a less complicated inversion of the growth of an unconditionally stable layer above a cold and saline boil after the water body is fully mixed. This approach still requires high-resolution temperature measurements. Further research into this method is recommended.","boil seepage; fibre-optics; distributed temperature sensing; double-diffusion; non-hydrostatic model; salt and heat transport","en","doctoral thesis","","978-94-6186-774-2","","","","Funded by: The Netherlands Organisation for Scientific Research (NWO), project number 842.00.004","","","","","Water Resources","","",""
"uuid:ca1bfbcd-34bc-4f66-9a1e-1a089a4ab3c0","http://resolver.tudelft.nl/uuid:ca1bfbcd-34bc-4f66-9a1e-1a089a4ab3c0","flexZhouse: New business model for aff ordable housing in Malaysia","Bin Mohd Noor, M.Z. (TU Delft Design & Construction Management)","Wamelink, J.W.F. (promotor); Gruis, V.H. (promotor); Heintz, John L. (copromotor); Delft University of Technology (degree granting institution)","2017","Central to this PhD research was the problem of the lack of aff ordable housing for young starters in Malaysia. The solutions for aff ordable housing that are available in the market do not truly solve the problem from the customer’s point of view. Hence, it was important to analyse the contributing factors associated with the term ‘aff ordability’. The term touches upon interconnected elements that cover many issues ranging from demand (housing needs, demographics, household income, quality housing) to supply (the authorities’ requirements, design, cost, sustainability and procurement). In this thesis, we discuss some of the problems related to the supply and demand issues and examine a possible intervention to solve the problem.
This research contributed to the body of knowledge by employing a prescriptive strategy and designing an innovative flexZhouse business model (BM), and by applying an in-depth strategy that revealed why the problem exists and why there is still no appropriate solution. The result provides a description of the situation that young starters find themselves in, the reactions of the industry’s key players and the policies that hamper innovation in the housing market.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-39-8","","","","A+BE | Architecture and the Built Environment No 2 (2017)","","","","","Design & Construction Management","","",""
"uuid:df03c9c4-8767-463f-9511-62ff99cda8e7","http://resolver.tudelft.nl/uuid:df03c9c4-8767-463f-9511-62ff99cda8e7","Effect of sulphide on enhanced biological phosphorus removal","Rubio Rincon, F.J. (TU Delft BT/Environmental Biotechnology; IHE Delft Institute for Water Education)","Brdjanovic, Damir (promotor); van Loosdrecht, Mark C.M. (promotor); Delft University of Technology (degree granting institution)","2017","The enhanced biological removal of phosphorus (EBPR) is a popular process due to high removal efficiency, low operational costs, and the possibility of phosphorus recovery. Nevertheless, the stability of the EBPR depends on different factors such as: temperature, pH, and the presence of toxic compounds. While extensive studies have researched the effects of temperature and pH on EBPR systems, little is known about the effects of different toxic compounds on EBPR. For example, sulphide has shown to inhibit different microbial activities in the WWTP, but the knowledge about its effects on EBPR is limited. Whereas the sulphide generated in the sewage can cause a shock effect on EBPR, the continuously exposure to sulphide potentially generated in WWTP can cause the acclimatization and adaptation of the biomass. This research suggests that sulphate reducing bacteria can proliferate in WWTP, as they are reversibly inhibited by the recirculation of sludge through anaerobic-anoxic-oxic conditions. The research enhances the understanding of the effect of sulphide on the anaerobic-oxic metabolism of PAO. It suggests that the filamentous bacteria Thiothrix caldifontis could play an important role in the biological removal of phosphorus. It questions the ability of PAO to generate energy from nitrate respiration and its use for the anoxic phosphorus uptake. 
Thus, the results obtained in this research can be used to understand the stability of the EBPR process under anaerobic-anoxic-oxic conditions, especially when exposed to the presence of sulphide.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-138-03997-1","","","","Dissertation submitted in fulfillment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","BT/Environmental Biotechnology","","",""
"uuid:a551b9a2-b5da-4a51-8a3b-3d7f410d67cc","http://resolver.tudelft.nl/uuid:a551b9a2-b5da-4a51-8a3b-3d7f410d67cc","Mixed Discrete-Continuous Railway Disruption-Length Models with Copulas","Zilko, A.A. (TU Delft Applied Probability)","Redig, F.H.J. (copromotor); Kurowicka, D. (copromotor); Delft University of Technology (degree granting institution)","2017","The uncertainty of railway disruption length hinders the performance of the Operational Control Centre Rail (OCCR) in Utrecht. One way to model this uncertainty is by representing the disruption length as a probabilistic distribution. A dependence model, taking the form of a joint distribution, between the disruption length and several observable influencing factors is constructed for a particular type of disruption. From the model, the conditional distribution of disruption length can be computed by conditioning the model on the observed values of the influencing factors. In this thesis, the joint distribution is constructed using the concept of copula and vines. One focus of this thesis is to study this construction when the variables involved are both discrete and continuous. We show that this can still be done, despite the more expensive parameters estimation. One value from the conditional distribution of disruption length needs to be chosen as the prediction. To investigate the effect of different choices of prediction, the model is tested in four case studies concerning a railway disruption occurring in the area of Houten, the Netherlands. The model is used together with the short-turning and the passenger flow models, developed by the Department of Transport and Planning of Delft University of Technology. 
Different predictions are made and the impact on the passengers is measured in terms of the total generalized travel time.","railway disruptions; Dependence model; copula; vines; railway traffic management; Disruption","en","doctoral thesis","","978-94-6186-776-6","","","","","","","","","Applied Probability","","",""
"uuid:7757cae6-cb88-49f2-aba1-e6fcd764a9c9","http://resolver.tudelft.nl/uuid:7757cae6-cb88-49f2-aba1-e6fcd764a9c9","Managing the uncertain risks of nanoparticles: Aligning responsibility and relationships","Spruit, S. (TU Delft Ethics & Philosophy of Technology; TU Delft Organisation & Governance)","van de Poel, I.R. (promotor); Doorn, N. (copromotor); Delft University of Technology (degree granting institution)","2017","Technological developments in the field of nanoscience and nanotechnology have led to the development of several newly engineered nanoparticles. These materials are already being applied in a variety of consumer contexts, as well as business products, and are expected to be more widely used in the coming years. Despite all the promises, novel nanoparticles are still accompanied by scientific uncertainty about their hazardous effects. Due to these uncertainties, conventional methods for managing risk are deemed insufficient. In response to this, it has been proposed that the management of uncertain risks requires a forwardlooking notion of responsibility, which entails various kinds of anticipatory activities to ensure an adequate and timely response to emerging risks. The question of who should bear this responsibility often remains implicit in such discussions. However, innovation processes cannot be responsible, nor can they reflect on or account for what they do, or make intentional choices. Ultimately, responsibility should rest with particular individuals. At the same time, it has been recognized that the collaborative nature of innovation processes creates problems for the allocation of responsibility to individual actors such as scientists, engineers and product developers. Activities that lead to the development of nanoparticles and nanoproducts are often complex and distributed; they take place at multiple locations, combine insights from several disciplines and involve many different agents. 
This suggests that in order to determine a viable allocation of responsibility for uncertain nanoparticle risks, it is necessary to reflect on the way people interact in the field, and how this influences the capacity of individuals to take responsibility for emerging hazards. This thesis contributes to this discussion by exploring the relationships between people who are involved in various ways in the development and use of nanoparticles, and by exploring how such relationships should be taken into account in the allocation of responsibility. The ultimate aim is to align the responsibilities that people have with the relevant relationships in the field. The goal of the thesis is primarily normative: it develops a framework to ethically assess relationships. However, the normative analysis is strongly empirically informed: it is supported by data from two case studies, one on the use of nanoparticles in a work environment and one on the use of nanoparticles for land remediation, complemented with literature from the empirical sciences and a collaborative paper with two nano-engineers. As a first step, this thesis explores how relationships matter to the way we deal with the uncertain risks of nanoparticles. Exploring the possibility of informed consent in relation to nanoparticles, Chapter 2 shows that several features of relationships, such as dependency, proximity and the existence of shared interest can influence the quality of decision-making processes about uncertain risks. Following this, Chapter 3 shows that relationships, such as those between employer and employees, can give rise to duties of care for uncertain risks. Chapter 4 argues that the existence of relationships is necessary in order to be able to respond to new and emerging hazards in the field of nanoscience and nanotechnology. On this basis, Chapter 4 argues that in some cases there is an obligation to establish relationships.
In particular cases, where there is a need for collective action, but no such collective exists, individual engineers involved in innovation processes would have a duty to collectivize: they must organize themselves into a collective that can adequately act upon emerging and unwanted hazards. Finally, in Chapter 5, this thesis explores the characteristics required of such relationships to foster responsibility in nanoparticle development. In doing so, I shift from a narrow notion of responsibility focused on dealing with risks to a broader conception of responsibility that not only takes into account risks, but includes the potential benefits of innovation as well. The chapter develops a framework to characterize morally relevant features of relationships based on the Ethics of Care. Several features of relationships are identified that can be used to evaluate whether relationships amongst those developing and using nanoparticles are caring. These include dependency, power, attention, responsiveness, emotional engagement and availability. The usability of this framework is explored by applying it to the context of innovation in relation to nanoparticles used in the context of land and water remediation. In Chapter 6, the thesis concludes with a reflection on the alignment of relationships and responsibilities, discussing whether relationships should be adjusted to responsibilities or – vice versa – whether the responsibilities we allocate for nanoparticle risks should be adjusted to the relationships at hand. I suggest that there is a middle ground between these two options – an understanding of responsibility that is based on a certain characterization of relationships. This understanding holds that having ‘the right kind of relationships’ is part of what it means to take responsibility. 
This thesis ends with a discussion of the implications of these findings for the practice of and scholarship in Responsible Research and Innovation.","","en","doctoral thesis","","978-90-386-4212-3","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:21b58ffc-6c38-477a-93dd-f84701601e81","http://resolver.tudelft.nl/uuid:21b58ffc-6c38-477a-93dd-f84701601e81","Climate-responsive design: A framework for an energy concept design-decision support tool for architects using principles of climate-responsive design","Looman, R.H.J. (TU Delft Climate Design and Sustainability)","van den Dobbelsteen, A.A.J.F. (promotor); Tenpierik, M.J. (copromotor); Delft University of Technology (degree granting institution)","2017","In climate-responsive design the building becomes an intermediary in its own energy housekeeping, forming a link between the harvest of climate resources and low-energy provision of comfort. Essential here is the employment of climate-responsive building elements; structural and architectural elements in which the energy infrastructure is far-reaching integrated. Research is conducted on what knowledge is needed in the early stages of the design process and how to transfer and transform that knowledge to the field of the architect in order for them to successfully implement the principles of climate-responsive design. Content, form and functional requirements provide the framework for a design-decision support tool. A concept tool has been presented to architects in the field.","climate-responsive design; built environment; building design","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-36-7","","","","A+BE | Architecture and the Built Environment No 1 (2017)","","","","","Climate Design and Sustainability","","",""
"uuid:9c46dcd4-e68e-42de-a7ab-676a3e631f0d","http://resolver.tudelft.nl/uuid:9c46dcd4-e68e-42de-a7ab-676a3e631f0d","Reliability Aware Computing Platforms Design and Lifetime Management","Cucu Laurenciu, N. (TU Delft Computer Engineering)","Bertels, K.L.M. (promotor); Cotofana, S.D. (copromotor); Delft University of Technology (degree granting institution)","2017","Aggressive CMOS technology feature size down-scaling into the deca nanometer regime, while benefiting performance and yield, determined device characteristics variability increase w.r.t. their nominal values, which can lead to large spreads in delay, power, and robustness, and make devices more prone to aging and noise induced failures during in-field usage. Because of transistor’s gate dielectric increasing power density and electric field the nanoscale Integrated Circuits (ICs) failure mechanisms accelerating factors have become more severe than ever, which can cause higher failure rate during ICs useful life and early aging onset. As a result, meeting the reliability targets with viable costs in this landscape becomes a significant challenge, requiring to be addressed in an unitary manner from design time to run time. To this end, we propose a holistic reliability aware design and lifetime management framework concerned (i) at design time, with providing a reliability enhanced adaptive architecture fabric, and (ii) at run time, with observing and dynamically managing fabric’s wear-out profile such that user defined Quality-of-Service requirements are fulfilled, and with maintaining a full-life reliability log to be utilized as auxiliary information during the next IC generation design. Specifically, we first introduce design time transistor and circuit level aging models, which provide the foundation for a 4-dimensional Design Space Exploration (DSE) meant to identify a reliability optimized circuit realization compliant with area, power, and delay constraints. 
Subsequently, to enable the creation of a low-cost yet accurate fabric observation infrastructure, we propose a methodology to minimize the number of aging sensors to be deployed in a circuit and identify their location, and introduce a sensor design able to directly capture circuit level amalgamated effects of concomitant degradation mechanisms. Furthermore, to make the information collected from sensors meaningful to the run-time management framework, we introduce a circuit level model that can estimate the overall circuit aging and predict its End-of-Life based on imprecise sensor measurements, while taking into account the degradation nonlinearities. Finally, to provide more DSE reliability enhancement options, we focus on the realization of reliable data transport and processing with unreliable components, and propose: (i) a codec for reliable energy efficient medium/long range data transport, and (ii) a methodology to obtain Error Correction Codes protected data processing units with an output error rate smaller than the fabrication technology gate error rate.","Reliability; Reliability Aware Computation; Dynamic Lifetime Reliability Management; Reliability Assessment","en","doctoral thesis","","978-94-6186-780-3","","","","","","","","","Computer Engineering","","",""
"uuid:4890151d-0052-4107-93f1-3ab2c5283cf8","http://resolver.tudelft.nl/uuid:4890151d-0052-4107-93f1-3ab2c5283cf8","Whole slide imaging systems for digital pathology","Shakeri, S.M. (TU Delft ImPhys/Quantitative Imaging)","van Vliet, L.J. (promotor); Stallinga, S. (copromotor); Delft University of Technology (degree granting institution)","2017","Digital pathology is based on the use of digital images of tissues for diagnosis of diseases. In the emerging clinical practice of digital pathology, images of tissue slides are acquired with a high-resolution and high-throughput automated microscope, a so called Whole Slide Imaging (WSI) system. We designed, built and characterized a modular WSI platform for conducting two- and three-dimensional brightfield microscopy, the most common modality in this field.","","en","doctoral thesis","","978-94-6186-781-0","","","","","","","","","ImPhys/Quantitative Imaging","","",""
"uuid:8b277229-6a8c-4e8f-942f-9c485da1e1ce","http://resolver.tudelft.nl/uuid:8b277229-6a8c-4e8f-942f-9c485da1e1ce","Multiwavelength observations of active galactic nuclei: Using current facilities and development of enabling technologies","Janssen, R.M.J. (TU Delft QN/Klapwijk Lab)","Klapwijk, T.M. (promotor); Rottgering, HJA (promotor); Baselmans, J.J.A. (copromotor); Delft University of Technology (degree granting institution)","2017","At the center of every galaxy there is a super-massive black hole of a million or more solar masses. In most galaxies the presence of this black hole can only be detected through its gravitational attraction, which affects the motion of nearby stars. However, in about 10% of the galaxies the super-massive black hole is the engine of one of the most luminous phenomena in the universe: an active galactic nucleus (AGN). In the local universe there are two types of AGN: ‘Radiative-mode’ and ‘Jet-mode’ AGN. In this thesis I show that these two AGN types are hosted by different galaxies and have different infrared properties. ‘Radiative-mode’ AGN are the ‘classical’ AGN which are bright emitters across the entire electromagnetic spectrum. They are thought to be powered by a super-massive black hole accreting matter at a high rate. I show that ‘radiative-mode’ AGN are predominantly found in intermediate mass galaxies with blue and green optical colors. These colors are indicative of active or recently terminated star formation and a young stellar population. Due to the presence of torus of hot dust near the black hole, galaxies with a ‘radiative-mode’ AGN typically show an excess of mid-infrared emission. ‘Jet-mode’ AGN lack the bright optical emission and excess infrared emission of ‘jet-mode’
AGN and can only be identified by means of their radio jet – a stream of relativistic particles that can reach far outside the AGN’s host galaxy. This absence of electromagnetic radiation and prominence of the radio jet is thought to be the result of the low accretion rate of the super-massive black hole driving this AGN type. I show that ‘jet-mode’ AGN have a strong preference for the most massive galaxies, which typically have little star formation. The presence or absence of a dusty torus and the resulting difference in broadband mid-infrared emission could be a powerful tool to separate ‘radiative-mode’ and ‘jet-mode’ AGN without using spectroscopy. Unfortunately, the inherent scatter in the mid-infrared emission of galaxies due to dust heated by stars is too large to separate the two populations reliably.
Far-infrared observations could help resolve this, by constraining the mid-infrared contribution of dust heated by stars. However, current far-infrared surveys do not have the depth or the area to give the number statistics required to calibrate this procedure. In this thesis I have investigated the properties of microwave kinetic inductance detectors. These detectors will enable far-infrared instruments with 10,000 pixels as a result of their inherent potential for frequency-domain multiplexing. This is a huge leap from the 100-pixel far-infrared instruments currently on telescopes. I have shown that microwave kinetic inductance detectors made from NbTiN and Al can satisfy all the requirements to enable a new generation of large-format far-infrared cameras, which are required to constrain the far-infrared emission of many galaxies.","","en","doctoral thesis","","978-90-8593-285-7","","","","Casimir PhD Series, Delft-Leiden, 2016-40","","","","","QN/Klapwijk Lab","","",""
"uuid:d85dec82-72dd-4b06-b6a0-2751c0ee6049","http://resolver.tudelft.nl/uuid:d85dec82-72dd-4b06-b6a0-2751c0ee6049","Passive seismic multiscale subsurface imaging and characterization by utilizing natural quakes","Nishitsuji, Y. (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Draganov, D.S. (copromotor); Delft University of Technology (degree granting institution)","2017","This thesis investigates the potential of passive seismic methods that make use of body waves, and especially the passive reflection method, as cost-effective applications for multiscale subsurface imaging and characterization. For this purpose, we develop several seismic techniques for different scales: basin, crustal, and lithospheric. For the basin scale, we developed horizontal- and vertical-components spectral ratio of global earthquake phases to estimate the basin depth. We also used the Sp-wave method and analysis of the frequency-dependent quality factor to characterize the basin’s heterogeneities. The results show good agreement with active-seismic profiles. At the crustal scale, we investigated the application of seismic interferometry (SI). Comparison among different SI methodologies suggests that multidimensional deconvolution based on the truncated singular-value decomposition gives better structural imaging than do the conventional crosscorrelation or crosscoherence approaches, but also better than multidimensional deconvolution based on the damped least-squares scheme. This crustal-scale SI could be useful, for example, as a prescreening-exploration tool for deep geothermal reservoirs whose targets can be as deep as 10 km. At the lithospheric scale we studied not only the Earth, but also the Moon. For the Earth, we applied SI with global phases to obtain detailed images of aseismic parts of a subduction slab. 
Although the interpretation of the imaging results of the aseismic parts is not sufficiently decisive, the results suggest that the applied method is helpful for imaging aseismic parts of slabs. Furthermore, the radiation efficiency of intermediate-depth earthquakes is estimated to understand the source mechanism as a function of focal depth. The results indicate that there is a larger amount of non-radiated energy for intermediate-depth earthquakes. This suggests one mechanism by which the slabs can be aseismic at certain depths. For the Moon, we applied SI to deep moonquakes to obtain reflection images of the lunar subsurface. With this application, the lunar Moho is interpreted to be at around 50 km depth, indicating the potential usefulness of SI for other celestial bodies. Following the results obtained in this thesis, we conclude that passive seismic methods with natural quakes have excellent potential for use in both the resource industry and academia.","","en","doctoral thesis","","978-94-6186-769-8","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:a01a0a14-798f-44f7-9b5e-88b56b7a4a84","http://resolver.tudelft.nl/uuid:a01a0a14-798f-44f7-9b5e-88b56b7a4a84","Multiwavelets and outlier detection for troubled-cell indication in discontinuous Galerkin methods","Vuik, M.J. (TU Delft Mathematical Physics)","Heemink, A.W. (promotor); Ryan, J.K. (copromotor); Delft University of Technology (degree granting institution)","2017","This dissertation addresses practical use of multiwavelets and outlier detection for troubled-cell indication for discontinuous Galerkin (DG) methods. For smooth solutions, the DG approximation converges to the exact solution with a high order of accuracy. However, problems may arise when shock waves or discontinuities appear: non-physical spurious oscillations are formed close to these discontinuous regions. These oscillations can be prevented by applying a limiter near these regions. One of the difficulties in using a limiter is identifying the difference between a true discontinuity and a local extremum of the approximation. Troubled-cell indicators can help to detect this difference and identify the discontinuous regions (so-called ’troubled cells’) where a limiter should be applied.
In this dissertation, a multiwavelet formulation is used to decompose the DG approximation. The multiwavelet coefficients act as a troubled-cell indicator since they suddenly increase in the neighborhood of a discontinuity. This leads to the definition of a new multiwavelet indicator that detects elements as troubled if the coefficient is large enough in absolute value. Here, a problem-dependent parameter is needed to define the strictness of the indicator. To forgo the reliance on a parameter, a new outlier-detection algorithm is defined that uses boxplot theory. This method can also be applied to different troubled-cell indicators.
Results are shown for regular one-dimensional and tensor-product two-dimensional meshes, as well as for irregular meshes in one dimension and triangular meshes in two dimensions.","Runge-Kutta discontinuous Galerkin method; high-order methods; limiters; shock detection; multiresolution analysis; wavelets; multiwavelets; troubled cells; outlier detection; boxplots","en","doctoral thesis","","978-94-92516-24-4","","","","","","","","","Mathematical Physics","","",""
"uuid:3fb5d84e-39ee-43e8-87c6-21871900dabb","http://resolver.tudelft.nl/uuid:3fb5d84e-39ee-43e8-87c6-21871900dabb","Development and applications of high-performance small-animal SPECT","Ivashchenko, O. (TU Delft RST/Biomedical Imaging)","Beekman, F.J. (promotor); Delft University of Technology (degree granting institution)","2017","","SPECT; preclinical imaging; molecular imaging","en","doctoral thesis","","978-94-92516-35-0","","","","","","2017-01-24","","","RST/Biomedical Imaging","","",""
"uuid:b6866d16-9a29-4fac-9f73-42a5ad26fc0f","http://resolver.tudelft.nl/uuid:b6866d16-9a29-4fac-9f73-42a5ad26fc0f","Non-intrusive Near-field Characterization of Microwave Circuits and Devices","Hou, R. (TU Delft Electronics)","de Vreede, L.C.N. (promotor); Spirito, M. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6295-592-9","","","","","","","","","Electronics","","",""
"uuid:02965443-f1e5-440c-b9af-0e0648be9552","http://resolver.tudelft.nl/uuid:02965443-f1e5-440c-b9af-0e0648be9552","Autonomous Conflict Detection and Resolution for Unmanned Aerial Vehicles: On integration into the Airspace System","Jenie, Y.I. (TU Delft Control & Simulation)","Hoekstra, J.M. (promotor); van Kampen, E. (copromotor); Delft University of Technology (degree granting institution)","2017","In the last decade, the commercial value of Unmanned Aerial Vehicles (UAV), defined as devices capable of sustained flight in the atmosphere without requiring a human pilot on board, has become widely recognized thanks to advancements in materials, sensors, computation, and telemetry. As UAVs become cheaper and more user-friendly, many companies are motivated to incorporate them in their everyday business, such as for delivery services, journalism, or providing Internet services. All of these prospective commercial applications, however, can only be achieved once the vehicles are fully integrated into the airspace system. This is not yet the case, since UAV operations, in most parts of the world, are strictly regulated to fly only within the visual line of sight (VLOS) of the ground pilot, forbidding beyond visual line of sight (BVLOS) flight. One main reason for such strict regulations is apprehension about the safety of UAV operations, which are likely to be heterogeneous due to the possibly large variation of UAVs in the airspace, each with their own preference on how to interact with other UAVs and with the current (manned) air traffic. Hence, airspace management, especially the mitigation of mid-air conflicts and collisions, is expected to become much more complex, compromising overall safety. Therefore, safe UAV integration into the airspace is the selected topic for this research, with a focus on the development of Conflict Detection and Resolution (CD&R) systems. 
Such a system encompasses any procedures and devices that allow vehicles to mitigate potential mid-air conflicts and collisions. For a UAV, this system needs to consider a wide range of obstacles it might encounter, from static objects to other vehicles with completely different characteristics. Moreover, there can be interactions between two UAVs with different levels of CD&R system awareness. Only when their CD&R systems are fully defined and regulated to handle such diverse scenarios can UAVs be fully integrated into the airspace. The main goal of this research is to define and evaluate systems for detecting and resolving possible mid-air conflicts of Unmanned Aerial Vehicles, specifically to support safe beyond visual line-of-sight operations in an integrated airspace. This goal is achieved by addressing four research problems: airspace incompatibility, CD&R diversity, doubts about UAV safety, and the inadequacy of autonomous CD&R for UAVs. Directly from those problems, four research questions are formulated as follows: (1) What structure can be defined to manage the CD&R system for UAVs operating in an integrated airspace? (2) How can the diverse UAV CD&R approaches be classified into a comprehensive taxonomy that is compatible with the current airspace? (3) How can the safety parameters of the integrated airspace, under the influence of heterogeneous CD&R approaches, be determined? (4) How can an autonomous CD&R system for UAVs be defined to handle potential conflicts, seeing the vehicle as part of the integrated traffic in the airspace? To address the first question, this research proposes a taxonomy of CD&R approaches for UAVs operating in an integrated airspace. Possible approaches for UAVs are surveyed and broken down based on their types of surveillance, coordination, maneuver, and autonomy. 
These factors are combined into several ‘generic approaches’; for example, the Traffic Alert and Collision Avoidance System (TCAS) in manned flight can be seen as a CD&R approach that combines distributed dependent surveillance, explicit coordination, and an escape maneuver, conducted manually. The approaches that fit the scheme of UAV integration are then selected methodically, resulting in a novel taxonomy of UAV CD&R approaches. From the generic approaches in the taxonomy, a multi-layered architecture is developed in this research, managing CD&R procedures in the airspace that are compatible with manned flight, while also embracing those that are unique to UAVs. The multi-layered feature means that instead of relying on only one CD&R approach, UAVs can implement multiple approaches in a fail-safe concept, ensuring that even when one approach fails, there are still layers available that can prevent direct collisions. Six CD&R approaches from the taxonomy are further selected as the safety layers: (1) Procedural, (2) Manual, (3) Cooperative, (4) Non-cooperative, (5) Escape, and (6) Emergency approaches. A brief implementation of the multi-layered CD&R architecture suggests that its usage depends closely on the type of mission: in a particular mission some layers might become less necessary, while in others they might be important. The proposed architecture, however, lacks definitions of the physical thresholds between layers, such as distance or time-to-collision, which need to be defined specifically for each type of UAV. 
This is left for future work on UAV air traffic management, but these thresholds might only be truly defined once BVLOS flights of UAVs are allowed in the airspace. Answering the second research question, the previously proposed taxonomy is applied to available CD&R methods in the literature, in order to determine their fitness and whether they are complementary or interchangeable. A total of 64 CD&R methods are evaluated, ranging from preflight calculations on deterministic maps, such as Global Path Planning, to reactive avoidance with on-board sensors, such as the Velocity Obstacle method. Using the taxonomy, the position of each approach in the overall safety management scheme, such as the multi-layered architecture, can be defined. The taxonomy attribution has shown that many of the available methods fall outside the taxonomy, and suggests the need to concentrate research on areas where representative methods are lacking. On further evaluation, it also becomes apparent that the diversity of CD&R preferences has so far existed only within the walls of laboratories, due to the current limitation of UAV flight to within VLOS. Nevertheless, the taxonomy can potentially aid both developers and authorities in deciding on adequate CD&R approaches to ensure the safety of upcoming BVLOS flights in an integrated airspace. The third question is addressed by setting up a series of Monte Carlo simulations to derive two safety parameters: the frequency of near mid-air collisions (NMAC) and the frequency of mid-air collisions (MAC). The former represents how often two UAVs fly closer to each other than a certain threshold, set to 50 meters in most of the discussion in this dissertation, while the latter describes actual body-to-body collisions between vehicles. 
The use of the Monte Carlo simulations is meant to overcome the limitations of available analytical methods in the literature, by incorporating the effect of distributed CD&R systems, as well as the heterogeneous condition setup of the airspace. The method, however, has rarely been preferred for safety parameter derivation, due to the significantly time-consuming process of obtaining any meaningful results. This problem is addressed in this research by simulating high-density setups, whose results are later scaled down to more realistic airspace densities. Two CD&R protocols are modeled in the simulations: the first is the cooperative protocol, where each vehicle conducts avoidance that is implicitly coordinated by common rules-of-the-air, and the second is the non-cooperative protocol, where each vehicle avoids according to randomly assigned preferences. A target level of safety (TLS) is also defined in this research to measure the collective performance of the CD&R systems, requiring the frequencies of NMACs and MACs to be lower than 10^-2 and 10^-7 per hour, respectively. These TLS values are proposed on the basis of the equivalent values in manned-flight history over the last decade. As a result, while maintaining the TLS of the airspace, the distributed cooperative CD&R protocol is able to increase the maximum number of UAVs operating in one flight level to almost ten times the number when no CD&R is applied. This would mean that for a city like Chicago, which has an area of more than five thousand square kilometers, a total of 45 UAVs can operate independently at one altitude. 
It is also concluded that much better results are obtained when using the cooperative protocol, which justifies the necessity of order in the airspace, in this case the implementation of the Right-of-way rules. The usefulness of the Monte Carlo simulation method is demonstrated in this research, testing various CD&R algorithms and protocols in a vast number of possible conditions, including previously unpredicted ones. The downside of the method remains, however: it cannot derive meaningful results for the frequency of MACs within the number of samples tested, due to the rarity of MACs even in high-density setups. Hence, more samples are recommended for future work, along with a further extension to include an aircraft dynamic model in the simulations. The fourth question is addressed in this research by introducing two novel CD&R algorithms which are adequate to fill specific layers in the CD&R architecture explained above. The first algorithm is the Selective Velocity Obstacle (SVO) method, an extension of the Velocity Obstacle method (VO-method) with additional criteria for implicit coordination. This CD&R method is developed specifically for the Cooperative layer in the CD&R architecture, based on the unlikelihood that the future airspace will exist without some sort of order or coordination, such as the Right-of-way rules. The SVO is also used as the basis of the cooperative CD&R protocol in the previously explained NMAC frequency derivation using Monte Carlo simulations. The second algorithm is the Three-dimensional Velocity Obstacle (3DVO) method, which extends the VO-method to three-dimensional space, obtaining a much wider range of resolution possibilities. The three-dimensional resolution is performed in arbitrary avoidance planes, whose number and direction can be set according to the UAV's maneuverability. 
Furthermore, since it is designed to fill the Escape layer of the architecture, the 3DVO is equipped with Buffer Velocity Zones, an additional algorithm to anticipate adverse movements of uncoordinated obstacles. It is found that the addition of the Buffer Velocity Zones increases the algorithm's performance more significantly than the number of available Avoidance Planes does. Both the SVO and 3DVO methods have been validated by a series of Monte Carlo simulations in a stressful heterogeneous airspace setup, in which they were able to significantly reduce the frequencies of NMACs and MACs, and hence are promising for supporting BVLOS operations in an integrated airspace. Both methods, however, lack a vehicle dynamic model, which can significantly change the results, especially in the Escape layer, where avoidance happens at close range. Moreover, experiments to prove both concepts are warranted as future work, especially tests of an actual BVLOS flight in which the UAVs autonomously interact with the heterogeneous airspace. Furthermore, adequate algorithms to fill the other layers in the architecture are also needed to support a complete BVLOS flight. This will further enrich the available CD&R approaches that can be selected for UAV operation in an integrated airspace. Therefore, on the basis of the research performed in this dissertation, it is concluded that safe integration of UAVs into the airspace is very much feasible. This conclusion is supported by the numerous simulations that have been conducted, demonstrating the possibility of reaching the airspace TLS with an autonomous CD&R system that is distributed and works independently in each vehicle. 
The low risk of UAV operations, even in heterogeneous airspace conditions, is further validated by the rarity of NMAC and MAC occurrences, to the point that an artificially exaggerated setup, such as a super conflict or a high-density airspace, is required to measure the operational safety. While many CD&R approaches for UAVs in the literature have not been designed for BVLOS flight in an integrated airspace, their algorithms can be adjusted to conform to the proposed taxonomy. An example of such an adjustment is presented in this dissertation by the extension of the VO-method into the SVO method, which fits the Cooperative approach, and the 3DVO method, which is designed for the Escape approach. With the large diversity of CD&R approaches in the literature, validation in a heterogeneous setup is a necessity, either by simulations or by actual flight experiments. Compared to mid-2011, when this research was initiated, by 2016 the commercial use of UAVs had become increasingly visible to the general public. Regulations are being updated to define UAVs' airworthiness and widen their area of operations. Operator awareness of the regulations is also increasing, as shown by the booming number of registered drone owners. At the same time, drone advocacy groups are assembling to push for regulatory policies that allow UAV operations, especially BVLOS flight. These developments indicate that UAV integration into the airspace is inevitable, and that CD&R systems to support safety in such airspace are urgently needed. Therefore, at some point it is perhaps best for the authorities to simply start accommodating BVLOS flight in the airspace, allowing both UAVs and their CD&R systems to mature based on the experience they can gain in real situations. 
As the history of manned-flight deregulation has shown, this can create a competitive environment that pushes both manufacturers and operators to continuously strive for safety improvements in an integrated airspace system.","Airspace Management; Airspace Integration; Autonomous Collision Avoidance; Conflict Detection and Resolution; Monte Carlo Simulation; Safety Analysis; Unmanned Aerial Vehicle; Velocity Obstacle Method","en","doctoral thesis","","978-94-6186-779-7","","","","","","","","","Control & Simulation","","",""
"uuid:22fbab3c-5da6-4a15-9275-ae01cd22f54f","http://resolver.tudelft.nl/uuid:22fbab3c-5da6-4a15-9275-ae01cd22f54f","Multiscale computational modeling of size effects in carbon nanotube-polymer composites","Malagu, M. (TU Delft Applied Mechanics; University of Ferrara)","Tralli, AM (promotor); Sluys, Lambertus J. (promotor); Benvenuti, Elena (copromotor); Simone, A. (copromotor); Delft University of Technology (degree granting institution)","2017","The development of carbon nanotube (CNT)-polymer composites calls for a better understanding of their physical and mechanical properties, which depend on the diameter of the embedded CNTs. Given that the experimental assessment of size effects is extremely difficult, the use of numerical models can be enormously helpful. However, since size effects might be observed both at the nano- and the macroscale, an adequate multiscale procedure is required.
In this thesis, numerical techniques are explored to develop a multiscale approach for the analysis of size effects in the elastic response of CNT-polymer composites. Atomistic simulations, such as molecular mechanics and molecular dynamics, are used for the characterization of the composites and their components at the nanoscale. The obtained results are then used to investigate size effects in the macroscopic properties of CNT-polymer composites using continuum models and efficient finite element techniques.
Molecular mechanics simulations on carbon nanotubes under tension show that their axial stiffness and axial strain field depend on the CNT diameter. Moreover, it is found that the axial strain field can be accurately reproduced using nonlocal continuum models if a suitable nonlocal kernel and optimal nonlocal parameters, which vary with the nanotube diameter, are used.
Although the numerical solution of nonlocal problems is typically challenging, higher-order B-spline finite elements overcome the issues encountered when standard approximation techniques are employed. Further, molecular dynamics simulations on CNT-polymer composites show that the CNT diameter alters the atomic structure and the mechanical properties of the ordered layer of polymer chains forming around the nanotube, known as the interphase. Such a layer has a significant impact on the mechanical properties of the composite. Although the role of the nanotubes during elastic deformation of the composite is negligible due to the weak nonbonded interface interactions, the interphase, thanks to its highly ordered atomic structure, is shown to enhance the mechanical properties of the composite. Here, molecular mechanics simulations at the nanoscale and the numerical solution of an equivalent continuum model at the macroscale indicate that the composite stiffness increases when the diameter of the carbon nanotubes is decreased.
When possible, the reliability of the results in this thesis has been assessed by means of analytical models and experimental or numerical results from the literature. In conclusion, this study proposes a computational framework to improve our understanding of the mechanical response of CNT-polymer composites and the size effects on their elastic properties.","carbon nanotubes; carbon nanotube-polymer composites; size effects; finite element method; atomistic simulations","en","doctoral thesis","","9789402804928","","","","","","","","","Applied Mechanics","","",""
"uuid:f8db79bd-3980-4797-8814-a863517d8fd7","http://resolver.tudelft.nl/uuid:f8db79bd-3980-4797-8814-a863517d8fd7","Versatile Structured Illumination Microscopy","Chakrova, N. (TU Delft ImPhys/Quantitative Imaging)","van Vliet, L.J. (promotor); Stallinga, S. (copromotor); Rieger, B. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6299-524-6","","","","","","","","","ImPhys/Quantitative Imaging","","",""
"uuid:325ebcfb-f920-400c-8ef6-21b2305b6920","http://resolver.tudelft.nl/uuid:325ebcfb-f920-400c-8ef6-21b2305b6920","Ice-induced vibrations of vertically sided offshore structures","Hendrikse, H. (TU Delft Applied Mechanics)","Metrikine, A. (promotor); Loset, Sveinung (promotor); Delft University of Technology (degree granting institution)","2017","Offshore developments in ice-covered waters, such as the Arctic Ocean or Baltic Sea, have received increasing attention from the petroleum and wind power industries over the past decade. Sustainable developments in such waters can contribute to a balanced energy future provided that the deployed offshore structures are designed to be safe. The potential development of ice-induced vibrations has to be considered in the design of bottom founded offshore structures with a vertically sided waterline cross-section subject to ice. These vibrations, originating from dynamic interaction between the ice and structure, can result in high global peak loads and significantly contribute to the fatigue of structures. A governing theory which can explain the development of ice-induced vibrations has not yet been defined, despite several decades of research. As a consequence the tools required for detailed design of structures subject to ice-induced vibrations are not yet available. The main objective of this study is to define a physical mechanism which can explain the development of ice-induced vibrations and is consistent with existing experimental and full-scale observations. A literature study and new experiments in the large ice-basin at HSVA in Hamburg have resulted in the identification of key features of the interaction process. A new theory has been proposed, namely that the variations in the contact area between the intact ice and structure govern ice-induced vibrations. These variations result from the velocity dependent deformation and failure behaviour of the ice. 
Based on this theory, a phenomenological model for the prediction of ice-induced vibrations has been developed, whose predictions have been shown to be consistent with experimental observations. Additionally, the limiting effect of ice buckling on ice-induced vibrations has been studied, and practical application of the model is illustrated on the basis of simulation examples.","Ice-induced vibrations; ice engineering; offshore structures","en","doctoral thesis","","978-94-6186-746-9","","","","","","","","","Applied Mechanics","","",""
"uuid:66004c3b-3cc7-46ec-a970-c9fc1145e889","http://resolver.tudelft.nl/uuid:66004c3b-3cc7-46ec-a970-c9fc1145e889","Shining light on human breath analysis with quantum cascade laser spectroscopy","Reyes Reyes, A. (TU Delft ImPhys/Optics)","Urbach, Paul (promotor); Bhattacharya, N. (promotor); Delft University of Technology (degree granting institution)","2017","In the search for new non-invasive diagnostic methods, healthcare researchers have turned their attention to exhaled human breath. Breath consists of thousands of molecular compounds in very low concentrations, on the order of parts per million by volume (ppmv), parts per billion by volume (ppbv), and parts per trillion by volume (pptv). They are the result of the different biological processes taking place inside the body. When a disease is present, the production of specific molecules is altered. In this work in particular, we investigate two cases. The first is the concentration of acetone in minors with type 1 diabetes (T1D). In the second case we compare the breath of three groups to establish significant differences and to identify relevant molecules. The groups under study are healthy children, children with asthma, and children with cystic fibrosis (CF).
The main challenges in human breath research are the detection of concentration changes in small quantities and the establishment of a direct relation between specific molecules and particular diseases. We use quantum cascade lasers (QCLs), a multipass cell and Mercury Cadmium Telluride (MCT) detectors to study the absorption of the molecular components of breath. We improve the identification of molecules by applying a multiline fitting algorithm.
The different molecules present in breath have a strong absorption signature in the mid-infrared. For this reason we use QCLs emitting in the region between 832 and 1262.55 cm-1. In this region each molecular species has a unique absorption fingerprint that allows its identification. The absorption is magnified by increasing the interaction distance between the light of the QCLs and the gas sample. We use a multipass cell with two astigmatic mirrors. The multiple reflections in the mirrors provide an effective interaction distance of 54.36 meters inside a volume of only 0.6 liters. For the detection we use MCT detectors directly because the QCLs emit a very specific wavenumber at a time. The absorption spectra are built by scanning the wavenumber of the QCLs twice: first with the multipass cell empty, to build a reference, and then with the breath sample to measure the absorption.
The scan of the QCLs eliminates the need for extra elements to separate the wavenumbers when building the absorption spectra. However, scanning over a broad wavenumber region introduces a new challenge: guaranteeing its repeatability. This includes the assurance that the QCLs emit the same wavenumbers with the same intensities in every single scan. Only by minimizing the variability between independent scans can we create reliable absorption spectra and improve the sensitivity of the setup. We use two MCT detectors to monitor the intensity fluctuations: one detector is dedicated to monitoring the intensity fluctuations of the QCLs, while the other measures the intensity of the QCLs after the light has crossed the multipass cell. The variation of the emitted wavenumbers causes independent scans to be warped and uncorrelated with respect to each other. We implement two methods to correlate the measurements taken with the empty multipass cell and the measurements with the breath samples: a scan correlation using selected wavenumbers and a scan correlation using semiparametric time warping. Both methods are successful in obtaining a meaningful absorption spectrum. The selection of wavenumbers is better suited to molecules with a smooth profile, and the semiparametric time warping method is more suitable for molecules with sharp absorption features. The wavenumber and intensity corrections give the system a noise equivalent absorption sensitivity (NEAS) of 2.99×10−7 cm−1 Hz−1/2. With this NEAS we can detect ppbv concentrations of acetone in the presence of 2% water in the same wavenumber region. The complexity of the gas mixture in breath makes the identification of specific molecular components difficult. We implement a multiline fitting algorithm to analyze specific molecules and determine their concentrations. We use this method to study the concentration of acetone and methane in the exhaled breath of healthy children. 
For acetone we use its absorption signature in the 1150 - 1250 cm-1 region. Our results show that the production of acetone in healthy children is below the standard range established for healthy adults, between 0.39 and 1.09 ppmv. However, the information available in this regard is limited, and therefore more studies should be performed. In the case of methane we use its absorption fingerprint between 1258 and 1262.5 cm-1. The methane concentration in the breath of the participants is below 1 ppmv, which classifies them as non-producers. Given the small number of participants, eleven, this result is in accordance with previous reports establishing that only 10% to 20% of children are methane producers. We perform a specific study to investigate the acetone concentration in the exhaled breath of T1D patients. We analyze the breath of two minors and one adult T1D patient, and the breath of one healthy volunteer. Simultaneously, we measure the glucose and ketone concentrations in blood to inspect their relation with acetone in exhaled breath. For each volunteer, we perform a series of measurements over a period of time, including overnight fasting of 11 ± 1 hours and, for the minors, during ketosis-hyperglycemia events. The results highlight the importance of performing personalized studies, because the response of the minors to the presence of ketosis was consistent but unique for each individual. As in the case of the healthy children mentioned above, we also find that the acetone concentration in the breath of T1D minors in stable conditions is lower than the standard range for healthy adults. This emphasizes the need to perform more studies with children, and specifically with T1D minors. We strongly believe that a better understanding of the production of acetone in exhaled breath can help to develop new diagnostic methods. For example, it can be used to detect chronic ketosis, which is a condition that many children present in the early stages of T1D. 
In many cases children live with chronic ketosis for years before being diagnosed with T1D. By detecting abnormal concentrations of acetone we can help to diagnose T1D earlier. In a separate study we explore the clinical applicability of our spectroscopic setup by comparing the exhaled breath of 35 healthy children, 39 children with stable asthma and 15 with stable CF, aged 6–18 years. We collect two to four exhaled breath samples in Tedlar bags and obtain their absorption spectrum in the region between 832 and 1262.55 cm^−1. The results show poor repeatability (Spearman’s ρ = 0.36 to 0.46) and poor agreement of the complete profiles. However, we identify wavenumber regions where the profiles are significantly different. Using these regions and the information from two molecular databases, we present a list of molecules that can be used to discriminate between healthy children and children with asthma or CF. Our suggestion is to perform more studies and to use the identified molecules as a basis for understanding the underlying inflammatory processes of asthma and CF. This study shows that the identification of the molecular components of exhaled breath is important and may be useful for developing new personalized treatments. Because scientists like to dream about the future, we also explore the future possibilities in exhaled breath research. We strongly believe the next generation of exhaled breath systems will be a hybrid of optical detection systems, electrochemical methods and nanotechnology. This idea is firmly supported by the latest developments in small hollow waveguides for lasers and the most advanced pre-concentration and filtering methods for gas samples. Furthermore, the growing interest in new, non-invasive medical systems is making exhaled breath research a very important player in the global economy. 
We cannot foresee all the benefits exhaled breath research can offer to society, but without doubt its value is immense.","Human breath analysis; Quantum cascade laser spectroscopy; Trace gas detection","en","doctoral thesis","","978-94-6186-767-4","","","","","","2017-01-20","","","ImPhys/Optics","","",""
"uuid:ca460c12-bf40-4f40-9240-d6c8aa5c37ca","http://resolver.tudelft.nl/uuid:ca460c12-bf40-4f40-9240-d6c8aa5c37ca","Monolithic 3D Wafer Level Integration: Applied for Smart LED Wafer Level Packaging","Koladouz Esfahani, Z. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); van Zeijl, H.W. (copromotor); Delft University of Technology (degree granting institution)","2017","","System-in-Package; 3D wafer-level integration; LED; high aspect ratio lithography; multi-step imaging; smart silicon interposer; side-wall photodiode; blue/UV selective photodetector; sensor readout; BiCMOS process; wafer-level optic; multidisciplinary simulation; optical simulation","en","doctoral thesis","","978-94-028-0513-0","","","","","","2019-01-19","","","Electronic Components, Technology and Materials","","",""
"uuid:c4b2c0ce-fe42-47c4-9103-c124c05bfcad","http://resolver.tudelft.nl/uuid:c4b2c0ce-fe42-47c4-9103-c124c05bfcad","Seeding Moral Responsibility in Ownership: How to Deal with Uncertain Risks of GMOs","Robaey, Z.H. (TU Delft Ethics & Philosophy of Technology)","van de Poel, I.R. (promotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-90-386-4205-5","","","","","","","","","Ethics & Philosophy of Technology","","",""
"uuid:fc197d29-f9bd-4b0e-a4d5-485344d8d429","http://resolver.tudelft.nl/uuid:fc197d29-f9bd-4b0e-a4d5-485344d8d429","Numerical Analysis and Experimental Verification of Stresses Building up in Microelectronics Packaging","Rezaie Adli, A.R. (TU Delft Emerging Materials; TU Delft Precision and Microsystems Engineering)","Jansen, K.M.B. (promotor); Ernst, L.J. (promotor); Delft University of Technology (degree granting institution)","2017","This thesis comprises a thorough study of the microelectronics packaging process by means of various experimental and numerical methods to estimate the process-induced residual stresses. The main objective of the packaging is to encapsulate the die, interconnections and the other exposed internal components, providing mechanical protection, heat dissipation, electrical insulation, etc. It is a three-stage process comprising encapsulation of the die, complete polymerization at a preset mold temperature, and cooling to room temperature. Thermosetting polymers are used as the encapsulant in this process due to their unique mechanical, thermal and electrical properties. The packaging results in residual stress build-up both during the molding and later due to the cyclic thermo-mechanical loading of the electronic or electromechanical devices in which the encapsulated package is fixed. These residual stresses are initiated by the crosslinking (curing) of the epoxy polymer during the molding. Crosslink formation is a property of thermosetting polymers that is accompanied by stiffness build-up and shrinkage under constrained boundary conditions. Moreover, the encapsulation molding is conducted at a high cure temperature (≈175 °C). Hence, the subsequent cooling to room temperature leads to further shrinkage of the cured polymer along with the other encapsulated package components, and the CTE mismatch between the layers adds to the total residual stress inside the package. 
The key to a reliable simulation lies in an accurate representation of the material and mechanical behavior of the polymer. In this thesis, the time-, temperature- and conversion-dependent behavior of the epoxy molding compound (EMC) is determined by various experimental methods, including DSC, DMA, rheometry and PVT, and the relevant material behaviors are modeled and implemented in 1D and 2D numerical methods. For verification of the numerical results, a novel experimental method is used that provides real-time stress measurement capability during packaging.","","en","doctoral thesis","","978-94-6186-777-3","","","","","","","","Precision and Microsystems Engineering","Emerging Materials","","",""
"uuid:f214f594-a21f-4318-9f29-9776d60ab06c","http://resolver.tudelft.nl/uuid:f214f594-a21f-4318-9f29-9776d60ab06c","Quantum Noise Effects in e-Beam Lithography and Metrology","Verduin, T. (TU Delft ImPhys/Charged Particle Optics)","Kruit, P. (promotor); Hagen, C.W. (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6186-782-7","","","","","","","","","ImPhys/Charged Particle Optics","","",""
"uuid:7a2d4e15-e28b-4645-94d0-64b972064c87","http://resolver.tudelft.nl/uuid:7a2d4e15-e28b-4645-94d0-64b972064c87","Thickness effect in composite laminates in static and fatigue loading","Lahuerta Calahorra, F. (TU Delft Applied Mechanics; TU Delft CITG Section Building Engineering)","Sluys, Lambertus J. (promotor); van der Meer, F.P. (copromotor); Nijssen, Rogier (copromotor); Delft University of Technology (degree granting institution)","2017","Thick laminates (above 6 mm) are increasingly present in large composite structures such as wind turbine blades. Designs are based on static and fatigue coupon tests performed on 1–4 mm thin laminates. However, a thickness effect has been observed in the limited available experimental data. For this reason, standard experimental data cannot automatically be transferred to thicker laminates.
Different factors are suspected to be involved in the decrease of the static and dynamic performance of thick laminates. These include the effect of self-heating, a mechanical scaling effect and the influence of the manufacturing process.
Self-heating during fatigue is related to the material's energy loss factor. During dynamic loading a certain percentage of the mechanical energy is dissipated into heat, leading to a rise in material temperature. When the temperature approaches the maximum service temperature of the material, a reduction in fatigue life can be observed. This work proposes an FE method to predict self-heating, which is validated using empirical data.
Scaling effects and coupon geometry influence the results of thickness-scaled coupon tests. The thickness effect was studied with the help of compression and tension tests on thickness-scaled coupons. In order to reduce the test effects of the scaled coupon tests, the coupon geometry and clamping system were designed for optimal load introduction.
The manufacturing process and curing cycles are reported as one of the leading causes of possible scaling effects. Through-thickness lamina properties were studied using the sub-laminates technique. In this way, it was possible to relate the in-plane lamina properties to the manufacturing process conditions. A relation between the mechanical properties and the process conditions is proposed.
In the case of static and fatigue properties, the sub-laminate tests show a large variation in resin-related properties, which depends on the manufacturing process. Scaled tests are studied from this point of view; the scaling effect is related to the manufacturing process, and the assumption of uniform strength fields is considered invalid for thick laminates in comparison with thin laminates.
Preferential flow also affects tracer transport in subsurface flow systems. The celerity in unsaturated flow represents the maximum water velocity in a soil, and it may be used to predict the first arrival time of a conservative tracer. The celerity function is derived from the soil hydraulic conductivity function for unsaturated flow, and is used to derive the breakthrough curve of a conservative tracer under advective transport. Analysis of the bimodal hydraulic function for a dual-permeability model shows that different parameter sets may result in similar soil hydraulic conductivity behavior, but distinctly different celerity behavior.
In Chapter 4, a 2D hydro-mechanical model is developed using the COMSOL multiphysics modeling software to couple a dual-permeability model with a linear-elastic model. Numerical experiments are conducted for two different rainfall events on a synthetic slope. The influence of preferential flow on slope stability is quantified by comparing the simulated slope failure area for the single-permeability and dual-permeability models. The single-permeability model only simulates regular wetting fronts propagating downward, without representing preferential flow. In contrast, the dual-permeability model can simulate the influence of preferential flow, including the enhanced drainage that facilitates pressure dissipation under low-intensity rainfall, as well as the fast pressure build-up that may trigger landslides under high-intensity rainfall. The dual-permeability model resulted in a smaller failure area than the corresponding single-permeability model under low-intensity rainfall, whereas it resulted in a larger failure area and earlier failure timing than the corresponding single-permeability model under high-intensity rainfall.
In Chapter 5, a parsimonious 1D hydro-mechanical model is developed for field application by coupling a 1D dual-permeability model with an infinite-slope stability analysis approach. The numerical model is benchmarked against HYDRUS-1D for the simulation of non-equilibrium flow. In Chapter 6, the model is applied to simulate the pressure response in a clay-shale slope located in northern Italy. In the study area, preferential flow paths such as tension cracks and macropores are widespread. Intense rain pulses in the summer can cause nearly instant pressure responses, which may reactivate landslide movement. The water exchange coefficient of the dual-permeability model is calibrated for two single-pulse rainfall events in the summer, while all other parameters are obtained from field investigations. Results from the dual-permeability model are compared to previously published outcomes using a linear-diffusion equation, where the diffusion coefficient was calibrated for each rainfall event separately. The dual-permeability model explicitly accounts for the influence of both matrix flow and preferential flow on water flow and pressure propagation in variably saturated soils, and is able to simulate the measured pressure response to multi-pulse rainfall events quite well, even in winter. Results indicate that the dual-permeability model may be more appropriate for the prediction of landslide triggering when the pore water pressure response is influenced by preferential flow under high-intensity rainfall.
A ship is constructed by first building large steel blocks, referred to as sections. Steel parts and profiles are welded together to create sections during the section building process. At the conclusion of section building, time is reserved for installing components in a section. The hull of the ship is formed by welding these sections together on a slipway or drydock. This process is referred to as erection. European shipyards mainly focus on planning the steel-related tasks of the section building and erection processes. However, their workload has shifted in recent years to become increasingly dominated by outfitting tasks. This mismatch further worsens the outfitting-related problems facing these shipyards.
Automatic production planning can potentially mitigate some of the main problems facing European shipyards building complex ships. However, to maximize the effectiveness of such an approach, an integrated method must be created which considers all relevant portions of the shipbuilding process: erection, section building, and outfitting. This dissertation develops an Integrated Shipbuilding Planning Method. This method uses the characteristics of a shipyard, the geometry of a ship, and major project milestones to automatically generate an integrated erection, section building, and outfitting plan. The Integrated Shipbuilding Planning Method was not designed to replace existing shipyard planners, but instead to enhance their decision-making abilities. The method aims to provide these planners with a set of high-quality production schedules that can be used as a starting point for drafting the initial plan.
The Integrated Shipbuilding Planning Method is founded on a mathematical model of the shipbuilding process. This model was synthesized from existing literature, expert opinion, and an analysis of the operations of a typical European shipyard. The model explicitly defines the geometric, operational, and temporal relationships that constrain the shipbuilding process. Novel techniques were developed to automatically extract several of these constraints from the data readily available in a shipyard. The mathematical model also defines the objectives used to measure the quality of a production schedule. A combination of multi-objective genetic algorithms and custom-designed heuristics was used to solve the proposed mathematical model. This solution approach tailored historically successful optimization techniques to the specific problem structure of scheduling shipbuilding tasks. Although the developed solution approach does not guarantee that the optimal solution will be found, it allows sufficiently high-quality solutions to be discovered in reasonable computational times.
The Integrated Shipbuilding Planning Method was evaluated with a test case of a pipelaying ship recently delivered from a Dutch shipyard. This method created a variety of high-quality production plans of both the erection and section building processes in a reasonable computational time. The automatically generated production schedules significantly outperformed those manually generated by the shipyard planners. Especially large gains were seen with respect to the evenness of the outfitting workload and the time available to install components on the slipway. Furthermore, the negligible run time allows planners to quickly make adjustments and test different scenarios. The input data required for creating the section building and erection schedules matches the information that shipyard planners have access to at the start of a new project. Not only was the Integrated Shipbuilding Planning Method able to optimize the planning of the erection and section building independently, it was also shown to be capable of concurrently optimizing the planning of both processes.
Implementing the Integrated Shipbuilding Planning Method in a shipyard for automatically scheduling the section building and erection processes should be relatively straightforward. This method works with the same data (both input and output) as the shipyard planners drafting the initial production schedules. A shipyard would still need to adapt the method to their own process by incorporating their own production data; modifying the constraints and objective to match their production process; tuning the parameters of the solution technique; and implementing the result in the work flow of their planners. However, the global approach and algorithms underlying the solution technique are directly applicable.
A detailed outfitting schedule was also created for the test case ship using the Integrated Shipbuilding Planning Method. Although a high-quality solution was found, the required computational time was considerable due to the large problem size and the complex nature of the relationships constraining the installation of outfitting components. The detailed outfitting schedule was used to determine the influence of the outfitting process on erection and section building. To generate the detailed outfitting schedule, a high level of geometric detail was required because such a schedule is defined on the component level. Such detailed geometry, however, is generally not fully available prior to the onset of outfitting due to the concurrent nature of the detailed engineering and production processes of modern European shipyards. The full implementation of the Integrated Shipbuilding Planning Method for automatically generating detailed outfitting schedules is currently limited by the extensive computational requirements and the timely availability of detailed geometric data.
The Integrated Shipbuilding Planning Method was also used to examine two production scenarios to demonstrate its applicability in making strategic decisions. The method was first used to evaluate the performance of three different block building strategies in relation to the erection and section building processes. A recommendation was given for the best strategies assuming the shipyard prioritized having a level resource demand. The effect of the implementation of multi-skilled workers on the outfitting process was also examined. This scenario determined the effect of six different types of multi-skilled mounting teams on the total number of mounting teams required to build the test case ship. In both cases, the scenario analyses provided additional, useful information which could aid a shipyard in making strategic decisions. Because strategic decisions are generally based on historical data, the timely availability of detailed geometric data should not hinder the applicability of the Integrated Shipbuilding Planning Method for supporting such decisions.
The Integrated Shipbuilding Planning Method is novel for several reasons. First, this method is the only automatic planning method developed for shipbuilding that fully incorporates the outfitting process. This method is also the first example of a scheduling methodology that concurrently plans the erection and section building tasks of a shipbuilding project. Furthermore, this approach demonstrates the feasibility of using a priority-based heuristic function in a multi-objective genetic algorithm to effectively schedule a large set of production tasks. Lastly, the production scenarios examined using the Integrated Shipbuilding Planning Method prove that it is possible for a shipyard to use optimization techniques to support strategic planning decisions.","","en","doctoral thesis","","978-94-6186-771-1","","","","","","","","","Ship Design, Production and Operations","","",""
"uuid:271c4360-e242-42aa-8974-72fe012365ee","http://resolver.tudelft.nl/uuid:271c4360-e242-42aa-8974-72fe012365ee","Subgrid is Dancing with Sediment: A Full Subgrid Approach for Morphodynamic Modelling","Volp, N.D. (TU Delft Environmental Fluid Mechanics)","Stelling, G.S. (promotor); Pietrzak, J.D. (promotor); van Prooijen, Bram (copromotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6233-506-6","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:89a78ae9-7ffb-4260-b25d-698854210fa8","http://resolver.tudelft.nl/uuid:89a78ae9-7ffb-4260-b25d-698854210fa8","Added value of distribution in rainfall-runoff models for the Meuse basin","de Boer-Euser, Tanja (TU Delft Water Resources)","Savenije, Hubert (promotor); Hrachowitz, M. (copromotor); Delft University of Technology (degree granting institution)","2017","Why do equal precipitation events not lead to equal discharge events across space and time? The easy answer would be because catchments are different, which then leads to the second question: Why do hydrologists often use the same rainfall-runoff model for different catchments? Probably because specifying and distributing hydrological processes across catchments is not straightforward. It requires catchment data and proper tools to evaluate the details and spatial representation of the modelled processes. However, making a model more specific and distributed can improve the performance and predictive power of the hydrological model. Therefore, this thesis evaluates the added value of including spatial characteristics in rainfall-runoff models.
Most model experiments in this thesis are carried out in the Ourthe catchment, a subcatchment of the Meuse basin. This catchment has a strong seasonal behaviour, responds quickly to precipitation and has a large influence on peak flows in the Meuse. It has a variety of landscapes, among which steep forested slopes and flat agricultural fields.
This thesis proposes a new evaluation framework, the Framework to Assess Realism of Model structures (FARM), based on different characteristics of the hydrograph (hydrological signatures). A key element of this framework is that it evaluates both performance (good reproduction of signatures) and consistency (reproduction of multiple signatures with the same parameter set). This framework is used together with various other model evaluation tools to evaluate models at three levels: internal model behaviour, model performance and consistency, and predictive power.
The root zone storage capacity (Sr) of vegetation is an important parameter in conceptual rainfall-runoff models, as it largely determines the partitioning of precipitation into evaporation and discharge. The distribution of a climate-derived Sr-value (i.e., based on precipitation and evaporation) was compared with Sr-values derived from soil samples in 32 New Zealand catchments. The comparison is based on spatial patterns and a model experiment. It is concluded that climate is a better estimator for Sr than soil, especially in wet catchments. Within the Meuse basin, climate-derived Sr-values have been estimated as well; applying these newly derived storage estimates improved model results.
Two types of distribution have been tested for the Ourthe catchment: the distribution of meteorological forcing and the distribution of model structure. The distribution of forcing was based on spatially variable precipitation and potential evaporation. These were averaged at different levels within the model, thereby creating four levels of model state distribution. The model structure was distributed by using two hydrological response units (HRUs), representing wetlands and hillslopes. Eventually, a lumped and a distributed model structure were compared, each with four levels of model state (forcing) distribution. From this, it is concluded that distribution of the model structure is more important than distribution of the forcing. However, if the model structure is distributed, the forcing should be distributed as well.
Knowing that distribution of model structure is relevant, more detailed process conceptualisations have been tested for the Ourthe Orientale, a subcatchment of the Ourthe. An additional agricultural HRU was introduced for which Hortonian overland flow and frost in the topsoil are assumed to be relevant. In addition, a degree-day based snow module has been added to all HRUs. Adding these process conceptualisations improved the performance and consistency of the model on an event basis. However, the implemented processes and the related signatures are sensitive to errors in forcing and model outliers and should therefore be implemented carefully.
This thesis finishes with two explorative comparisons: the first comparing the newly developed model of the Ourthe Orientale catchment with other catchments; the second comparing the newly developed model with other models, including the HBV configuration currently used for operational forecasting in the Meuse basin. These comparisons were carried out based on visual inspections of parts of the hydrograph. The results show that the newly developed model can be applied in neighbouring catchments with similar performance. The comparison with other models demonstrates that a very quick overland flow component and a parallel configuration of fast and slow runoff-generating reservoirs are important to reproduce the dynamics of the hydrograph related to different time scales. Both aspects are included in the newly developed model. As a result, the newly developed model is better able to reproduce most of the dynamics of the hydrograph than the operational HBV configuration used at the moment of writing.
Distribution and detailed process conceptualisation are very beneficial for rainfall-runoff modelling of the Ourthe catchment. However, they should be applied with care. Conceptual models are a strong simplification of reality. When confronting them only with discharge data, there is a risk of misinterpreting other hydrological processes.
This thesis suggests two possible opportunities to further improve conceptual models. First, catchment understanding could be increased by adding more physical meaning to the models, such as the climate derived root zone storage capacity. And second, remote sensing and plot scale data could be combined to link hydrological processes at different scales. In this way conceptual models can probably be used to get more insight into scaling issues, which occur when moving from hillslope to catchment scale.
phase of this research work, simulated aircraft-based volcanic ash measurements are assimilated into a transport model to identify the potential benefit of this kind of observations in an assimilation system. The results show that assimilating aircraft-based measurements can improve the estimated state of ash clouds and can provide an improved forecast. We also show that, for advice on the aeroplane flying level, aircraft-based measurements should preferably be taken at that level. Furthermore, it is shown that in order to provide acceptable advice to aviation decision makers, accurate knowledge of the uncertainties of ESPs and measurements is of great importance.
The forecast accuracy of distal volcanic ash clouds is important for providing valid aviation advice during volcanic ash eruptions. However, because the distal part of a volcanic ash plume is far from the volcano, the influence of eruption information on this part becomes rather indirect and uncertain, resulting in inaccurate volcanic ash forecasts in these distal areas. In this thesis, we use real-life aircraft in situ observations, measured in the north-west part of Germany during the 2010 Eyjafjallajökull eruption, in an ensemble-based data assimilation system to investigate the potential improvement of the forecast accuracy for the distal volcanic ash plume. We show that the error of the analyzed volcanic ash state can be significantly reduced by assimilating real-life in situ measurements. After assimilation, it is shown that the model-based aviation advice for Germany, the Netherlands and Luxembourg can be improved. We suggest that with suitable aircraft measuring once per day across the distal volcanic ash plume, the description and prediction of volcanic ash clouds in these areas can be improved significantly.
Among data assimilation approaches, the ensemble Kalman filter (EnKF) is a well-known and popular method. A proper covariance localization strategy in the analysis step of the EnKF is essential for reducing the spurious covariances caused by the finite ensemble size, as shown in this application for the assimilation of aircraft in situ measurements. After analyzing the characteristics of the physical forecast error covariances, we present a two-way tracking approach to define the localization matrix for covariance localization. The results show that the Two-way-tracking Localized EnKF (TL-EnKF) effectively maintains the correctly specified physical covariances and largely reduces the spurious ones. The computational cost of TL-EnKF is also evaluated and is shown to be advantageous for both serial and parallel implementations. Compared to the commonly used distance-based covariance localization, the two-way tracking approach is shown to be more suitable. In addition, covariance inflation is verified as an additional improvement to TL-EnKF to achieve more accurate results.
A timely prediction requires that the computations of the data assimilation system can be performed quickly (at least faster than wall-clock time). We therefore investigate strategies for accelerating the data assimilation algorithm. Based on evaluations of the computational time, the analysis step of the assimilation turns out to be the most expensive part. After a study of the characteristics of the ensemble ash state, we propose a mask-state algorithm, which records the sparsity information of the full ensemble state matrix and transforms the full matrix into a relatively small one, reducing the computational cost of the analysis step. Experimental results show that the mask-state algorithm significantly speeds up the analysis step. Subsequently, the total amount of computing time for volcanic ash data assimilation is reduced to an acceptable level. The mask-state algorithm is generic and can thus be embedded in any ensemble-based data assimilation framework. Moreover, ensemble-based data assimilation with the mask-state algorithm is promising and flexible, because it implements exactly the standard data assimilation without any approximation, and it achieves satisfactory performance without any change to the full model.
Infrared satellite measurements of volcanic ash mass loadings are often used as input observations for the assimilation scheme. However, these satellite-retrieved data are often two-dimensional (2D) and cannot easily be combined with a three-dimensional (3D) volcanic ash model to improve the volcanic ash state. By integrating available data, including ash mass loadings, cloud top heights and thickness information, we propose a satellite observational operator (SOO) that translates satellite-retrieved 2D volcanic ash mass loadings into 3D concentrations at the top layer of the ash cloud. Ensemble-based data assimilation is used to assimilate the extracted measurements of ash concentrations. The results show that satellite data assimilation can force the volcanic ash state to match the satellite observations, and that it improves the forecast of the ash state. Comparison with highly accurate aircraft in situ measurements shows that the effective duration of the improved volcanic ash forecasts is about half a day.","Data assimilation; volcanic ash forecast; aircraft data; satellite data; high performance computing; spurious correlations","en","doctoral thesis","","978-94-92516-34-3","","","","","","","","","Mathematical Physics","","",""
"uuid:21528dbe-4238-4d82-be9d-a533856a489f","http://resolver.tudelft.nl/uuid:21528dbe-4238-4d82-be9d-a533856a489f","Deterministic Prediction of Waves and Wave Induced Vessel Motions. Future telling by using nautical radar as a remote wave sensor","Naaijen, Peter","","2017","","hydrodynamics","","doctoral thesis","","","","","","","","2022-12-31","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Ship Hydromechanics and Structures","","",""
"uuid:2a1f0bd2-ac41-4707-8d3a-b7f2c74d8ad4","http://resolver.tudelft.nl/uuid:2a1f0bd2-ac41-4707-8d3a-b7f2c74d8ad4","Impacts of rudder configurations on inland vessel manoeuvrability","Liu, Jialun","Hopman, H. (promotor)","2017","","hydrodynamics","","doctoral thesis","","","","","","","","","Mechanical, Maritime and Materials Engineering","Marine and Transport Technology","Ship Design, Production and Operation","","",""
"uuid:58b35a4e-c6b1-4fd4-942b-601fdbb38e77","http://resolver.tudelft.nl/uuid:58b35a4e-c6b1-4fd4-942b-601fdbb38e77","The Design of Integrated Frequency Sources and their Application to Wideband FM Demodulation","Visweswaran, A. (TU Delft Electronics)","Long, J.R. (promotor); Delft University of Technology (degree granting institution)","2017","","","en","doctoral thesis","","978-94-6186-801-5","","","","","","","","","Electronics","","",""
"uuid:95a303ae-565c-41e9-91a7-0dea4d4207a1","http://resolver.tudelft.nl/uuid:95a303ae-565c-41e9-91a7-0dea4d4207a1","Multiscale Computational Modeling of Brittle and Ductile Materials under Dynamic Loading","Karamnejad, A. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Delft University of Technology (degree granting institution)","2016","The computational homogenization method makes it possible to derive the overall behavior of heterogeneous materials from their local-scale response. In this method, a representative volume element (RVE) is assigned to a macroscopic material point and the constitutive law for the macroscopic model at that point is obtained by solving a boundary value problem for the RVE. However, the standard computational homogenization scheme cannot be used when strain localization occurs and does not account for dynamic effects at the local scale. Furthermore, in the computational homogenization scheme, at each iteration a boundary value problem must be solved for the RVEs associated with the integration points of the macroscopic elements, which leads to high computational cost. When the problem is nonlinear (material and/or geometrical nonlinearities), the computational cost may become higher than that of direct numerical simulation (DNS).
This study aims at developing computational and numerical homogenization schemes which account for strain localization, dynamic effects at the local-scale and large deformations and strains. Furthermore, strategies are presented to decrease the computational cost while preserving accuracy. Different heterogeneous structures consisting of quasi-brittle materials, hyperelastic materials and polymer materials are studied and proper homogenization schemes are presented.","","en","doctoral thesis","","978-94-6186-760-5","","","","","","","","","Applied Mechanics","","",""
"uuid:91549107-7a2e-470c-8e08-cc3aa7200f42","http://resolver.tudelft.nl/uuid:91549107-7a2e-470c-8e08-cc3aa7200f42","Process Improving in Sleeve Gastrectomy","van Rutte, P.W.J. (TU Delft Applied Ergonomics and Design)","Goossens, R.H.M. (promotor); Nienhuijs, SW (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:c107cc92-d275-45df-ad56-b754e8ead98c","http://resolver.tudelft.nl/uuid:c107cc92-d275-45df-ad56-b754e8ead98c","Robustness of complex networks: Theory and application","Wang, X. (TU Delft Network Architectures and Services)","Van Mieghem, P.F.A. (promotor); Kooij, Robert (promotor); Delft University of Technology (degree granting institution)","2016","Failures of networks, such as power outages in power systems or congestion in transportation networks, paralyse our daily life and introduce tremendous cascading effects on our society. Networks should be constructed and operated in a way that is robust against random failures and deliberate attacks.
We study how to add a single link to an existing network such that the robustness of the network is maximally improved among all possibilities. A graph metric, the effective graph resistance, is employed to quantify the robustness of the network. Though exhaustive search guarantees the optimal solution, its computational complexity is high and does not scale with network size. We propose strategies that take into account the structural and spectral properties of networks and indicate links whose addition results in a high robustness level.","Complex Networks; Robustness of Networks; Graph Spectra; Power Grids; Metro Networks; Line Graph; Eigenvectors/Eigenvalues; Interdependent Networks","en","doctoral thesis","","978-94-6186-775-9","","","","","","","","","Network Architectures and Services","","",""
"uuid:31e9f07b-4070-4a53-8c82-57c26a2bcae6","http://resolver.tudelft.nl/uuid:31e9f07b-4070-4a53-8c82-57c26a2bcae6","Methods for Dynamic Contrast Enhanced MRI","van Schie, J.J.N. (TU Delft ImPhys/Quantitative Imaging)","van Vliet, L.J. (promotor); Stoker, J (promotor); Vos, F.M. (copromotor); Lavini, C (promotor); Delft University of Technology (degree granting institution)","2016","Dynamic Contrast Enhanced MRI is an important technique to assess the pharmacokinetic properties of tissues. This thesis addresses two major steps necessary for quantitative DCE-MRI: the estimation of the tissue’s T1-time and local B1-field strength, and the estimation of the time-dependent concentration of contrast agent in the blood supply to the tissue of interest. In quantitative pharmacokinetic analysis, the perfusion and vascularization of tissues are estimated by measuring the response to an intravenous injection of contrast agent. This analysis relies on knowledge of the concentrations of contrast agent in both the tissue and in the blood perfusing the tissue. The contrast agent affects the T1 relaxation time of the tissue, and if the T1-time of a tissue is known, the concentration profile can be computed. However, local B1-inhomogeneities can affect the MRI signal strength, complicating the measurement of T1 using conventional methods. Furthermore, the inflow of fresh blood into the field of view causes an additional, location dependent signal enhancement in the blood, which makes a direct measurement of the T1-time (and thus the concentration) in blood impossible. This thesis introduces a new method to estimate a T1-map of tissues in the presence of B1-inhomogeneities. We do this by combining two MRI scans that can each be acquired within breath-holds: one that yields a precise T1-map, though biased by the inhomogeneous B1-field; and one that delivers an unbiased, but imprecise estimate. 
Combining the information of these two scans yields an estimate of the B1-field, which is then used to correct the T1-map. We validate our method in a phantom study, and in an in vivo study. We found that the proposed method successfully merges the high resolution of the first method with the insensitivity to B1-inhomogeneities of the second. This thesis also introduces a new method to estimate the time-dependent concentration of contrast agent in blood (i.e., the arterial input function (AIF)), which is affected by signal enhancement due to the inflow effect. We do this by first estimating the number of RF-pulses by incorporating knowledge about the average AIF in a population. We then use the number of pulses to re-estimate the concentration from the measured MRI signal, thereby correcting for the inflow effect. We validate our method by means of Monte Carlo simulations and with a controlled flow phantom experiment. We then apply our method to two patient datasets, and use the estimated arterial input function for pharmacokinetic modelling. The first dataset consisted of patients with spine-related injuries, and was acquired under a variety of scan settings to assess the method’s robustness. The second dataset consisted of patients with Crohn’s Disease who had a clinically relevant CDEIS score available. In both datasets, we found that our method yields realistic pharmacokinetic model parameters. In contrast, estimating the AIF from a distally placed region of interest, as is often done in the literature, led to large variation and unrealistic parameters. Furthermore, in the Crohn’s patients we found a better correlation between the estimated pharmacokinetic parameter Ktrans and the CDEIS score, compared to traditional methods.
Though the rationale for developing these methods was the presence of B1-inhomogeneities and pronounced inflow effects in the aorta, other applications of pharmacokinetic modelling (e.g., in other parts of the body) may benefit from our methods, since they are generally applicable.","","en","doctoral thesis","","978-94-6295-521-9","","","","The work was financially supported by VIGOR++ (European Union’s Seventh Framework Program, No. 270379).","","","","","ImPhys/Quantitative Imaging","","",""
"uuid:2ca107b4-202d-4638-a044-d45649b89275","http://resolver.tudelft.nl/uuid:2ca107b4-202d-4638-a044-d45649b89275","Automatic 3D Routing for the Physical Design of Electrical Wiring Interconnection Systems for Aircraft","Zhu, Z. (TU Delft Flight Performance and Propulsion)","la Rocca, G. (promotor); van Tooren, Michel (promotor); Delft University of Technology (degree granting institution)","2016","Harness 3D routing is one of the most challenging steps in the design of the aircraft Electrical Wiring Interconnection System (EWIS), due to the intrinsic complexity of the EWIS, the increasing number of applicable design constraints, and its dependency on design changes of the airframe and installed systems. The current routing process employed in EWIS design is largely based on the manual work of expert engineers, partially supported by conventional CAD systems. As a result, the routing process is quite inefficient, error prone and unable to deliver optimal solutions. Although many harness components are selected from catalogues and the design process is largely repetitive and rule based, it has been found that no or only very limited automation solutions, which could significantly decrease the workload of engineers and increase their efficiency, are currently available. In this research, an innovative approach is proposed to address 3D routing automation as an optimization problem. Knowledge Based Engineering (KBE) and optimization methods are proposed to achieve minimum-cost routing solutions that satisfy all relevant design rules and constraints.
The basic idea is to achieve the optimal EWIS routing solutions by optimizing the position of the harnesses clamping points, which are used as way-points to route the harnesses inside the aircraft digital mock-up. The challenge to solve this optimization problem is that the number and initial value of design variables, namely the number and position of clamping points, are not known a priori. In order to handle this challenge, a two-step, hybrid optimization strategy has been devised. The first step, called Initialization, uses a road map based path finding method to generate a preliminary harness definition, including the required number and preliminary position of its clamping points. The second step, called Refinement, uses a conventional optimization method to move the position of the clamps and refine the preliminary harness definition aiming for the minimum cost and the satisfaction of all the design constraints. This approach has been implemented into a KBE application and tested on several routing cases. The results demonstrate that the proposed method is capable of handling cases of representative geometric complexity and design constraints and delivering proper 3D harness models in full automation.
Recent advances in mass spectrometry have led to a draft of the human proteome. With current mass spectrometry based techniques, these types of large scale studies remain an enormous effort. Therefore, there is a great need for breakthrough technologies to push proteomics from fundamental research into the clinic.
Genomics has benefitted from fast and inexpensive emerging single-molecule techniques. We envision similar effects for single-molecule protein sequencing. In this thesis we present our technology that will allow us to analyze protein expression profiles of samples as small as a single cell with large dynamic range.
Back in 2011, when this project was initiated, there was hardly any literature available on this topic. However, in recent years more research groups have openly shifted their focus to single-molecule protein sequencing. In Chapter 1, we give an overview of recent efforts to establish single-molecule protein sequencing. The foremost reason for the absence of highly sensitive and high-throughput protein sequencing techniques is the complexity of primary protein structures compared to DNA/RNA molecules. Whereas DNA and RNA consist of four unique building blocks, proteins are built from 20 distinct amino acids.
Independent of the read-out method of choice, this requires the detection of 20 distinguishable signals, which is a non-trivial challenge. Fortunately, only a limited number of proteins occurs compared to the theoretical number that could be created using 20 unique building blocks. While the exact number of protein-coding genes in the human genome is still under debate, it is believed to be roughly 20,000, resulting in a finite number of protein products. This, together with protein databases such as UniProt, allows for an alternative way of identifying protein sequences. Rather than detecting every single element, as is essential for DNA sequencing, we choose to focus on detecting the sequence of a subset of elements.","Protein sequencing; proteomics; single-molecule fluorescence; ClpXP","en","doctoral thesis","","978-90-8593-281-9","","","","Casimir PhD Series, Delft - Leiden 2016-37","","","","","BN/Chirlmin Joo Lab","","",""
"uuid:43aceb1b-b125-4f05-ad37-102fa1c388f7","http://resolver.tudelft.nl/uuid:43aceb1b-b125-4f05-ad37-102fa1c388f7","Stochastic Optimal Control Based on Monte Carlo Simulation and Least-Squares Regression","Cong, F. (TU Delft Numerical Analysis)","Oosterlee, C.W. (promotor); Delft University of Technology (degree granting institution)","2016","In the financial engineering field, many problems can be formulated as stochastic control problems. A unique feature of the stochastic control problem is that uncertain factors are involved in the evolution of the controlled system and thus the objective function in the stochastic control is typically formed by an expectation operator. There are in general two approaches to solve this kind of problem. One can reformulate the problem as a deterministic problem and solve the corresponding partial differential equation. Alternatively, one calculates conditional expectations occurring in the problem by either numerical integration or Monte Carlo methods.
We focus on solving various types of multi-period stochastic control problems via the Monte Carlo approach. We employ the Bellman dynamic programming principle so that a multi-period control problem can be transformed into a composition of several single-period control problems that can be solved recursively. For each single-period control problem, conditional expectations with different filtrations need to be calculated. In order to avoid nested simulation (i.e. Monte Carlo simulation within a Monte Carlo simulation), which may be very time consuming, we implement Monte Carlo simulation and cross-path least-squares regression. So-called “regress-later” and “bundling” approaches are introduced in our algorithms to make them highly accurate and robust. In most cases, high quality results can be obtained within seconds.","Stochastic optimization; portfolio management; Monte Carlo simulations; least squares regression","en","doctoral thesis","","978-94-6186-753-7","","","","","","","","","Numerical Analysis","","",""
"uuid:820d9e6b-4ac4-4283-bbcf-cc090d17fa2c","http://resolver.tudelft.nl/uuid:820d9e6b-4ac4-4283-bbcf-cc090d17fa2c","An Integrated Approach to Optimised Airport Environmental Management","Heblij, S.J. (TU Delft Air Transport & Operations)","Curran, R. (promotor); Visser, H.G. (copromotor); Delft University of Technology (degree granting institution)","2016","Airports around the world continue to face issues related to the environmental impact of aviation. Mitigation measures have therefore been implemented at many of these airports. Attaining an optimal combination of these mitigation measures is a complex process and because of these complexities, this process does not always result in the most efficient solution in terms of environmental impact.
It is expected that some of the inefficiencies of mitigation measures can be eliminated by using a process that is based on three main principles: to use mathematical optimisation in order to select the best mitigation options, to evaluate multiple performance areas simultaneously, and to evaluate multiple mitigation options at multiple levels of aggregation simultaneously. These three principles have therefore been implemented as capabilities in an integrated decision support system, to determine whether such a system could help improve the airport environmental management process.
Based on the results obtained with the developed support system, the benefits resulting from each of the three capabilities are demonstrated. But ultimately, it is shown that especially the combination of these three capabilities, integrated into a single support system, contributes to improving the airport environmental management process.","air traffic; Environmental; optimisation; Noise","en","doctoral thesis","","978-90-826306-0-2","","","","","","2016-12-20","","","Air Transport & Operations","","",""
"uuid:12fc9d70-990e-43f0-930e-96f9a1b2879d","http://resolver.tudelft.nl/uuid:12fc9d70-990e-43f0-930e-96f9a1b2879d","Iron Studies in Man using Instrumental Neutron Activation Analysis and Enriched Stable Activable Isotopes","Yagob Mohamed, T.I. (TU Delft RST/Applied Radiation & Isotopes)","Wolterbeek, H.T. (promotor); van de Wiel, A. (promotor); Delft University of Technology (degree granting institution)","2016","Iron is an essential trace element involved in many processes in the human body. Some disorders like iron deficiency anaemia and haemochromatosis directly result from changes in iron status, while on the other hand iron metabolism changes during illness. Since these adjustments in the iron handling of the body may have consequences for the clinical outcome and treatment of patients, a reliable and accurate test to measure iron concentrations and to study iron metabolism in normal and pathological conditions is required. A number of studies in this thesis demonstrate that instrumental neutron activation analysis (INAA) is an adequate method to reliably measure even low concentrations of iron, not only in blood, but also in urine, faeces and red blood cells. A great advantage of this technique, in contrast to techniques such as mass spectrometry, is that materials in which iron is to be measured hardly need preparation before measurement. It is even possible to measure iron concentrations in complete meals without the need to take small samples. INAA is also able to measure low concentrations of an enriched stable iron isotope, facilitating the use of such an isotope in clinical studies without exposure to radiation. In a first clinical study using INAA it could be demonstrated that Sudanese patients with severe iron deficiency anaemia also have severe zinc deficiency.
Since INAA requires access to a nuclear reactor facility, the technique should be considered more suitable for research than for routine use.","iron metabolism disorders; INAA; ICP-MS; enriched stable isotopes","en","doctoral thesis","","978-94-6295-584-4","","","","","","","","","RST/Applied Radiation & Isotopes","","",""
"uuid:b5902d0d-a315-4bbb-a639-6bbdc4c0e733","http://resolver.tudelft.nl/uuid:b5902d0d-a315-4bbb-a639-6bbdc4c0e733","Public Infrastructure in China: Explaining Growth and Spatial Inequality","Yu, N. (TU Delft Organisation & Governance)","de Jong, W.M. (promotor); de Bruijn, J.A. (promotor); Storm, S.T.H. (copromotor); Delft University of Technology (degree granting institution)","2016","Public infrastructure is often mentioned as a key to promoting economic growth and development. This belief has been supported by the observation of rich countries, such as the U.S., Japan and those in Western Europe, where plenty of infrastructure was developed during times of rapid economic growth. China has been one of the world’s fastest-growing and most important emerging economies in recent decades, with a good record of public infrastructure provision. However, China’s transition to a market-based economy has created new problems, among which is the growing regional inequality in per capita income. The interior region (near west) and far western regions lag far behind the coastal region in economic progress. Both theoretical and empirical evidence could help explain the economic growth and the increasing regional disparity in China.
To answer these questions, the book is organized in the following way: in chapter 1 the regional distribution pattern of public infrastructure and economic development in China is introduced, the problem of infrastructure-led growth and disparity is diagnosed, and the research question is posed; in chapter 2 the causal linkages between transport infrastructure and economic growth in China are determined at national and regional levels separately; after identifying the causality between transport infrastructure and economic development, chapter 3 estimates the impact of transport stock on overall economic growth, and on growth at the regional level as well; the long-run effects of educational attainment and its distribution on China’s growth are estimated in chapter 4; chapter 5 examines the distributive impact of public infrastructure (both transport infrastructure and education), highlighting the role of road infrastructure in narrowing China’s spatial concentration and inequity; chapter 6 provides a synthetic answer to the research question based on all theoretical and empirical study in the previous chapters.
Therefore, rather than providing recommendations for the Chinese governments about how much they should invest in infrastructure projects, this book aims at understanding the real role of public infrastructure in China’s growth and disparity, and illustrating how changes in public infrastructure investment plans can achieve economic efficiency and spatial equity. Although evidence has been provided to support the public infrastructure-led growth hypothesis, it is questionable whether investment in infrastructure has been helpful in spurring the economy and in reducing the growing coastal-interior gap in China, considering that plenty of large infrastructure projects have been constructed or planned in the less-developed interior. Therefore, this study explores both if and how public investment in infrastructure matters for China’s growth and spatial inequality.","","en","doctoral thesis","","978-94-6186-741-4","","","","","","","","","Organisation & Governance","","",""
"uuid:e7778a5c-3013-40ed-9567-bceaffc57ab9","http://resolver.tudelft.nl/uuid:e7778a5c-3013-40ed-9567-bceaffc57ab9","Modelling relationships between a comfortable indoor environment, perception and performance change","Roelofsen, C.P.G. (TU Delft Applied Ergonomics and Design)","Vink, P. (promotor); Delft University of Technology (degree granting institution)","2016","The attention within the community moves from sustainability to a healthy society. The loss of health in our society, as a result of aging, lifestyle as well as a creeping loss of attention to the primary requirement of building (i.e. health improvement), is a major problem. The real-estate world is able to reverse this loss of health by providing healthy environments as a holistic model for the prevention of building- and lifestyle-related health problems, as well as to support the well-being and performance of people. The real-estate world in that respect has to offer something to the world of health. A healthy environment is of value to people and organizations. Evaluating buildings in the context of sustainability, health and performance of people is an investment in the quality of the work environment, which will result in a reduced environmental impact, better performance, greater well-being and better health. The result is a win-win situation for all participants in the housing process. Today studies and conferences still show that there is room for improvement and that indoor environmental quality is often still poor, despite policy directives, standards and guidelines. This PhD thesis attempts to support finding the improvements needed to create a good indoor environment. Much knowledge is available in the literature, but it is difficult to access for practitioners and hard to translate into a comparison of different options. This PhD thesis tries to fill a gap between theory and practice.
An attempt is made to model a large part of the knowledge that is available in such a way that it will become accessible for the professional practice.","","en","doctoral thesis","","978-94-91602-75-7","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:de1d4543-9bbe-4a2f-ac9a-f648f4066d0f","http://resolver.tudelft.nl/uuid:de1d4543-9bbe-4a2f-ac9a-f648f4066d0f","Cluster management system design for big data infrastructures","Gupta, S. (TU Delft Algorithmics)","Witteveen, C. (promotor); de Kleer, J (copromotor); Delft University of Technology (degree granting institution)","2016","In recent years, we have seen a major shift in computing systems: data volumes are growing very fast, but hardware capabilities to store, process, and transfer the massive data are not speeding up at the same rate. Today, data are generated from a variety of sources, such as social networking websites, business transactions, banking sectors, etc. These data are valuable and contain lots of vital information if they are analyzed efficiently. The processing capabilities of single machines, however, are not sufficient, which
makes it harder to use them for data analysis. As a result, most web companies, but also traditional business organizations, research labs, and universities, are scaling out their major computational frameworks to clusters of thousands of machines. To find hidden and interesting insights in the data, in addition to simple queries, complex machine learning algorithms and graph processing are becoming a common choice in many areas. Nowadays, the problem of collecting, storing and analyzing these data is called the Big Data problem.","","en","doctoral thesis","","978-94-6186-757-5","","","","","","","","","Algorithmics","","",""
"uuid:d194ac2a-3176-4550-a5d1-ae231c3a44fd","http://resolver.tudelft.nl/uuid:d194ac2a-3176-4550-a5d1-ae231c3a44fd","Capacity Drop on Freeways: Traffic Dynamics, Theory and Modeling","Yuan, K. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Knoop, V.L. (copromotor); Delft University of Technology (degree granting institution)","2016","Earlier studies on traffic flow on freeways reveal that the queue discharge rate cannot reach the free-flow capacity. This important phenomenon is called the “capacity drop”, which indicates that the potential freeway capacity cannot be fully utilized when discharging traffic jams. Even though a large body of empirical and analytical research has already been conducted to understand the capacity drop, several relevant challenges still need to be addressed. These challenges include (1) characterizing more empirical features of the capacity drop, (2) incorporating the capacity drop into macroscopic models, and (3) revealing the mechanisms related to driver behavior behind the capacity drop and incorporating these mechanisms into microscopic models. This thesis fills this research gap.","","en","doctoral thesis","TRAIL Research School","978-90-5584-212-4","","","","TRAIL Thesis Series no. T2016/24, the Netherlands TRAIL Research School","","","","","Transport and Planning","","",""
"uuid:2d85d7a6-c6db-4f0a-a736-416e48ab5433","http://resolver.tudelft.nl/uuid:2d85d7a6-c6db-4f0a-a736-416e48ab5433","Coordinated planning of inland vessels for large seaports","Li, S. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Negenborn, R.R. (copromotor); Delft University of Technology (degree granting institution)","2016","Seaports are crucial nodes in international trade and transport. Some of the cargoes arriving at seaports are transshipped to other ports, while others are transported to inland destinations. Every time an inland container vessel enters the port, it calls at many different terminals spread over the port area. Two coordination problems exist in the planning of inland vessels in large seaports: firstly, the long stay in the port and, secondly, the insufficient terminal and quay planning with respect to the sailing schedules of sea-going vessels and inland vessels. To solve these problems, this thesis aims to improve the reliability and efficiency of inland vessel transport in seaports. To achieve this, efficient handling of inland container vessels in large seaports is required. This could improve the efficiency and reliability of inland waterway transport from seaports to the hinterland and vice versa. Meanwhile, this could also contribute to enhancing inter-terminal transport in large seaports, as a potential solution for alleviating congestion on roads. Moreover, efficient handling of inland vessels could facilitate flexible planning of transport over water, so that this transport mode can be better integrated into the synchromodal transport chain.
Therefore, three classes of automatic coordination methods are proposed in this thesis for seaports with different sizes: for small seaports, a partially-cooperative coordination method with single-level interaction based on distributed constraint optimization is proposed; for medium-sized seaports, a partially-cooperative coordination method with multi-level interactions based on MIP, coordination rules and constraint programming, is proposed; for large seaports, a fully-cooperative coordination method with multi-level interactions based on Benders decomposition and large neighborhood search is proposed.
With the proposed methods, firstly, the vessel operators can decide to what extent they would like to be coordinated, either partially-cooperative or fully-cooperative; secondly, terminal operators can also use these methods to estimate how much time each inland vessel spends at the terminal, in order to determine the schedules of terminal operations; thirdly, whenever real-time disturbances or accidents happen, the previously generated rotations might no longer be optimal; the proposed methods can take these disturbances into account and generate new and better rotations for vessel operators based on up-to-date information. Simulation results demonstrate that the proposed approach can significantly reduce the round-trip time the inland vessels spend in the port, as well as the time they spend waiting at container terminals.
Moreover, from the perspective of using inland vessels also for inter-terminal transport (ITT) in seaports, vessel operators can use the proposed methods to decide whether they are willing to cooperate to take extra ITT containers, based on the possible extra round-trip time and waiting time that they may incur. In addition, terminal operators can estimate how many extra ITT containers could be transported by the incoming vessels at different times of the day, so that they can plan the ITT containers to be transported by inland vessels accordingly.
To conclude, this thesis investigates the operational planning of inland vessels in seaports. This thesis shows the potential of the proposed new approaches for improving the efficiency and reliability of inland vessel transport in seaports.","","en","doctoral thesis","","978-90-5584-216-2","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:1a6dab8e-ebbd-41a1-bd5e-866a9050fc68","http://resolver.tudelft.nl/uuid:1a6dab8e-ebbd-41a1-bd5e-866a9050fc68","Radar networks performance analysis and topology optimization","Ivashko, I. (TU Delft Microwave Sensing, Signals & Systems)","Yarovoy, Alexander (promotor); Krasnov, O.A. (copromotor); Delft University of Technology (degree granting institution)","2016","The ultimate goal of any sensing system is to build situation awareness. Existing
solutions for a single radar node that have to assure extended areas of coverage with
high resolution measurements (in range, cross-range, and Doppler) are physically
cumbersome (large antenna size) and typically require large operational resources (high
transmit power, wide bandwidth and long integration time).
Combining data from multiple spatially separated nodes
offers a possibility to use radars with low-cost omnidirectional antennas to cover wide
areas and overcome operational limitations such as sector blockage due to landscape or
high-rise buildings. Thus, performance of the complete system becomes dependent not
only on the parameters of a single radar node, but on the number of nodes and their
location (system topology) as well. A proper selection of both node-related (transmit
power, operational frequency and bandwidth, integration time, etc.) and system-related
(node location, node cooperation) resources is an important design task, which forms
the major focus of this thesis.
The first part of this dissertation is dedicated to the development of the radar
network performance assessment tool, while the second part provides the framework for
radar network topology optimization. The potential accuracy of the target parameters
estimation has been used for radar network performance assessment. The developed
tool incorporates parameters of a single radar node as well as system parameters
(positions of the nodes and their cooperation), evaluated using Cramér-Rao lower
bound. Using the tools developed, the performance of different types of radar networks has
been studied and compared in this thesis. For the radar network topology optimization
several convex and greedy algorithms have been used, making the optimization
approach versatile. Validation and performance comparison of the optimization
algorithms have been performed in this thesis.
The results obtained in this research can be used to evaluate the potential
performance of radar networks for different applications and provide a solution to key
problems of their topology design.","radar networks; convex optimization; greedy optimization; Cramér-Rao lower bound; frame potential","en","doctoral thesis","","978-94-6186-751-3","","","","","","","","","Microwave Sensing, Signals & Systems","","",""
"uuid:26d1ec89-e6e6-4d5f-b4d0-2591c4aa406a","http://resolver.tudelft.nl/uuid:26d1ec89-e6e6-4d5f-b4d0-2591c4aa406a","Delamination Analysis of A Class of AP-PLY Composite Laminates","Zheng, W. (TU Delft Aerospace Structures & Computational Mechanics)","Bisagni, C. (promotor); Kassapoglou, C. (copromotor); Delft University of Technology (degree granting institution)","2016","A recently developed fiber placement architecture, AP-PLY, has been shown to give significantly improved damage tolerance characteristics of composite structures. The behavior of delaminations resulting from low speed impact damage is of particular concern. Major attention has been paid to expand current knowledge on the delamination response of simple AP-PLY composite structure and move towards in-depth understanding of the failure mechanisms behind the damage tolerance. This thesis presents the approaches to predict delamination onset and analyze delamination growth, in support of the search of the optimum woven pattern for AP-PLY composite laminates. The recovered interlaminar stress between layers combined with the maximum stress criterion determined the delamination onset of simple AP-PLY composite laminate under out-of-plane loads. 2D finite element models with cohesive elements inserted in the interfaces of woven layers have been built to evaluate the delamination initiation and propagation in the different woven patterns of simple AP-PLY composite beams. The parameters of the woven pattern, such as the woven angle, the number of woven plies, the number of straight filled plies, and the location of the woven patterns in through the thickness direction, were investigated and shown to have a significant effect on delamination creation and growth. An energy method based on beam theory was proposed to analyze the strain energy release rate (SERR) of an existing crack in an AP-PLY beam structure. 
The developed analytical method was implemented for isotropic materials and the obtained SERR of a crack was validated against reference results and finite element solutions. The general behavior of crack growth at the left or right crack tip was evaluated and basic trends leading to crack propagation on one side of the crack were established. A correction factor was introduced to improve the accuracy of the SERR of a small crack through numerical calculation. The crack-tip singularity caused by dissimilar materials was investigated, and it was found that the inclusion of the singularity effect could increase the accuracy for small cracks. It has been shown that the neutral axis needs to be relocated to decouple the bending and membrane behavior of unsymmetrical composite laminates, in order to meet the requirement of minimizing the strain energy of the delaminated beam when calculating the SERR of a delaminated composite beam. The calculated SERR of a crack in a composite beam has been verified by comparison with a finite element model. The woven plies in an AP-PLY composite laminate alter the layup, and two conventional laminates with different stacking sequences were identified in an AP-PLY composite laminate based on the assumption that the resin areas were ignored. A step-by-step approach was developed to obtain the SERR of a crack that goes across different materials. The analytical SERR determined when two materials are used in sequence sets the stage for optimization of AP-PLY composite laminates without taking account of the effect of the resin area. A procedure for the optimization of the simple AP-PLY pattern was proposed, from which industry may benefit in many applications. An equivalent stiffness approach was used to model regions containing resin pockets and straight or inclined composite layers. A series of three-point bending tests was carried out in which the failure process and loading capacity were evaluated.
The methodology, optimization procedure and philosophy outlined in this thesis might also be applied to the more complicated, fully woven AP-PLY composite laminates. The work in this thesis contributes to the understanding of the behavior of AP-PLY composite laminates with delaminations.","AP-PLY Composite Laminates; Interwoven Structures; Delamination Onset; Delamination Growth; Cohesive zone element; Energy release rate","en","doctoral thesis","","978-94-6186-765-0","","","","","","","","","Aerospace Structures & Computational Mechanics","","",""
"uuid:dc3bb80c-781f-4eb6-b331-356a0165bdef","http://resolver.tudelft.nl/uuid:dc3bb80c-781f-4eb6-b331-356a0165bdef","The Influence of Herding on Departure Choice in Case of an Evacuation: Design and Analysis of a Serious Gaming Experimental Set-up","van den Berg, M. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); van Nes, R. (copromotor); Delft University of Technology (degree granting institution)","2016","Extensive research is available on travel choice behaviour which occurs during evacuations in case of natural disasters. Due to the disadvantages of existing data collection techniques, more research is needed to better understand evacuation choice behaviour. The main objective of this thesis is twofold: (1) to develop, apply and assess a new experimental set-up to study evacuation choice behaviour and (2) to quantify the effect of herding on evacuation choice behaviour.
The developed experimental set-up consists of the serious game Everscape and a questionnaire. In Everscape, participants are confronted with an earthquake and have to evacuate from a tsunami. The Everscape data consist of the exact location and viewing direction of each participant, recorded every second. The questionnaire collects information on characteristics of the participants (e.g. age, gender) and focusses on what participants did during the Everscape part of the experiment and why they did this.
In total 14 experiments were conducted in which around 400 people participated. The data collected with these experiments were analysed and two main conclusions were drawn. On the one hand, the results support findings from the literature, meaning that realistic evacuation behaviour is found with the experimental set-up. On the other hand, a first step is made towards validly quantifying the effect of herding behaviour on evacuation decisions with empirical data.","Evacuation; Herding; Serious gaming; Natural disaster; Departure choice","en","doctoral thesis","TRAIL Research School","978-90-5584-215-5","","","","TRAIL Thesis Series no. T2016/22, the Netherlands Research School TRAIL","","","","","Transport and Planning","","",""
"uuid:8ba12d0d-9bb0-4303-9e0f-0fcc322b678e","http://resolver.tudelft.nl/uuid:8ba12d0d-9bb0-4303-9e0f-0fcc322b678e","Optical singularities and nonlinear effects near plasmonic nanostructures","de Hoogh, A.K. (TU Delft QN/van der Zant Lab)","Kuipers, L. (promotor); Delft University of Technology (degree granting institution)","2016","One promising way to manipulate light on the nanoscale is to exploit the properties of light when it interacts with metallic elements. Light can, for instance, be guided along the interface of a metal and a dielectric. These guided waves are called surface plasmon polaritons (SPPs), and they occur because the collective oscillations of free electrons of the metal interact with the light waves and vice versa. The wavelength of the SPP itself is (much) shorter than the wavelength of light, resulting in a tight confinement and strong enhancement of the field. The light fields interacting with plasmonic systems can vary more rigorously in space than normal beams; they can be much richer in structure and exhibit fascinating patterns. A variety of different plasmonic platforms have been studied or proposed in literature to create unique structured field patterns.","","en","doctoral thesis","","978-94-92323-12-5","","","","","","","","","QN/van der Zant Lab","","",""
"uuid:f6aefbb0-1b95-44e9-a4dc-8e6c02d94f37","http://resolver.tudelft.nl/uuid:f6aefbb0-1b95-44e9-a4dc-8e6c02d94f37","Coordination of waterborne AGVs","Zheng, H. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Negenborn, R.R. (copromotor); Delft University of Technology (degree granting institution)","2016","The possible larger amount of container throughput and the limited handling capacities of existing infrastructures impose increasingly high pressure on large ports in improving competitiveness. Inside container terminals, land-side automated guided vehicles have been used extensively for decades to improve terminal operational efficiency and sustainability. Transport between terminals, i.e., Inter Terminal Transport (ITT), is currently mainly realized by manned trucks. However, road traffic has already been heavy in port areas with limited land. For geographically complex ports, e.g., the port of Rotterdam, travel distances by land can be much longer than by water between some terminals. Expanding the existing physical transportation infrastructure might be an option, at extremely high costs nonetheless. As an alternative, more efficient and sustainable ways for port logistics need to be investigated.","","en","doctoral thesis","","978-90-5584-218-6","","","","","","","","","Transport Engineering and Logistics","","",""
"uuid:0c2c14c8-9550-449d-b1ff-7e0588ccd6c2","http://resolver.tudelft.nl/uuid:0c2c14c8-9550-449d-b1ff-7e0588ccd6c2","To craft, by design, for sustainability: Towards holistic sustainability design for developing-country enterprises","Reubens, R.R.R. (TU Delft Design for Sustainability)","Brezet, J.C. (promotor); Christiaans, H.H.C.M. (promotor); Diehl, J.C. (copromotor); Delft University of Technology (degree granting institution)","2016","Current sustainable design initiatives and approaches are already looking at using industrial techniques and technologies to recontextualize renewable materials to create innovative products and systems to suit global markets. However, the design outputs from these initiatives—while being mindful of ecological sustainability and targeting sustainability markets—do not leverage the huge workforce and cultural resources available in developing countries, where thesematerials occur abundantly and form part of traditional craft practice. These products, therefore, disregard the need and opportunity for design to also consider the social, cultural and economic dimensions of sustainability—and thus serve as a vehicle for holistic sustainability. This is a missed opportunity to holistically impact sustainability—and sustainable development—especially since craftspeople in the developing world are increasingly vulnerable to unsustainabilities caused by a loss of markets resulting from the influx of industrial products. If design were to build upon traditional developing-world craft production-to-consumption systems, rather than bypass them in favor of a mainstream, industrialized technology-push approach, the resultant products would be built on culturally sustainable traditions, using ecologically sustainable materials, crafted in a labor-intensive manner, and target viable sustainability-aligned markets; thus orchestrating holistically sustainable production-toconsumption systems. 
Actualizing this potential calls for alternative design approaches that can generate collective benefits to the ecology, society, economy and culture in developing countries. This research, therefore, aims to improve sustainability design approaches, and thereby practice, especially in the domain of MSMEs working with renewable materials in developing countries.","","en","doctoral thesis","","978-94-6186-770-4","","","","","","","","","Design for Sustainability","","",""
"uuid:c4b72e7a-47a0-45b2-8b82-f50ae91888ab","http://resolver.tudelft.nl/uuid:c4b72e7a-47a0-45b2-8b82-f50ae91888ab","Chemicals from Glycerol Bifunctional Catalysts for the Conversion of Biomass Components","ten Dam, J. (TU Delft BT/Biocatalysis)","Hanefeld, U. (promotor); Kapteijn, F. (promotor); Delft University of Technology (degree granting institution)","2016","The production of renewable chemicals is gaining attention over the past few years. The natural resources from which they can be derived in a sustainable way are most abundant in sugars, cellulose and hemi cellulose. These highly functionalized molecules need to be de-functionalized in order to be feedstocks for the chemical industry. A fundamentally different approach to chemistry thus becomes necessary, since the traditionally employed oil-based chemicals normally lack functionality. This new chemical toolbox needs to be designed to guarantee the demands of future generations at a reasonable price. The surplus of functionality in sugars and glycerol consists of alcohol groups. To yield suitable renewable chemicals these natural products need to be defunctionalized by means of dehydroxylation. Here we review the possible approaches and evaluate them from a fundamental chemical aspect. The chapter closes with an outline of the research described in this thesis.","Renewable Chemicals; Polyols; Glycerol; Hydrogenolysis; Dehydroxylation","en","doctoral thesis","","9789462955332","","","","","","","","","BT/Biocatalysis","","",""
"uuid:c5fd38bb-5345-48af-bca9-16f1d88744ac","http://resolver.tudelft.nl/uuid:c5fd38bb-5345-48af-bca9-16f1d88744ac","Multiscale modeling of mesoscale phenomena in weld pools","Kidess, A. (TU Delft ChemE/Transport Phenomena)","Kleijn, C.R. (promotor); Kenjeres, S. (copromotor); Delft University of Technology (degree granting institution)","2016","","Welding; Fluid- and Aerodynamics; Thermocapillary flow; Marangoni flow; Solidification","en","doctoral thesis","","978-94-92516-22-0","","","","","","","","","ChemE/Transport Phenomena","","",""
"uuid:73f63a00-972d-4b83-8c9f-cce7dc14e048","http://resolver.tudelft.nl/uuid:73f63a00-972d-4b83-8c9f-cce7dc14e048","Quantum error correction with spins in diamond","Cramer, J. (TU Delft QID/Hanson Lab)","Hanson, R. (promotor); Taminiau, T.H. (copromotor); Delft University of Technology (degree granting institution)","2016","Digital information based on the laws of quantum mechanics promisses powerful new ways of computation and communication. However, quantum information is very fragile; inevitable errors continuously build up and eventually all information is lost. Therefore, realistic large-scale quantum information processing requires the protection of quantum bits (qubits) against errors. In this thesis we present the experimental implementation of quantum error correction protocols based on spins in diamond. In such protocols, a quantum state is protected against errors by encoding in multiple qubits. Errors can be detected and corrected by measurement of correlations, so-called stabilizer-measurements, on these qubits.The experimental work presented in this thesis employs multiple spins in diamond as qubits to explore and implement error correction protocols. The nitrogen-vacancy (NV) centre in diamond is a lattice defect consisting of a nitrogen atom (N) and a vacancy (V) on two adjacent diamond lattice sites. This defect effectively results in an electronic spin that can be addressed as a qubit. The spin state can be manipulated by microwave fields and optically read out. At liquid helium temperatures (cryogenic temperature, ~4 K = -269 C), the NV electron spin provides high-fidelity single-shot readout and long coherence times.The NV centre is surrounded by naturally available (1.1% abundance) nuclear C13 spins. As the number of spins that are close enough to the NV centre to be strongly coupled is limited, we employ the weakly coupled nuclear spins in the spin bath of the NV centre. 
Using dynamical decoupling techniques these nuclear spins can be detected via the NV electron spin through the hyperfine interaction. The nuclear spins are long-lived and robust against optical excitation of the NV electron spin, which can make these spins a robust quantum register for quantum error correction. In Ch. 4 we demonstrate universal control over multiple of such weakly coupled nuclear C13 spins in the environment of the NV centre at ambient temperatures. We demonstrate initialization, control and read-out of individual nuclear spins. Finally, we implement a quantum error correction protocol by encoding a quantum state in the NV electron spin and two nuclear spins. Errors are detected by un-encoding the quantum state back to the electron spin and corrected via a double controlled operation. For universal fault-tolerant quantum computations it is essential that the quantum information remains encoded at all times. In Ch. 5 we present multiple rounds of quantum error correction and active feedback on a continuously encoded qubit at cryogenic temperatures. A quantum state is protected by encoding in three weakly coupled spins. Errors are detected via high-fidelity non-demolition readout of the NV electron spin and actively corrected using fast classical electronics. We demonstrate that an actively error-corrected qubit is robust against phase flip errors and show that a superposition state can live longer than the best physical qubit in the encoding. The presented methods and results can be extended to a range of future experiments. In Ch. 6 we propose the implementation of five-qubit quantum error correction, the smallest code to correct for general single-qubit errors on the physical qubits in the encoding, by extending the experimental methods as developed in Chs. 4&5.
Besides the exploration and development of larger error correction protocols and fault-tolerant quantum computing, the presented quantum register based in spins in diamond can be employed as a quantum node and combined with recent advances in the realization of quantum entanglement over large distances to form quantum networks. These networks can be used to study both fundamental questions as well as future applications in quantum information technology.","","en","doctoral thesis","","978-90-8593-270-3","","","","Casimir PhD Series Delft-Leiden 2016-26","","","","","QID/Hanson Lab","","",""
"uuid:faedef77-b7dc-47c7-a6d5-891e95baab70","http://resolver.tudelft.nl/uuid:faedef77-b7dc-47c7-a6d5-891e95baab70","The origin of preferential flow and non-equilibrium transport in unsaturated heterogeneous porous systems","Baviskar, S.M. (TU Delft Geo-engineering)","Heimovaara, T.J. (promotor); Delft University of Technology (degree granting institution)","2016","Stabilization of waste bodies of landfill is achieved by treating the waste body using irrigation, recycling of leachate combined with landfill gas extraction and/or aeration. In order for the regulators and landfill operators to agree on a required level of after care, a quantitative estimation of remaining long-term emission potential is required. Our hypothesis is that, material heterogeneity in unsaturated systems is the origin of preferential flow and that infiltration patterns and rates are the controlling factors affecting non-equilibrium solute transport. We developed a coupled flow and transport model using a finite difference method implemented as a MATLAB toolbox, which we named Variably Saturated Flow and Transport (VarSatFT).
Using VarSatFT, we analysed water flow and solute transport in different unsaturated heterogeneous small-scale systems. The origin of preferential flow and non-equilibrium solute transport lies in the funnelling of flow and advective transport through the highly permeable zones. This leads to concentration gradients between the solutes in the mobile and immobile pore space due to the flushing of the mobile pore space with fresh (rain) water. The variation in infiltration rates and patterns is found to be the controlling factor for the magnitude of non-equilibrium solute transport. The findings from the numerical analyses were verified in lab-scale experiments. Infiltration and leachate recirculation can stimulate biodegradation if sufficient water is added to significantly increase the water content. Our findings indicate the severe limitations associated with single continuum modelling methods of water flow and solute transport for full-scale landfills, especially when leachate concentrations need to be predicted.
For the laboratory experiments we required data on the water retention parameters of well-sorted sands near saturation. We developed an approach using the vertical distribution of water content along a TDR probe. We performed these measurements in a multi-step drainage experiment at moments when flow had ceased, so that hydrostatic conditions could be assumed. This gives a direct measurement of the water retention curve. Combining the water retention curve with the model for TDR waveforms and the pressure head distribution from the hydrostatic conditions allowed the parameters in the unsaturated water retention curve to be optimized using a Bayesian inference scheme. The approach we developed reduces the number of parameters compared with other TDR approaches, which optimize the water content at every node along the TDR probe. This approach is suitable for quantifying water retention parameters for tall samples and samples with uniform particle size distributions.","preferential flow; non-equilibrium transport; landfill emission potential; flow and transport modelling; time domain reflectometry; soil water retention parameters","en","doctoral thesis","","978-94-6186-758-2","","","","","","","","","Geo-engineering","","",""
"uuid:d4214bb0-5bfd-43fe-af42-01247762b661","http://resolver.tudelft.nl/uuid:d4214bb0-5bfd-43fe-af42-01247762b661","Design Methodology for Additive Manufacturing: Supporting Designers in the Exploitation of Additive Manufacturing Affordances","Doubrovski, E.L. (TU Delft Mechatronic Design)","Geraedts, Jo M.P. (promotor); Horvath, I. (promotor); Verlinden, J.C. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","978-90-6562-401-7","","","","","","","","","Mechatronic Design","","",""
"uuid:080a025d-f059-42c8-9229-c3dba2dd4400","http://resolver.tudelft.nl/uuid:080a025d-f059-42c8-9229-c3dba2dd4400","A seismic vibrator driven by linear synchronous motors: Developing a prototype vibrator, investigating the vibrator-ground contact and exploring robust signal design","Noorlandt, R.P. (TU Delft Applied Geophysics and Petrophysics)","Wapenaar, C.P.A. (promotor); Drijkoningen, G.G. (copromotor); Delft University of Technology (degree granting institution)","2016","The seismic method is an important indirect method to investigate the subsurface of the earth. By analyzing how the earth affects the propagation of mechanical waves, the structure of the earth and its seismic properties can be inferred. The seismic vibrator is the most commonly used land source in active-source exploration and monitoring to generate these waves and is the subject of this thesis. The goal of a seismic vibrator is to produce seismic waves with a known signal signature. Commonly sinusoidal signals whose frequency varies over time, called sweeps, are used for this purpose. These signals are typically quite lengthy to compensate for the fact that the instantaneous amplitudes of the vibrator are relatively weak compared to the ones from impulsive sources and the target depths faced. Via the processing step of correlation, the lengthy source signature is collapsed and virtual records are generated as if the vibrator would have released all energy at once. The quality of these virtual records depends on the ability of the vibrator engines to generate the force signature wanted and the ability of the vibrator mechanics and the vibrator-ground interaction to successfully transform the driving force to a seismic wave. 
In this thesis we investigate the feasibility of driving a vibrator with linear synchronous motors, the influence of drive level on the signals a vibrator generates, the effect of the vibrator-ground coupling, and the possibilities to design more robust source signals.","vibrator; seismic source; linear synchronous motors; LSM; vibrator-ground contact; contact mechanics; sweep; chirp; optimal signals","en","doctoral thesis","","978-94-92516-23-7","","","","","","","","","Applied Geophysics and Petrophysics","","",""
"uuid:d5850b96-ec5b-4639-aac0-e05f81681800","http://resolver.tudelft.nl/uuid:d5850b96-ec5b-4639-aac0-e05f81681800","Cr(VI)-free pre-treatments for adhesive bonding of aerospace aluminium alloys","Abrahami, S.T. (TU Delft (OLD) MSE-6)","Terryn, H.A. (promotor); Mol, J.M.C. (copromotor); Delft University of Technology (degree granting institution)","2016","For more than six decades, chromic acid anodizing (CAA) has been the central process in the surface pre-treatment of aluminium for adhesively bonded aircraft structures in Europe. Unfortunately, this electrolyte contains hexavalent chromium (Cr(VI)), a compound known for its toxicity and carcinogenic properties. The approaching ban on the use of hexavalent chromium (Cr(VI)) makes its elimination a high-priority R&D topic within the aerospace industry and the Cr(VI)-era will soon have to come to an end. Anodizing aluminium in acid electrolytes produces a self-ordered porous oxide layer with a thin barrier layer underneath. This special type of oxide readily adheres to the organic resin and provides protection against corrosion. Although Cr(VI)-free candidates such as sulphuric acid- (SAA), phosphoric acid- (PAA) and mixtures of phosphoric-sulphuric acid anodizing (PSA) can be used to create this type of structure, the excellent adhesion and corrosion resistance that is currently achieved by the Cr(VI)-based process is not easily matched. To gain a better understanding of the underlying physical and chemical mechanisms that contribute to the adhesion and durability in these structures, this study investigates the correlation between the oxide’s chemical and morphological characteristics, as influenced by the anodizing electrolyte, and bond performance. The major challenge in the mechanistic understanding of the adhesion in bonded components is to differentiate between the different forces acting at the oxide/resin interface. In the first part of this PhD thesis, studies focus on the role of surface chemistry. 
To exclude the contribution of mechanical interlocking between the oxide and the resin, featureless oxides were prepared by stopping the anodizing during the formation of the barrier layer. Surface characterization of the different anodic oxides by means of Fourier transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS) revealed no significant net change in the acid-base properties of the different anodic oxides. It was found that local chemical changes were introduced due to the incorporation of electrolyte-derived anions. Therefore, a model was developed to quantify the relative amounts of O²⁻, OH⁻, PO₄³⁻, and SO₄²⁻, showing significant changes in the type and amount of surface species. Consequently, measurements showed that the pre-treatments and the molecule type affected oxide/molecule interfacial interactions. To evaluate the contribution of adsorptive interaction in practice, peel tests were performed on featureless oxides bonded with commercial aerospace adhesives. Results showed that significant initial dry adhesion is achieved with FM 73 epoxy without mechanical interlocking, and independent of the type of pre-treatment. However, the bond formed was not water resistant, with the applied stress needed for peeling increasing linearly with the amount of surface hydroxyls. Moreover, the application of a thin γ-APS silane layer before bonding with epoxy confirmed that the stability of the interface is also determined by the nature of the bond, with much more stable interfaces in the presence of covalent interactions. When peel tests were performed with a phenolic-based adhesive (Redux 775), no correlation to the surface chemistry was found. Nevertheless, the bonded joints based on the weakly acidic phenolic adhesive showed better resistance to corrosion in salt spray tests than those based on the epoxy adhesive.
Therefore, we conclude that both oxide surface- and adhesive chemistries play a role in the formation and long-term stability of the oxide/resin interface. In the second part of this thesis industrial porous oxides were applied. Fundamental investigations show that changing the voltage during anodizing can produce morphological variations across the oxide thickness. The effect of the initial voltage sweeps, however, was limited by the oxide dissolution action of phosphoric acid in PSA, since prolonged anodizing in this electrolyte not only leads to an increase of the pore diameter, but also completely dissolves the uppermost part of the oxide. Morphological changes were distinguished between geometrical modifications that affect the pore size and changes in the surface roughness caused by extended chemical dissolution at higher anodizing temperatures and/or phosphoric acid concentrations. Carbon concentration profiles within the pores, measured using high-resolution transmission electron microscopy (TEM) coupled with energy-dispersive X-ray spectroscopy (EDS), indicated that resin penetration is affected by both aspects. Moreover, mechanical performance in peel tests indicates that these parameters, rather than the oxide layer thickness, are critical for moisture-resistant adhesion. Both adhesion mechanisms, adsorption and mechanical interlocking, seem to contribute to the adhesion in these structural bonds. A higher degree of dissolution during anodizing is beneficial for the adhesion, facilitating a composite-like interphase. Too much dissolution, however, reduces the resistance to bondline corrosion.
Overall, the presented results illustrate the need to consider both chemical and morphological changes in the selection of Cr(VI)-free alternatives for structural adhesive bonding.","Aluminium; Cr(VI)-free; Surface pretreatments; Anodizing; Adhesive bonding; Adhesion; Durability","en","doctoral thesis","","978-94-91909-43-6","","","","This research was carried out under the project number M11.6.12473 in the framework of the Research Program of the Materials innovation institute (M2i) in the Netherlands","","","","","(OLD) MSE-6","","",""
"uuid:baeca8ab-8534-4006-bb4a-d7c36921150a","http://resolver.tudelft.nl/uuid:baeca8ab-8534-4006-bb4a-d7c36921150a","A template-based control architecture for dynamic legged locomotion","Shahbazi Aghbelagh, M. (TU Delft OLD Intelligent Control & Robotics)","Babuska, R. (promotor); Delgado Lopes, G.A. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","978-94-6186-750-6","","","","","","2017-12-07","","","OLD Intelligent Control & Robotics","","",""
"uuid:c351e276-e7e2-4153-98e6-bea6882cfb30","http://resolver.tudelft.nl/uuid:c351e276-e7e2-4153-98e6-bea6882cfb30","Concrete in dynamic tension: The fracture process","Vegt, I. (TU Delft Materials and Environment)","van Breugel, K. (promotor); Weerheijm, J. (copromotor); Delft University of Technology (degree granting institution)","2016","The fracture properties of concrete are rate dependent. In this thesis, the results of tensile tests at static, moderate and high loading rates are presented. The results show the influence of the loading rate not only on the tensile strength, but also on the fracture energy, the stress-deformation relation and fracture parameters, such as fracture lengths and the width of the fracture zone. The failure mechanisms are reconstructed and the dominant mechanisms behind the rate dependency are identified. By using basic principles of fracture mechanics and a simple model based on the Stefan effect, the loading rates at which the mechanisms have a significant effect have been determined. The dominant mechanisms found in the research can be implemented in dynamic models and the acquired data set can be used to validate numerical models.","Fracture process; Dynamic loading; Concrete","en","doctoral thesis","","978-94-6186-747-6","","","","","","","","","Materials and Environment","","",""
"uuid:b40d40dd-f155-4b60-a245-04bdd0760861","http://resolver.tudelft.nl/uuid:b40d40dd-f155-4b60-a245-04bdd0760861","Participation and Interaction in Projects: A Game-Theoretic Analysis","Polevoy, G. (TU Delft Algorithmics)","Witteveen, C. (promotor); Jonker, C.M. (promotor); de Weerdt, M.M. (copromotor); Delft University of Technology (degree granting institution)","2016","Much of what agents (people, robots, etc.) do is dividing effort between several activities. In order to facilitate efficient divisions, we study contributions to such activities and advise on stable divisions that result in high social
welfare. To this end, for each model (game), we find the Nash equilibria and their social welfare. A Nash equilibrium is a division where no agent can increase her utility if the others do not change their behavior. The social welfare is defined as the sum of the utilities of all the agents. We concentrate on value-creating activities and on reciprocation (interactions where agents react to the previous actions). The value-creating activities model work projects, co-authoring articles, writing to Wikipedia, etc. We assume that all the agents who contribute to such an activity at least a predefined threshold share in the final value of the activity. Examples of reciprocation activities are politics and relationships with colleagues. We prove that the actions stabilize around a limit value. Then, we assume that agents
strategically set their own interaction habits, and we model this as a game. Finally, we analyze how agents divide their own effort between several reciprocal interactions. We lay the foundation for realistic mathematical modeling and analysis of effort
division between activities and provide advice about what the agents should do in order to maximize the personal and the social welfare.","Game theory; agent; Projects; Value creation; interaction; reciprocation; threshold; Nash-equilibrium; Efficiency; price of anarchy; price of stability; simulations; fictitious play; competition; interaction graph; Perron-Frobenius","en","doctoral thesis","","978-94-6186-766-7","","","","SIKS Dissertation Series No. 2016-49. The research reported in this thesis has been carried out under the auspices of SIKS, the Dutch Research School for Information and Knowledge Systems.","","","","","Algorithmics","","",""
"uuid:1cda75d5-8998-49fe-997e-b38c9b7f8b8b","http://resolver.tudelft.nl/uuid:1cda75d5-8998-49fe-997e-b38c9b7f8b8b","Full wavefield migration: Seismic imaging using multiple scattering effects","Davydenko, M.V. (TU Delft ImPhys/Acoustical Wavefield Imaging)","van Vliet, L.J. (promotor); Verschuur, D.J. (copromotor); Delft University of Technology (degree granting institution)","2016","Seismic imaging aims at revealing the structural information of the subsurface using the reflected wavefields captured by sensors usually located at the surface. Wave propagation is a complex phenomenon and the measured data contain a set of backscattered events including not only primary reflections, but also surface-related and interbed multiples. Additionally, transmission effects play an important role in wave propagation. However, most of the current imaging algorithms, being based on single scattering assumptions, can handle only primary reflections, and all other effects are treated as noise that produces false structures (crosstalk) in the resulting image. To avoid this, data used by conventional imaging algorithms are usually preprocessed in such a way that primaries are separated from the rest of the arrivals. However, imaging only the first category of events excludes the available information contained in multiple scattering. Furthermore, as a perfect multiple removal process is a challenge, residual crosstalk is often visible in the final image. The main goal of this thesis is to develop an imaging algorithm that can correctly handle such complex scattering effects. The main motivation is to extract complete information from the reflection data by using the multiples, thereby avoiding their elimination as a preprocessing step. The problem is solved by considering the imaging process as an inverse problem, where the measured data form the data space and the unknown reflectivities constitute the model space. 
Solving the inverse problem requires forward modeling and computing the gradient. The former is based on a modelling approach in which the amplitudes of the modeled data are driven exclusively by the reflectivity model (to be estimated), whereas the travel times depend only on the provided migration velocity model. Moreover, because the forward model is based on a recursive scheme (the Bremmer series), it is also possible to efficiently simulate data with any combination of multiple scattering. Therefore, by minimising the misfit between the observed and the modeled data, the crosstalk from multiples in the estimated reflectivity model is suppressed, because the process of fitting the data is no longer based on the single scattering assumption. An important component in the inversion process is extracting a model update for the reflectivities from the data misfit. It is also important to mention that complex wavefields are involved in the ‘imaging condition’ step, which clearly shows the contribution of the complex scattering. Therefore, the final inversion-based imaging process is called Full Wavefield Migration (FWM), and it is especially suited for situations where primaries provide a limited illumination of the subsurface, which can be compensated by the multiples. Furthermore, extensions of the method have been proposed as well, such as primary/multiple separation, source field estimation, deblending and missing data reconstruction. The virtues of FWM are successfully demonstrated on several numerical and field data examples.","","en","doctoral thesis","","987-94-6186-768-1","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:56e48c89-6d06-4ae3-8033-2e913ee09bee","http://resolver.tudelft.nl/uuid:56e48c89-6d06-4ae3-8033-2e913ee09bee","From Access to Re-use: A user's perspective on public sector availability","Welle Donker, F.M. (TU Delft OLD Geo-information and Land Development)","Korthals Altes, W.K. (promotor); van Loenen, B. (copromotor); Delft University of Technology (degree granting institution)","2016","Availability of public sector (geo-)information is essential for effective and efficient government policy-making. In addition, it is also associated with realising other ambitions, such as a more transparent and accountable government, more citizens’ participation in democratic processes, (co-)generation of solutions to societal problems, and increased economic value due to companies creating innovative and value-added products and services with public sector (geo-)information as a resource. Especially the latter ambition has been the subject of many publications stressing the enormous potential economic value of re-use of public sector (geo-)information by companies. Previous research indicated that re-users of public sector information encountered barriers related to technical, organisational, legal and financial aspects. These barriers were considered to be the main reason why, in Europe, the economic value due to value-added products appeared to be lagging. Especially restrictive licence conditions and high licence fees were often cited as the main barriers for re-users in Europe compared to re-users in the United States. Fortunately, more public sector information is becoming available as open data, and a trickle of products and services based on public sector information can be witnessed. However, the predicted tsunami of value-added products and services has not quite eventuated yet. 
This dissertation aims to assess which barriers re-users of public sector information still face after the introduction of open data, and what can be done to alleviate these barriers to lift the maturity of open data to a higher level.","public sector information; Accessibility; Open Data; re-use of PSI; open data business models; open data maturity","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-27-5","","","","A+BE | Architecture and the Built Environment No 21 (2016)","","","","","OLD Geo-information and Land Development","","",""
"uuid:cf1e2110-0ce7-4cc1-956b-f221d5f7b605","http://resolver.tudelft.nl/uuid:cf1e2110-0ce7-4cc1-956b-f221d5f7b605","Iterative data-driven load control for flexible wind turbine rotors","Navalkar, S.T. (TU Delft Team Raf Van de Plas)","van Kuik, G.A.M. (promotor); van Wingerden, J.W. (copromotor); Delft University of Technology (degree granting institution)","2016","Wind energy has reached a high degree of maturity: for wind-rich onshore locations, it is already competitive with conventional energy sources. However, for low-wind, remote and offshore regions, research efforts are still required to enhance its economic viability. While it is possible to reduce the cost of energy by upscaling wind turbines, it is believed that we may be approaching a plateau in turbine size. Beyond this plateau, the material costs associated with the high dynamic turbine loads would outweigh the benefits of increased energy capture. To postpone this plateau, research is currently being carried out in the active control of loads for lightweight, flexible rotors. Traditional control for wind turbines involves the use of fixed-structure low-order controllers, the gains of which are often hand-tuned separately for each turbine class. However, for the increasingly multivariable plant, such time-invariant approaches may no longer yield good performance. As such, the thesis focusses specifically on data-driven control for these flexible turbines. First, different data-driven approaches in the literature are evaluated and categorised as two-step approaches, which involve distinct online identification and control steps, and direct approaches, which use data to iteratively tune fixed-structure controller gains. These approaches need to be modified to be made tractable in real time for implementation on wind turbines. For time-varying plants, such as wind turbines, it is often interesting to perform identification repeatedly over time for the two-step data-driven approach. 
Conventional recursive identification is extended in this thesis through the use of the nuclear norm. The benefit of the nuclear norm is that it increases the responsiveness of the algorithm by suppressing the effect of external noise. Identification can be readily combined with repetitive control for reducing periodic loads in the Subspace Predictive Repetitive Control (SPRC) technique. SPRC can be performed in a restricted basis function subspace, thus reducing the computational complexity and providing smooth control signals. The control law is stabilising and performs well as long as the identification converges to relatively good estimates, and the system dynamics change slowly. For varying wind speed, the approach above would require continuous re-identification. As an alternative, a direct data-driven approach, Iterative Feedback Tuning (IFT), has been extended to gain-schedule tuning and to designing a Linear Parameter-Varying (LPV) controller for an LPV plant. This requires an exponential increase in the number of tuning experiments per iteration; however, structure can be used to reduce the computational complexity. IFT-LPV converges to a locally optimal low-order controller. These data-driven approaches are evaluated for the load control of flexible rotors. A review of the state of the art shows that, for the low-frequency region of the load spectrum, full-span pitch control has demonstrable control authority. For higher frequencies, among the new actuators, it is found that trailing-edge flaps have the highest level of technological maturity. Aeroservoelastic simulations are carried out to show the potential of the data-driven approaches. SPRC is able to adaptively tune itself to achieve average blade load reductions close to those achieved by conventional approaches under similar conditions. For these load reductions, the actuator duty is roughly half of that with the conventional approach. 
IFT-LPV has been used to tune a feedforward controller that works on similar basis functions scheduled on the azimuth. It can provide the correct control action irrespective of wind conditions. To expand the load control design space, pitch control is designed to stabilise an upwind turbine in yaw, without the yaw drive. This approach enables a trade-off between blade and support structure loads. SPRC is then investigated with wind tunnel experiments for pitch control of a scaled wind turbine. It reduces deterministic loads by over 60% with strict control over the pitch activity, and can also compensate for asymmetric blade control authority and changed operating conditions adaptively. Further, on this setup, individual pitch control (IPC) has been shown to perform yaw stabilisation for an upwind turbine for the first time. The setup blades are then redesigned to include free-floating trailing-edge flaps. First-principles models are set up for the system, and it is found that the system shows a low wind-speed form of flutter; this is validated experimentally. Recursive identification, using the nuclear norm, is able to track the unstable mode damping, and detect flutter twice as fast as conventional methods. Finally, a feedforward controller is tuned using IFT for combined pitch and flap control; the load peaks at 1P and 2P are almost entirely attenuated. IFT is also able to tune a linear gain schedule for operation across a range of wind speeds. It is concluded that iterative methods for data-driven control perform well for the highly uncertain control problem of flexible rotor load alleviation. For this, use has to be made of the structure of the problem. The two-step approach (such as SPRC), with combined recursive identification and control law synthesis, provides a convex first approximation of the desired controller. With the help of direct approaches (like IFT-LPV), the controller structure can be reduced and fine-tuned to improve the control performance. 
Such quasi-feedforward data-driven approaches can complement the existing turbine control structure and achieve enhanced load control performance for flexible rotors.","Load control of wind turbines; data-driven control; recursive identification; repetitive control; iterative feedback tuning; free-floating flaps; individual pitch control; flutter detection","en","doctoral thesis","","","","","","","","","","","Team Raf Van de Plas","","",""
"uuid:ffdfd697-640c-419a-b39a-539d57b17d60","http://resolver.tudelft.nl/uuid:ffdfd697-640c-419a-b39a-539d57b17d60","Novel thermal error reduction techniques in temperature domain","Morishima, T. (TU Delft Mechatronic Systems Design)","Munnig Schmidt, R.H. (promotor); van Ostayen, R.A.J. (copromotor); Delft University of Technology (degree granting institution)","2016","The relative importance of thermal errors in precision machines has increased in recent years. Requirements on precision machines, such as higher precision and higher productivity, become more stringent with every product generation. As the overall precision of these machines improves, thermal errors account for an ever larger share of the remaining error. At the same time, increasing the productivity of precision machines generally requires more energy consumption, resulting in increased heat generation, larger thermal errors, and poorer precision performance. To further improve the precision of precision machines while simultaneously increasing productivity, thermal errors need to be studied and reduced further.","","en","doctoral thesis","","978-94-028-0423-2","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:8c66ca0a-605e-4f22-a4f1-f59b7e9ac874","http://resolver.tudelft.nl/uuid:8c66ca0a-605e-4f22-a4f1-f59b7e9ac874","Making Fashion Sustainable: The Role of Designers","van der Velden, N.M. (TU Delft Design for Sustainability)","Brezet, J.C. (promotor); Delft University of Technology (degree granting institution)","2016","The dissertation ‘Making Fashion Sustainable – The Role of Designers’ describes the PhD research of Natascha M. van der Velden on the role designers are envisioned to take responsibility for in the transition towards a more sustainable fashion industry.
The current worldwide textile and apparel system is unsustainable – from both an environmental and a social point of view. The clothing industry is associated with (un-)sustainability problems ranging from materials depletion and toxic emissions to social exploitation.
This thesis argues that knowledge about life cycle thinking and life cycle assessment (LCA, as a method to calculate ‘eco- and socio-burden’) could accelerate the transformation towards a more sustainable fashion production system. Therefore, designers are encouraged to include findings that result from the application of the LCA method in the fashion design process, with the aim of gaining insight into the sustainability hotspots over the clothing product’s life cycle. This knowledge can help designers to apply ecodesign and create ‘Life Cycle Clothing’, and is intended to enhance the self-empowerment-based learning and probing process of the designers, the makers and the wearers of fashion. It is envisaged, within the wider context of the national and international governance of the fashion branch’s sustainable development, that (i) the analytical methods and ecodesign approaches from this study, together with (ii) the self-empowerment process, will be essential elements (even necessary conditions) for a successful transition.
The conclusions of the research suggest practical guidelines for designers who are willing to adopt a different role than many of their predecessors, and – possibly with help from the tranS-LCA-tor – become frontrunners of sustainable fashion by adding quantitative sustainability assessment to their ‘portfolio of skills’.","","en","doctoral thesis","","978-94-6186-754-4","","","","","","2016-12-05","","","Design for Sustainability","","",""
"uuid:9489dc5c-49da-41d3-a4fa-b122e77e98c6","http://resolver.tudelft.nl/uuid:9489dc5c-49da-41d3-a4fa-b122e77e98c6","Cyber-physical solution for an engagement enhancing rehabilitation system","Li, C. (TU Delft Cyber-Physical Systems)","Horvath, I. (promotor); Ji, L (promotor); Rusak, Z. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","","","","","","","","","","Cyber-Physical Systems","","",""
"uuid:a30f03a2-ea65-44fe-9e63-1551e2722450","http://resolver.tudelft.nl/uuid:a30f03a2-ea65-44fe-9e63-1551e2722450","Self-organizing energy-autonomous systems","Liu, Q. (TU Delft Embedded Systems)","Brazier, F.M. (promotor); Langendoen, K.G. (promotor); Warnier, Martijn (copromotor); Pawełczak, Przemysław (copromotor); Delft University of Technology (degree granting institution)","2016","With the rapid development of mobile technology, more and more devices connect to the Internet of Things (IoT). The management of such large-scale networks becomes a challenge. Firstly, a large number of heterogeneous devices are distributed over a wide area, leading to a variation in the requirements of users, the performance of mobile devices, and the application scenarios. As the size of the IoT increases, the complexity of controlling such systems becomes a challenge. Most existing solutions choose global control, and are designed for a specific type of application scenario. However, any changes in the network, e.g. topology, node density, etc., affect the control schedule of the central node. Once the context changes beyond the adaptation ability, the system can hardly function anymore. Furthermore, the central node is a single point of failure in the control structure. Therefore, it is critical to find a solution with autonomous management, in which networks are organized and controlled by the local management of each node. Secondly, maintaining the power supply for a large number of battery-operated mobile devices in the IoT becomes a challenge. The most direct solution is to replace batteries of devices periodically. However, this costs a great deal of money, time, and human resources. Increasing the size of the battery is another commonly used approach, but this enlarges the form and weight of devices, which is unsuitable for application scenarios where size and weight of devices should be minimized. 
Therefore, we need an approach where devices have autonomous energy, in which batteries of mobile devices can be wirelessly charged. Based on the motivation above, the research of this dissertation is positioned in the area of autonomic computing. The proposed systems are self-adaptive, self-organized, and use radio-frequency based wireless power transfer. Specifically, nodes in the network can achieve global operation based on local information exchange and the control of each node, and increase battery lifetime by harvesting energy from transmitted radio waves and decreasing the duty cycle of the radio in the communication protocol. In the area of self-adaptive self-organization systems, we explore controlling networks based on local information exchange. The global operation of the whole network is controlled by the local management of each node. The advantage is that nodes do not need to collect a large amount of global information, which largely decreases the communication complexity of the network. We leverage this mechanism in two case studies. First, we target data aggregation in mobile networks. Our algorithm uses evolutionary dynamics to select and spread the configuration of each node, and the network automatically adapts to the variation of application scenarios. The network can optimize configurations without a predesigned setup for a specific scenario. In the second case study, we design an algorithm to achieve distance estimation with self-organization in large-scale mobile networks. The algorithm uses messages collected by local information exchange for statistical calculation, and the network collectively estimates distances between nodes in the network. This improves the accuracy and extends the application area of the existing distance estimation approaches. 
In the area of wireless power transfer systems, the main contribution is based on the exploration of increasing the efficiency of energy transmission and utilization in mobile devices using radio-frequency based wireless power transfer. First of all, we exploit the properties of active and backscatter radio for increasing the energy efficiency of harvesters. We demonstrate the world’s first hybrid radio platform that combines the strengths of active radio (long range and robustness to interference) and backscatter radio (low power consumption). We design a switching mechanism that selects active radio or backscatter radio for different radio channel qualities. The measurement results on mobile devices prove that harvesting and saving radio energy is not the only choice to provide autonomous energy, and that backscatter radio for communication is more energy efficient for some applications on mobile devices. Second, we save energy on the charger side to make wireless power transfer green. Wireless power transfer based on radio frequency radiation and rectification is fairly inefficient due to power decaying with distance, antenna polarization, etc. To save energy in chargers, we monitor the idle charging state in wireless power transfer networks and switch off the energy transmitters when the received energy is too low for rectification. Although this system does not directly increase the efficiency of the radio harvesting process, the saved energy in chargers largely boosts the energy efficiency of the whole wireless power transfer network. The system is especially valuable for increasing the lifetime of mobile chargers powered by batteries. Finally, to demonstrate the value of energy autonomy in real applications, we select indoor localization using wireless power transfer as a case study. We design a battery-less indoor localization system that can operate perpetually under wireless power transfer. 
The novel localization method operates at energy levels that are within the energy budget provided by wireless power transfer today, and the communication schedule is well-designed to minimize the amount of idle listening. We use off-the-shelf devices to implement and deploy the system. It proves the feasibility of using long-range wireless power transfer for mobile systems.","SELF-ORGANIZING; ENERGY-AUTONOMOUS","en","doctoral thesis","","978-94-6186-762-9","","","","This work was carried out in the ASCI graduate school of TUDelft. ASCI dissertation series number: 363","","","","","Embedded Systems","","",""
"uuid:b3333087-029e-4927-901b-3eb4b96b08ca","http://resolver.tudelft.nl/uuid:b3333087-029e-4927-901b-3eb4b96b08ca","Collecting lessons learned: How project-based organizations in the oil and gas industry learn from their projects","Buttler, T. (TU Delft System Engineering)","Verbraeck, A. (promotor); Lukosch, S.G. (copromotor); Delft University of Technology (degree granting institution)","2016","Project-based organizations collect lessons learned in order to improve the performance of projects. They aim to repeat successes by using positive lessons learned, and to avoid repeating negative experiences by using negative lessons learned. Cooke-Davies (2002) claimed that the ability to learn from past projects for future projects is a key success factor in project management. Despite their importance, organizations can be ineffective in collecting and using lessons learned.","systems engineering; knowledge management; project management; organizational learning; collaboration engineering; teams in the workplace; small groups","en","doctoral thesis","","978-94-6186-759-9","","","","","","","","","System Engineering","","",""
"uuid:f7b33aaa-6b21-4f8a-9fd7-022bec55f114","http://resolver.tudelft.nl/uuid:f7b33aaa-6b21-4f8a-9fd7-022bec55f114","RF CMOS Oscillators for Cellular Applications","Shahmohammadi, M. (TU Delft Electronics)","Staszewski, R.B. (promotor); Delft University of Technology (degree granting institution)","2016","","RF; oscillator; 1/f noise up-conversion; impulse sensitivity function; wide tuning range; Colpitts oscillator; coupled oscillators; all-digital phase-locked loop (ADPLL)","en","doctoral thesis","","978-94-6233-477-9","","","","","","","","","Electronics","","",""
"uuid:2792af2d-a3c2-4e94-be5a-35a6e9119af7","http://resolver.tudelft.nl/uuid:2792af2d-a3c2-4e94-be5a-35a6e9119af7","Multi-agent control of urban transportation networks and of hybrid systems with limited information sharing","Luo, R. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); van den Boom, A.J.J. (copromotor); Delft University of Technology (degree granting institution)","2016","This thesis aims at developing efficient methods for the control of large-scale systems by employing state-of-the-art control methods and optimization techniques. This thesis is divided into two parts. In the first part, we address dynamic traffic routing for urban transportation networks. In the second part, we address multi-agent model predictive control of a class of hybrid systems with limited information sharing and subject to global hard constraints.","multi-agent control; model predictive control; dynamic traffic routing; hybrid systems","en","doctoral thesis","TRAIL Research School","978-90-5584-213-1","","","","TRAIL Thesis Series T2016/21, the Netherlands TRAIL Research School","","","","","Team Bart De Schutter","","",""
"uuid:54f6098c-4c99-4129-8d8c-353a9411c0ae","http://resolver.tudelft.nl/uuid:54f6098c-4c99-4129-8d8c-353a9411c0ae","Turbulence and turbulent heat transfer at supercritical pressure","Peeters, J.W.R. (TU Delft Energy Technology)","Boersma, B.J. (promotor); van der Hagen, T.H.J.J. (promotor); Pecnik, Rene (copromotor); Delft University of Technology (degree granting institution)","2016","","Turbulence; Heat transfer; Supercritical Pressure; Direct Numerical Simulations","en","doctoral thesis","","978-94-6233-482-3","","","","","","","","","Energy Technology","","",""
"uuid:f1b6f6e0-1542-4744-943e-04c7c551213b","http://resolver.tudelft.nl/uuid:f1b6f6e0-1542-4744-943e-04c7c551213b","Burial-related fracturing in sub-horizontal and folded reservoirs: Geometry, geomechanics and impact on permeability","Bisdom, K. (TU Delft Applied Geology)","Bertotti, G. (promotor); Delft University of Technology (degree granting institution)","2016","","Natural fractures; aperture; equivalent permeability; deterministic modelling","en","doctoral thesis","","9789461867407","","","","","","","","","Applied Geology","","",""
"uuid:6787a6ac-a3fc-4735-a0c1-2d0f46f3ac3d","http://resolver.tudelft.nl/uuid:6787a6ac-a3fc-4735-a0c1-2d0f46f3ac3d","Alignment of Partnering with Construction IT: Exploration and Synthesis of network strategies to integrate BIM-enabled Supply Chains","Papadonikolaki, E. (TU Delft Design & Construction Management)","Wamelink, J.W.F. (promotor); Vrijhoef, R. (promotor); Delft University of Technology (degree granting institution)","2016","Supply Chain Management (SCM) and Building Information Modelling (BIM) are seen as innovations that can manage complexities in construction by focusing on integrating processes and products respectively. Whereas these two innovations have been considered compatible, their practical combination has been mainly anecdotal. The Netherlands was the locale of this study, where both SCM and BIM have been popular approaches. The research objective is to explore their real-world combination and propose strategies for the alignment of SCM and BIM, by viewing Supply Chain (SC) partnering as the inter-organisational proxy of SCM. The main question is: “How to align the SCM philosophy with BIM technologies to achieve integration in the construction industry? What aspects contribute to this alignment?”. The methodology was mixed and both qualitative and quantitative data were analysed. The overarching method was case study research and the unit of analysis was the firm, also referred to as ‘actor’.
After a semi-chronological review of the relevant literature, the two constructs of SCM and BIM were found to be interdependent in product-, process-, and actor-related (P/P/A) dimensions. The study consisted of four other consecutive studies. First, empirical insights into the practical implementation of SC partnering and BIM were obtained via the exploration of five cases. Second, a conceptual model for the quantitative analysis of the product-, process-, and actor-related dimensions was designed. Third, this model and mixed methods were applied to two polar (extreme) cases to analyse the contractual (typically SC-related), digital (typically BIM-related), and informal interactions among the involved actors. Fourth, an additional theoretical exploration of the BIM-enabled SC partnerships took place, focusing also on intra-organisational relations within the involved firms. After the four studies, the findings were systematically combined to create the theoretical synthesis, i.e. generate theory. Three consecutive steps of ‘construct’, ‘internal’, and ‘external’ validity took place after the synthesis, to define the transferability of the findings. The systematic combination of findings deduced two routes to achieve SC integration in construction: (a) product-related (emphasis on BIM tools), and (b) actor-related (emphasis on SCM philosophy).
The two observed routes to SC integration emerged from the data of the polar cases. Two complementary sets of strategies for SC integration were derived afterwards. These strategies could ease the identification of which route is the ‘closest fit’ to SC integration, and then support decision-making on how to pursue it. As the concept of BIM is currently a hot topic, it might be wise to undertake a ‘product-related’ route to integration and gradually introduce strategies from the ‘actor-related’ route. However, the ‘actor-related’ route could attain long-term integration and thus, long-lasting relations among the multi-actor networks. The key aspects of the alignment of partnering with construction IT for BIM-enabled SC partnerships are:
- The identification of whether the SC complexity is of process-, product- or actor-related nature;
- The deployed BIM collaboration patterns, i.e. ad-hoc, linear or distributed;
- The SC coordination mechanisms, e.g. centralised or decentralised;
- The relation between formal and informal aspects, e.g. symmetric or asymmetric;
- The emerging intra-organisational relations due to BIM and SCM implementation;
- The hierarchical level to which BIM-enabled SC partnership decision-making pertains.
As the construction industry evolves into an information-driven sector, the alignment of construction IT with inter-organisational management is paramount for managing the inherent complexities of the industry. In parallel, embracing inter-organisational approaches for information management such as BIM is a promising way forward for SCM and construction management.","Building Information Modelling; BIM; supply chain management; SCM; Partnering","en","doctoral thesis","A+BE | Architecture and the Built Environment","9789492516190","","","","A+BE | Architecture and the Built Environment No 20 (2016)","","2016-11-29","","","Design & Construction Management","","",""
"uuid:92eb1908-8ac7-41ea-aa1e-09cbbdef4f55","http://resolver.tudelft.nl/uuid:92eb1908-8ac7-41ea-aa1e-09cbbdef4f55","Yield and flow stress of steel in the austenitic state","van Liempt, P. (TU Delft (OLD) MSE-3)","Sietsma, J. (promotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","","","","","This research was funded by Tata Steel Research and Development and carried out under project no.MC10.07297 in the framework of the Research Program of the Materials innovation institute(M2i).","","","","","(OLD) MSE-3","","",""
"uuid:e4ae4035-72dc-41cc-bad0-eab044e0613a","http://resolver.tudelft.nl/uuid:e4ae4035-72dc-41cc-bad0-eab044e0613a","Improving Flood Prediction Assimilating Uncertain Crowdsourced Data into Hydrologic and Hydraulic Models","Mazzoleni, M. (TU Delft Water Resources)","Solomatine, D.P. (promotor); Alfonso, L (copromotor); Delft University of Technology (degree granting institution)","2016","Monitoring stations have been used for decades to measure hydrological variables,
and mathematical water models used to predict floods can be enhanced by the
incorporation of these observations, i.e. by data assimilation. The assimilation of
remotely sensed water level observations in hydrological and hydraulic modelling
has become more attractive due to their availability and spatially distributed nature.","","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-138-03590-4","","","","Dissertation submitted in fulfilment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Water Resources","","",""
"uuid:6d23ede7-8f1f-494e-930c-e2ecc6546160","http://resolver.tudelft.nl/uuid:6d23ede7-8f1f-494e-930c-e2ecc6546160","Infrasonic Interferometry: Probing the atmosphere with acoustic noise from the oceans","Fricke, J.T. (TU Delft Applied Geophysics and Petrophysics)","Simons, D.G. (promotor); Wapenaar, C.P.A. (promotor); Evers, L.G. (promotor); Delft University of Technology (degree granting institution)","2016","In this thesis, the possibilities of infrasonic interferometry for probing the troposphere and stratosphere with microbaroms are explored. Infrasonic interferometry determines the delay time between two sensors by cross correlating their infrasound recordings. The obtained delay time can be used to estimate the Green’s function, which is the impulse response of the medium. Until now this approach was applied in other research fields, e.g., oceanography, seismology and ultrasonics, but it was used only occasionally for infrasound studies. In this thesis, the new research field of infrasonic interferometry is explored from the theoretical basics, via numerical experiments with synthetic data, up to the application to tropospheric and stratospheric microbaroms.
1- to study the role and importance of small water reservoirs in (semi-)arid regions (case study: Upper East Region of Ghana);
2- to investigate the effects of atmospheric stability conditions over small lakes on heat fluxes;
3- to use over-land measured values of air temperature to estimate unknown water surface temperature values, which are needed for heat exchange estimation;
4- to calculate heat and mass transfer coefficients accurately by using a Computational Fluid Dynamics (CFD)-based approach (CFDEvap Model);
5- to investigate temperature dynamics as well as circulation in small water bodies to develop a comprehensive framework (Shallow Small Lake Framework: SSLF);
6- to study small water surfaces and Atmospheric Boundary Layer (ABL) interactions using CFD;","Small Shallow Lakes; Computational Fluid Dynamics (CFD); Atmospheric Boundary Layer (ABL); Evaporation; Turbulence; Arid and Semi-arid Regions","en","doctoral thesis","","978-94-028-0444-7","","","","","","","","","Water Resources","","",""
"uuid:6461fab4-564a-4b91-851f-d27c96434991","http://resolver.tudelft.nl/uuid:6461fab4-564a-4b91-851f-d27c96434991","Microbiology in swimming pools: UV-based treatment versus chlorination","Peters, M.C.F.M. (TU Delft Sanitary Engineering)","Rietveld, L.C. (promotor); Vrouwenvelder, J.S. (promotor); de Kreuk, M.K. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","","","","","","","","","","Sanitary Engineering","","",""
"uuid:7811cb22-c40b-45a5-a71f-202050e06d66","http://resolver.tudelft.nl/uuid:7811cb22-c40b-45a5-a71f-202050e06d66","Adaptive planning for resilient coastal waterfronts: Linking flood risk reduction with urban development in Rotterdam and New York City","van Veelen, P.C. (TU Delft OLD Urban Compositions)","Meyer, Han (promotor); van de Ven, F.H.M. (copromotor); Delft University of Technology (degree granting institution)","2016","Many delta cities and urbanized coastal regions face increasing risks of flooding due to climate change and sea level rise and will have to adapt. The question central to this study is what opportunities spatial development and urban renewal offer to speed up the process of adaptation and reduce costs, while creating spatial value and wider benefits for society. To answer this question, a new method is introduced that is based on developing adaptation paths and analysing urban dynamics. The method has been tested in two flood-prone waterfront areas in Rotterdam and New York. Both cases have shown that identifying opportunities for adaptation based on an analysis of planned investment projects and the life cycle of buildings and urban infrastructure helps to systematically assess the effectiveness of adaptation paths. Moreover, it helps to identify new spatial transformations that provide opportunities for adaptation that had not yet been identified, or positively assessed, before. This makes it easier to synchronize adaptation measures in space and time, and to develop more comprehensive strategies. Both cases show that in intensively built-up environments, such as Rotterdam and New York, adapting existing buildings, despite changes in building regulations, is not a sustainable solution to increasing flood risk. In the long term, an integral solution is necessary: developing integrated flood protection as part of the redevelopment of the waterfront.
To make this possible, however, it is necessary to develop new financial arrangements that fairly distribute the costs and benefits among the stakeholders, and to make major changes in the policy regime.","","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-21-3","","","","A+BE | Architecture and the Built Environment No 19 (2016)","","2016-11-25","","","OLD Urban Compositions","","",""
"uuid:8da2aedf-ac26-4763-9271-66c6981007fe","http://resolver.tudelft.nl/uuid:8da2aedf-ac26-4763-9271-66c6981007fe","A virtual sleepcoach for people suffering from insomnia","Horsch, C.H.G. (TU Delft Interactive Intelligence)","Neerincx, M.A. (promotor); Brinkman, W.P. (copromotor); Delft University of Technology (degree granting institution)","2016","People suffering from insomnia have problems falling asleep or staying asleep. Insomnia impairs people’s daily lives and decreases their quality of life. Approximately 10% of the population suffers from insomnia. The common treatment for insomnia is cognitive behavioural therapy for insomnia (CBT-I), mostly delivered by a therapist whom people see once a week. A disadvantage of the current practice of insomnia treatment is its limited accessibility. Moreover, adherence to CBT-I exercises seems to be difficult. A virtual sleep coach provided through a smartphone might be a possible solution to both of these drawbacks. A virtual coach is never tired, never frustrated, never forgets things, and never gives up. Furthermore, it could improve accessibility, give tailored background information, offer personalized advice and feedback, monitor progress, provide support, and automatically track behaviour. Additionally, the majority of people in wealthy nations own a smartphone and emerging countries are expected to follow soon, making this type of intervention readily accessible to a large group of people. In short, a virtual sleep coach seems to be a good opportunity to improve traditional CBT-I. Concurrent to developing such a virtual sleep coach, answers to the question of how persuasive strategies can contribute to treatment adherence in an effective virtual sleep coach are explored.","","en","doctoral thesis","","978-94-6186-732-2","","","","","","","","","Interactive Intelligence","","",""
"uuid:e7340a8e-1815-469c-9e46-ddea1ef17b04","http://resolver.tudelft.nl/uuid:e7340a8e-1815-469c-9e46-ddea1ef17b04","Fluidized Nanoparticle Agglomerates: Formation, Characterization, and Dynamics","Fabre, A. (TU Delft ChemE/Product and Process Engineering)","van Ommen, J.R. (promotor); Kreutzer, M.T. (promotor); Delft University of Technology (degree granting institution)","2016","Nanoparticles have properties of interest in biology, physics, ecology, geology, chemistry, medicine, aerospace, food science, and engineering, among many other fields, due to their intrinsic properties arising from their large surface-area-to-volume ratio and small scale. Most nanoparticle applications require adaptation of the particles’ surface, for which numerous methods have been developed. For this purpose, the characteristics of fluidization that make it an attractive processing technique are the large gas-solid contact area, the absence of solvent, potential scalability, and suitability for continuous processing. Nanoparticles are not fluidized individually, but rather as clusters, which form due to the relatively large interparticle forces. As a result, fluidization dynamics is strongly linked to nanoparticle agglomeration.","","en","doctoral thesis","","978-94-6186-721-6","","","","","","","","","ChemE/Product and Process Engineering","","",""
"uuid:85efaf4c-9dce-4111-bc91-7171b9da4b77","http://resolver.tudelft.nl/uuid:85efaf4c-9dce-4111-bc91-7171b9da4b77","A Methodology for the Design of Kite-Power Control Systems","Fechner, U. (TU Delft Wind Energy; TU Delft Delft University Wind energy research institute)","van Bussel, G.J.W. (promotor); Schmehl, R. (copromotor); Delft University of Technology (degree granting institution)","2016","","kite control; airborne wind-energy; distributed control; kite-power systems; efficiency; pumping kite-power system","en","doctoral thesis","","978-94-028-0409-6","","","","","","","","","Wind Energy","","",""
"uuid:9020a4c1-f81a-4a8c-86f4-024452585505","http://resolver.tudelft.nl/uuid:9020a4c1-f81a-4a8c-86f4-024452585505","Virtualizing The Internet of Things","Sarkar, C. (TU Delft Embedded Systems)","Langendoen, K.G. (promotor); Venkatesha Prasad, Ranga Rao (copromotor); Delft University of Technology (degree granting institution)","2016","Computers were invented to automate the labour-intensive computing process. The advancement of semiconductor technology has reduced the form factor and cost of computers, and increased their usability. This has gradually introduced computers into various control and automation systems. The further rise of miniaturized computing devices paves the way for autonomous monitoring using embedded devices. In the last two decades, we have observed a huge surge of such monitoring and control systems. These systems are generally termed wireless sensor and actuator networks (WSANs). In a WSAN, a number of sensor nodes monitor a deployment area where data collection by humans is either difficult or costly. These devices collaboratively report their sensor readings to a centralized node called the sink. The sink is connected to the Internet and thus delivers the data to the outside world. In this way the deployment region can be monitored remotely. Similarly, some actuators can also be controlled remotely through the sink.
In the last decade, the concept of the Internet of Things (IoT) has evolved, where any device can be reached by any other device/system/human being from anywhere at any time. Thus, WSANs can be seen as a precursor of IoT. However, the vision of IoT is not limited to mere remote connectivity. Unlike traditional WSANs, where devices are deployed in remote/critical locations for specific purposes, IoT devices will be integrated into our daily surroundings, assisting us in every aspect of life. As embedded devices are resource-constrained, energy and computational efficiency is a major challenge for both WSAN and IoT devices. However, the problem escalates as IoT devices are expected to perform a number of tasks, as opposed to the specific task performed by classical WSANs. Moreover, the goal of IoT is to take humans out of the control loop, or to reduce human intervention as much as possible. This requires devices to exchange data and cooperate among themselves. Thus, IoT devices need to act smartly, fulfilling various requirements within their resource constraints.
Every existing and upcoming device and network will be part of the IoT ecosystem. As the number of devices is expected to grow multifold, managing these devices will be a challenge, especially since they are under the control of various entities/organizations. Moreover, the manufacturers of the various devices and their specifications also vary significantly. To accomplish the vision of IoT, these devices need to be able to cooperate and collaborate among themselves even if they are managed differently. This thesis brings forward the concept of virtualization in IoT to tackle the challenges of a global IoT ecosystem.
The first challenge that we tackle is how to virtualize the IoT. We propose a reference architectural model for IoT called DIAT. The reference architecture follows a layered design principle, where each layer groups a number of similar functionalities together. This enables easy, independent development of existing and new functionalities at each layer. To validate the feasibility and usability of such an architectural model, we developed a system based on a practical IoT-application scenario. To this end, we developed a controller (iLTC) that operates the heating and lighting systems in an office environment such that these devices operate energy-efficiently. At the same time, the system ensures comfortable surroundings for the occupants while eliminating any direct involvement from them.
As WSANs are an integral part of the IoT ecosystem, we next revisited some of the classic problems of WSANs in the wake of virtualizing the IoT. As energy efficiency is one of the biggest issues in WSANs, we propose a solution to reduce the overall traffic in a network without affecting the quality of data/monitoring. We achieved this by virtualizing the WSAN, which leads to higher cooperation among the devices and greater operational optimization. We developed the virtual sensing framework (VSF), which exploits the inherent correlation among the sensor nodes to predict sensor readings (virtual sensing). The basic idea is that if a number of nodes are highly correlated, sensor readings from only one of them are sufficient to predict the readings for the rest. Due to virtualization, such cooperation among the nodes is possible. This reduces the amount of data transfer within the network, which leads to energy-efficient network operation.
Further, we developed an efficient data collection protocol, called Sleeping Beauty, that complements the virtualized sensor network. Based on a centralized schedule, nodes deliver their sensor readings to the sink reliably and efficiently. Realizing a centralized schedule depends on network-wide time synchronization. As the hardware clock of an embedded device drifts significantly within a short time span, we developed a simple self-rectification mechanism such that the overhead of periodically synchronizing the network can be reduced significantly. This technique can be used by any protocol that requires time synchronization, not just Sleeping Beauty.
Timely data collection is another desired aspect of IoT, as opposed to classical WSANs, where latency is generally compromised in order to achieve higher energy efficiency. We developed a communication mechanism, called Rapid, that not only delivers the sensor readings within a fixed time bound but also reduces the energy consumption. Rapid forms a number of clusters on the fly, where the cluster heads collect data from the cluster members and send an aggregated packet to the sink. By exploiting the capture effect, Rapid achieves parallelization of intra-cluster communications. Further, it exploits constructive-interference-based fast flooding to deliver the aggregated data, which eliminates hop-by-hop flow scheduling. These two factors reduce the overall end-to-end delay of all the flows.
The proposition of this thesis is that by means of virtualization, traditional WSANs can be easily integrated into the grand vision of IoT. We proposed a reference architecture, validated by means of a case study, and developed several amendments to classical WSAN data collection, making it consume less energy and achieve lower latency. We are convinced that virtualization can be applied effectively to other (WSAN) functionality as well. The future of IoT is looking bright.","virtualization; Internet of Things (IoT); virtual sensing","en","doctoral thesis","","978-94-6186-748-3","","","","","","","","","Embedded Systems","","",""
"uuid:7c67133f-6c68-4385-a7e6-0b43fd5e2045","http://resolver.tudelft.nl/uuid:7c67133f-6c68-4385-a7e6-0b43fd5e2045","Design and Analyses of Porous Concrete for Safety Applications","Agar Ozbek, AS","van Breugel, K. (promotor); Weerheijm, J. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","","","","","","","","","Engineering Structures","","","",""
"uuid:bece20f8-1d72-425e-b4b1-5d817e54f762","http://resolver.tudelft.nl/uuid:bece20f8-1d72-425e-b4b1-5d817e54f762","Autonomous crack healing in Cr2AlC and Ti2AlC MAX phase","Shen, L. (TU Delft (OLD) MSE-1)","van der Zwaag, S. (promotor); Sloof, W.G. (copromotor); Delft University of Technology (degree granting institution)","2016","The excellent mechanical properties, in combination with the capability to autonomously repair microcracks when exposed to air at high temperatures, make certain MAX phase metallo-ceramics promising candidate materials for components in a turbine engine, in particular for those components exposed to high temperatures and at risk of erosion due to loose airborne particles being sucked into the engine. Cr2AlC is a member of the family of self-healing MAX phases, but relatively little is known about its healing behaviour under controlled laboratory conditions or simulated turbine engine conditions as a function of its synthesis, composition and microstructure. The aim of the work described in this thesis was to study the healing behaviour of (micro-)cracks formed by erosive damage.","Cr2AlC; Ti2AlC; Self-healing; Erosion; High temperature","en","doctoral thesis","","978-94-6186-723-0","","","","","","","","","(OLD) MSE-1","","",""
"uuid:625d1a77-b39e-417a-9658-122676c47fb5","http://resolver.tudelft.nl/uuid:625d1a77-b39e-417a-9658-122676c47fb5","Radio frequency energy harvesting and low power data transmission for autonomous wireless sensor nodes","Rodrigues Mansano, A.L. (TU Delft Bio-Electronics)","Serdijn, W.A. (promotor); Delft University of Technology (degree granting institution)","2016","Since the Internet of Things (IoT) is expected to be the new technology to drive the semiconductor industry, significant research efforts have been made to develop new circuit and system techniques for autonomous/very low-power operation of wireless sensor nodes. Very low power consumption of sensors is key to increasing battery lifetime or allowing for battery-less (autonomous) operation of sensors, which contributes to preventing or reducing the high maintenance costs of battery-supplied sensors and reducing the amount of discarded batteries.
This thesis, entitled Radio Frequency Energy Harvesting and Low Power Data Transmission for Autonomous Wireless Sensor Nodes, presents very low-power consumption circuit and system techniques combined with energy harvesting that allow the creation of autonomous wireless sensor nodes. This work focuses on three main challenges: 1) how to improve energy harvesting efficiency, 2) how to minimize power consumption of data transmission and 3) how to combine low-power techniques and energy harvesting in a system. These challenges are addressed in this thesis with on-PCB and Integrated Circuits (IC) solutions.
The efficiency of radio frequency (RF) energy harvesting is improved by proposing a new charge-pump rectifier topology. The proposed topology uses a voltage boosting network to compensate for the voltage drop across the transistors. The new topology is presented and analyzed. Simulation results are compared with the theoretical analysis and with measurement results of the circuit, which has been fabricated in a 0.18um CMOS technology and operates at 13.53 MHz.
Although the efficiency of RF energy harvesting is improved using the above technique, low-power data transmission techniques should be developed at the same time to save energy. Pulse width modulation and impulse transmission techniques that minimize power consumption have been developed and are presented in this thesis.
The developed pulse modulation circuitry has been fabricated in 0.18um CMOS technology as part of a System on Chip (SoC). The new impulse transmitter topology for low-voltage, low-power operation has been fabricated on a PCB with microwave discrete components. Theoretical analysis, simulations and measurement results are shown to prove the impulse transmitter concept.
The circuits developed are integrated in a SoC with energy harvesting to prove the concept of autonomous wireless sensor nodes. Two sensor nodes have been designed and measured: one for autonomous temperature monitoring and the second for autonomous ECG monitoring. Both designs operate from wireless power without the use of batteries. Finally, the work developed in this thesis is summarized and future research possibilities are discussed.","RF; Energy Harvesting; Low-power; Wireless Sensor","en","doctoral thesis","","","","","","","","","","","Bio-Electronics","","",""
"uuid:6f5065d6-3a43-4f3f-839f-c530dc438851","http://resolver.tudelft.nl/uuid:6f5065d6-3a43-4f3f-839f-c530dc438851","Fault diagnosis and maintenance optimization for interconnected systems: With applications to railway and climate control systems","Verbert, K.A.J. (TU Delft Team Bart De Schutter)","De Schutter, B.H.K. (promotor); Babuska, R. (promotor); Delft University of Technology (degree granting institution)","2016","For many systems, like medical devices, nuclear reactors, and transportation systems, an adequate maintenance optimization approach is essential to ensure high levels of reliability and safety while keeping operational costs low. A promising approach towards this goal is condition-based maintenance, which plans maintenance only when the system health indicates a need for it. To infer the system health, monitoring devices are installed to collect health-related data. The path from the monitoring data to a maintenance schedule then involves the following steps:
1. fault diagnosis, i.e. detecting abnormal system behavior and identifying its cause;
2. failure prognosis, i.e. predicting future system health;
3. maintenance optimization, i.e. determining the required type of maintenance as well as the optimal time to perform the maintenance task.
Although various methods have been published for all three tasks, discrepancies still exist between the assumptions made in the literature and the conditions encountered in practice. These discrepancies include, e.g., unrealistic assumptions regarding the absence of component interdependencies and regarding the (number of) available monitoring signals. This thesis contributes to resolving these discrepancies by proposing methods for fault diagnosis, failure prognosis, and maintenance optimization, particularly focusing on narrowing the gap between theory and practice. When treating the individual tasks, the dependencies between fault diagnosis, failure prognosis, and maintenance optimization are explicitly taken into account.
This thesis is dedicated to developing signal processing algorithms for the design of high-speed wireless transceivers that can perform in highly reflective and harsh environments. This research work was initiated as a collaboration between TU Delft and an industrial partner, on research aimed at a short-range gigabit wireless link within a lithography machine. The underlying unique wireless environment, together with the challenging specifications of the communication link for mechatronic systems, made this a compelling research project.","","en","doctoral thesis","","978-94-6186-744-5","","","","","","","","","Signal Processing Systems","","",""
"uuid:b65e6e12-a85e-4846-9122-0bb9be47a762","http://resolver.tudelft.nl/uuid:b65e6e12-a85e-4846-9122-0bb9be47a762","Microscopic modelling of walking behaviour","Campanella, M.C. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Daamen, W. (copromotor); Delft University of Technology (degree granting institution)","2016","How can large pedestrian facilities be modelled and simulated accurately? This dissertation answers this question by analysing current pedestrian models and expanding one of them. The developed model (Nomad) describes pedestrians as individuals that minimise walking effort. The model is expanded to include behaviours that correspond to activities performed in pedestrian facilities such as train stations and airports. The model is calibrated and validated using a novel methodology that obtains parameters of general use.
To address the above challenge, this thesis proposes the design of a framework of novel methods and tools for the integration, visualization, and exploratory analysis of large-scale and heterogeneous social urban data to facilitate the understanding of urban dynamics. The research focuses particularly on the spatiotemporal dynamics of human activity in cities, as inferred from different sources of social urban data. The main objective is to provide new means to enable the incorporation of heterogeneous social urban data into city analytics, and to explore the influence of emerging data sources on the understanding of cities and their dynamics.
In mitigating the various heterogeneities, a methodology for the transformation of heterogeneous data for cities into multidimensional linked urban data is, therefore, designed. The methodology follows an ontology-based data integration approach and accommodates a variety of semantic (web) and linked data technologies. A use case of data interlinkage is used as a demonstrator of the proposed methodology. The use case employs nine real-world large-scale spatiotemporal data sets from three public transportation organizations, covering the entire public transport network of the city of Athens, Greece.
To further encourage the consumption of linked urban data by planners and policy-makers, a set of web-based tools for the visual representation of ontologies and linked data is designed and developed. The tools – comprising the OSMoSys framework – provide graphical user interfaces for the visual representation, browsing, and interactive exploration of both ontologies and linked urban data.
After introducing methods and tools for data integration, visual exploration of linked urban data, and derivation of various attributes of people and places from different social urban data, it is examined how they can all be combined into a single platform. To achieve this, a novel web-based system (coined SocialGlass) for the visualization and exploratory analysis of human activity dynamics is designed. The system combines data from various geo-enabled social media (i.e. Twitter, Instagram, Sina Weibo) and LBSNs (i.e. Foursquare), sensor networks (i.e. GPS trackers, Wi-Fi cameras), and conventional socioeconomic urban records, but also has the potential to employ custom datasets from other sources.
A real-world case study is used as a demonstrator of the capacities of the proposed web-based system in the study of urban dynamics. The case study explores the potential impact of a city-scale event (i.e. the Amsterdam Light festival 2015) on the activity and movement patterns of different social categories (i.e. residents, non-residents, foreign tourists), as compared to their daily and hourly routines in the periods before and after the event. The aim of the case study is twofold. First, to assess the potential and limitations of the proposed system and, second, to investigate how different sources of social urban data could influence the understanding of urban dynamics.
The contribution of this doctoral thesis is the design and development of a framework of novel methods and tools that enables the fusion of heterogeneous multidimensional data for cities. The framework could help planners, researchers, and policy makers capitalize on the new possibilities given by emerging social urban data. Having a deep understanding of the spatiotemporal dynamics of cities and, especially, of the activity and movement behavior of people is expected to play a crucial role in addressing the challenges of rapid urbanization. Overall, the framework proposed by this research has the potential to open avenues of quantitative exploration of urban dynamics, contributing to the development of a new science of cities.","Urban dynamics; Social urban data; Urban data science; GIScience; Human activity; Spatiotemporal analysis; Machine learning; Semantic integration; SocialGlass; OSMoSys","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-20-6","","","","A+BE | Architecture and the Build Environment No 18 (2016)","","2016-11-17","","","Web Information Systems","","",""
"uuid:9035bf0a-0d24-44f1-819c-9e064ecfad45","http://resolver.tudelft.nl/uuid:9035bf0a-0d24-44f1-819c-9e064ecfad45","A Context-Aware m-Health Application: Towards a Design Model for Developing Rural Areas","Nyakaisiki, S.R. (TU Delft Information and Communication Technology)","Tan, Y. (promotor); Bouwman, W.A.G.A. (promotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","","","","","","","","","","Information and Communication Technology","","",""
"uuid:286c36e8-3cac-403c-9d0a-72a5232c5093","http://resolver.tudelft.nl/uuid:286c36e8-3cac-403c-9d0a-72a5232c5093","Role of vegetation on river bank accretion","Vargas Luna, A. (TU Delft Environmental Fluid Mechanics)","Uijttewaal, W.S.J. (promotor); Crosato, A. (copromotor); Delft University of Technology (degree granting institution)","2016","There is rising awareness of the need to include the effects of vegetation in studies dealing with the morphological response of rivers. Vegetation growth on river banks and floodplains alters the river bed topography, reduces the bank erosion rates and enhances the development of new floodplains through river bank accretion. The role of riparian vegetation in river morphology is examined in this thesis, with particular attention to its effects on bank accretion, focusing on lowland streams in temperate climates. The work is based on the combination of extensive literature review, small- and large-scale laboratory experiments, field observations and numerical simulations in order to overcome the shortcomings of single approaches. The results of the study demonstrated that vegetation is essential for the accretion of river banks in non-clay dominated environments, highlighting the role of the colonization of new deposits by plants, which is strongly influenced by the hydrologic regime. Vegetation establishment plays a key role in the stabilization of the channel width and in the vertical accretion of both levees and floodplains. The vertical accretion and channel incision induced by colonizing plants showed that vegetation colonization increases the amplitude and length of the bars in the main channel, affecting the final river planform. The outcomes of this research emphasize the relevance of considering the effects of vegetation in river management and in the design, planning and maintenance of restoration projects.
To advance the understanding of river bank dynamics, future research is also recommended on quantifying the role of root systems and fine sediments in the reinforcement and consolidation processes of soils.","River morphodynamics; Bank accretion; Vegetation modelling","en","doctoral thesis","","978-94-6233-438-0","","","","","","2017-01-31","","","Environmental Fluid Mechanics","","",""
"uuid:7c1f62db-9a5a-4e02-8f11-488d6a299500","http://resolver.tudelft.nl/uuid:7c1f62db-9a5a-4e02-8f11-488d6a299500","Control-Theoretic Models of Feedforward in Manual Control","Drop, F.M. (TU Delft Control & Simulation)","Mulder, Max (promotor); Bülthoff, Heinrich H. (promotor); Pool, D.M. (copromotor); Delft University of Technology (degree granting institution)","2016","Understanding how humans control a vehicle (cars, aircraft, bicycles, etc.) enables engineers to design faster, safer, more comfortable, more energy efficient, more versatile, and thus better vehicles. In a typical control task, the Human Controller (HC) gives control inputs to a vehicle such that it follows a particular reference path (e.g., the road) accurately. The HC is simultaneously required to attenuate the effect of disturbances (e.g., turbulence) perturbing the intended path of the vehicle. To do so, the HC can use a control organization that resembles a closed-loop feedback controller, a feedforward controller, or a combination of both. Previous research has shown that a purely closed-loop feedback control organization is observed only in specific control tasks, in which the information presented to the human is very limited and which do not resemble realistic control tasks. In realistic tasks, a feedforward control strategy is to be expected; yet, almost all previously available HC models describe the human as a pure feedback controller lacking the important feedforward response. Therefore, the goal of the research described in this thesis was to obtain a fundamental understanding of feedforward in human manual control.
First, a novel system identification method was developed, which was necessary to identify human control dynamics in control tasks involving realistic reference signals. Second, the novel identification method was used to investigate three important aspects of feedforward through human-in-the-loop experiments which resulted in a control-theoretical model of feedforward in manual control. The central element of the feedforward model is the inverse of the vehicle dynamics, equal to the theoretically ideal feedforward dynamics. However, it was also found that the HC is not able to apply a feedforward response with these ideal dynamics, and that limitations in the perception, cognition, and action loop need to be modeled by additional model elements: a gain, a time delay, and a low-pass filter.
Overall, the thesis demonstrated that feedforward is indeed an essential part of human manual control behavior and should be accounted for in many human-machine applications.","System identification; Human Factors; Manual control; Feedforward; Feedback; Human-Machine Interaction; Vehicle design; Helicopter design; Aircraft design","en","doctoral thesis","","978-94-6186-728-5","","","","","","","","","Control & Simulation","","",""
"uuid:b038f8a2-d2db-46fc-8419-3141f21faa1c","http://resolver.tudelft.nl/uuid:b038f8a2-d2db-46fc-8419-3141f21faa1c","Surf Wave Hydrodynamics in the Coastal Environment","Salmon, J.E. (TU Delft Environmental Fluid Mechanics)","Pietrzak, J.D. (promotor); Holthuijsen, L.H. (copromotor); Delft University of Technology (degree granting institution)","2016","Stochastic wave models play a central role in our present-day wave modelling capabilities. They are frequently used to compute wave statistics, to generate boundary conditions and to include wave effects in coupled model systems. Historically, such models were developed to predict the wave field evolution in deep water where the conditions of Gaussianity generally hold. However, in recent decades, such models have been applied to the shallower coastal environment where the stochastic representation of the dominant wave physics becomes questionable. This is primarily due to the increased influence of wave nonlinearity and the additional depth-induced wave processes that are dominant in this region.
Unfortunately, the two most dominant wave processes in the surf zone, depth-induced wave breaking and nonlinear triad wave-wave interactions, are also the least well represented and understood. This is due to both their complexity and the scarcity of analytical solutions for realistic wave fields. As such, they represent a significant obstacle to the accurate modelling of the wave dynamics in the coastal region. Providing accurate representations of these wave processes is essential to meeting the demands placed on stochastic wave models by coastal engineers for coastal management and design. Such advancements are necessary to improve our understanding of wave-induced processes, to reduce costs in managing the coastal environment and to tackle contemporary issues such as uncertainties with respect to sea level rise.
Due to the complexity of depth-induced wave breaking, a complete representation of this wave process does not exist for both stochastic and deterministic modelling frameworks. Although there is extensive literature on the subject of parameterizing depth-induced wave breaking in a stochastic sense, these parameterizations are inconsistent with theory, observations and (deterministic) model predictions. In particular, present-day modelling defaults perform poorly over (near-)horizontal bathymetries with over-enhanced wave dissipation of locally-generated waves and insufficient dissipation of swell waves. Equally, nonlinear triad wave-wave interactions are poorly represented in stochastic wave models due to the problem of closure and the impractical computational expense of more accurate representations. In particular, the most commonly applied parameterization in the wave literature incorrectly predicts the evolution of the spectral shape, and the convergence to an equilibrium high-frequency tail deep in the surf zone. Correctly resolving these issues is essential for the management of many of the activities occurring at the coast; from the design of coastal defenses to feasibility studies for wave energy converters, from port operation and availability to vessel navigation, from understanding the ecology at the coast to the fisheries, and from managing leisure and tourism to safety at the coast.
In this work, we investigate the process of depth-induced wave breaking through a comprehensive analysis of the literature and a comparison of modelling performance. Here, we use an extensive set of wave observations representing a large range of wave conditions and bathymetric profiles. The analysis demonstrates that no currently available depth-induced breaking source term is capable of sufficiently representing the process of depth-induced wave breaking. This is shown to be in agreement with the wave literature with parameterizations either over-predicting wave dissipation for locally generated waves or under-predicting wave dissipation for non-locally generated waves over (near-)horizontal bathymetries. To address this issue, a new joint scaling using both local wave and bathymetric conditions is proposed. Using both the normalized characteristic wave number and local bottom slope unifies two approaches prevalent in the wave literature. This is shown to improve the model performance for the dissipation of both locally and non-locally generated waves over (near-)horizontal bathymetries.
Furthermore, the validity of the assumption that wave dissipation can be modelled as analogous to a 1D dissipative bore is explored. Subsequently, a heuristic directional modification is introduced for depth-induced wave breaking dissipation models. This directionally partitions the 2D spectrum into several directional partitions that are assumed to be unidirectional. Model results demonstrate that the effect of the directional partitioning is to reduce the dissipation of wave energy and to enhance the significant wave height; in agreement with field measurements. Not only is this modification shown to be applicable to the joint wave breaking parameterization proposed in this study, but also for well-established parameterizations.
The effects of both the proposed scaling and directional modification are then reviewed from an operational context and are compared to state-of-the-art source terms, field observations and a hypothetical storm representative of Dutch design conditions. Such design conditions are expected to be representative of design conditions found globally. In an environment where storm intensities may be increasing, for example due to global warming, the results of wave breaking models near the coast under such extreme conditions become of greater relevance. The influence of wave breaking models in coupled model systems is anticipated to provide important new insights in understanding the various wave-driven processes along our coasts.
Next, the representation of the nonlinear triad wave-wave interactions in stochastic wave models is reviewed. In particular, the collinear approximation used to transform 1D triad source terms for implementation in 2D stochastic wave models is revisited. These approximations are necessitated by considerations of computational efficiency. The conventional collinear approximation is shown to be inconsistent at the unidirectional limit and to be a primary source of modelling error. Instead of converging to the values predicted by the 1D triad source terms at the unidirectional limit, the energy transfers as computed by stochastic wave models are shown to become unbounded. This results in a dimensional calibration coefficient which is at least an order of magnitude smaller than that found in the wave literature. Consequently, for directional wave conditions, 1D triad source terms implemented with the conventional collinear approximation insufficiently capture the wave evolution. To address this problem, a new collinear approximation is presented which accounts for the wave energy contained within a finite directional bandwidth. This collinear approximation is shown to converge correctly at the unidirectional limit and to agree well with predictions from a second-order accurate deterministic wave model. In particular, better agreement is shown in the modelling prediction of the spectral shape and related integral parameters, e.g. wave period, under idealized wave conditions. Under certain conditions, these error reductions are shown to be more significant than differences between the underlying triad models.
The contribution of this work demonstrates that while the theory underpinning stochastic wave modelling in the coastal environment remains questionable, the accurate determination of wave statistics in the coastal zone is tenable. With the advancements presented in this study, the new source terms correspond better with the current wave literature and are shown to provide significant steps forward over existing default source terms. The developments presented here are anticipated to form the foundation for future source term research, and to be used for the representation of the dominant wave physics in the coastal environment in operational wave models.","wave dynamics; numerical modelling; coastal systems; wave breaking; nonlinear interactions; stochastic models","en","doctoral thesis","","978-94-92516-17-6","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:9b46e18b-1fa3-4517-a666-660e4a50f18e","http://resolver.tudelft.nl/uuid:9b46e18b-1fa3-4517-a666-660e4a50f18e","Computationally efficient analysis & design of optimally compact gear pairs and assessment of gear compliance","Amani, A. (TU Delft Emerging Materials)","Spitas, C. (promotor); Spitas, Vasilios (promotor); Delft University of Technology (degree granting institution)","2016","","gear design; spur gear; design parameters; pitch compatibility; interference; corner contact; pointed tip; undercutting; non-standard; non-dimensional; design guidelines; highest point of single tooth contact (HPSTC); finite element analysis; stress analysis; bending strength; compact gears; optimization; centre distance; deviation; tolerance zone; computational modelling; compact gear drive; compliance; bending compliance; foundational compliance; Hertzian compliance; non-dimensional modelling; Saint-Venant's Principle; cubic Hermitian interpolation","en","doctoral thesis","","978-94-6186-739-1","","","","","","2018-11-15","","","Emerging Materials","","",""
"uuid:99e0af9f-067e-4e67-8c81-d5beca0aeb18","http://resolver.tudelft.nl/uuid:99e0af9f-067e-4e67-8c81-d5beca0aeb18","Closed-Loop Surface Related Multiple Estimation","Lopez Angarita, G.A. (TU Delft ImPhys/Acoustical Wavefield Imaging)","de Jong, N. (promotor); Verschuur, D.J. (copromotor); Delft University of Technology (degree granting institution)","2016","Surface-related multiple elimination (SRME) is one of the most commonly used methods for suppressing surface multiples. However, in order to obtain an accurate surface multiple estimation, dense source and receiver sampling is required. The traditional approach to this problem is performing data interpolation prior to multiple estimation. Though appropriate in many cases, this methodology fails when large data gaps are present or when relevant information is not recovered, e.g. near-offset data in shallow-water environments. We propose a solution in which multiple estimation is performed simultaneously with data reconstruction, such that data reconstruction helps to obtain better multiple estimates and the physical primary-multiple relationship helps to constrain the data interpolation. To accomplish this we propose to extend the recently introduced Closed-Loop SRME (CL-SRME) algorithm to account for primary estimation in the case of coarsely sampled data. This is achieved by introducing a focal domain parameterization of the primaries in a sparsity-promoting CL-SRME method. Results prove that the method is capable of reliably estimating primaries in the case of shallow water and with large undersampling factors.","Closed Loop; Surface Multiples; Multiples; Multiple Scattering; Multiple Estimation; SRME; EPSI; Delphi multiples; CL-SRME","en","doctoral thesis","","978-94-6233-465-6","","","","","","","","","ImPhys/Acoustical Wavefield Imaging","","",""
"uuid:2e35b789-735a-47a3-ac4f-63dd7651de44","http://resolver.tudelft.nl/uuid:2e35b789-735a-47a3-ac4f-63dd7651de44","Rotational Dynamics of Icy Satellites: Tidal response and forced longitudinal librations at the surface of a viscoelastic Europa","Jara Orue, H.M. (TU Delft Astrodynamics & Space Missions)","Vermeersen, L.L.A. (promotor); Visser, P.N.A.M. (promotor); Delft University of Technology (degree granting institution)","2016","The icy satellites of the giant planets Jupiter and Saturn are among the most interesting celestial bodies in our Solar System. The interpretation of various remote sensing observations performed by the Voyager, Galileo and Cassini-Huygens missions strongly suggests that many icy satellites harbor a subsurface water ocean underneath the ice shell covering the satellites. Since the availability of liquid water is one of the prerequisites for the origin and evolution of life as we know it, the characterization of the physical properties of the putative subsurface oceans (and the overlying ice shells) has become a key research topic in planetary sciences. Due to the unavailability of direct means to explore these internal oceans, the relevant physical properties of the interior need to be derived from the combined interpretation of remote sensing measurements of observables such as the strength of the induced magnetic field, the amplitude of tidal deformations, the strength of gravity perturbations due to tides and the amplitude of forced longitudinal librations. This dissertation concentrates on the development of a unified and self-consistent physical model to determine the tidal and rotational response of a viscoelastic icy satellite with an internal water ocean. The developed tidal-rotational model is then applied to analyze the relation between the aforementioned observables (with the exception of the induced magnetic field) and the physical properties that characterize the internal structure of an icy satellite. 
The results in this thesis strongly suggest that the measurement of the libration amplitude of Europa’s shell with an accuracy of a few meters has the potential to provide a reasonable constraint on the rigidity of the ice-I shell in combination with measurements of the tidal response (Love numbers) at the surface. However, the modelling presented here is not extensive enough to support a strong conclusion regarding whether the ice shell thickness could be inferred from combined measurements of the libration amplitude and tidal Love numbers at the surface of Europa.","Icy moons; Tidal dynamics; Rotational dynamics; Geodynamics; Viscoelasticity; Longitudinal librations","en","doctoral thesis","","978-94-6299-466-9","","","","","","","","","Astrodynamics & Space Missions","","",""
"uuid:9d479423-a0de-4715-ab2b-d5c495e47960","http://resolver.tudelft.nl/uuid:9d479423-a0de-4715-ab2b-d5c495e47960","Policy Instruments to Improve Energy Performance of Existing Owner Occupied Dwellings","Murphy, L.C. (TU Delft OLD Housing Quality and Process Innovation)","Visscher, H.J. (promotor); Meijer, F.M. (copromotor); Delft University of Technology (degree granting institution)","2016","The aim of this thesis is to add to the knowledge of the role and impact of policy instruments in meeting energy performance ambitions in the existing owner occupied housing stock. The focus was on instruments available in the Netherlands in 2011 and 2012. These instruments represented the 'on the ground' efforts to meet climate change targets.","policy instruments; climate change","en","doctoral thesis","","978-94-92516-18-3","","","","","","","","","OLD Housing Quality and Process Innovation","","",""
"uuid:126c4b76-235e-411b-b5ed-c513b4d59dad","http://resolver.tudelft.nl/uuid:126c4b76-235e-411b-b5ed-c513b4d59dad","Detection, Imaging and Characterisation of Fog Fields by Radar","Li, Y. (TU Delft Atmospheric Remote Sensing)","Hoogeboom, P. (promotor); Russchenberg, H.W.J. (promotor); Delft University of Technology (degree granting institution)","2016","As a significant phenomenon in meteorology, fog has attracted growing attention from the scientific community because of its impacts on visibility in air and road transportation. For example, at airports, the frequency of aircraft taking off and landing has to be reduced during heavy fog, because in conditions of low visibility the pilots need more space between aircraft during landing and taxiing. In this context, many approaches have been proposed to detect fog with various types of instruments. Among the active remote sensing instruments, radars are well suited for continuous fog observations, and they can satisfy the need for high spatial resolution and sensitivity. Compared to traditional centimeter-wave radars, millimeter-wave radars are more sensitive to minute fog droplets, whereas the gaseous attenuation from oxygen and water vapor is still very small. The trade-off is that the attenuation from fog droplets at millimeter waves is much larger than at centimeter waves. In this thesis, we study the observation of fog with millimeter-wave radars and investigate the feasibility of developing an advanced fog-visibility radar.","Fog; Radar; Detection; Visibility estimator; Dual-wavelength technique","en","doctoral thesis","","978-94-028-0406-5","","","","","","","","","Atmospheric Remote Sensing","","",""
"uuid:b4fe0ca4-b8c7-4e23-a2f1-247ac3b61aeb","http://resolver.tudelft.nl/uuid:b4fe0ca4-b8c7-4e23-a2f1-247ac3b61aeb","Static aeroelastic optimization of composite wind turbine blades using variable stiffness laminates: Exploring twist coupled composite blades in stall control","Ferede, E.A. (TU Delft Wind Energy)","van Bussel, G.J.W. (promotor); Abdalla, M.M. (copromotor); Delft University of Technology (degree granting institution)","2016","The energy consumption of the world is growing, leading to rapid depletion of natural resources, such as fossil fuels. Added to that, the environmental impact of fossil fuels (e.g. global warming) makes a renewable source of energy a better alternative for power generation. Among renewable energy sources, generating energy from wind is becoming more popular. Although the number of installed wind turbines is increasing rapidly, there are still many challenges ahead in making the cost of generating energy from offshore wind competitive with other energy sources. One method for making the Cost of Energy from wind competitive is to reduce the operational and maintenance cost of wind turbines. The operational and maintenance cost of wind turbines may be reduced by eliminating, as much as possible, rotating components of the turbine which are prone to wear and tear. An alternative way to regulate power is to use a stall control scheme, thereby eliminating the need for a pitch mechanism. With recent advances in composite technology for tailoring the structural response of composite structures, it is possible to apply this technique to the conventional passive stall control scheme. In particular, the use of twist coupling to passively regulate the angle of attack, and thus also the torque and power of the wind turbine, shows promise for the design of adaptive blades for stall regulated wind turbines, with improved performance in terms of power and load control, as well as in terms of cost reduction.
Most of the research conducted so far investigates the benefit of twist coupled blades for power and/or load regulation; either based on a parametric study using few design variables or using simplified models for analysing the aeroelastic response of adaptive blades. In this thesis, a detailed optimization study is performed using variable stiffness laminates, to evaluate the potential of twist coupled blades to enhance the aerodynamic performance of stall controlled wind turbines. Furthermore, detailed structural and aerodynamic constraints are included in the optimization study, while using an analysis tool with sufficient complexity to accurately capture the aeroelastic response of twist coupled blades.","Isogeometric analysis; Stall Control; Adaptive blades; Composite Optimization; Blade Element Momentum theory","en","doctoral thesis","","978-94-6299-421-8","","","","","","","","","Wind Energy","","",""
"uuid:264107d4-30bc-414c-b1d4-34f48aeda6d8","http://resolver.tudelft.nl/uuid:264107d4-30bc-414c-b1d4-34f48aeda6d8","Design for Well-Being: An Approach for Understanding Users' Lives in Design for Development","Mink, A. (TU Delft Design for Sustainability)","Kandachar, P.V. (promotor); Diehl, J.C. (copromotor); Delft University of Technology (degree granting institution)","2016","Many of the Design for Development outcomes which are unsuited to the users and their environment are based on poorly defined needs and preferences. Product designers are trained to take the user perspective into account, but they are not specifically trained to conduct ethnographic research. Moreover, they have limited time and resources to explore the user context. A systemic approach that efficiently guides designers to develop a social needs inventory would therefore be valuable: an approach that urges designers to move beyond the investigation of product-user interaction and to look comprehensively at their potential users’ context and their valued beings and doings.
This book is about the development of such an approach. The Capability Driven Design approach guides product designers to conduct rapid, rigorous and comprehensive user context research, specifically in Design for Development projects. By using this approach, designers are guided to make informed design decisions and to improve the
accessibility, applicability, acceptance and adoption of their designs. To develop this approach, analytic guidance was derived from Sen’s ‘Capability Approach’, and practical guidance was derived from the domains of Human-Centred Design, Design for Development and Rapid Ethnography. The Capability Driven Design approach aims to support designers in designing products and/or services that improve the well-being of their users by enabling them to choose the lives that they value.","Design for Development; product design; user-centred design; user context research; rapid ethnography; capability approach; well-being","en","doctoral thesis","","978-90-6562-397-3","","","","This research was made possible by the Netherlands Organization for Scientific Research (NWO) under grant number 2009/06098","","","","","Design for Sustainability","","",""
"uuid:af7174a3-af24-42f4-8512-08a49382b69b","http://resolver.tudelft.nl/uuid:af7174a3-af24-42f4-8512-08a49382b69b","Computational Modeling of Multiphysics Multidomain Multiphase Flow in Fracturing Porous Media: Leakage Hazards in CO2 Geosequestration","Musivand Arzanfudi, M. (TU Delft Applied Mechanics)","Sluys, Lambertus J. (promotor); Al-Khoury, Rafid (copromotor); Delft University of Technology (degree granting institution)","2016","Geological CO2 sequestration, also known as CO2 geo-sequestration, is a process to mitigate CO2 emission into the Earth's atmosphere in an attempt to reduce the greenhouse effect. It involves injection of carbon dioxide, normally in a supercritical state, into a carefully selected underground formation. Selection of an appropriate geological formation for CO2 geo-sequestration requires a good knowledge of the involved processes and phenomena that occur at the subsurface, and in particular, an estimate of the amount of leakage that might take place over time. Modeling leakage of CO2 in a deformable porous medium constitutes the focal point of this thesis.
To this aim, a computationally efficient multiphysics multidomain multiphase numerical modeling framework has been developed which accounts for all important physical processes, interacting domains, and different material phases. The computational efficiency is achieved via tailoring several state-of-the-art numerical techniques in order to attain an accurate, geometry-independent, and mesh-independent model. Deriving such a model for the thermo-hydrodynamic-mechanical behavior of a multiphase domain exhibiting deformation and crack propagation requires a well-designed conceptual model, a descriptive mathematical formulation and an innovative numerical method. The conceptual model distinguishes different domains representing a porous matrix domain, an abandoned wellbore domain, a fracture domain and a fracture-matrix domain. The mathematical formulation adopts the representative elementary volume (REV) averaging based conservation equations for porous media, the drift-flux model averaging of the Navier-Stokes equations for the wellbore and fracture domains, and equations of state and constitutive relationships for the involved brine, CO2, air, and solid phases. The numerical solution method adopts a mixed discretization scheme, in which the standard Galerkin finite element method (SG), the partition of unity finite element method (PUM) within the framework of the extended finite element method (XFEM), and the level-set method (LS) are tailored together to obtain an accurate, geometry-independent, and mesh-independent solution.
The thesis introduces four computational models. The first model deals with CO2 leakage via formation layer boundaries and is capable of simulating multiphase flow in rigid heterogeneous layered porous media, with particular emphasis on the inter-layer leakage of CO2. This model is presented in Chapter 2. The second model deals with CO2 leakage via abandoned wellbores and is capable of simulating all important physical phenomena and processes occurring along the wellbore path, including fluid dynamics, buoyancy, phase change, compressibility, thermal interaction, wall friction and slip between phases, together with a jump in density and enthalpy between the air and the CO2. This model is presented in Chapter 3. The third model integrates the first and second models to create an integrated wellbore-reservoir numerical tool for the simulation of sequestrated CO2 multi-path leakage through formation layers and abandoned wellbores. This model is presented in Chapter 4. Finally, the fourth model deals with fracturing and CO2 leakage through cracks. It presents a fully coupled thermo-hydrodynamic-mechanical computational model for multiphase flow in a deformable and fracturing porous medium. This model is presented in Chapter 5. These four models cover all important CO2 sequestration processes and leakage mechanisms which might occur at a CO2 geo-sequestration site.
The numerical examples show that the proposed computational model, despite the relatively large number of degrees of freedom of different physical nature per node, is computationally efficient. Physically, the numerical examples show that for the normal initial and boundary conditions encountered in CO2 geo-sequestration, leakage via abandoned wellbores and leakage via formation layers can be equally important. Deformation and fracturing, together with leakage via the fractures, appear, based on the studied cases, to be a secondary concern. Although leakage via abandoned wellbores and leakage via formation layers appear to be equally important in terms of the quantity of leaked CO2, leakage through the wellbore comes with a greater risk because it can rapidly reach the ground surface. The results for leakage via fractures show that when the cap-rock is relatively less permeable, the risk of leakage via the fractures increases.
The computational models presented in this thesis can be utilized as a framework for the development of efficient and comprehensive numerical software, such that engineers can carry out realistic simulations with relatively limited hardware resources and CPU time. This is due to the computational efficiency of the proposed mixed discretization scheme. Further extensions of this work include: tailoring to other applications, improving the constitutive relationships of the solid phase, adding crack initiation and crack velocity, and adding dynamic force effects to the solid medium in order to account for seismic forces.
Methods for designing such swarms are proposed and analysed, as are the purported robustness and reliability commonly associated with swarms. The investigations show that, as with many systems, it is possible to create a swarm that is less reliable than even a single satellite, yet it is also possible to create one that is more reliable. However, this requires a paradigm shift: to achieve this goal, a swarm's satellites should be built as simply as possible, which implies without internally redundant systems. The OLFAR (Orbiting Low Frequency Antennas for Radio astronomy) mission, studying astronomical phenomena at low frequencies, has been used as a test case throughout the thesis, and various technological hurdles required for achieving the OLFAR mission are investigated and solved. This shows that while the OLFAR swarm itself is still slightly beyond current-day technologies, it is not as far out as originally thought, and it could well serve as a prime example of a mission for which a satellite swarm would not only be beneficial, but almost imperative.
Two-dimensional physical model tests were used to study wave overtopping and overtopping wave impact for the situation of coastal dikes where a shallow foreshore affects the wave overtopping.
Chapter 2 deals with a representation developed to unify spatial and non-spatial anatomical knowledge. Via this representation, it is possible to store, access and visualize these heterogeneous datasets through a shared coordinate system. This allows us to construct the VSP atlas, a process which we describe in detail in Chapter 3, where we also detail the application potential of the VSP. We present several examples of the VSP mapped to clinical pre-operative MRI scans, as examples of how the VSP can be used to enrich clinical data with surgically relevant information that is not available from the scans themselves.
To share the VSP for educational purposes, we present an online tool, the Online Anatomical Human (OAH), in Chapter 4. The OAH runs directly in the browser and can be used to explore the complex relation between 2D and 3D anatomy. Furthermore, annotations can be added directly on the 3D structures for quizzing purposes, or to enrich the VSP further with annotations performed by experts. The OAH was successfully deployed in a Massive Open Online Course (MOOC), where thousands of students worldwide used the application to study pelvic anatomy.
While the VSP is based on multiple datasets, it does not include all potential topological anatomical variations in branching structures such as vessels and nerves. Illustrations and text are traditionally used by medical specialists to study these variations, but it is difficult to compare complex variations in such illustrations. Therefore, in Chapter 5 we present an interactive visualization application for anatomical variations, which allows the user to compare and explore variations of branching structures interactively for educational purposes. With methods inspired by graph theory, users can intuitively select groups of variations, based on a similarity measure, and compare local differences.
In Chapter 6, we present a state-of-the-art report on multimodal medical visualization. We describe the basics of medical image acquisition, and the clinical workflow for dealing with such data. We discuss suitable rendering and visualization techniques appropriate for rendering multiple modalities. The core contribution of this work is a taxonomy based on the multimodal medical visualization applications so far, the visualization techniques they employ, and the medical domain context. Additionally, we provide an outlook on open problems and potential future research directions.
To make the VSP patient-specific and to enrich the VSP with more datasets, registration is needed. Unfortunately, current registration software is often difficult to use for those who are not medical imaging experts. In Chapter 7 we present a new registration application, RegistrationShop, which allows users to register 3D medical image datasets based on 3D visualizations and simple interactive transformation tools. Based on real-time visual feedback via comparative visualization techniques, users can inspect the current registration result and iteratively improve the alignment. Besides basic interactive transformation tools, we propose a novel way of placing corresponding landmark pairs in 3D volumes.
After combining the VSP atlas with patient-specific pre-operative MRI scans, we visualize the results in an interactive application for surgical planning aimed at pelvic oncological procedures, entitled PelVis, which is described in Chapter 8. We present visualization methods to represent context, target, and risk structures for surgical planning of the Total Mesorectal Excision (TME) procedure. We employ distance-based and occlusion management techniques to represent the patient-specific pathology and anatomy. Furthermore, we visualize the confidence in the registration outcome in relation to the distance of the target structure to the risk zones.
The research described in this thesis was supported by the Dutch Technology Foundation STW via project 10903: “High-definition Atlas-based surgical planning for Pelvic Surgery”.","medical visualization; anatomy; education; surgical planning","en","doctoral thesis","","978-94-6186-719-3","","","","","","","","","Computer Graphics and Visualisation","","",""
"uuid:e203ee61-a314-4500-89e1-237e9f0133fd","http://resolver.tudelft.nl/uuid:e203ee61-a314-4500-89e1-237e9f0133fd","Adaptation by product hacking: A cybernetic design perspective on the co-construction of Do-It-Yourself assistive technology","De Couvreur, L.B.J. (TU Delft Applied Ergonomics and Design)","Goossens, R.H.M. (promotor); Detand, J (copromotor); Delft University of Technology (degree granting institution)","2016","Whatever you may have heard about product hackers, the truth is they do something really, really well. In short: “hackers build things, crackers break them.” Through their experiential and social approach product hackers discover new possibilities in a frugal manner with the local resources and skills at hand. The human race has built up a rich history in adapting and designing his living
environment and surrounding artifacts. Although the phenomenon of product hacking has been around for a long time, it’s manifestation has drastically changed through several paradigm shifts within the DIY culture which lead to open-design. These shifts imply that professional designers are no longer placed above users when determining what is right or wrong for them. Within the context of design for disability this perspective opens-up a complementary alternative to universal design. Today there are a lot of people with disabilities whose assistive devices have not yet come about, due to unique needs and challenges. A new generation of makers and occupational therapists are
seizing this opportunity by producing one of a kind product adaptations in people’s homes, sheltered workshops and rehabilitation centers. This dissertation explores the role of professional designers within this new and open-ended context. In general the research focus is on the epistemic dynamics of hacking behavior within the pursuit of making a tailored product adaptation for a single user. Generally speaking collaborative hacking activities are a form of self-organizing co-design activities driven by participatory prototyping-interactions. For this reason, the starting point of this thesis was the question : “How do specific prototyping-interactions influence general adaptation within participatory hacking behavior?” To answer this question we propose a framework which illustrates hacking entities as a self-regulating systems. A cybernetic design approach was chosen to develop a framework to explain the circular
causality and relationships within local hacking ecologies. We list the minimum conditions and elements of an autonomous hacking entity in order for it to be able to adapt to changing circumstances and ‘to get what it wants’. With his holistic thinking, it integrates the surroundings as part of the a self-regulating system by means of two adaptation types, namely single and double-loop adaptation. Both loops enact respectively as an (1) active (agents actively change their environments through external adaptation) and (2) passive (agents compulsory change their internal construction of the environment through internal adaptation) component of adaptation. Although both type of adaptations are strongly intertwined we tried to illustrated them through the variety of data from living lab practices and illustrate how they self-organize the hacking process.","","en","doctoral thesis","","","","","","","","","","","Applied Ergonomics and Design","","",""
"uuid:f1105c2c-5162-4417-89f8-8a4a44bbaec2","http://resolver.tudelft.nl/uuid:f1105c2c-5162-4417-89f8-8a4a44bbaec2","Developing a Service Platform for Health and Wellbeing in a Living Lab Setting: An Action Design Research Approach","Keijzer-Broers, W.J.W. (TU Delft Information and Communication Technology)","Tan, Y. (promotor); de Reuver, Mark (copromotor); Delft University of Technology (degree granting institution)","2016","","smart living; aging-in-place; elderly people; platform; design science; action design research; capability approach; social innovation","en","doctoral thesis","","978-9462955097","","","","Parts of this research were funded by ZonMW (VIMP Implementation grant) and the Ambient Assisted Living Joint Programme (Care@Home project).","","","","","Information and Communication Technology","","",""
"uuid:40736144-a35d-4b88-aa77-8d51f5e8d1fd","http://resolver.tudelft.nl/uuid:40736144-a35d-4b88-aa77-8d51f5e8d1fd","Accounting for Values in Design","Detweiler, C.A. (TU Delft Interactive Intelligence)","Jonker, C.M. (promotor); van den Hoven, M.J. (promotor); Hindriks, K.V. (copromotor); Delft University of Technology (degree granting institution)","2016","One of the more notable technologies to enter and affect everyday life is information and communication technology (ICT). Since the twentieth century, ICTs have had a considerable impact on many aspects of everyday life. This impact on individuals and society is rarely neutral; ICTs can have both desirable and undesirable consequences — ethical implications. One field of computing in particular envisions computing technology permeating everyday life. This field, known as Ubiquitous Computing or Pervasive Computing, aims to integrate computing technology seamlessly into the physical world and everyday life. This pervasiveness has the potential to amplify pervasive computing’s ethical implications. Human values such as social well-being, privacy, trust, accountability and responsibility lie at the heart of these ethical implications. With a technology already so deeply intertwined with so many aspects of everyday life, it is increasingly important to consider the human values at stake.","","en","doctoral thesis","","","","","","SIKS Dissertation Series No. 2016-40","","","","","Interactive Intelligence","","",""
"uuid:0bcfc55b-be81-4326-855c-3a97ba126521","http://resolver.tudelft.nl/uuid:0bcfc55b-be81-4326-855c-3a97ba126521","Relative Space-Time Kinematics of an Anchorless Network","Rajan, R.T. (TU Delft Signal Processing Systems)","van der Veen, A.J. (promotor); Delft University of Technology (degree granting institution)","2016","","relative kinematics; anchorless network; wireless sensor network; localization; synchronization; radio astronomy","en","doctoral thesis","","978-94-6186-724-7","","","","The work described in this thesis was in part financially supported by STW-sponsored OLFAR project (Contract Number: 10556) within the Dutch ASSYS perspectief program","","","","","Signal Processing Systems","","",""
"uuid:bdea8caa-28fa-40e5-822b-fe7a23c6dfb3","http://resolver.tudelft.nl/uuid:bdea8caa-28fa-40e5-822b-fe7a23c6dfb3","Development of nano-encapsulation systems for the food antifungal natamycin: Formulation, characterization and post-processing","Bouaoud, C. (TU Delft ChemE/Product and Process Engineering; TU Delft ChemE/Materials for Energy Conversion and Storage)","Schmidt-Ott, A. (promotor); Mendes, E. (copromotor); Meesters, G.M.H. (copromotor); Delft University of Technology (degree granting institution)","2016","Food spoilage has become in the last decades one of the biggest challenges faced by the food industry, with a significant amount of products thrown away at every step of the supply chain. Microbial contamination is listed as one of the major causes of food spoilage and can be at a large extent prevented by application of antimicrobial compounds. Recent trends on the market such as an increasing demand of consumers for natural preservatives and their application in reduced quantities, coupled with the difficulty for industrials to get new antimicrobials approved by health authorities, lead the food industry to a process of reformulation and improvement of functionality and efficiency of already approved ingredients. Natamycin, a naturally-occurring food preservative widely used for the protection of food surfaces, is one of the most popular antifungal agents currently used. This molecule presents several advantages linked to its natural origin, long history of safe use, efficiency at low concentrations and limited modification of food products when applied. Current formulations of the preservative offer however limited specificity or tunability towards applications and little possibilities of controlled/triggered release. This compound also presents a relatively poor aqueous solubility detrimental for its antifungal action and is very sensitive to early-stage degradation by environmental factors such as extreme pH, oxidation and UV exposure.
The main goal of this PhD thesis was to determine if the incorporation of this molecule within nano-encapsulation systems could provide benefits for availability, tunability and degradation issues. As a first step, formulation, optimization and characterization of two model nano-encapsulation systems (biodegradable polymeric nanospheres and nano-liposomes) were performed and compared in terms of relative benefit for the encapsulation, delivery, antifungal performance and stability of the antimicrobial. Post-processing of the most promising nano-encapsulation systems in order to obtain commercial products was further evaluated by purification/concentration (tangential flow filtration) or by transformation into redispersible dry products by lyophilization.
Nano-liposomes were found to be overall superior to polymeric nanospheres for the encapsulation and delivery of our molecule and offer higher possible levels of tunability in terms of release rates and antifungal performance. Lyophilization in the presence of carbohydrates turned out to be a valuable method for the preparation of dried products with enhanced long-term stability of the antifungal, compared to concentrates prepared by tangential flow filtration, a tedious process that negatively impacted the stability of the preservative.","food preservative; biodegradable polymeric nanospheres; nano-liposomes; antifungal performance; tangential flow filtration; lyophilization","en","doctoral thesis","","978-94-6186-507-6","","","","The research described in this thesis was performed at the Downstream Processing Department of the DSM Biotechnology Center (DSM Food Specialties, Delft) and the Chemical Engineering Department of the Faculty of Applied Sciences of the TU Delft. This research was co-funded by the European Union (Marie Curie Actions 7th Framework, Initial Training Network PowTech, grant agreement n°264722) and DSM Food Specialties B.V.","","","","","ChemE/Product and Process Engineering","","",""
"uuid:82438672-3e8b-477a-a39e-0ce189639e88","http://resolver.tudelft.nl/uuid:82438672-3e8b-477a-a39e-0ce189639e88","Governing Governance: A formal framework for analysing institutional design and enactment governance","King, T.C. (TU Delft Interactive Intelligence)","Jonker, C.M. (promotor); Dignum, M.V. (copromotor); van Riemsdijk, M.B. (copromotor); Delft University of Technology (degree granting institution)","2016","This dissertation is motivated by the need, in today’s globalist world, for a precise way to enable governments, organisations and other regulatory bodies to evaluate the constraints they place on themselves and others. An organisation’s modus operandi is enacting and fulfilling contracts between itself and its participants. Yet, organisational contracts should respect external laws, such as those setting out data privacy rights and liberties. Contracts can only be enacted by following contract law processes, which often require bilateral agreement and consideration. Governments need to legislate whilst understanding today’s context of national and international governance hierarchy where law makers shun isolationism and seek to influence one another. Governments should avoid punishment by respecting constraints from international treaties and human rights charters. Governments can only enact legislation by following their own, pre-existing, law making procedures. In other words, institutions, such as laws and contracts are designed and enacted under constraints.","","en","doctoral thesis","","978-94-6186-726-1","","","","SIKS Dissertation Series No. 2016-41","","","","","Interactive Intelligence","","",""
"uuid:392e166c-9a34-4631-949f-ce128cfb4b14","http://resolver.tudelft.nl/uuid:392e166c-9a34-4631-949f-ce128cfb4b14","On parametric transversal vibrations of axially moving strings","Ali, R. (TU Delft Mathematical Physics)","Heemink, A.W. (promotor); van Horssen, W.T. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","978-94-6186-722-3","","","","","","","","","Mathematical Physics","","",""
"uuid:a9432862-4793-45d7-ade4-2e266e3e6b9f","http://resolver.tudelft.nl/uuid:a9432862-4793-45d7-ade4-2e266e3e6b9f","Weld seams in aluminium alloy extrusions: Microstructure and properties","den Bakker, A.J. (TU Delft (OLD) MSE-1)","van der Zwaag, S. (promotor); Katgerman, L. (copromotor); Delft University of Technology (degree granting institution)","2016","","extrusion; aluminium alloys; weld seams; mechanical properties; microstructure","en","doctoral thesis","","978-94-028-0329-7","","","","","","","","","(OLD) MSE-1","","",""
"uuid:6c66df88-b72d-4604-b1b6-f7141e251775","http://resolver.tudelft.nl/uuid:6c66df88-b72d-4604-b1b6-f7141e251775","Energetics, robustness and product formation in slow- and non-growing yeast cultures","Vos, T. (TU Delft BT/Industriele Microbiologie)","Pronk, J.T. (promotor); Daran-Lapujade, P.A.S. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","","","","","Financed by the European Union via the FP7 grant and the Netherlands Organisation for Scientific Research (NWO) via the B-Basic programme.","","","","","BT/Industriele Microbiologie","","",""
"uuid:0f74bf4d-8919-4b5f-b1f1-75b731230468","http://resolver.tudelft.nl/uuid:0f74bf4d-8919-4b5f-b1f1-75b731230468","Dicarboxylic acids transport, metabolism and roduction in aerobic Saccharomyces cerevisia","Shah, M.V. (TU Delft OLD BT/Cell Systems Engineering)","Heijnen, J.J. (promotor); van Gulik, W.M. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","","","","","","","","","","OLD BT/Cell Systems Engineering","","",""
"uuid:f317d84b-3a30-4991-a6c1-861b06c781cc","http://resolver.tudelft.nl/uuid:f317d84b-3a30-4991-a6c1-861b06c781cc","Properties of advanced (reduced) graphene oxide-alginate biopolymer films","Vilcinskas, K. (TU Delft ChemE/Advanced Soft Matter)","Picken, S.J. (promotor); Koper, G.J.M. (copromotor); Delft University of Technology (degree granting institution)","2016","In this work, properties of Calcium alginate-reduced graphene oxide and Barium alginate‐reduced graphene oxide composite films are explored. In addition, the properties of the divalent metal ion-cross-linked alginate composite films are compared to the analogous properties of uncross‐linked Sodium alginate-graphene oxide composite films of the corresponding compositions. As the filler, used in the preparation of the composite films, is obtained by chemical oxidation of graphite, the prevailing knowledge of the process coupled with in situ X-ray diffraction investigation of the samples prepared by such a method is presented as well.
Needle steering involves the planning and timely modifying of instrument-tissue interaction forces in order to control the deflections in tissue. Currently investigated steering methods employ needle base manipulations, bevel-tip needles, pre-curved stylets, active cannulas, programmable bevel-tip needles, and articulated-tip needles. The technique proposed in this work employs an actively articulated needle tip.
The aim of this research is to enhance our understanding of where needle-tissue interaction forces originate and how they can be effectively modified to steer needles. This is done by means of force measurements and device functionality evaluations during needle insertions in tissue simulants.
The influence of tip shape on the formation of bending forces during needle insertion was studied in a fundamental and macroscopic experiment (Chapter 3). It was found that articulated bevel-tip needles are more efficient in building up bending force than matched conical-tip needles. However, increasing the tip articulation angle has a larger positive effect on bending force. Furthermore, it was found that the resultant force orientation depends on the insertion force and that the size of this vector rotation varies per tip shape. In general, the radial (bending) force component increases faster than the axial (insertion) force component. The study of these relations is relevant for the accurate estimation of tip-loads in mechanics-based needle steering models.
To reach predefined targets, a teleoperation platform was developed (Chapter 4). The angle of an articulated, conical-tip needle was controlled in a closed-loop system. On-line feedback on the tip position was obtained through 3-D shape reconstructions, using fiber Bragg grating (FBG) based strain measurements. A simple PI-controller demonstrated the needle's nimble maneuverability by continuously amending the tip angle and navigation path. An advantage of articulated-tip needles is that they do not require axial rotations to change the steering plane. Optimal paths may in the future be defined with respect to the clinical task, the limitation of tissue damage, and (when applicable) the abilities of a human operator.
Human operation of steerable needles is discussed by means of experimental results in manual and shared control steering tasks. In the implemented shared control setting (Chapter 5), a path planner determined a single-curved path to the target, in which the needle curvature and tissue straining conditions were minimized. The controller estimated the error between the actual and planned path and informed the human operator by means of low-intensity force guidance. The ability of users to interact with the teleoperation platform and with the acting kinematic needle steering constraints was found to vary considerably. This stresses the need for studying the effective use of communication channels, e.g. by evaluating the weights users assign to the presented feedback. In the end, shared control may teach users how to cope with the acting needle steering constraints, and guide them in complicated steering tasks.
Manual needle steering tasks were performed by means of a novel, tip-articulated and hand-held instrument (Chapter 6). Targets in five principal steering directions were successfully reached under visual feedback. An average targeting accuracy of 0.5 ± 1.1 mm is reported for 100 mm insertions. This shows that active manual needle steering allows for an effective compensation of the variability among insertion paths.
This dissertation discusses important remaining challenges in bridging the technical and clinical fields and in realizing an operational steerable needle. The tip-tissue force measurements have provided insights into the ways current needle designs and mechanics-based navigation models can be improved. The tip-articulated needles show clear advantages for control systems, and allow for a manual approach to needle steering. Finally, the shared control of steerable needles was studied and may be of use in guiding practitioners in complex navigation tasks.","Steerable Needle; Medical Robotics; Instrument-tissue Interactions","en","doctoral thesis","","978-94-028-0291-7","","","","","","2018-09-22","","","Medical Instruments & Bio-Inspired Technology","","",""
"uuid:565eb3e7-e0ea-4a88-abbb-eb0ae2c6c36f","http://resolver.tudelft.nl/uuid:565eb3e7-e0ea-4a88-abbb-eb0ae2c6c36f","Entropy and Kolmogorov complexity","Moriakov, N.V. (TU Delft Analysis)","de Pagter, B. (promotor); Haase, M.H.A. (promotor); Delft University of Technology (degree granting institution)","2016","This thesis is dedicated to studying the theory of entropy and its relation to the Kolmogorov complexity. Originating in physics, the notion of entropy was introduced to mathematics by C. E. Shannon as a way of measuring the rate at which information is coming from a data source. There are, however, a few different ways of telling how much information there is: an alternative approach to quantifying the amount of information is the Kolmogorov complexity, which was proposed by A. N. Kolmogorov. The Shannon entropy is the key ingredient in the definition of the Kolmogorov-Sinai entropy of a measure-preserving systems. Roughly speaking, the Kolmogorov-Sinai entropy is the expected amount of information in `Shannon sense' that one obtains per unit of time by observing the evolution of a measure-preserving system. In topological dynamics, the topological entropy takes place of the Kolmogorov-Sinai entropy. For metrizable systems, the topological entropy measures the exponential growth rate of the number of distinguishable partial orbits of length n as n tends to infinity. Originally defined for Z-actions, the 'classical' theories of entropy were later extended to actions of amenable groups.
We provide the necessary background on amenable groups, topological/measure-preserving dynamics and entropy theory in Chapters 1, 2, 3 and 5.
The main focus of this thesis is extending the following results. First of all, a common generalization of the topological and the Kolmogorov-Sinai entropy theories for Z-systems was suggested by G. Palm. We provide an abstract generalization of the work of Palm for actions of discrete amenable groups in the language of measurement functors in Chapter 6.
Secondly, we investigate the connection of entropy and Kolmogorov complexity.
Originally, the equality between the topological entropy and a certain quantity measuring the maximal asymptotic Kolmogorov complexity of the trajectories was established by A. A. Brudno for subshifts over Z. Later, he proved the equality of the Kolmogorov-Sinai entropy and the asymptotic Kolmogorov complexity of almost every trajectory for ergodic subshifts over Z. We provide a generalization of these results as follows. Firstly, in Chapter 4 we give a background on computability and Kolmogorov complexity and, further, introduce computable Følner monotilings, which are central to our extensions of Brudno's results. We treat the 'first' and the 'second' theorems of Brudno in Chapter 7.
The first theorem is generalized to subshifts over computable groups admitting computable Følner monotilings, while the second theorem is proved under the assumption of regularity of the monotiling, which we introduce in Chapter 7 as well.","","en","doctoral thesis","","","","","","","","","","","Analysis","","",""
"uuid:ff150699-5f50-4196-8d25-9e959f06ce51","http://resolver.tudelft.nl/uuid:ff150699-5f50-4196-8d25-9e959f06ce51","The why’s and how’s of public sector scientists’ policy engagement: The lessons from agricultural biotechnology","van der Werf-Kulichova, Z. (TU Delft BT/Biotechnology and Society)","Osseweijer, P. (promotor); Delft University of Technology (degree granting institution)","2016","The biobased economy is regarded as a possible solution for
addressing the challenges associated with climate change and
the growing human population. Due to progress in science
and technology the biobased economy can provide additional
food and renewable energy to meet the needs of the expected
9 billion people by 2050.
However, the implementation of the biobased economy also
raises many questions about the transition paths, including the
political and regulatory climate for new technologies that are
necessary to accomplish this transition. Policy decisions and
new regulations require input from the scientific community.
While most policy stakeholders agree that we need new technologies
that can reduce or eliminate greenhouse gas emissions,
we witness controversy about the best solutions to realize
sustainable production. Scientists have the potential to play an
important role in policy debates and processes, but presently
their involvement is not adequate.
This thesis explores how scientists perceive their role in policymaking
and which factors are relevant for their motivation for
policy engagement. Using the empirical data from the research
with agricultural biotechnology scientists this thesis identifies and
describes a new role for scientists in controversial policy-making
and provides recommendations for institutional strategies that
are needed to facilitate that scientists adopt this role in practice.","","en","doctoral thesis","","978-94-6299-449-2","","","","","","","","","BT/Biotechnology and Society","","",""
"uuid:665728be-2b91-41e7-9a60-2f83f7ee4728","http://resolver.tudelft.nl/uuid:665728be-2b91-41e7-9a60-2f83f7ee4728","Molecular gymnastics: Single-molecule investigations of protein jumping and dna dancing","Ganji, M. (TU Delft BN/Cees Dekker Lab)","Dekker, C. (promotor); Abbondanzieri, E. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","978-90-8593-273-4","","","","Casimir PhD series 2016-29","","2017-06-30","","","BN/Cees Dekker Lab","","",""
"uuid:86ac7352-46b8-4c2d-9014-817472d80174","http://resolver.tudelft.nl/uuid:86ac7352-46b8-4c2d-9014-817472d80174","On the aerodynamics of a vertical axis wind turbine wake: An experimental and numerical study","Tescione, G. (TU Delft Wind Energy)","van Bussel, G.J.W. (promotor); Ferreira, Carlos (copromotor); Delft University of Technology (degree granting institution)","2016","THE recent trend in wind energy industry, with the increasing deployment of offshore wind farms, has revived the interest in the concept of a vertical axis wind turbine. The scientific, technological and economical challenges of the next generation of wind turbines indicate that a transformative approach is the key for the reduction of the cost of energy. The adaptation of current designs and practice may not be the best solution to face rotor up-scaling, wind farmlosses, floating support structures and improved reliability.
In this context, the vertical axis wind turbine has the potential to respond to some of the new environment’s challenges. The new interest has to face a lack of knowledge and proper models; the tendency to adapt both from the more developed horizontal axis wind turbine research field is often inaccurate.","","en","doctoral thesis","","978-94-6299-462-1","","","","","","","","","Wind Energy","","",""
"uuid:fe1ecd53-b939-4efc-b6e9-34ec098efde2","http://resolver.tudelft.nl/uuid:fe1ecd53-b939-4efc-b6e9-34ec098efde2","Instrument Development for Nanomaterial Risk Assessment","Brossell, D. (TU Delft ChemE/Materials for Energy Conversion and Storage)","Schmidt-Ott, A. (promotor); Delft University of Technology (degree granting institution)","2016","Nanomaterials are a growing source for innovation. However, the very properties that make them so effective for their desired purpose might also render them more hazardous towards humans and the environment. Adequate risk assessment tools are often missing, partly due to instrumental gaps in exposure assessment and toxicity testing. In this thesis, two of these instrumental gaps served as the motivation to develop two new instruments, both with the purpose to reduce these gaps. The nano-PMC is a Particle Mass Classifier that is able to directly measure the mass of single particles down to a few zeptograms. Information about nanoparticle mass is vital for converting nanoparticle number concentrations to mass concentrations, a metric which is very useful for exposure assessment. In combination with mobility classification, the so-called apparent density of non-spherical particles like agglomerates can be measured, a property that might determine the health effect of non-toxic but biopersistent dust particles. The Cyto-TP is a thermal precipitator that deposits airborne nanoparticles onto living cells. Both the exposure mode and the cell configuration are designed to mimic the contamination of the pulmonary alveoli to inhaled nanoparticles. This instrument, as part of an in vitro inhalation toxicity test, might contribute to the reduction of the need for animal testing. Both instruments find application within a proposed test procedure to assess the risk potential of nanomaterials still in development - to promote safety-by-design. 
This precautionary approach might help to increase public confidence in nanomaterials.","Nanomaterials; Instrument development; Particle mass classification; In vitro inhalation toxicity testing; Risk assessment; Safety-by-design","en","doctoral thesis","","","","","","","","","","","ChemE/Materials for Energy Conversion and Storage","","",""
"uuid:2d8739e2-d56c-46f1-b050-ee5d5479e928","http://resolver.tudelft.nl/uuid:2d8739e2-d56c-46f1-b050-ee5d5479e928","The beauty of efficiency in design","Da Silva Cardozo, O. (TU Delft Design Aesthetics)","Hekkert, P.P.M. (promotor); Crilly, N (promotor); Delft University of Technology (degree granting institution)","2016","The aesthetic appreciation of a product is often described without taking into account that the product has been designed for a purpose; for instance, merely based on the product’s appearance. This dissertation examines the kind of aesthetic appreciation that involves recognizing that the product has been designed (as a means) to achieve a particular effect and, more specifically, evaluating how the product achieves such effect. It focuses on the principle of efficiency or “MEMM”, which indicates that people perceive beauty in a product when they perceive it to achieve “the maximum effect” with “the minimum means”.
A combination of research methods is used to address the following four questions: (Q1) Is the appreciation of a product affected by knowledge of the product’s intended effect and, if so, how? (Q2) How can the aesthetic appreciation of a product be understood based on the principle of MEMM? (Q3) Is the aesthetic appreciation of a product positively affected by the perception of the product as the minimum means achieving the maximum effect? (Q4) How can designers enhance a product’s aesthetic appeal by considering the product as the means to achieve an intended effect?
A mixed-methods investigation of Q1 indicates that intention knowledge does affect product appreciation, partly insofar as it enables an (aesthetic) evaluation of the product as a means to achieve an intended effect. A conceptual analysis of Q2 reveals how a product and its intended effect can be judged to be the minimum means and the maximum effect with grounds in a set of assumed alternatives for both the means and the effect. An experimental examination of Q3 provides evidence that a product is aesthetically appreciated when it is perceived to achieve more than other products from the same category (the maximum effect) by making an efficient use of resources (the minimum means). A mixed-methods study of Q4 finally suggests a set of qualities that designers can aim at when defining an intended effect and developing a product (means), and also indicates the aspects of the product that can be manipulated based on these qualities.
The findings here presented have a number of implications. For design research, they indicate that people’s (aesthetic or non-aesthetic) experience of a product or service should be examined with attention to their knowledge of the designer’s intended effect. For design practice, they propose a strategy for enhancing aesthetic appeal that involves manipulating aspects such as user interaction and that can, therefore, not just help develop a beautiful product, but a beautiful service too. For design education, they suggest the value of teaching that beauty and efficiency can be combined in designing and experiencing a product or service; they also trigger a reflection on means- and effect-based teaching approaches. For marketing, they identify several qualities that potential consumers might appreciate in a product or service, qualities that can thus guide the creation of an advertisement and that can make the advertisement, in itself, more appealing. With regards to the day-to-day experience of products and services, they offer an understanding of the reason why people might like a particular product or service, which in turn might help them make more knowledgeable consumer choices. Because MEMM can be applied to many other artifacts besides products and services, the findings here presented are also relevant to fields of knowledge and practice such as the arts.","","en","doctoral thesis","","978-94-6186-729-2","","","","","","","","","Design Aesthetics","","",""
"uuid:e13e0924-8f35-430d-8ed0-6b552bc26439","http://resolver.tudelft.nl/uuid:e13e0924-8f35-430d-8ed0-6b552bc26439","The beauty of Unity-in-Variety: Studies on the multisensory aesthetic appreciation of product designs","Post, R.A.G. (TU Delft Design Aesthetics)","Hekkert, P.P.M. (promotor); Delft University of Technology (degree granting institution)","2016","This thesis embarks from the idea that aesthetic appreciation of product designs is determined by simultaneously perceiving the two partially opposing dimensions of unity and variety. People actively avoid boredom by searching for variety because it challenges the senses and offers the potential of learning new information. Hence, people browse through thick catalogues, are attracted to colourful bouquets and let their eyes and hands explore a novel car interior. In doing so, these products offer stimulation to the senses. However, too much variety leads to confusion, as people fail to make sense of what they perceive. It is therefore that they appreciate perceiving unity at the same time, as it brings structure to variety; items in a catalogue are precisely ordered, flowers are neatly arranged and components of a car interior are carefully picked and organized. The above idea is captured in an age-old aesthetic principle, aptly named Unity-in-Variety (UiV). The principle states that perceiving a balance between the opposing forces of unity and variety is aesthetically preferred. While this principle has been argued to explain aesthetic appreciation for works of art, music and landscapes, little empirical research existed on this principle and, to our knowledge, none for product designs. By performing twelve studies and multiple pilot studies, mostly quantitative in nature, we empirically investigated the principle of UiV to determine whether it can explain how and why we aesthetically appreciate perceiving product designs by vision and touch.
To demonstrate how unity and variety relate to each other and to aesthetic appreciation, we first separately researched the visual and tactile modality using a range of products readily found on the market. We continued with experimental investigations of the principle by systematically manipulating unity and variety in product designs through various design factors (e.g. the Gestalt laws of symmetry and similarity). For the visual modality, these manipulations were performed in newly designed sets of webpages. For the tactile modality, we designed and produced 3-D printed models of car key remotes to systematically manipulate materials and shapes. The investigations within vision and touch were followed by a study combining both senses to assess how unity and variety relate to visual-tactile aesthetic appreciation, and an additional study exploring how unity and variety may interact across the senses. Furthermore, to build a broader understanding of what influences our appreciation for unity and variety, we investigated individual differences in motivational drives and design expertise. Lastly, we explored the possibility of extending the principle’s applicability from individual products to product-service systems.
Our main finding is that unity and variety, despite being negatively correlated, positively influence aesthetic appreciation of product designs. As a result of their partial opposition, there is a trade-off between unity and variety leading to a balance where aesthetic appreciation is highest. Additionally, we found that unity is the dominant factor of the two; its influence is on average twice the size of variety, and its presence is a condition for an appreciation of variety. These results were obtained with a range of products from different product categories and replicated in the visual, tactile and visual-tactile product experience.
Having demonstrated how unity and variety together determine aesthetic appreciation, we investigated how several factors underlie the degree of unity and variety and their respective appreciation. Several commonly used design factors were experimentally shown to influence unity and variety in vision (through symmetry, contrast, similarity and colourfulness), and in touch (through continuity, emergence and similarity). Besides these design factors, we identified how individual differences in motivational drives and expertise can influence the preferred balance between unity and variety. Individuals with safety needs preferred visual or tactile unity more strongly than individuals with accomplishment needs did. As a result, the preferred balance between unity and variety shifts towards unity for safety seekers. A similar shift towards a preference for unity occurred for (design) experts. Experts rate the same products differently on unity and variety compared to laymen, possibly as a result of their explicit and implicit training in applying unifying design factors. Yet, they also appreciate an optimum balance between unity and variety, and the principle therefore holds equally for design experts.
The combined results of our studies demonstrate that the UiV principle can consistently explain visual, tactile and visual-tactile aesthetic appreciation for product designs. It does so by showing that aesthetic appreciation is highest when the two partially opposing dimensions of unity and variety are increased until they arrive at an optimum balance. We furthermore demonstrated how various design factors and individual differences underlie this preferred unity and variety balance. In doing so, the principle offers a holistic understanding of how the smallest perceptual properties of a design are combined to form the unified experience of the product and its aesthetic appreciation. The knowledge generated through our research contributes to current theories and models of aesthetic appreciation by explaining how and why people find aesthetic pleasure in perceiving product designs. Furthermore, UiV is possibly the first principle to account for tactile aesthetic appreciation, as we illustrated how the Gestalt laws play an important role in creating tactile unity within the variety of shapes and materials of a physical product. Next to this, our methodological approach demonstrates how novel 3-D printing technologies can aid in accurately studying realistic stimuli. Lastly, the results from our research can act as a guideline for designers and provide a promising basis for researching the principle in other modalities (such as auditory and gustatory), as well as for other domains (such as for product-service systems, architecture and the arts).
The second is optimization of specific components of the system. We develop two approaches, geometric verification and geo-distinctive visual element matching, that address specific challenges faced by our retrieval-based framework. The resulting system makes location estimation more tractable in case of large image collections, and also more reliable. Our experimental results demonstrate that the system leads to an overall significant improvement of the location estimation performance and redefines the state-of-the art in both geo-constrained and geo-unconstrained location estimation. Based on the findings presented in this thesis, we make recommendations for future research directions, which we think are substantial and promising for large scale image retrieval and geo-location estimation.
• Microtubule-based transport of polarity factors.
• Elongated cell shape.
• Cortical receptor for the polarity factors.","Microtubules; +TIPS; Polarity; Reconstitution","en","doctoral thesis","","978-94-6299-446-1","","","","","","","","","BN/Marileen Dogterom Lab","","",""
"uuid:29de98e9-29c2-4738-a62d-9557094fe9a8","http://resolver.tudelft.nl/uuid:29de98e9-29c2-4738-a62d-9557094fe9a8","De Landschapsarchitectuur van het Polder-boezemsysteem: Structuur en vorm van waterstelsel, waterpatroon en waterwerk in het Nederlandse laagland","Bobbink, I. (TU Delft Landscape Architecture)","Meyer, Han (promotor); Delft University of Technology (degree granting institution)","2016","Het polder-boezemsysteem, een watersysteem dat is ontstaan door vallen en opstaan, continue is aangepast, en vanwege de invloed van de klimaatsverandering moet worden uitgebreid vormt het onderwerp van dit proefschrift. In het onderzoek wordt gezocht naar een antwoord op de volgende vraag: Welke potentie heeft het huidige polder-boezemsysteem om door middel van het landschapsarchitectonische ontwerp (weer) tot ruimtelijke drager van de laagland-identiteit uit te groeien? Zodanig dat het laagland-water (weer) als ruimtelijke en compositorische kracht van het (stedelijk)landschap (her)ontdekt en versterkt kan worden.","water netwerk; landschapsarchitctuur; ruimtelijk ontwerpen; patroon en werk","nl","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-14-5","","","","A+BE | Architecture and the Built Environment No 15 (2016)","","","","","Landscape Architecture","","",""
"uuid:ebbf552a-ce98-4ab6-b9cc-0b939e12ba8b","http://resolver.tudelft.nl/uuid:ebbf552a-ce98-4ab6-b9cc-0b939e12ba8b","Characterisation of Fatigue Crack Growth in Adhesive Bonds","Pascoe, J.A. (TU Delft Structural Integrity & Composites)","Benedictus, R. (promotor); Alderliesten, R.C. (copromotor); Delft University of Technology (degree granting institution)","2016","","Adhesive Bonds; Crack Growth; Fatigue; Strain Energy Dissipation; Strain Energy Release","en","doctoral thesis","","978-94-6186-718-6","","","","","","","","","Structural Integrity & Composites","","",""
"uuid:e8dbb294-dd57-4c10-b733-b4aded62607c","http://resolver.tudelft.nl/uuid:e8dbb294-dd57-4c10-b733-b4aded62607c","Strategies, Methods and Tools for Solving Long-term Transmission Expansion Planning in Large-scale Power Systems","Fitiwi, D.Z. (TU Delft Energie and Industrie)","Herder, P.M. (promotor); Rivier Abbad, M. (promotor); Delft University of Technology (degree granting institution)","2016","","transmission expansion planning; uncertainty and variability; optimization; stochastic programming; moments technique; clustering","en","doctoral thesis","","978-84-608-9955-6","","","","","","","","","Energie and Industrie","","",""
"uuid:08831f69-9b8e-44cf-8afe-f4a3e7bc9a9c","http://resolver.tudelft.nl/uuid:08831f69-9b8e-44cf-8afe-f4a3e7bc9a9c","Analysis and Modelling of Pedestrian Movement Dynamics at Large-scale Events","Duives, D.C. (TU Delft Transport and Planning)","Hoogendoorn, S.P. (promotor); Daamen, W. (copromotor); Delft University of Technology (degree granting institution)","2016","To what extent can we model the movements of pedestrians who walk across a large-scale event terrain? This dissertation answers this question by analysing the operational movement dynamics of pedestrians in crowds at several large music and sport events in the Netherlands and extracting the key crowd movement phenomena. A conceptual model and an assessment framework for pedestrian simulation models are developed specifically to describe and simulate this type of movement dynamics.","Crowd movement; operational pedestrian movement dynamics; Pedestrian simulation models; Calibration method; Crowd movement phenomena","en","doctoral thesis","TRAIL Research School","978-90-5584-208-7","","","","TRAIL Thesis Series no. T2016/16, the Netherlands Research School TRAIL. This thesis is a result from the research program ‘Traffic and Travel Behavior in case of Exceptional Events’ which is sponsored by the Dutch Foundation of Scientific Research MaGW-NWO","","","","","Transport and Planning","","",""
"uuid:8f23c2ff-1589-4a8c-8868-03c2f42b4d73","http://resolver.tudelft.nl/uuid:8f23c2ff-1589-4a8c-8868-03c2f42b4d73","Shape fitting: Application to the estimation of wafer chuck deformation","Vogel, J.G. (TU Delft Mechatronic Systems Design)","Munnig Schmidt, R.H. (promotor); Spronck, J.W. (copromotor); Delft University of Technology (degree granting institution)","2016","In wafer scanners - the machines that define the details of electronic chips - there is a need for highly accurate deformation measurements of the machine components during the chip manufacturing process.
This thesis develops an estimation methodology, based on shape fitting principles, that aims at a low estimation error and addresses the specific requirements related to one of the components of a wafer scanner, the wafer chuck.","shape estimation; shape fitting; wafer chuck deformation; least squares optimisation; position sensing","en","doctoral thesis","","978-94-028-0338-9","","","","","","2019-04-08","","","Mechatronic Systems Design","","",""
"uuid:30fe9aa3-1250-4470-99b1-6d3990d81bb8","http://resolver.tudelft.nl/uuid:30fe9aa3-1250-4470-99b1-6d3990d81bb8","Towards high resolution operando electron microscopy of a working catalyst","Puspitasari, I. (TU Delft ChemE/Catalysis Engineering)","Kapteijn, F. (promotor); Kooyman, P.J. (promotor); Delft University of Technology (degree granting institution)","2016","The objectives of this PhD project are to address the challenges of in-situ TEM and introduce a new generation of in-situ TEM equipment. In Chapter 2 the in-situ TEM facilities are introduced, focusing on the nanoreactor that has gone through quite some development stages during this project. Several types of in-situ TEM nanoreactors were fabricated using Microelectromechanical Systems (MEMS) technology, which enables miniaturisation of the complete catalytic reactor (reactor column, heating system and gas system). The different generations of nanoreactors are the glued nanoreactor (GNR), the wafer bonded","transmission electron microscopy; in-situ and operando experiments; catalysis","en","doctoral thesis","","978-94-028-0322-8","","","","","","","","","ChemE/Catalysis Engineering","","",""
"uuid:0e2a3bd4-40b1-472f-8af2-04ead4414c72","http://resolver.tudelft.nl/uuid:0e2a3bd4-40b1-472f-8af2-04ead4414c72","Brushless Doubly-Fed Induction Machines for Wind Turbine Drive-Train Applications","Strous, T.D. (TU Delft Electrical Power Processing)","Ferreira, Jan Abraham (promotor); Polinder, H. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","978-94-6299-419-5","","","","The research leading to these results has received funding from the European Union’s Seventh framework Programme (FP7/2007_2013) for theWindrive project under Grant Agreement 315485.","","","","","Electrical Power Processing","","",""
"uuid:8661dcc6-9c0b-4df1-9bf5-166fe7527ee7","http://resolver.tudelft.nl/uuid:8661dcc6-9c0b-4df1-9bf5-166fe7527ee7","Integrated Recovery and Upgrading of Bio-based Dicarboxylates","Lopez Garzon, C.S. (TU Delft BT/Bioprocess Engineering)","van der Wielen, L.A.M. (promotor); Straathof, Adrie J.J. (copromotor); Delft University of Technology (degree granting institution)","2016","The inevitable depletion of non-renewable resources for the production of chemicals requires continued research efforts to make platform chemicals from renewable resources. In particular, the development and establishment of processing routes based on biomass should be prioritized in the short to midterm and further to mitigate climate change effects. Although several of these routes are technically feasible, different challenges and pitfalls towards better utilization of raw materials, emissions and overall sustainability have been identified. Particularly, in the case of the pathway from sugars to derivatives and materials via bio-based dicarboxylates, better technologies on downstream processing and upgrading of these dicarboxylates are required to minimize waste salt emission if efficient neutral pH bio-based transformations are used.","","en","doctoral thesis","","978-94-6186-715-5","","","","","","","","","BT/Bioprocess Engineering","","",""
"uuid:ae8304ba-aecf-4908-a856-f9da08cdf3ed","http://resolver.tudelft.nl/uuid:ae8304ba-aecf-4908-a856-f9da08cdf3ed","Flood Hazard Mapping: Uncertainty and its Value in the Decision-making Process","Mukolwe, M.M. (TU Delft Water Resources)","Solomatine, D.P. (promotor); Di Baldassarre, G (promotor); Delft University of Technology (degree granting institution)","2016","Computers are increasingly used in the simulation of natural phenomena such as floods. However, these simulations are based on numerical approximations of equations formalizing our conceptual understanding of flood flows. Thus, model results are intrinsically subject to uncertainty and the use of probabilistic approaches seems more appropriate. Uncertain, probabilistic floodplain maps are widely used in the scientific domain, but still not sufficiently exploited to support the development of flood mitigation strategies. In this thesis the major sources of uncertainty in flood inundation models are analyzed, resulting in the generation of probabilistic floodplain maps. The utility of probabilistic model output is assessed using value of information and the prospect theory. The use of these maps to support decision making in terms of floodplain development under flood hazard threat is demonstrated.","uncertainty; probabilistic flood map; floodplain planning","en","doctoral thesis","CRC Press / Balkema - Taylor & Francis Group","978-1-138-03286-6","","","","Dissertation submitted in fulfilment of the requirements of the Board for Doctorates of Delft University of Technology and of the Academic Board of the UNESCO-IHE Institute for Water Education.","","","","","Water Resources","","",""
"uuid:e1462fab-b35c-4506-aa93-45d37eaf7872","http://resolver.tudelft.nl/uuid:e1462fab-b35c-4506-aa93-45d37eaf7872","Active Stall Control of Horizontal Axis Wind Turbines: A dedicated study with emphasis on DBD plasma actuators","Balbino dos Santos Pereira, R. (TU Delft Wind Energy)","van Bussel, G.J.W. (promotor); Kotsonis, M. (copromotor); Delft University of Technology (degree granting institution)","2016","The contribution of sustainable Wind Energy (WE) to the global energy scenario has been steadily increasing over the past decades. In the process, Horizontal Axis Wind Turbines (HAWT) became the most widespread and largest WE harvesting machines. Nevertheless, significant challenges still lie ahead of further expansion of HAWT, namely concerning systemrobustness and cost-of-energy(COE) competitiveness. This dissertation studies aHAWT design concept termed modern Active Stall Control (ASC). With this concept HAWT power regulation is achieved using flowcontrol actuators to trim the aerodynamic loads across the operational envelope. The underpinning idea is that as the aerodynamic loads are trimmed by flowcontrol actuatorswithout pitching the blades, the pitch system may be mitigated. In turn, this might lead to decreased failure-rates and down-time, and thus eventually present a more cost-effective solution than state-of-the art HAWTs. Going specifically into ASC, if aerodynamic load trimming is performed it is necessary to employ a flow control actuator. From different flow control actuator types, since the aim is to reduce the maintenance and operational costs of ASC machines, actuators with few mechanical parts become more interesting. As such the present research also focuses on the Alternating Current Dielectric Barrier Discharge (AC-DBD) plasma actuator, owing among other things to its absence of moving parts, negligible mass and virtually unlimited bandwidth of actuation. 
A preliminary study on the feasibility of active stall control to regulate HAWT power production as a replacement for the pitch system is conducted. By taking the National Renewable Energy Laboratory 5 MW turbine as reference, a simple blade element momentum code is used to assess the required actuation authority. Considering half of the blade span is equipped with actuators, the required change in the lift coefficient to regulate power is estimated at ΔCl = 0.7. Concerning actuation technologies, three flow control devices are investigated, namely Boundary Layer Transpiration, Trailing Edge Jets and Dielectric Barrier Discharge plasma actuators. Results indicate the authority of the actuators considered is not sufficient to regulate power, since the change in the lift coefficient is not large enough, especially if a pitch-controlled machine is used as the baseline case. Active stall control of Horizontal Axis Wind Turbines appears feasible only if the rotor is re-designed from the start to incorporate active-stall devices. Regarding AC-DBD plasma actuators, three specific topics are investigated. The different studies aim at DBD performance characterization, namely at the influence of external flow on DBD plasma momentum transfer and on the frequency response of the actuator flow region characteristic of DBD pulsed operation. Both these topics are important to bridge the gap between academic-laboratory employment of DBD and large-size industrial applications. Finally, regarding DBD plasma actuator modeling, a method is developed to describe plasma actuation effects in an integral boundary layer formulation, and coupled to a viscous-inviscid panel code (similar to XFOIL), while an experimental campaign is carried out to validate the predictions. The three DBD plasma studies are further described below.
Addressing cross-talk effects between DBD plasma actuators and external flow, a study is carried out in which an actuator is positioned in a boundary layer operated in a range of free stream velocities from 0 to 60 m/s, and tested both in counter-flow and co-flow forcing configurations. Electrical measurements and a CCD camera are used to characterize the DBD performance at different external flow speeds, while the actuator thrust is measured using a sensitive load cell. Results show the power consumption is constant for different flow velocities and actuator configurations, while the plasma light emission is constant for co-flow forcing but increases with counter-flow forcing for increasing free stream velocities. The measured force is constant for free stream velocities larger than 20 m/s, with the same magnitude and opposite direction for the counter-flow and co-flow configurations. In quiescent conditions the measured force is smaller due to the change in wall shear force by the induced wall-jet. In addition to the experimental study, an analytical model is presented to estimate the influence of external flow on the actuator force. It is based on conservation of momentum through the ion-neutral collisional process while including the contribution of the wall shear force. Model results compare well with experimental data at different external flow velocities, while extrapolation to larger velocities shows variation in actuator thrust of at least 10% for external speed U = 200 m/s. Concerning the response of DBD actuator region flow to pulsed operation, a methodology is provided to derive the local frequency response of flow under actuation, in terms of the magnitude of actuator-induced velocity perturbations. The method is applied to an AC-DBD plasma actuator but can be extended to other kinds of pulsed actuation.
For the derivation, the actuator body force term is introduced in the Navier-Stokes equations, from which the flow is locally approximated with a linear-time-invariant (LTI) system. The proposed semi-phenomenological model includes the effect of both viscosity and external flow velocity, while providing a system response in the frequency domain. Experimental data is compared with analytical results for a typical DBD plasma actuator operating in quiescent flow and in a laminar boundary layer. Good agreement is obtained between analytical and experimental results for cases below the model validity threshold frequency. These results demonstrate an efficient yet simple approach towards prediction of the response of a convective flow to pulsed actuation. Future application of the methodology might include actuation scheduling design and optimization for different flow control scenarios. The third study specifically addressing DBD plasma actuators presents a methodology to model the effect of DBD plasma actuators on airfoil performance within the framework of a viscous-inviscid airfoil analysis code. The approach is valid for incompressible, turbulent flow applications. The effect of (plasma) body forces in the boundary layer is analyzed with a generalized form of the von Kármán integral boundary layer equations. The additional terms appearing in the von Kármán equations give rise to a new closure relation. The model is implemented in a viscous-inviscid airfoil analysis code and validated by carrying out an experimental study. PIV measurements are performed on an airfoil equipped with DBD plasma actuators over a range of Reynolds number and angle of attack combinations. Balance measurements are also collected to evaluate the lift and drag coefficients. Results show the proposed model captures the magnitude of the variation in IBL parameters from DBD actuation.
Additionally, the magnitude of the lift coefficient variations (ΔCl) induced by plasma actuation is reasonably estimated. As such, this approach enables the design of airfoils specifically tailored for DBD plasma flow control. Transitioning into ASC rotor design, and building on the previously presented work, a methodology is introduced for designing airfoils suitable to employ actuation in a wind energy environment. The novel airfoil sections are baptized WAP (Wind Energy Actuated Profiles). A genetic-algorithm-based multi-objective airfoil optimizer is formulated by setting two cost functions, one for wind energy performance and the other representing actuation suitability. The wind energy cost function considers ’reference’ wind energy airfoils while using a probabilistic approach to include the effects of turbulence and wind shear. The actuation suitability cost function is developed for HAWT active stall control, including two different control strategies designated ’enhanced’ and ’decreased’ performance. Two different actuation types are considered, namely boundary layer transpiration and DBD plasma. Results show that using WAP airfoils provides much higher control efficiency than adding actuation on reference wind energy airfoils, without detrimental effects in non-actuated operation. The WAP sections yield an actuator employment efficiency that is 2 to 4 times larger than obtained with reference wind energy airfoils. Regarding geometry, WAP sections for decreased performance display a concave upper-surface aft region compared to typical wind energy ’reference’ airfoils, while retaining the common sharp nose and S-tail (characteristic aft-loading) features. Results show there is much to gain in designing airfoils from the beginning to include actuation effects, especially compared to employing actuation on already existing airfoils, which ultimately might pave the way for novel HAWT control strategies.
Finally, addressing the complete rotor planform design, an optimization study tailors a HAWT rotor to ASC operation in an aero-structural-servo formulation. The study assumes that the aerodynamic and structural loads are in static equilibrium, and as such no unsteady effects are taken into account. The optimization includes planform geometry design but also actuation scheduling, rated rotational speed, and spanwise laminate skin thickness. Results show that, compared to variable-pitch turbines, the ASC planform displays increased chord at inboard stations with decreased twist angle towards the tip, resulting in increased AOA. Actuation is employed to trim the (static) loads across the operational wind speed envelope and hence reduce load overshoots and associated costs. Compared with state-of-the-art pitch machines, the expected COE of the ASC rotor does not indicate a significant decrease, but it appears to be at least competitive with pitch-controlled HAWTs given that the pitch system is effectively eliminated. Additionally, and though not explicitly considered in the present work, it is foreseen that ASC might become attractive if the actuation system allows for further O&M cost reduction via fatigue load alleviation, since the load-trimming actuation system is in any case included in an ASC machine.","Active Stall Control; DBD Plasma Actuators","en","doctoral thesis","","978-94-6299-417-1","","","","","","","","","Wind Energy","","",""
"uuid:3dec02ac-c659-4741-980f-85619f2c4da6","http://resolver.tudelft.nl/uuid:3dec02ac-c659-4741-980f-85619f2c4da6","Securing safety: Resilience time as a hidden critical factor","Beukenkamp, W.R. (TU Delft Transport and Planning)","Stoop, J.A.A.M. (promotor); Hoogendoorn, S.P. (promotor); Delft University of Technology (degree granting institution)","2016","Nowadays we have all sorts of legislation to safeguard safety at home and at work. Safety management systems are supposed to safeguard safety issues at system level. We have advanced computer models to test system designs while they are still on the drawing board. Society as a whole is very much safety conscious. Yet despite all our efforts accidents still happen, unwanted or wanted. Apparently safety as such is not the issue in our modern western society; securing safety is the challenge, hence the title of this thesis. Modern society is increasingly complex and vulnerable. On the other hand, our knowledge of risks has increased over the years. Using modern digital risk analysis tools it is possible to design and build structures that would have been impossible only two or three decades ago. Risk management as such is nothing new. The various tools we use depend on the level of knowledge we have about the systems. One thing is clear: accidents should not happen, yet it is difficult to avoid them, as this thesis will show. They can be the consequence of unintentional misapprehensions through lack of knowledge and/or understanding of the factual functioning of the system (safety). Or they can be the consequence of intentional acts of destruction such as terrorism (security). A third, often used notion is risk: the exposure to a danger or unwanted event. Sooner or later a system can be exposed to one or more threats. The distinction between these notions (safety, security, and risk) must be clear. This thesis will use them in several analyses. Damage is unavoidable, but should it be fatal?
What do we know about the effects when things go wrong, and how reliable is our knowledge about the likelihood of occurrence of such a condition or situation? In most cases the extent of our knowledge is well defined, both about what we know and what we don’t know. It is the knowledge about what we don’t know we don’t know (the unknown risks, the unknown unexpected behaviour of a system following from its hidden properties) that poses a challenge to adequate risk management. How do you prepare yourself for the risks you don’t know?","","en","doctoral thesis","TRAIL Research School","978-90-5584-210-0","","","","TRAIL Thesis Series no. T2016/18","","","","","Transport and Planning","","",""
"uuid:7a57b79e-8416-444c-b492-8f12d8102fd0","http://resolver.tudelft.nl/uuid:7a57b79e-8416-444c-b492-8f12d8102fd0","Perceptual metrics of light fields","Xia, L. (TU Delft Human Information Communication Design)","Pont, S.C. (promotor); Heynderickx, I.E.J.R. (promotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","9789461867070","","","","","","","","","Human Information Communication Design","","",""
"uuid:aa07ed45-93bc-4e8e-a7f2-e5728b187ac2","http://resolver.tudelft.nl/uuid:aa07ed45-93bc-4e8e-a7f2-e5728b187ac2","Human centric object perception for service robots","Chandarr, Aswin (TU Delft OLD Intelligent Vehicles & Cognitive Robotics)","Jonker, P.P. (promotor); Rudinac, M. (copromotor); Delft University of Technology (degree granting institution)","2016","The research interests and applicability of robotics have diversified and seen a
tremendous growth in recent years. There has been a shift from industrial robots operating in constrained settings to consumer robots working in dynamic environments associated closely with everyday human activities. Personal service robots to assist the elderly, compliant robots with advanced perception skills for flexible manufacturing, and autonomous driving vehicles for safe transportation are among the promising directions. In all these cases, robots have to work in close cooperation with human users, and an intuitive higher-level interaction between robots and layman users is essential for their widespread acceptance. Hence, in this thesis, the development of cognitive and perceptual skills in humans is studied and applied to the development of a robot’s perceptual skills, especially based on visual information from a user interaction point of view.
A physical robot is developed from scratch considering the aspects of affordability and user acceptability. The result is LEA, a 9 DoF robot that incorporates a differential-drive base, a 4 DoF arm with a gripper, and a pan-tilt neck supporting the robot’s head. The entire mechanics and control electronics are custom developed, leading to decreased mechanical complexity and increased flexibility in physical dimensions. All the components are well integrated with a socially appealing industrial design, which has been well received by the public and media. The limitations arising from simplified mechanics and affordable hardware are compensated by advanced adaptive vision algorithms to achieve the required functionalities of a service robot. A generic human-centric architecture for highly autonomous and interactive robots is proposed to integrate the various capabilities of a robot that are triggered by user interaction. A specific case of object recognition is investigated, as many tasks faced by such robots involve perception and manipulation of different household objects.
An intuitive non-verbal interaction between a user and a robot for conveying objects of interest to the robot is developed. The developed spatial grounding model can detect the object of user interest independent of the relative position between the robot, the user, and the object, and without any prior training. This is achieved by a hybrid attention system combining bottom-up color saliency with depth images and top-down cues comprising the user’s pointing direction and gaze. Robustness of the gaze-based attention system is improved by automatically switching between a keypoint-based and a color-based approach depending on the object’s texture.
The recognition of these objects is achieved with a three-layered semantic recognition framework that can incorporate multiple modalities of information. Developed based on studies of human perception, this method achieves recognition robustness in unconstrained domestic environments while providing semantic grounding with human users. The modalities of color, shape, and object location have been incorporated into this recognition model while maintaining flexibility to include additional modalities. The first layer consists of semantic grounding modules that abstract raw sensory information into a probability distribution over meaningful semantic concepts familiar to humans. A second layer operates on these semantic features to obtain an object hypothesis based on every individual modality. The last layer performs knowledge association to estimate a combined probability over known objects to obtain the final inference.
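The final knowledge-association layer described above combines per-modality probability distributions into one inference. A minimal sketch of this idea, assuming a naive product-rule fusion (the thesis's actual association method may differ; all labels and values are illustrative):

```python
def fuse_modalities(per_modality_probs):
    """Combine per-modality probability distributions over object labels.

    Naive product-rule fusion, which assumes conditional independence
    between modalities (color, shape, location, ...). A small floor
    probability keeps unseen labels from zeroing out the product.
    """
    labels = set()
    for dist in per_modality_probs:
        labels |= set(dist)
    combined = {}
    for label in labels:
        p = 1.0
        for dist in per_modality_probs:
            p *= dist.get(label, 1e-6)  # floor for labels a modality never saw
        combined[label] = p
    total = sum(combined.values())
    return {k: v / total for k, v in combined.items()}
```

For example, if the color layer favours "mug" (0.6 vs 0.4) and the shape layer does too (0.7 vs 0.3), the fused distribution favours "mug" even more strongly.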
A novel algorithm to track contours of objects and persons, allowing exploration from different viewpoints, is developed. The visual model of the target is refined by considering only the dominant 3D cluster within the initial bounding box. A tracking-by-detection algorithm constrains the search space in the image by removing regions based on the metric size constancy of the object and other structural patterns like perpendicular planes. A feature based on the Color Naming System has been used with an online learning classifier to obtain a color probability map, while the depth probability map is obtained by using a Gaussian model of the object’s depth distribution. An optimal fusion of the different object modalities using a target-background dissimilarity measure is developed and used in a graph-cut framework to continuously obtain contours of the target object.
The reliability of recognition of these objects in challenging domestic environments is enhanced by using visual appearances from multiple views while also incorporating the spatial relations between these viewpoints. A sequence alignment algorithm has been used with vector-quantized features from each view to achieve viewpoint correlation in object recognition. A fast visual odometry estimation has been used to obtain viewpoint relations in an unsupervised manner, and this has been combined with segmentation to provide a standalone system that can be used in real-world scenarios. This system is made generic so it can be used with different feature vectors, and a benchmark is created to compare the performance improvement achieved by the developed system with respect to single-view object recognition using different feature vectors.
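Sequence alignment over vector-quantized view features, as used above, can be sketched with a standard global alignment score (Needleman-Wunsch style). The scoring values here are illustrative assumptions, not the thesis's parameters:

```python
def alignment_score(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Needleman-Wunsch global alignment score between two symbol
    sequences, e.g. vector-quantized features from successive views.

    Higher scores mean the two view sequences line up well, which is
    the basis for correlating viewpoints across object observations.
    """
    n, m = len(seq_a), len(seq_b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):          # cost of aligning seq_a against gaps
        score[i][0] = i * gap
    for j in range(1, m + 1):          # cost of aligning seq_b against gaps
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]
```

Two identical sequences of quantized symbols score `match * length`; each substitution or insertion lowers the score, so the alignment score acts as a view-sequence similarity measure.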
Object recognition in service robots can be augmented by incorporating the context of an object’s use within the developed semantic recognition framework. The utility of an object can be understood from the actions performed by the user on the object, and hence an action recognition system based on human skeletal tracking with a novelty detection method is developed to facilitate the incremental learning of new actions. Compact representations of the skeletal structure are obtained using a Torso-PCA transform and are used as observations for an HMM-based system to recognize user actions. Uncertainty in predictions, quantified as confidence measures, is thresholded to detect unknown actions. These confidence measures are obtained through background models, and different methods are evaluated with respect to the sensitivity and specificity of recognition performance.
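The thresholded confidence idea above, where a background model decides whether an observation belongs to any known action, can be sketched as follows. The margin-based confidence and the threshold value are illustrative simplifications of the thesis's HMM-based measures:

```python
def classify_action(loglik_by_action, background_loglik, threshold=2.0):
    """Pick the best-scoring known action model, or report 'unknown'.

    `loglik_by_action` maps action names to log-likelihoods of the
    observed skeletal sequence under each trained model. Confidence is
    the log-likelihood margin of the best model over a background
    model; below the (illustrative) threshold, the action is treated
    as novel, triggering incremental learning.
    """
    best_action = max(loglik_by_action, key=loglik_by_action.get)
    confidence = loglik_by_action[best_action] - background_loglik
    if confidence < threshold:
        return "unknown", confidence
    return best_action, confidence
```

Raising the threshold trades sensitivity (fewer known actions recognized) for specificity (fewer novel actions mislabeled), matching the evaluation described above.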
Various algorithms are developed to enhance the reliability of object perception, overcoming challenges posed by dynamic environments and affordable hardware by incorporating different modalities of information available to a robot. The development of algorithms in this direction is significant, as these concepts can be readily extended to incorporate user and environment recognition to complete the perceptual capabilities of robots …","Service robots; Robot vision; Object recognition","en","doctoral thesis","","978-90-8759-634-7","","","","","","","","","OLD Intelligent Vehicles & Cognitive Robotics","","",""
"uuid:2d375f1b-3857-4c03-87e8-cb0fc45f3f13","http://resolver.tudelft.nl/uuid:2d375f1b-3857-4c03-87e8-cb0fc45f3f13","Air-based contactless actuation system for thin substrates: The concept of using a controlled deformable surface","Vuong, H.P. (TU Delft Mechatronic Systems Design)","Munnig Schmidt, R.H. (promotor); van Ostayen, R.A.J. (copromotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","9789461867148","","","","","","","","","Mechatronic Systems Design","","",""
"uuid:5b8551b1-f12a-463d-97e5-5f3ca4473557","http://resolver.tudelft.nl/uuid:5b8551b1-f12a-463d-97e5-5f3ca4473557","Coordination in hinterland chains: An institutional analysis of port-related transport","van der Horst, M.R.J. (TU Delft Economics of Technology and Innovation; Erasmus Universiteit Rotterdam)","Groenewegen, J.P.M. (promotor); Delft University of Technology (degree granting institution)","2016","Containerisation has led to increased competition between ports and put pressure on the use of scarce hinterland infrastructure. Good coordination between all actors involved in port-related transport, including infrastructural access to the hinterland, is required to be successful in container port competition. In hinterland chains, different coordination problems exist for different reasons. In response, different public and private actors undertake coordination arrangements to solve coordination problems. The goal of this thesis is to advance the understanding of how actors in port-related transport chains improve this coordination. The core of the thesis consists of five articles. They form a ‘pattern of discovery’ of different issues related to coordination in hinterland chains, applying different theoretical lenses from inter-organisational theories in which Institutional Economics plays a central role. This thesis introduces a framework to analyse coordination in hinterland chains. The framework helps to cope with the complexity of coordination in port-related transport chains and is a tool to explore coordination issues systematically.
The first study shows that different coordination problems exist in transport by road, rail, and waterway. These coordination problems occur due to the imbalance between the costs and benefits of coordination, a lack of willingness to invest, the strategic considerations of the actors involved, and risk-averse behaviour. Based on a literature review, desk research, interviews, and cases of coordination arrangements from the port of Rotterdam, we introduce a typology of four main categories of coordination arrangements. The categories are inspired by Transaction Cost Economics, theory on Property Rights, and Collective Action theory, and include: introduction of incentives, creation of interfirm alliances, changing the scope of the organisation, and creating collective action. In the empirical part, coordination arrangements from container barging in the port of Rotterdam are discussed and linked with the relevant coordination problem.
The second study further explores coordination arrangements in the port of Rotterdam taking the typology from the first article as a starting point. Key characteristics related to the complexity of the transaction (number of actors involved, group character, and coordination problems to be solved) and of the coordination arrangements (type of coordination arrangement, function of actors involved, function of the initiator, power base of the initiator, transport mode and use of ICT) are defined. The analysis shows that transport companies are the most important initiator of coordination arrangements. The Rotterdam Port Authority and terminal operators also play an important role. This article assumes a relationship between the chosen coordination arrangement and the complexity of the transaction. More actors involved leads to more complexity, resulting in more hierarchical coordination arrangements; the involvement of public actors or the port authority reduces transaction costs. When the group size is large, initiators of coordination arrangements do not enforce coordination, but act mainly as a stimulator or enabler (leader firms). The analysis shows that ICT is usually applied to solve the lack of operational coordination, and when the group size is large.
The third article further explores one main category of coordination arrangements, namely ‘changing scope’, thereby focussing on two actors: shipping lines and terminal operating companies. By making use of insights from Transaction Cost Economics and the Resource-Based View, the paper helps to understand why and how shipping lines and terminal operating companies vertically integrate into intermodal transport and inland terminals. The paper discusses a number of cases from the Hamburg–Le Havre range, where shipping lines and terminal operating companies have changed their scope. After the theoretical and empirical analysis, the paper draws conclusions on the explanatory power of the theories. From a theoretical point of view, and based on empirical observations, the study shows that three other aspects are relevant to take into account: the geographical scale of vertical integration strategies, the elements of power and culture of the firms, and the role of the formal institutional environment.
In the fourth study, the focus is on including the role of the institutional environment and its dynamics in the analysis of coordination in hinterland chains. Based on an in-depth study of coordination in the liberalised railway market in the port of Rotterdam, empirical illustrations are used to adjust the Transaction Cost Economics approach towards a dynamic model influenced by Douglass North's theory on economic and institutional change. The study argues that such a framework is relevant for studying port-related railway chains that changed from a single and homogeneous actor constellation to a multiple and heterogeneous actor constellation. In the adapted framework, the institutional environment is not only a constraint but also an instrument creating possibilities for improving coordinating behaviour, allowing interaction between the coordination arrangements and the institutional environment.
The last article deepens the insights into the causes of coordination problems, focussing on container barging in the port of Rotterdam. A multidisciplinary analysis is performed, analysing possible institutional reasons that cause coordination problems. The study shows that container barging has a long track record of coordination arrangements. The sector is embedded in a history with many vertical and horizontal alliances. Although the Inland Waterway Transport sector can be characterised as conservative and individualistic, container barge operators act with an entrepreneurial, adaptive and future-oriented spirit. The degree of organisation among barge operators and inland terminal operators active in organising barge transport is relatively high, reflecting an ability to improve coordination in the future. The present division of property and decision rights, however, forms a poor basis for future improvement. This includes the missing contract between the barge operator and the deep-sea terminal operator, and between the barge operator or skipper and the infrastructure manager. This is difficult to change in the short term.","hinterland; Transportation; Institutions; seaport; railways; road transport; inland waterway transport","en","doctoral thesis","TRAIL Research School","978-90-5584-221-7","","","","TRAIL Thesis Series no. T2016/19, the Netherlands Research School TRAIL","","2016-11-04","","","Economics of Technology and Innovation","","",""
"uuid:2ecf0e07-58c8-42b9-bbf1-67878a3f6018","http://resolver.tudelft.nl/uuid:2ecf0e07-58c8-42b9-bbf1-67878a3f6018","Steady State and transient behavior of underground cables in 380 kV transmission grids","Hoogendorp, G. (TU Delft Intelligent Electrical Power Grids)","van der Sluis, L. (promotor); Popov, M. (copromotor); Delft University of Technology (degree granting institution)","2016","The extension of the Dutch 380 kV high voltage grid is necessary in order to guarantee security of electricity supply to the consumers. To achieve this extension, there are two new 380 kV connections under construction in the Randstad area, a densely populated area in the western part of the Netherlands. In these 380 kV connections, underground cables are applied. The work described in this thesis forms a part of a monitoring program that is managed by the Dutch transmission grid operator TenneT. In this program, the behavior of the two new underground cable connections in the Dutch 380 kV grid is being investigated and the work described in this thesis contributes to this program. Unlike the common overhead transmission line, which has an inductive behavior, a cable acts as a capacitance when it is in operation. This difference in electrical behavior makes research on the grid behavior necessary. One of the research goals is to evaluate whether the power flows and voltage levels in the 380 kV grid stay within the prescribed limits. Another goal of this work is investigating whether the peak of the transient voltages at the cable terminals after lightning currents will stay below the Basic Insulation Level. The research is divided into two parts: steady state behavior and transient behavior. For the steady state part, PI sections are used to model the underground cable and overhead line to perform load flow studies in order to investigate the power flows and voltage levels in the 380 kV grid. 
For the transient part, the widely applied Frequency Dependent Phase Model (FDPM), a transmission line model that takes into account the frequency dependency of cable and line parameters, is used to model the cable and overhead line sections. The focus of the transient part is on the modeling of the complete underground cable system. Field measurements are performed on both the 380 kV cross-bonded cable and the overhead line sections, and the results are used for transient cable model verification. The scientific contribution of this work is the determination of the parameters of the 380 kV cross-bonding system by using reflection measurements, and the validation of the FDPM transmission line model by using field measurements on the actual cross-bonded 380 kV cable system. Finally, based on the studies described in this thesis, one can conclude that no remarkable overvoltages occur during the studied steady state and transient situations for the given dimensions of the observed 380 kV system.","","en","doctoral thesis","","978-90-73077-81-2","","","","","","","","","Intelligent Electrical Power Grids","","",""
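The capacitive steady-state behavior of a cable, noted above in contrast to the inductive overhead line, can be illustrated with the standard per-phase charging-current formula. The capacitance value below is an illustrative assumption, not that of the actual 380 kV cables studied:

```python
import math

def charging_current_per_km(u_kv_ll=380.0, c_uf_per_km=0.2, f_hz=50.0):
    """Per-phase capacitive charging current of an AC cable, in A/km.

    I_c = 2*pi*f * C * U_phase, with U_phase = U_line / sqrt(3).
    The cable capacitance per km is illustrative; real values depend
    on the cable geometry and insulation.
    """
    u_phase = u_kv_ll * 1e3 / math.sqrt(3.0)   # line-to-ground voltage in V
    c = c_uf_per_km * 1e-6                     # capacitance in F/km
    return 2.0 * math.pi * f_hz * c * u_phase
```

Even with these modest assumed values, a 380 kV cable draws on the order of tens of amperes of charging current per kilometre, which is why cable sections alter the reactive power flows and voltage levels examined in the load flow studies.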
"uuid:f1c3f187-508b-4a4f-8a8f-33c71a5b3898","http://resolver.tudelft.nl/uuid:f1c3f187-508b-4a4f-8a8f-33c71a5b3898","Configraphics: Graph Theoretical Methods for Design and Analysis of Spatial Configurations","Nourian, Pirouz (TU Delft Design Informatics)","Sariyildiz, I.S. (promotor); van der Hoeven, F.D. (promotor); Delft University of Technology (degree granting institution)","2016","This dissertation reports a PhD research on mathematical-computational models, methods, and techniques for the analysis, synthesis, and evaluation of spatial configurations in architecture and urban design. Spatial configuration is a technical term that refers to the particular way in which a set of spaces is connected to one another as a network. Spatial configuration affects the safety, security, and efficiency of functioning of complex buildings by facilitating certain patterns of movement and/or impeding other patterns. In cities and suburban built environments, spatial configuration affects accessibility and influences travel behavioural patterns, e.g. choosing walking and cycling for short trips instead of travelling by car. As such, spatial configuration effectively influences the social, economic, and environmental functioning of cities and complex buildings by shaping human movement patterns. In this research, graph theory is used to mathematically model spatial configurations
in order to provide intuitive ways of studying and designing spatial arrangements for architects and urban designers. The methods and tools presented in this dissertation are applicable in:
–– arranging spatial layouts based on configuration graphs, e.g. by using bubble diagrams to ensure certain spatial requirements and qualities in complex buildings; and
–– analysing the potential effects of decisions on the likely spatial performance of buildings and on mobility patterns in built environments for systematic comparison of designs or plans, e.g. as to their aptitude for pedestrians and cyclists. The dissertation reports two parallel tracks of work on architectural and urban configurations. The core concept of the architectural configuration track is the ‘bubble diagram’ and the core concept of the urban configuration track is the ‘easiest paths’ for walking and cycling. Walking and cycling have been chosen as the foci of this theme as they involve active physical, cognitive, and social encounter of people with built environments, all of which are influenced by spatial configuration. The methodologies presented in this dissertation have been implemented in design toolkits and made publicly available as freeware applications.","Spatial Configuration; Architecture; Urban Design; Graph Theory; Mathematical Modelling; Computational Design","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-6186-720-9","","","","A+BE | Architecture and the Built Environment No 14 (2016)","","2016-09-30","","","Design Informatics","","",""
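The ‘easiest paths’ concept above amounts to a shortest-path search over a street network whose edge weights encode ease of movement rather than pure distance. A minimal sketch using Dijkstra's algorithm, with a hypothetical graph and costs (the dissertation's actual ease measure for walking and cycling is more elaborate):

```python
import heapq

def easiest_path(graph, start, goal):
    """Dijkstra's shortest-path search over a weighted street graph.

    `graph` maps node -> {neighbour: cost}, where cost stands in for an
    'ease' measure (distance penalised by slope, turns, etc. -- values
    are illustrative). Returns (total_cost, path), or (inf, []) when
    the goal is unreachable.
    """
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path           # first pop of the goal is optimal
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + w, nbr, path + [nbr]))
    return float("inf"), []
```

With ease-based weights, a longer but flatter and simpler route can beat a geometrically shorter one, which is exactly how such a model distinguishes pedestrian- and cyclist-friendly configurations.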
"uuid:5609f331-6473-447e-b271-bdb76823960f","http://resolver.tudelft.nl/uuid:5609f331-6473-447e-b271-bdb76823960f","Integrated Quantum Photonics: from modular to monolithic integration","Esmaeil Zadeh, I.Z. (TU Delft QN/Quantum Nanoscience)","van der Zant, H.S.J. (promotor); Zwiller, V.G. (promotor); Delft University of Technology (degree granting institution)","2016","In the past decades quantum optics has been at the forefront of innovative quantum technologies. For practical applications, scalable platforms for the implementation of quantum optical circuits are vital. This thesis presents two new platforms for the scalable implementation of quantum optical circuits: a modular approach and monolithic integration. Here, we take the first steps towards the integration of the three main elements of every quantum optical circuit: single-photon emitters, single-photon detectors, and quantum logic. Until now, most quantum optical circuits have used separate platforms for single-photon generation and detection. The challenge of integrating these technologies, which have different requirements, has slowed down research in the field. Here, we integrate sources and detectors by first fabricating the devices on their own platforms and then transferring and combining them together. Optical waveguides, made by plasma-enhanced chemical vapor deposition of silicon nitride followed by etching, connect these elements. Removing the Poissonian optical excitation field from the quantum circuit is necessary for integration. Classical optical excitation can be avoided if the sources are electrically pumped. However, fabrication of high-quality electrically pumped sources suitable for integration has been limited. The experiments described in chapter 4 are our first step towards addressing this problem. Defect-free nanowires are grown in the <100> direction and their optoelectronic performance is characterized.
Nanowire quantum dots, thanks to their waveguiding, purity, coherence, and their potential for deterministic integration with other optical circuits, are promising single-photon sources for on-chip quantum optics. However, precise control of the emission energy of the quantum dots through growth has not yet become possible. Chapter 5 describes a method for on-chip tuning of the emission energy of nanowire quantum dots using strain fields. We show that the emission energies of independent nanowire quantum dots can be brought into degeneracy without affecting their single-photon emission properties. The quantum optical components have to be routed and connected together to form functional circuits. On a chip, this is usually carried out using optical waveguides. Moreover, the manipulation of single photons has to be done in a scalable fashion. Again, optical waveguides and ring resonators are very good candidates for this task. Therefore, understanding the behavior of these circuits, such as their losses, polarization dependence, and temperature behavior, is important. The experiment described in chapter 6 studies the behavior of plasma-enhanced silicon nitride waveguides at cryogenic temperatures. We conclude in this chapter that, due to the weak thermo-optic sensitivity of silicon nitride at cryogenic temperatures, the available thermal budget on the system should be carefully considered. An important step in achieving a scalable platform for quantum optical circuits is the deterministic and efficient integration of single-photon sources. In chapter 7, we demonstrate the successful integration of III-V nanowire quantum dots with silicon nitride waveguides. The nanowires are deterministically selected and transferred from the original growth chip to the new substrate, where they are integrated with low-loss silicon nitride waveguides. Our measurements show that the integrated sources preserve their high-quality emission properties.
In chapter 8, we describe an alternative approach: a modular method for scalable quantum optics. The proposed technique is based on coupling the single photons from sources into optical fibers, where the photons can be processed and then fed into the single-photon detectors. This approach has high flexibility and is easier to implement, but, as described in the chapter, at the moment losses in the interfaces between optical fibers and single-photon sources are a major limiting factor. We conclude the thesis with some possible future directions and exciting new results on the integration of single-photon detectors with sources and waveguides. Finally, preliminary results on on-chip single-photon filtering and removal of the optical excitation field are demonstrated.","","en","doctoral thesis","","978-90-8593-271-0","","","","Casimir PhD series, Delft-Leiden 2016-27","","","","QN/Quantum Nanoscience","","","",""
"uuid:73fe7835-43ac-4d65-bbf1-9202c7d72c45","http://resolver.tudelft.nl/uuid:73fe7835-43ac-4d65-bbf1-9202c7d72c45","Opportunistic Communication in Extreme Wireless Sensor Networks: A step back towards the smart dust dream","Cattani, M. (TU Delft Embedded Systems)","Langendoen, K.G. (promotor); Zuniga, Marco (copromotor); Delft University of Technology (degree granting institution)","2016","Sensor networks can nowadays deliver 99.9% of their data with duty cycles below 1%. This remarkable performance is, however, dependent on some important underlying assumptions: low traffic rates, medium-size densities and static nodes. In this thesis, we investigate the performance of these same resource-constrained devices, but under scenarios that present extreme conditions: high traffic rates, high densities and mobility: the so-called Extreme Wireless Sensor Networks (EWSNs). From a networking perspective, communicating in these extreme scenarios is very challenging. The combined effect of high network densities and dynamics makes the network’s characteristics fluctuate drastically both in space and time. Traditional mechanisms struggle to cope with these sudden changes, resulting in a continuous exchange of information that saturates the bandwidth and increases the energy consumption. Once this saturation threshold is reached, mechanisms take decisions based on wrong, outdated information and soon stop working. Even flexible mechanisms have difficulties adapting their settings to the fickle conditions of EWSNs and result in poor performance.
To communicate efficiently in EWSNs, mechanisms must therefore comply with a set of requirements, i.e., design principles, which are explained next. First, they need to be resilient to local and remote failures and operate as independently as possible from the status of other nodes (state-less principle). Second, because bandwidth is a scarce resource in EWSNs, it should mainly be used for the transmission of the actual data. Mechanisms should not be artificially orchestrated and should exploit each other in a cross-layer fashion to reduce their communication overhead as much as possible (opportunistic principle). Third, mechanisms should support extreme network conditions from their inception. Adapting traditional mechanisms, which are designed for milder conditions, would otherwise result in complex and fragile mechanisms (anti-fragile principle). Fourth, in case the resources saturate, mechanisms should operate in a conservative fashion, so that performance degrades gracefully without drastic disruptions (robustness principle).
Inspired by these four principles, this thesis departs from traditional communication primitives – which are deterministic and based on rigid structures – and proposes a novel communication stack based on opportunistic anycast. According to this primitive, nodes communicate with the first available neighbor, independently of its location and identity. The more neighbors, the more efficient the communication.
At the foundation of this communication stack lies SOFA, a medium access control (MAC) protocol that exploits opportunistic anycast to handle extreme densities in an efficient manner. Its implementation details are presented in Chapter 2. On top of the SOFA layer, this thesis builds two essential network services: neighborhood cardinality (density) estimation and data collection. The former service is provided by Estreme, a mechanism presented in Chapter 3, which exploits the rendezvous time of SOFA to estimate the number of neighbors with almost zero overhead. The latter service is provided by Staffetta in Chapter 4, a mechanism that adapts the wake-up frequency of nodes to bias the opportunistic neighbor-selection of SOFA towards the desired direction, e.g., towards the sink node collecting all data.
Finally, this thesis presents an extensive evaluation of a complete opportunistic stack in simulations, on testbeds, and in a challenging real-world deployment in the form of the NEMO science museum in Amsterdam. Results show that opportunistic behavior can lead to mechanisms that are both lightweight and robust and, thus, are able to scale to EWSNs.","","en","doctoral thesis","","978-94-6233-386-4","","","","","","","","","Embedded Systems","","",""
"uuid:a8fd4a11-f0b1-4373-817f-f07a27235bde","http://resolver.tudelft.nl/uuid:a8fd4a11-f0b1-4373-817f-f07a27235bde","Interface Defects and Advanced Engineering of Silicon Heterojunction Solar Cells","Vasudevan, R.A. (TU Delft Photovoltaic Materials and Devices)","Smets, A.H.M. (promotor); Zeman, M. (promotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","978-94-6328-086-0","","","","","","","","","Photovoltaic Materials and Devices","","",""
"uuid:a3e32a8c-bc0a-497d-9666-15ec44f2e5c2","http://resolver.tudelft.nl/uuid:a3e32a8c-bc0a-497d-9666-15ec44f2e5c2","Steels for nuclear reactors: Eurofer 97","Monteiro De Sena Silvares de Carvalho, I. (TU Delft OLD Metals Processing, Microstructures and Properties; TU Delft RST/Neutron and Positron Methods in Materials; TU Delft (OLD) MSE-3)","Sietsma, J. (promotor); Schut, H. (copromotor); Delft University of Technology (degree granting institution)","2016","These days, climate change and its consequences regularly make the news. Sea levels are rising, the polar ice caps are melting, species are becoming extinct, the ozone layer is being destroyed and the earth's temperature is increasing. These symptoms convey a clear call to action: the preservation of the environment needs to be of the highest concern. In parallel, the world's population grows larger and energy consumption grows with it. With these worries in mind, more attention is being paid to how to live in a more earth-minded and environmentally friendly way. Often, this attention has a common theme: the need for renewable energies. Large investments have been made towards the development of renewable sources of energy such as solar and wind power. However, the energy demand is so high that an extra source of power is needed, for which nuclear power is a candidate.","","en","doctoral thesis","","978-94-91909-40-5","","","","This research was carried out under project number M74.5.10393 in the framework of the Research Program of Materials innovation institute (M2i) in the Netherlands (www.m2i.nl)","","","","","(OLD) MSE-3","","",""
"uuid:9fa27d25-1e58-473e-828a-b219bf465438","http://resolver.tudelft.nl/uuid:9fa27d25-1e58-473e-828a-b219bf465438","In-line monitoring of solvents during CO2 absorption using multivariate data analysis","Kachko, A. (TU Delft Engineering Thermodynamics)","Vlugt, T.J.H. (promotor); Bardow, André (promotor); Delft University of Technology (degree granting institution)","2016","","chemometrics; in-line; chemical process monitoring; multivariate data analysis; carbon dioxide capture","en","doctoral thesis","","978-94-6186-673-8","","","","","","","","","Engineering Thermodynamics","","",""
"uuid:9a7eddec-5e49-4268-bbcd-5f58977b3f11","http://resolver.tudelft.nl/uuid:9a7eddec-5e49-4268-bbcd-5f58977b3f11","In vivo investigations of E. coli chromosomal replication using single-molecule imaging","Tiruvadi Krishnan, S. (TU Delft BN/Nynke Dekker Lab)","Dekker, N.H. (promotor); Delft University of Technology (degree granting institution)","2016","All living organisms pass on their genetic information to their offspring in the form of DNA or RNA molecules by duplicating them across generations. In bacteria, genes are packed in long chains of DNA molecules, or chromosomes. One of the most widely studied model organisms, Escherichia coli, replicates its circular chromosome in two directions starting from an origin region of the chromosome, with independent replication complexes, or replisomes, simultaneously synthesizing the daughter chromosomes. DNA replication is an important process in the E. coli life cycle because even small errors in its mechanism can strongly affect the cell’s normal state. Much of our current knowledge about the dynamics of the replisome complex has been obtained from in vitro experiments. However, the natural environment of the cell is considerably different from that of in vitro solutions.","DNA replication; E. coli; chromosomal engineering; single-molecule; epi-fluorescence microscopy; microfluidics; photo-activable fluorescence microscopy; in vivo stoichiometry","en","doctoral thesis","","978-90-8593-267-3","","","","Casimir Ph.D. Series, 2016-23","","","","","BN/Nynke Dekker Lab","","",""
"uuid:299c48c9-5bc7-4c5a-aab8-1b82696fbb5b","http://resolver.tudelft.nl/uuid:299c48c9-5bc7-4c5a-aab8-1b82696fbb5b","Optical degradation mechanisms and accelerated reliability evaluation for LEDs","Huang, J. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (promotor); van Driel, W.D. (promotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","978-94-028-0296-2","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:03bc62aa-c046-417d-82d6-ef8d51d9a965","http://resolver.tudelft.nl/uuid:03bc62aa-c046-417d-82d6-ef8d51d9a965","Ultrafast tangential micro-mixers for the study of biochemical reactions on the microsecond time scale","Mitic, S. (TU Delft BT/Biocatalysis)","de Vries, S. (promotor); Hagen, W.R. (promotor); Delft University of Technology (degree granting institution)","2016","Detailed understanding of chemical and enzyme catalysis constitutes a main focus of current biochemical research. Fundamental insight into how (bio)catalysts function requires knowledge of their three-dimensional structure and a wide range of time-resolved experiments that monitor the reaction progress. The ultimate aim is the determination of the molecular structure of transition and transient states during the chemical bond-breaking and bond-making steps that occur as part of the overall reaction. Chemists claim to have observed transient or transition states with lifetimes as short as 100-500 femtoseconds. Single steps in enzyme catalysis are usually slower than this, although electron transfer and proton transfer can occur in picoseconds and nanoseconds, respectively. The movements of protein domains, which are critical to drive enzyme catalysis because they directly promote the breaking and reforming of chemical bonds, occur at a longer time scale of ~0.1-1 µs. This time range can thus be regarded as the fastest in which formation of enzyme catalytic intermediates occurs or protein domains can fold into the native structure of the active enzyme.
To study the catalytic mechanisms of enzymes and chemical reactions in detail, the reaction should be initiated so rapidly that the subsequent formation and decay of all reaction intermediates can in fact be detected. Even the fastest present-day continuous-flow mixing equipment is too slow (~45 µs) to monitor the very beginning of enzyme catalysis. In order to design a general kinetic instrument with a much shorter dead-time to mix reactants and observe the reaction progress, both the mixer and the observation cell need to be miniaturized to micrometer dimensions (~100 µm) while maintaining high mixing efficiency and good optical quality. This thesis deals with the design and development of a new kinetic instrument that can perform, observe and detail, on the μs time scale, the catalytic mechanisms of enzymes, in particular those of the oxidoreductases.","","en","doctoral thesis","","978-94-028-0316-7","","","","","","","","","BT/Biocatalysis","","",""
"uuid:b9a684b9-b75f-4b59-b623-37c977ae8b85","http://resolver.tudelft.nl/uuid:b9a684b9-b75f-4b59-b623-37c977ae8b85","System in package for intelligent lighting and sensing applications","Dong, M. (TU Delft Electronic Components, Technology and Materials)","Zhang, Kouchi (copromotor); Delft University of Technology (degree granting institution)","2016","As we enter the era of “intelligence”, an increasing number of smart devices are being deployed around us. Such devices reflect an emerging need in the microelectronics industry: increased functionality and miniaturization. As Moore’s law gradually meets its bottleneck, the concept of “More-than-Moore” (MtM) has been proposed to address this issue by migrating board-level system assembly to package-level system integration. System in package (SiP) technology emerges as a promising solution for increasing the level of integration. Together with other novel packaging technologies, such as wafer level packaging (WLP), SiP shows great potential for providing highly integrated functional systems with a miniaturized form factor. This thesis aims to develop a novel SiP design and implement it in intelligent applications such as solid state lighting (SSL) and particulate matter (PM) sensing.","system in package; intelligent lighting; particulate matter sensor","en","doctoral thesis","","978-94-028-0268-9","","","","","","","","","Electronic Components, Technology and Materials","","",""
"uuid:405cdc2f-cac1-4c60-a93c-c951e192dd26","http://resolver.tudelft.nl/uuid:405cdc2f-cac1-4c60-a93c-c951e192dd26","Contextual Factors of Sustainable Supply Chain Management Practices in the Oil and Gas Industry","Wan Ahmad, W.N.K. (TU Delft Transport and Logistics)","Rezaei, J. (copromotor); Tavasszy, Lorant (promotor); Delft University of Technology (degree granting institution)","2016","","Sustainable supply chain management; oil and gas; contextual factors","en","doctoral thesis","TRAIL Research School","978-90-5584-206-3","","","","TRAIL Thesis Series no. T2016/15, the Netherlands Research School TRAIL","","","","","Transport and Logistics","","",""
"uuid:b7c8a92b-14fc-42b7-bfa2-af8c735df7be","http://resolver.tudelft.nl/uuid:b7c8a92b-14fc-42b7-bfa2-af8c735df7be","The Demand for Technical Excellence: A mixed methods approach towards conceptualizing the influence of training and technology innovation on organizational effectiveness","Dang, V.T. (TU Delft Policy Analysis)","Verbraeck, A. (promotor); De Graaff, E. (promotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","978-94-6328-080-8","","","","","","","","","Policy Analysis","","",""
"uuid:11717f7d-51c9-471b-8f9e-ee1a70e7f032","http://resolver.tudelft.nl/uuid:11717f7d-51c9-471b-8f9e-ee1a70e7f032","Flexible CMOS Single-Photon Avalanche Diode Image Sensor Technology","Sun, P. (TU Delft QID/Ishihara Lab)","Charbon-Iwasaki-Charbon, E. (promotor); Ishihara, R. (promotor); Sarro, Pasqualina M (promotor); Delft University of Technology (degree granting institution)","2016","","single-photon avalanche diode; SOI; flexible substrate; CMOS integration; photon counting and image sensor; 866801","en","doctoral thesis","","978-94-028-0314-3","","","","","","2018-01-01","","","QID/Ishihara Lab","","",""
"uuid:1d8aaa59-c1fd-44a7-934f-f6ce99930235","http://resolver.tudelft.nl/uuid:1d8aaa59-c1fd-44a7-934f-f6ce99930235","Adaptive thermal comfort opportunities for dwellings: Providing thermal comfort only when and where needed in dwellings in the Netherlands","Alders, E.E. (TU Delft Building Services)","van den Dobbelsteen, A.A.J.F. (promotor); van der Linden, A.C. (copromotor); Delft University of Technology (degree granting institution)","2016","","adaptive thermal comfort; dwellings; architecture; adaptive control; Sustainable design; energy efficiency","en","doctoral thesis","A+BE | Architecture and the Built Environment","978-94-92516-12-1","","","","A+BE | Architecture and the Built Environment No 13 (2016)","","","","","Building Services","","",""
"uuid:361057db-a92a-4c1c-82bc-9350b8883ac0","http://resolver.tudelft.nl/uuid:361057db-a92a-4c1c-82bc-9350b8883ac0","Liquid Silicon for Printed Polycrystalline Silicon Thin-Film Transistors on Paper","Trifunovic, M. (TU Delft QID/Ishihara Lab)","Sarro, Pasqualina M (promotor); Ishihara, R. (copromotor); Delft University of Technology (degree granting institution)","2016","","Liquid Silicon; Solution-Processing; Flexible Electronics; Paper Electronics; Excimer Laser Crystallization; Printed Electronics","en","doctoral thesis","","978-94-028-0315-0","","","","","","2018-08-29","","","QID/Ishihara Lab","","",""
"uuid:55b7d7ed-dba5-41ab-9796-fbe3d6659f84","http://resolver.tudelft.nl/uuid:55b7d7ed-dba5-41ab-9796-fbe3d6659f84","Multi-physics computational models of articular cartilage for estimation of its mechanical and physical properties","Arbabi, V. (TU Delft Biomaterials & Tissue Biomechanics)","Weinans, Harrie (promotor); Zadpoor, A.A. (copromotor); Delft University of Technology (degree granting institution)","2016","Recent advances in the computational modeling of complex multiphysics phenomena in articular cartilage have enabled efficient and precise determination of articular cartilage properties. However, accurate quantification of the complicated indentation and diffusion processes, which are tied closely to the inhomogeneity of articular cartilage, remains challenging. In the present thesis, accurate approaches are proposed to capture the mechanical and physical behavior of articular cartilage as faithfully as possible. Finite element models (FE models) capable of detecting contact between the indenter and the cartilage surface are developed and applied to the spherical indentation process. Artificial neural networks (ANNs) were used to predict the mechanical and physical properties of cartilage and, to guarantee their efficacy, the ANNs were trained using simulated noisy force-time data. The combination of the FE model and ANNs trained with noisy data allowed cartilage properties to be obtained robustly. FE models taking the inhomogeneity of articular cartilage into account were developed, validated, and applied to capture neutral (biphasic-solute model) and charged (multiphasic model) solute transfer across articular cartilage in a finite-bath experimental setup. These models could capture the behavior of solute diffusion across cartilage and provide the diffusivities and fixed charge densities (FCD) of different cartilage zones.
An algorithm consisting of inverse and forward ANNs was developed to obtain the diffusivities of the cartilage layers, eliminating the need for computational expertise. The ultimate goal of this algorithm is to introduce a methodology by which the properties of cartilage can be determined without any need for computational expertise, which provides a promising opportunity to meet clinical needs when assessing the health of articular cartilage during osteoarthritis progression. The effects of bath osmolarity and of the concentration and charge of the solute were investigated using a combination of micro-CT experiments and FE models. The results suggested that solute charge, unlike osmolarity and solute concentration, had a profound effect on solute diffusion. The porosity and thickness of the subchondral plate were identified as the two primary factors affecting the diffusion of neutral solutes across the subchondral plate. Using a developed multi-zone biphasic-solute model allowed obtaining the diffusivities of the cartilage layers as well as the subchondral plate. Using a multi-zone biphasic-solute model, we found that the overlying bath size, bath stirring and the thickness of the formed stagnant layer can substantially influence diffusion across cartilage. This provides an opportunity to optimally design diffusion experiments.","multiphysics; finite element modelling; articular cartilage; osteoarthritis; poroelastic; biphasic-solute and multiphasic model; indentation; diffusion of charge and uncharged solute; fixed charge density","en","doctoral thesis","","978-94-6186-713-1","","","","","","","","","Biomaterials & Tissue Biomechanics","","",""
"uuid:c34395a2-43c8-44f2-bc13-01d42bec992e","http://resolver.tudelft.nl/uuid:c34395a2-43c8-44f2-bc13-01d42bec992e","Modelling waves and their impact on moored ships","Rijnsdorp, D.P. (TU Delft Environmental Fluid Mechanics)","Pietrzak, J.D. (promotor); Zijlema, Marcel (promotor); Delft University of Technology (degree granting institution)","2016","Ships that are moored at a berth in coastal waters are subject to various external forcings, including the hydrodynamic loads that are induced by the local wave field. If the ship motions resulting from these wave-induced loads become too large, they may hamper safe operations (e.g., the loading of a container ship). Accurate predictions of the hydrodynamic loads are therefore desired to ensure safe operations of moored ships. In a coastal environment, the wave field is generally dominated by short waves. The majority of these waves originate from the open ocean, where they are generated by the wind. If the short waves are energetic at a berth, they may cause a significant response of a moored ship. In addition, nonlinear wave effects can excite significant ship motions, which may even occur during relatively calm wave conditions or in a region that is sheltered from energetic short waves. This significant response is primarily related to the presence of infragravity waves, which are excited through nonlinear interactions amongst pairs of short waves. An accurate description of this nonlinear wave field is therefore indispensable when predicting the hydrodynamic loads that act on a ship which is moored in coastal waters. The range of scales and physical processes involved in such studies make this a challenging problem to solve using numerical models. At present, the existing models that can predict the wave impact on a moored ship based on an offshore wave climate are restricted to relatively mild wave conditions. 
This thesis set out to develop a new modelling approach to advance our capabilities in solving this complex problem.
The proposed model aims to be applicable at the scale of a realistic coastal or harbour region (say in the order of $1 \times 1$ km$^2$), while accounting for the relevant physical processes. This includes the processes that govern the nonlinear wave evolution over a varying bottom topography (e.g., the nonlinear interactions that excite infragravity waves), and the interactions between the waves and a moored ship (e.g., the scattering of waves by a fixed floating body). The approach is based on the recently developed non-hydrostatic wave-flow model SWASH, which has been successfully applied to simulate a range of wave-related processes. This work pursues the new modelling approach through further development and evaluation of the SWASH model in (i) simulating the nonlinear wave dynamics in a coastal region, and (ii) simulating the interactions between waves and a restrained ship. The first crucial step in this development is to determine whether the model can resolve the nonlinear wave field in a coastal environment. Previous studies showed that models like SWASH can resolve the short-wave dynamics in coastal waters. However, they did not address whether such models can resolve the dynamics of the infragravity-wave field. Furthermore, most of these studies focussed on laboratory applications due to computational limitations, whereas field-scale applications of non-hydrostatic models have rarely been reported. With ever-increasing computational capabilities, such scales are now within reach of state-of-the-art computer systems. To advance the capability of the non-hydrostatic approach towards such realistic applications, this work presents a thorough evaluation of the SWASH model in resolving the nonlinear wave dynamics at the scale of a realistic coastal region. 
Given the importance of infragravity waves with respect to the wave-induced response of a moored ship, this work particularly determines whether the model can resolve their nearshore evolution. The model was validated using both laboratory and field experiments, covering a range of wave conditions (varying from bichromatic waves to short-crested sea states). A comparison between model predictions and laboratory measurements showed that the model captures the frequency-dependent cross-shore evolution of infragravity waves with a coarse vertical resolution (2 layers), including their steepening and eventual breaking close to the shoreline. These results demonstrate that the model can efficiently resolve the dominant processes that affect their nearshore evolution (e.g., nonlinear interactions, shoreline reflections, and dissipation), permitting applications at the scale of a realistic harbour or coastal region. To determine the capability of the model at such scales, SWASH was applied to study the infragravity wave dynamics at a field site near Egmond aan Zee (the Netherlands), which is characterised by a complex bottom topography. The model was used to reproduce a total of six sea states (including mild and storm conditions), which were measured as part of a two-month field campaign. For all conditions, the predicted wave field gave a good representation of the natural conditions, supporting a further study into the infragravity wave dynamics. A unique feature of these predictions is their extensive spatial coverage, allowing analyses of the wave dynamics at scales not easily covered by in-situ measurement devices.
Among other findings, this study showed that a significant portion (up to 50%) of the infragravity wave motion can be trapped at a nearshore bar. This shows the potential of the model to improve our understanding of such complex wave dynamics. The findings of the flume and field studies further show that the SWASH model provides a powerful tool to predict the nonlinear wave field at a coastal berth based on an offshore wave climate. To predict the impact of this wave field on a ship that is moored at such a berth, the next crucial step in the model development is to account for the interactions between the waves and a restrained ship. For this purpose, a fixed floating body was schematised within SWASH. The model was validated by comparing model results with an analytical solution, a numerical solution, and two laboratory experiments that consider the wave impact on a restrained ship for a range of wave conditions (varying from a solitary wave to a short-crested wave field). These comparisons showed that the model captures the scattering of waves, and the hydrodynamic loads that act on the body. Remarkably, a coarse vertical resolution sufficed to resolve these dynamics. This shows the potential of the model in efficiently simulating the wave-ship interactions. The findings of this thesis demonstrate that, with the inclusion of a fixed floating body in SWASH, a novel modelling approach has been developed that can efficiently resolve the key dynamics that govern the nearshore evolution of waves and their interactions with a restrained ship. Although further work is required, for example, accounting for the motions of a moored ship, this demonstrates that the approach has the potential to simulate the wave-induced response of a ship that is moored in coastal waters. 
This thesis thereby sets the stage to advance our modelling capabilities towards such realistic applications in a complex coastal environment.","","en","doctoral thesis","","","","","","","","","","","Environmental Fluid Mechanics","","",""
"uuid:dd0886f0-2f3a-421d-ac92-bdc4da5985b5","http://resolver.tudelft.nl/uuid:dd0886f0-2f3a-421d-ac92-bdc4da5985b5","Characterization of an electron spin qubit in a Si/SiGe quantum dot","Kawakami, E. (TU Delft QCD/Vandersypen Lab)","Vandersypen, L.M.K. (promotor); Delft University of Technology (degree granting institution)","2016","","","en","doctoral thesis","","978-90-8593-266-6","","","","The works presented in this thesis were supported in part by the Army Research Office (W911NF-12-0607), the Dutch Foundation for Fundamental Research on Matter (FOM) and a European Research Council (ERC) Synergy grant; development and maintenance of the growth facilities used for fabricating samples is supported by DOE (DE-FG02-03ER46028). This research utilized NSF-supported shared facilities at the University of Wisconsin-Madison. Work at the Ames Laboratory was supported by the Department of Energy-Basic Energy Sciences under Contract No. DE-AC02-07CH11358. Erika Kawakami was supported by a fellowship from the Nakajima Foundation.","","","","","QCD/Vandersypen Lab","","",""
"uuid:344b7d9c-f54c-4836-87ca-28582231a3d3","http://resolver.tudelft.nl/uuid:344b7d9c-f54c-4836-87ca-28582231a3d3","Modeling and Characteristics of a Novel Multi-fuel Hybrid Engine for Future Aircraft","Yin, F. (TU Delft Aircraft Noise and Climate Effects)","van Buijtenen, J.P. (promotor); Gangoli Rao, A. (promotor); Delft University of Technology (degree granting institution)","2016","Civil air transportation has undergone significant expansion over the past decades and is continuing to grow. Nevertheless, the prospect of energy depletion and severe environmental problems pose challenges to its further development. To mitigate the climate impact of civil aviation, the Advisory Council for Aeronautics Research in Europe (ACARE) has set ambitious objectives for the year 2050: to reduce CO2 emissions by 75% per passenger kilometre, NOx emissions by 90% and the perceived noise by 65% relative to the capabilities of aircraft operating in the year 2000.
The conventional approach of increasing the Bypass Ratio (BPR), Overall Pressure Ratio (OPR), and Turbine Inlet Temperature (TIT) to improve the cycle efficiency, and thereby reduce fossil fuel consumption and the associated emissions, is unlikely to meet the ACARE goals. Moreover, a high OPR and TIT aggravate the NOx emissions for a given combustion technique. A novel multi-fuel hybrid engine for a Multi-Fuel Blended Wing Body (MFBWB) aircraft, conceived in the “Advanced Hybrid Engine for Aircraft Development (AHEAD)” project, offers promising solutions in this regard.
The multi-fuel hybrid engine is a turbofan engine with the following added components: a Contra-Rotating Fan (CRF) system, two sequential combustors burning different fuels simultaneously, and a Cryogenic Bleed Air Cooling System (CBACS). The CRF can sustain the non-uniform flow ingested from the boundary layer of the airframe. The first combustor is the main combustor, where Liquid Hydrogen (LH2) or Liquefied Natural Gas (LNG) is burnt to reduce CO2 emissions. The second combustor, the Interstage Turbine Burner (ITB), is located between the high-pressure and low-pressure turbines and burns kerosene or biofuel in a Flameless Combustion (FC) mode. Because the thermal energy is provided by different fuel sources, the volume required to store the cryogenic fuels is smaller; meanwhile, the FC technique is beneficial for reducing NOx. By introducing the CBACS, LH2 or LNG is used as a coolant to cool down the bleed air.
Depending on the fuel combination, the hybrid engine is classified as an LNG-kerosene version or an LH2-kerosene version, where kerosene might be replaced by biofuel. By defining an “ITB energy fraction” as the ratio of the energy input of the ITB to the overall energy consumed, the fuel flow rates of the two combustors are controlled. Using the developed model framework, the characteristics of the hybrid engine are studied and summarized in the following three aspects:
Potential of the ITB engine cycle:
The sequential combustor configuration of the hybrid engine forms a reheat cycle. By distributing the energy over two combustors, the heat addition to each combustor decreases; therefore, the TIT is lower. Consequently, the turbine cooling air requirement and the associated loss in turbine efficiency are reduced. Moreover, the NOx produced in the upstream combustor dissociates again in the ITB, which helps to lower the overall NOx emissions. These remarkable features become appreciable when the OPR and BPR are forced to increase continuously, which causes a substantial increase in the TIT of a classical engine. A turbine with a very high inlet temperature has to be cooled substantially, and eventually the gain in cycle efficiency might be canceled by the loss in turbine efficiency. Moreover, when the TIT is increased beyond 1800 K, the NOx exhibits an exponential increase. Hence, following the evolution of engine technology, the reheat cycle would be an option for the next step.
Characteristics of the multi-fuel hybrid engine:
The features of the hybrid engine have been explored from various aspects. The isobaric heat capacity of the combustion products of LNG and LH2 is higher than that of kerosene, which is beneficial to the thermal efficiency. Using LNG or LH2 as a coolant, the bleed air temperature is reduced substantially (by more than 500 K at maximum), and thereby the turbine cooling air mass flow rate is halved. Moreover, the increase in fuel temperature is favourable for enhancing the thermal efficiency. The hybrid engine has been optimized at cruise considering various ITB energy fractions, and the optimized engine cycle has been verified at critical operating points. The assessment of the standalone engine performance against baseline engines shows that the LH2-kerosene hybrid engine is superior to the LNG-kerosene hybrid engine in terms of cycle efficiency and CO2 reduction. However, the mission analysis shows conflicting results: due to the stronger installation effect, the MFBWB together with the LH2-kerosene hybrid engine scores lower, implying that the LNG-kerosene BWB would have the least climate impact.
Operating strategy of the multi-fuel hybrid engine:
The operating strategy of the hybrid combustion system has been developed to enhance the steady-state performance of the hybrid engine. The analysis shows that using an ITB is beneficial for the high-pressure spool speed, the HPC exit temperature, and the HPT inlet temperature. However, the LPC surge margin and the LPT inlet temperature come into conflict with their limits as the ITB energy fraction increases. For the various thrust requirements at the Sea Level Static (SLS) standard condition, a fuel control schedule together with a variable bleed valve schedule is proposed. Moreover, another fuel control strategy is suggested for flat rating at SLS.
In the experiments, a stereoscopic particle image velocimetry (SPIV) setup is used to measure the velocity in the near wake region at different azimuth angles and around the blade at different radial positions. This experimental setup allows measuring three velocity components on 2D planes, which can be used to construct three-dimensional flow fields. With this approach, a detailed description of the flow field in the root region is obtained and 3D visualizations are presented.
Further analysis of the velocity fields illustrates and explains the physics of the formation of the root flow structures for different blade geometries, and their evolution in the blade's near wake for different blade tip speed ratios. The effect of the root vortex on the blade's root flow and on the near wake region is studied. In particular, the experimentally observed spanwise flow in the blade's outer flow region (outside the boundary layer of the blade) calls into question the two-dimensional flow assumption of the classical momentum theory. The velocity fields are also used to deduce the loads on the blade through the calculation of the momentum change in the fluid.
In addition to the analysis of the experimental results, comparisons with numerical simulations from Blade Element Momentum (BEM) and Computational Fluid Dynamics (CFD) methods are also made. The CFD (OpenFOAM) results are validated by comparing the computed velocity fields with the PIV results, and good agreement is obtained. The comparison of the load predictions from the numerical and experimental methods also shows very good agreement, which gives confidence in the capability of these numerical methods to estimate the forces along the blade.
This thesis has contributed to narrowing the knowledge gap in the field of HAWT blade's root flow aerodynamics by:
(i) providing a solid experimental database of root flow velocities and vortical structures;
(ii) investigating the existence and hence the role of the root vortex;
(iii) studying the spanwise flow over the blade's surface and hence identifying the three-dimensionality of the flow in the outer flow region;
(iv) comparing the experimental and numerical results to study and explain the physics of the root flow;
(v) demonstrating that with advanced numerical tools realistic and complicated root flow details can be simulated.","","en","doctoral thesis","","978-90-76468-15-0","","","","","","","","","Wind Energy","","",""
"uuid:11be69d2-44ae-429c-9746-7e3ced35f464","http://resolver.tudelft.nl/uuid:11be69d2-44ae-429c-9746-7e3ced35f464","Fault Diagnosis and Fault-Tolerant Control for Aircraft Subjected to Sensor and Actuator Faults","Lu, P. (TU Delft Control & Simulation)","Mulder, Max (promotor); Chu, Q. P. (copromotor); Delft University of Technology (degree granting institution)","2016","","Fault-Tolerant Control; Fault Detection and Diagnosis; Nonlinear Control; Flight Control; Sensor faults; Actuator faults; Extended Kalman Filter; Unscented Kalman Filter; Adaptive filtering; Disturbance estimation; Fault estimation; Turbulence; Real flight data","en","doctoral thesis","","978-94-6186-701-8","","","","","","","","","Control & Simulation","","",""
"uuid:e813298e-93d8-4a76-a7ab-72b327bcde4b","http://resolver.tudelft.nl/uuid:e813298e-93d8-4a76-a7ab-72b327bcde4b","Prediction of belt conveyor idler performance","Liu, X. (TU Delft Transport Engineering and Logistics)","Lodewijks, G. (promotor); Pang, Y. (copromotor); Delft University of Technology (degree granting institution)","2016","","bulk material; condition monitoring; maintenance; belt conveyor; idler; reliability","en","doctoral thesis","TRAIL Research School","978-90-5584-207-0","","","","TRAIL Thesis Series T2016/14, The Netherlands TRAIL Research School","","","","","Transport Engineering and Logistics","","",""
"uuid:e35c4735-1f6f-4e4c-b7b8-130f68a7dd02","http://resolver.tudelft.nl/uuid:e35c4735-1f6f-4e4c-b7b8-130f68a7dd02","Reducing the cover-to-diameter ratio for shallow tunnels in soft soils","Vu, M.N. (TU Delft Geo-engineering)","Bosch, J.W. (promotor); Broere, W. (copromotor); Delft University of Technology (degree granting institution)","2016","Although shallow tunnels offer low short-term construction costs and low long-term operational costs, primarily due to the shallow depth of the station boxes, the limited understanding of shallow tunnelling in soft soils is an obstacle to the development of shallow tunnels in urban areas. This study carries out a theoretical investigation of the effects of reducing the cover-to-diameter ratio C/D for shallow tunnels in soft soils.
In the stability analysis, the uplift, face stability and blow-out mechanisms are investigated. This study investigates the interactions between the TBM and the surrounding soil during the tunnelling process; the stability of the TBM itself is not taken into account. The relationship between the C/D ratio and the required thickness-to-diameter ratio d/D, as well as the required support pressures, is derived for various soils. Ranges of support pressures are also estimated for the TBM.
A structural analysis is carried out of the variation of the deformations and internal forces of the tunnel lining when reducing the C/D ratio. Since the conventional design models are not suitable for shallow tunnels, a new structural analysis model is proposed, which includes the difference between the loads at the top and at the bottom of the tunnel. Optimal C/D ratios with various d/D ratios for shallow tunnels in soft soils are also derived.
With respect to ground movement analysis, this research investigates the areas affected by shallow tunnelling with a preliminary assessment of the risk of building damage by investigating surface and subsurface soil displacements. These areas are determined for different tunnel diameters in various soil types and are then compared to recent studies.
The total volume loss is estimated at the tunnelling face, along the TBM and at the tail, and includes long-term consolidation settlements. By combining empirical models from the literature with the proposed new models, the volume loss components are estimated both for the short-term construction phase and for the long-term consolidation effects. This shows that zero volume loss is feasible in shallow tunnelling with careful control of the support pressure.
The boundaries of the influence zones in shallow tunnelling are identified and discussed on the basis of various case studies. The effects of the soil parameters on the influence areas are also investigated.
From these calculations, the limits and optimal C/D ratios for shallow tunnelling are deduced and recommendations and solutions for improving the shallow tunnelling process are proposed in this dissertation.
A major drawback of current methods for detecting single molecules is that the number of false positives is unknown. Therefore, a generalized likelihood ratio test is proposed here, which can control both the number of false positives and the number of true positives with minimal user input. This target is stably achieved in experiments over a large range of signal-to-background conditions.
A key application of single molecule localization microscopy can be found in in vivo RNA imaging. In this thesis two RNA studies are presented: i) the structure of the nuclear pore complex and its kinetic interaction with mRNA are investigated, where a point mutation of mex67p caused the nuclear export efficiency to drop significantly; and ii) a study of the mobility and occupation of mRNA within the nucleus is performed by instantaneously imaging the whole three-dimensional volume of mRNAs in the nucleus of a living cell. From this study, no evidence was obtained for exclusion or enrichment of the heterochromatin by mRNAs.
For the general applicability of single molecule localization microscopy, the two-dimensional methods will need to be extended to three dimensions. In three-dimensional imaging, small aberrations become significant when imaging away from focus and therefore need to be compensated or calibrated. This thesis shows that single molecule localization can be extended into three dimensions, or even into a fourth dimension such as color, when the point spread function of the microscope is appropriately distorted. Additionally, an adaptive optics strategy is presented that can correct for sample-induced aberrations, and a method is proposed to encode the emission color of the single molecule into the measurement.","","en","doctoral thesis","","978-94-6186-703-2","","","","","","","","","ImPhys/Quantitative Imaging","","",""